# Pandas Data Analysis
## Imports
Import `pandas` and `numpy` into the notebook.
```
import pandas as pd
import numpy as np
```
## Loading data
```
df = pd.read_csv('./data.csv')
df
```
The [`head`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.head.html) method of our data frame returns the first 5 rows. It can take a parameter for the number of rows you want returned.
```
df.head(3)
```
The `tail` method does the opposite and returns the last 5 rows. It can also take a parameter for the number of rows you want returned.
```
df.tail(2)
```
The [`copy`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.copy.html) method returns a new data frame as a copy. This is useful for keeping a copy of the original data frame around in case you need to revert to it.
```
df_original = df.copy()
df_original
```
### Read methods
Often you'll be reading in data from the web or from a local file. The `pd.read_*` methods can read in all sorts of data. The two most popular ones you'll use are [`pd.read_csv`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) and [`pd.read_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_table.html).
To look at any documentation on a method, type a `?` at the end of it - `pd.read_table?`, or put the cursor on the method you want to see the documentation for and hit `Shift+Tab`.
Other read functions are:
- [`read_clipboard`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_clipboard.html#pandas.read_clipboard) - Pandas will read in whatever is in the clipboard. Useful for when copying tables from a web page.
- [`read_json`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_json.html#pandas.read_json) - Read JSON strings, files, or web responses. Very useful for API calls to get data.
- [`read_html`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_html.html#pandas.read_html) - Read in HTML. Mainly used for web scraping. *Note: The [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/) package is required, since pandas uses its APIs.*
- [`read_excel`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html#pandas.read_excel) - Similar to reading a CSV, this reads an Excel spreadsheet and can specify which sheet to read in.
Many more ways to read in data can be found at the [API reference page](https://pandas.pydata.org/pandas-docs/stable/api.html#input-output).
```
# Put the cursor after the underscore and press Tab to list the writer methods
df.to_
```
Also, there are methods to write to any of these types of files. They can be accessed on the `DataFrame` or `Series` object itself.
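For example, the writer counterparts of the read functions can be sketched on a small throwaway frame (the output filename here is just an illustration):

```python
import pandas as pd

# A tiny frame standing in for `df`
df_out = pd.DataFrame({"Items Sold": [3, 7], "Item Price": [1.99, 4.50]})

# Write to CSV without the index column, and serialize to a JSON string
df_out.to_csv("sales_out.csv", index=False)  # hypothetical path
json_text = df_out.to_json(orient="records")
print(json_text)
```

Most `read_*` functions on the `pd` namespace have a matching `to_*` method on the `DataFrame`/`Series` side, e.g. `read_excel`/`to_excel`.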
### Value types
All values in a `DataFrame` or `Series` are stored as a numpy `ndarray`.
```
print(df.values)
type(df.values)
```
Even the values in a single row are of type `ndarray`.
```
print(df.values[0])
type(df.values[0])
```
## Handling missing and duplicate data
```
df.head()
```
First, let's handle the `-999` with the `replace` method.
```
df = df.replace(-999, 9)
df
```
Now, let's find if there are any missing values in our data frame.
```
df.isnull().values.any()
```
There is missing data - denoted by `NaN`.
Now we select the rows that have any values considered null by pandas (both `NaN` and `None` count). The `axis='columns'` argument, equivalent to `axis=1`, tells the `any` method to evaluate across the columns of each row, producing one boolean per row.
```
df[df.isnull().any(axis='columns')]
```
We can use a boolean selection within our data frame to access our data. For example, to find all the rows in which the "Items Sold" column is less than five:
```
df["Items Sold"] < 5
df[df["Items Sold"] < 5]
```
#### Fix missing data
There are two main ways to deal with missing data:
- Drop the rows or columns entirely.
- Fill in missing data with another value.
Dropping is fine if only a very small percentage of items would be dropped compared to the entire data set; otherwise, filling in the values is the better option.
You can drop all missing data with the [`df.dropna`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html) function.
```
df.dropna()
```
Filling in values for missing data can be tricky, but there are a few things you can do. You can pick the mean (average), median (middle), or most used value. Depending on the data, a good foundation in the relevant business knowledge really helps here.
Here I'm using the mean to fill in my missing data. Note the `inplace=True` here instead of setting the data frame equal to the results like in the previous cells.
```
df["Items Sold"].fillna(round(df["Items Sold"].mean(), 0), inplace=True)
df["Order Date"].fillna("July 25, 2017", inplace=True)
df
```
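The mean is not the only choice. A quick sketch of the median and most-used-value (mode) alternatives, on a small throwaway series so the `df` above is left untouched:

```python
import numpy as np
import pandas as pd

s = pd.Series([2.0, np.nan, 4.0, 4.0, 10.0])

filled_median = s.fillna(s.median())  # median of [2, 4, 4, 10] is 4.0
filled_mode = s.fillna(s.mode()[0])   # most frequent value is 4.0
print(filled_median.tolist())  # [2.0, 4.0, 4.0, 4.0, 10.0]
```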
#### Duplicates
`pandas` also has great support for finding duplicate rows and dropping them.
For a boolean array marking which rows are duplicates, call the `df.duplicated` method.
```
df.duplicated()
```
We can use that as a boolean mask to filter items from our data frame.
```
df[df.duplicated()]
```
To show all items, including the first instance of each duplicate, set the `keep` parameter to `False`.
```
df[df.duplicated(keep=False)]
```
We can then use the `drop_duplicates` function to drop all duplicates from our data frame.
```
df = df.drop_duplicates()
df
```
### Change Column Types
Sometimes, the data will be imported as a `string`, which `pandas` stores as the `object` type.
```
df.dtypes
```
We can convert the types of columns pretty easily. For dates, `pandas` offers a very convenient [`to_datetime`](http://pandas.pydata.org/pandas-docs/version/0.20/generated/pandas.to_datetime.html) method.
```
df["Order Date"] = pd.to_datetime(df["Order Date"])
df
```
For other types, we can call the [`astype`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.astype.html) method and specify the type we want to convert to.
```
df["Items Sold"] = df["Items Sold"].astype(int)
df
```
We can call the `info` method on the data frame to get an overview of the data types.
```
df.info()
```
## Working with Data
### Grouping
The [`groupby`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html) method groups by a column or a list of columns. Calling `groupby` on its own returns a `DataFrameGroupBy` object; to get a data frame back, an aggregation must be applied to the grouping.
```
df.groupby("Sales Person").sum()
```
We can call `sort_values` after grouping to sort by a specific column. To sort in descending order, set `ascending` to `False`.
```
df.groupby("Sales Person").sum().sort_values("Item Price", ascending=False)
```
Or you can group by multiple columns, since `pandas` supports multiple indexes (a `MultiIndex`).
```
df_grouped = df.groupby(["Sales Person", "Order Date"]).sum()
print(df_grouped.index)
df_grouped
```
### Concatenating
If you have more than one source of the same type of data, the `concat` operation appends the "new" data to the "old" data, producing a single data frame.
```
df2 = pd.DataFrame({"Order Date": ["December 22, 2017"],
"Sales Person": ["Mary"],
"Items Sold": 12,
"Item Price": 9.99})
df2
```
The `concat` method takes in a list of data frames to concatenate together.
```
df = pd.concat([df, df2])
df
```
Now the second data frame has been added on to the original. However, this keeps the index of the second data frame when it adds to the first. We can fix this with the `reset_index` method.
```
df = df.reset_index(drop=True)
df
```
### Creating Calculated Columns
```
df.head()
df['Total Price'] = df['Item Price'] * df['Items Sold']
df
```
### Dummy Variables
From statistics, dummy variables are variables that represent categorical variables as numbers.
With no parameters, `pd.get_dummies` produces dummy variables for all columns with a type of `object` or `category`.
```
pd.get_dummies(df)
```
However, you can specify the columns that it uses.
```
pd.get_dummies(df, columns=['Sales Person'])
```
### Descriptive Statistics
Getting descriptive statistics such as mean, standard deviation, min, and max values can tell a lot about your data. This is easily done with the `describe` method on the data frame.
```
df.describe()
```
You can also pinpoint the descriptive statistics on just the columns you want.
```
df[['Item Price']].describe()
```
`pandas` also has methods that give a specific statistic instead of a table of a select few. A full list of these methods is in the `pandas` [documentation](http://pandas.pydata.org/pandas-docs/version/0.18/api.html#api-dataframe-stats).
```
df['Item Price'].mean()
df['Item Price'].median()
df['Item Price'].std()
```
Another useful statistical method available on the data frame is the correlation of each column's data to other columns. You can get this with the `corr` method.
```
df.corr()
```
## Visualizations
Import `matplotlib` into our notebook. `matplotlib` is the most popular and battle-tested visualization package in Python. Its API is heavily influenced by MATLAB.
The [`%matplotlib inline`](http://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-matplotlib) tells Jupyter to inline all plots. This prevents any plots from showing up in a separate popup window and so we don't have to always run `plt.show()`.
Seaborn is a package that makes `matplotlib` graphs look nicer and adds statistical graphs. It is typically imported as `import seaborn as sns`, and `sns.set` may be called to apply it to plots from `pandas` data frames.
```
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
%matplotlib inline
```
`pandas` has `matplotlib` plotting built right in, so to plot all of the numerical values in our data frame, just call the `plot` method.
```
df.plot()
```
Scatter plots are just as easy; you just need to specify the `x` and `y` columns.
```
df.plot.scatter(x='Item Price', y='Items Sold')
```
Bar charts are just as simple and work the same way.
```
df.plot.bar(x='Sales Person', y='Items Sold')
```
Box plots are very informative. The box spans the interquartile range: the top of the box marks the 75th percentile and the bottom marks the 25th percentile, with a line inside indicating the median. The whiskers extend toward the maximum and minimum values, and any points far outside this range are marked by circles as outliers.
```
df.plot.box()
```
There's also a [Kernel Density](https://datavizcatalogue.com/methods/density_plot.html) plot. You can also use the `df.plot.kde()` method to get the same plot.
```
df.plot.density()
```
We can also plot on a single column, by selecting it by name from the data frame.
```
df["Item Price"].plot.box()
```
Grouped data frames can also be plotted.
```
df.groupby("Sales Person").sum().sort_values("Item Price", ascending=False).plot(kind='bar')
```
### Plot Customization
We can customize our plots to make them easier to read and understand.
One simple, yet effective, thing to do is to add a title and name your axes.
```
plot = df.plot(title='Sales Plot')
plot.set_xlabel('Count')
plot.set_ylabel('Price')
```
We can change colors of our plots.
```
df.plot.scatter(x='Item Price', y='Items Sold', c='red')
```
# DefinedAEpTandZ0 media example
```
%load_ext autoreload
%autoreload 2
import skrf as rf
import skrf.mathFunctions as mf
import numpy as np
from numpy import real, log, log10, sum, absolute, pi, sqrt
import matplotlib.pyplot as plt
from matplotlib.ticker import AutoMinorLocator
from scipy.optimize import minimize
rf.stylely()
```
## Measurement of two CPWG lines with different lengths
The measurements were performed on 21 March 2017 on an Anritsu MS46524B 20 GHz Vector Network Analyzer. The setup is a linear frequency sweep from 1 MHz to 10 GHz with 10,000 points. Output power is 0 dBm, the IF bandwidth is 1 kHz, and neither averaging nor smoothing is used.
CPWGxxx is a coplanar waveguide with ground, L long and W wide, with a G wide gap to the top ground, made of T thick copper on an H high substrate with top and bottom ground planes. A closely spaced via wall is placed on both sides of the line, and the top and bottom ground planes are connected by many vias.
| Name | L (mm) | W (mm) | G (mm) | H (mm) | T (um) | Substrate |
| :--- | ---: | ---: | ---: | ---: | ---: | :--- |
| MSL100 | 100 | 1.70 | 0.50 | 1.55 | 50 | FR-4 |
| MSL200 | 200 | 1.70 | 0.50 | 1.55 | 50 | FR-4 |
The milling of the artwork is performed mechanically, with a lateral wall angle of 45°.
The relative permittivity of the dielectric was assumed to be approximately 4.5 for design purposes.

```
# Load raw measurements
TL100 = rf.Network('CPWG100.s2p')
TL200 = rf.Network('CPWG200.s2p')
TL100_dc = TL100.extrapolate_to_dc(kind='linear')
TL200_dc = TL200.extrapolate_to_dc(kind='linear')
plt.figure()
plt.suptitle('Raw measurement')
TL100.plot_s_db()
TL200.plot_s_db()
plt.figure()
t0 = -2
t1 = 4
plt.suptitle('Time domain reflection step response (DC extrapolation)')
ax = plt.subplot(1, 1, 1)
TL100_dc.s11.plot_z_time_step(pad=2000, window='hamming', z0=50, label='TL100', ax=ax, color='0.0')
TL200_dc.s11.plot_z_time_step(pad=2000, window='hamming', z0=50, label='TL200', ax=ax, color='0.2')
ax.set_xlim(t0, t1)
ax.xaxis.set_minor_locator(AutoMinorLocator(10))
ax.yaxis.set_minor_locator(AutoMinorLocator(5))
ax.patch.set_facecolor('1.0')
ax.grid(True, color='0.8', which='minor')
ax.grid(True, color='0.4', which='major')
plt.show()
```
Impedances of the line and of the connector sections can be estimated from the step response.
The line section is not flat; there is some variation in the impedance, which may be caused by manufacturing tolerances and dielectric inhomogeneity.
Note that the delays on the reflection plot are twice the effective section delays, because the wave travels back and forth along the line.
The connector discontinuity is about 50 ps long. The TL100 line plateau (the flat impedance part) is about 450 ps long.
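A quick illustration of the factor of two, with round-trip readings chosen to match the values used below (treat the numbers as illustrative):

```python
# Round-trip delays as they would appear on the TDR step response
round_trip_conn = 0.10e-9  # s, connector discontinuity, back and forth
round_trip_line = 0.90e-9  # s, TL100 plateau, back and forth

# One-way (effective) delays are half the round-trip readings
d_conn = round_trip_conn / 2  # 50 ps
d_line = round_trip_line / 2  # 450 ps
print(d_conn * 1e12, d_line * 1e12)
```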
```
Z_conn = 53.2 # ohm, connector impedance
Z_line = 51.4 # ohm, line plateau impedance
d_conn = 0.05e-9 # s, connector discontinuity delay
d_line = 0.45e-9 # s, line plateau delay, without connectors
```
## Dielectric effective relative permittivity extraction by multiline method
```
#Make the missing reflect measurement
#Reflect only affects sign of the corrected
reflect = TL100.copy()
reflect.s[:,0,0] = 1
reflect.s[:,1,1] = 1
reflect.s[:,1,0] = 0
reflect.s[:,0,1] = 0
# Perform NISTMultilineTRL algorithm
cal = rf.NISTMultilineTRL([TL100, reflect, TL200], [1], [100e-3, 200e-3], er_est=3.0, refl_offset=[0])
plt.figure()
plt.title('Corrected lines')
cal.apply_cal(TL100).plot_s_db()
cal.apply_cal(TL200).plot_s_db()
plt.show()
```
The calibration results show a very low residual noise floor; the error model is well fitted.
```
freq = TL100.frequency
f = TL100.frequency.f
f_ghz = TL100.frequency.f/1e9
L = 0.1
A = 0.0
f_A = 1e9
ep_r0 = 2.0
tanD0 = 0.001
f_ep = 1e9
x0 = [ep_r0, tanD0]
ep_r_mea = cal.er_eff.real
A_mea = 20/log(10)*cal.gamma.real
def model(x, freq, ep_r_mea, A_mea, f_ep):
ep_r, tanD = x[0], x[1]
m = rf.media.DefinedAEpTandZ0(frequency=freq, ep_r=ep_r, tanD=tanD, Z0=50,
f_low=1e3, f_high=1e18, f_ep=f_ep, model='djordjevicsvensson')
ep_r_mod = m.ep_r_f.real
A_mod = m.alpha * log(10)/20
return sum((ep_r_mod - ep_r_mea)**2) + 0.001*sum((20/log(10)*A_mod - A_mea)**2)
res = minimize(model, x0, args=(TL100.frequency, ep_r_mea, A_mea, f_ep),
bounds=[(2, 4), (0.001, 0.013)])
ep_r, tanD = res.x[0], res.x[1]
print('epr={:.3f}, tand={:.4f} at {:.1f} GHz.'.format(ep_r, tanD, f_ep * 1e-9))
m = rf.media.DefinedAEpTandZ0(frequency=freq, ep_r=ep_r, tanD=tanD, Z0=50,
f_low=1e3, f_high=1e18, f_ep=f_ep, model='djordjevicsvensson')
plt.figure()
plt.suptitle('Effective relative permittivity and attenuation')
plt.subplot(2,1,1)
plt.ylabel('$\epsilon_{r,eff}$')
plt.plot(f_ghz, ep_r_mea, label='measured')
plt.plot(f_ghz, m.ep_r_f.real, label='model')
plt.legend()
plt.subplot(2,1,2)
plt.xlabel('Frequency [GHz]')
plt.ylabel('A (dB/m)')
plt.plot(f_ghz, A_mea, label='measured')
plt.plot(f_ghz, 20/log(10)*m.alpha, label='model')
plt.legend()
plt.show()
```
Relative permittivity $\epsilon_{r,eff}$ and attenuation $A$ show reasonable agreement.
A better agreement could be achieved by implementing the Kirschning and Jansen microstrip dispersion model or by using a linear correction.
## Connectors effects estimation
```
# note: a half line is embedded in connector network
coefs = cal.coefs
r = mf.sqrt_phase_unwrap(coefs['forward reflection tracking'])
s1 = np.array([[coefs['forward directivity'],r],
[r, coefs['forward source match']]]).transpose()
conn = TL100.copy()
conn.name = 'Connector'
conn.s = s1
# delay estimation,
phi_conn = (np.angle(conn.s[:500,1,0]))
z = np.polyfit(f[:500], phi_conn, 1)
p = np.poly1d(z)
delay = -z[0]/(2*np.pi)
print('Connector + half thru delay: {:.0f} ps'.format(delay * 1e12))
print('TDR-read half thru delay: {:.0f} ps'.format(d_line/2 * 1e12))
d_conn_p = delay - d_line/2
print('Connector delay: {:.0f} ps'.format(d_conn_p * 1e12))
# connector model with guessed loss
half = m.line(d_line/2, 's', z0=Z_line)
mc = rf.media.DefinedAEpTandZ0(m.frequency, ep_r=1, tanD=0.025, Z0=50,
f_low=1e3, f_high=1e18, f_ep=f_ep, model='djordjevicsvensson')
left = mc.line(d_conn_p, 's', z0=Z_conn)
right = left.flipped()
check = mc.thru() ** left ** half ** mc.thru()
plt.figure()
plt.suptitle('Connector + half thru comparison')
plt.subplot(2,1,1)
conn.plot_s_deg(1, 0, label='measured')
check.plot_s_deg(1, 0, label='model')
plt.ylabel('phase (rad)')
plt.legend()
plt.subplot(2,1,2)
conn.plot_s_db(1, 0, label='Measured')
check.plot_s_db(1, 0, label='Model')
plt.xlabel('Frequency (GHz)')
plt.ylabel('Insertion Loss (dB)')
plt.legend()
plt.show()
```
The connector + half thru plots show reasonable agreement between the calibration results and the model. There is a phase jump in the calibration results.
## Final check
```
DUT = m.line(d_line, 's', Z_line)
DUT.name = 'model'
Check = m.thru() ** left ** DUT ** right ** m.thru()
Check.name = 'model with connectors'
plt.figure()
TL100.plot_s_db()
Check.plot_s_db(1,0, color='k')
Check.plot_s_db(0,0, color='k')
plt.show()
Check_dc = Check.extrapolate_to_dc(kind='linear')
plt.figure()
plt.suptitle('Time domain step-response')
ax = plt.subplot(1,1,1)
TL100_dc.s11.plot_z_time_step(pad=2000, window='hamming', label='Measured', ax=ax, color='k')
Check_dc.s11.plot_z_time_step(pad=2000, window='hamming', label='Model', ax=ax, color='b')
t0 = -2
t1 = 4
ax.set_xlim(t0, t1)
ax.xaxis.set_minor_locator(AutoMinorLocator(10))
ax.yaxis.set_minor_locator(AutoMinorLocator(5))
ax.patch.set_facecolor('1.0')
ax.grid(True, color='0.8', which='minor')
ax.grid(True, color='0.5', which='major')
```
The plots show reasonable agreement between model and measurement up to 4 GHz.
Further work may include implementing a CPWG medium or modelling the line with more sections to account for the impedance variation vs. position.
# QCodes example with Mercury iPS
## Initial instantiation/connection
```
from qcodes.instrument_drivers.oxford.MercuryiPS_VISA import MercuryiPS
from time import sleep
# Note that the MercuryiPS_VISA is a VISA instrument using
# a socket connection. The VISA resource name therefore
# contains the port number and the word 'SOCKET'
mips = MercuryiPS('mips', 'TCPIP0::192.168.15.106::7020::SOCKET')
```
## Basic driver idea
The driver mainly deals with **field values** in Tesla. The driver is aware of the field values in three coordinate systems, cartesian, spherical, and cylindrical. The driver thus exposes the field coordinates x, y, z, phi, theta, rho, and r. Each coordinate comes in two versions: **target** and **measured**.
The idea is that the magnetic field is always changed in two steps; first a target is set, then the magnet is asked to ramp to said target.
## Safe regions
In addition to the safety limits baked into the physical instrument, the driver can accept a safety limit function provided by the user. Upon receiving a new field target, the function checks whether the target is inside an allowed region.
The limit function must take input arguments Bx, By, Bz (in Tesla) and return a boolean that tells us whether that field value is safe.
```
# example: the safe region is a sphere
import numpy as np
def spherical_limit(x, y, z):
"""
Safe region is a sphere of radius 1 T
"""
return np.sqrt(x**2 + y**2 + z**2) <= 1
# assign the limit function (this can also be done at init)
mips.set_new_field_limits(spherical_limit)
```
## Two different ramps
The driver can perform the ramp in two different ways: *simultaneous* ramping or *safe* ramping.
When simultaneously ramping, all three field components are ramped at the same time.
This method is non-blocking, and it is thus possible to query the field while it is ramping. The method does, however, **not** guarantee that the field stays inside the allowed region during the ramp. If the different axes have different ramp speeds, this is a real risk.
When safely ramping, all field components that ramp *towards* the origin are ramped before those that ramp *away from* it. The ramp is thus sequential and blocking, but if the safe region is convex (and contains the origin), you are guaranteed that the field never leaves the safe region.
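The ordering rule can be sketched in plain Python. This is only an illustration of the idea (`safe_ramp_order` is not part of the driver API):

```python
def safe_ramp_order(measured, target):
    """Axis names in the order a safe ramp visits them:
    components moving toward the origin first, then the rest."""
    toward, away = [], []
    for axis in ("x", "y", "z"):
        # A component moves toward the origin if its magnitude shrinks
        if abs(target[axis]) < abs(measured[axis]):
            toward.append(axis)
        else:
            away.append(axis)
    return toward + away

order = safe_ramp_order(measured={"x": 1.0, "y": 0.0, "z": 0.2},
                        target={"x": 0.0, "y": 0.5, "z": 0.1})
print(order)  # ['x', 'z', 'y']: x and z shrink, so they ramp first
```

Ramping the shrinking components first keeps the field magnitude from growing before the outward-bound components move.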
## Parameter overview
```
mips.print_readable_snapshot(update=True)
```
## Ramp examples
### First example: invalid targets
```
mips.x_target(1) # so far, so good
try:
mips.y_target(0.5) # this takes us out of the unit sphere
except ValueError as e:
print("Can not set that")
# reset and try in a different coordinate system
mips.x_target(0)
try:
mips.r_target(1.1)
except ValueError as e:
print("Can not set that")
```
### Second example: simul ramps to the origin
First we ramp the field to Bx = 0.1 T, By = 0, Bz = 0, then rotate out to theta=45, phi=30, and finally ramp it down to zero while measuring r, theta, and phi.
#### STEP A
```
mips.GRPX.field_ramp_rate(0.01)
mips.GRPY.field_ramp_rate(0.01)
mips.GRPZ.field_ramp_rate(0.01)
mips.x_target(0.1)
mips.y_target(0)
mips.z_target(0)
mips.ramp(mode='simul')
# since simul mode is non-blocking,
# we can read out during the ramp
while mips.is_ramping():
print(f'Ramping X to {mips.x_target()} T, now at {mips.x_measured()} T')
sleep(1)
sleep(1)
print(f'Done ramping, now at {mips.x_measured()} T')
```
#### STEP B
Note that since the magnet itself has no notion of any other coordinate system than cartesian coordinates, it does **NOT** follow a path where r is constant. The user must **MANUALLY** ensure to break up a ramp where r is meant to be constant into sufficiently many small steps.
```
mips.theta_target(45)
mips.phi_target(30)
mips.r_target(0.1)
mips.ramp(mode='simul')
while mips.is_ramping():
print(f"Ramping... r: {mips.r_measured():.6f} T, "
f"theta: {mips.theta_measured():.2f}, "
f"phi: {mips.phi_measured():.2f}")
sleep(1)
print(f"Done... r: {mips.r_measured():.6f} T, "
f"theta: {mips.theta_measured():.2f}, "
f"phi: {mips.phi_measured():.2f}")
```
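A sketch of how such a breakup could be generated: `constant_r_waypoints` is a hypothetical helper, not part of the driver, producing intermediate targets along a constant-r arc (angles in degrees, as the driver uses):

```python
import numpy as np

def constant_r_waypoints(r, theta_start, theta_stop, phi, n_steps):
    """Intermediate (theta, phi, r) targets along a constant-r arc,
    meant to be fed to the driver one at a time."""
    thetas = np.linspace(theta_start, theta_stop, n_steps + 1)
    return [(float(t), phi, r) for t in thetas[1:]]

# e.g. rotate from theta=0 to theta=45 at r=0.1 T in 9 small steps
waypoints = constant_r_waypoints(0.1, 0.0, 45.0, 30.0, 9)
print(len(waypoints), waypoints[-1])  # 9 (45.0, 30.0, 0.1)
```

Each waypoint would then go through `theta_target`/`phi_target`/`r_target` followed by a `ramp` call; the smaller the steps, the closer the actual path stays to constant r.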
#### STEP C
```
mips.theta_target(45)
mips.phi_target(30)
mips.r_target(0)
mips.ramp(mode='simul')
# since simul mode is non-blocking,
# we can read out during the ramp
while mips.is_ramping():
print(f"Ramping... r: {mips.r_measured():.6f} T, "
f"theta: {mips.theta_measured():.2f}, "
f"phi: {mips.phi_measured():.2f}")
sleep(1)
print(f"Done... r: {mips.r_measured():.6f} T, "
f"theta: {mips.theta_measured():.2f}, "
f"phi: {mips.phi_measured():.2f}")
```
### Third example: safe ramp away from the origin
At the origin, we cannot meaningfully **measure** what theta and phi are, but the target values are persistent.
If we ramp up again and measure, we should thus get back to our target values. We use a blocking safe ramp for this (just to also test/show a blocking ramp).
```
mips.r_target(0.05)
mips.ramp(mode='safe')
print('Ramped back out again.')
print(f'Field values are: theta: {mips.theta_measured()}, phi: {mips.phi_measured()}')
```
### That's it for now! Happy sweeping.
```
# sweep back down for good measure
mips.x_target(0)
mips.y_target(0)
mips.z_target(0)
mips.ramp(mode='safe')
mips.close()
```
#### Define your project and region below. If you are not authenticated to GCP, authenticate by uncommenting the line below the definitions.
```
PROJECT_ID = "SOME_PROJECT"
REGION = "YOUR_REGION" #though us-central is cheaper
PIPELINE_ROOT = "gs://SOME_BUCKET/SOME_FOLDER"
#!gcloud auth login
```
#### Imports
Our imports:
* Artifact,
* Dataset,
* Input,
* Model,
* Output,
* Metrics,
* ClassificationMetrics,
Are powerful, metadata-rich handles for `Artifact` objects or their subclasses. By using them, as shown below, we can manage paths, and save and download the objects. The paths used are actual system paths, since the artifacts are saved and shared between components via [GCS Fuse](https://cloud.google.com/storage/docs/gcs-fuse).
`component` is a decorator that transforms a function into a KFP component. It allows us, for example, to set dependencies and base images for each of our components, with an easy-to-use and simple API.
```
from typing import NamedTuple
from kfp.v2 import dsl
from kfp.v2.dsl import (Artifact,
Dataset,
Input,
Model,
Output,
Metrics,
ClassificationMetrics,
component)
from kfp.v2 import compiler
```
From the [GCP AI Platform official GitHub](https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/ai-platform-unified/notebooks/unofficial/pipelines/lightweight_functions_component_io_kfp.ipynb), accessed 2021-05-19:
KFP Python function-based components
A Kubeflow pipeline component is a self-contained set of code that performs one step in your ML workflow. A pipeline component is composed of:
* The component code, which implements the logic needed to perform a step in your ML workflow.
* A component specification, which defines the following:
  * The component’s metadata, its name and description.
  * The component’s interface, the component’s inputs and outputs.
  * The component’s implementation, the Docker container image to run, how to pass inputs to your component code, and how to get the component’s outputs.
Lightweight Python function-based components make it easier to iterate quickly by letting you build your component code as a Python function and generating the component specification for you. This notebook shows how to create Python function-based components for use in Vertex AI Pipelines.
Python function-based components use the Kubeflow Pipelines SDK to handle the complexity of passing inputs into your component and passing your function’s outputs back to your pipeline.
There are two categories of inputs/outputs supported in Python function-based components: artifacts and parameters.
* Parameters are passed to your component by value and typically contain int, float, bool, or small string values.
* Artifacts are passed to your component as a reference to a path, to which you can write a file or a subdirectory structure. In addition to the artifact’s data, you can also read and write the artifact’s metadata. This lets you record arbitrary key-value pairs for an artifact such as the accuracy of a trained model, and use metadata in downstream components – for example, you could use metadata to decide if a model is accurate enough to deploy for predictions.
#### Our use case
Is to create three components that will let us create and save a dataset, train a model, and evaluate it, saving meaningful classification plots for them.
As you will see, our components have dependencies on `pandas`, `sklearn`, and `xgboost`.
We use `Output[Dataset]` (or Model, or ClassificationMetrics) objects to create unique filepaths to save objects during the component's execution. We can then access them as:
```python
some_op = component()
some_output_object = some_op.outputs["some_object_name"]
```
Below we create two `Output[Dataset]` objects to save the train and test splits of our data. The next operators will receive some inputs and handle the previously saved files in their processing steps.
```
@component(
packages_to_install = [
"pandas",
"scikit-learn"
],
)
def get_data(
dataset_train: Output[Dataset],
dataset_test: Output[Dataset]
):
from sklearn import datasets
from sklearn.model_selection import train_test_split as tts
import pandas as pd
# import some data to play with
data_raw = datasets.load_breast_cancer()
data = pd.DataFrame(data_raw.data, columns=data_raw.feature_names)
data["target"] = data_raw.target
train, test = tts(data, test_size=0.3)
train.to_csv(dataset_train.path)
test.to_csv(dataset_test.path)
```
#### The training component
Will receive an `Input[Dataset]` object, taken from the outputs of the `get_data()` operator. It outputs an `Output[Model]` object, to which some metadata about the model is written.
We will use the `Output[Model]` from the training component and the test `Output[Dataset]` from `get_data()` to evaluate the model.
```
@component(
packages_to_install = [
"pandas",
"scikit-learn",
"xgboost"
],
)
def train_xgb_model(
dataset: Input[Dataset],
model_artifact: Output[Model]
):
from xgboost import XGBClassifier
import pandas as pd
data = pd.read_csv(dataset.path)
model = XGBClassifier(
objective="binary:logistic"
)
model.fit(
data.drop(columns=["target"]),
data.target,
)
score = model.score(
data.drop(columns=["target"]),
data.target,
)
model_artifact.metadata["train_score"] = float(score)
model_artifact.metadata["framework"] = "XGBoost"
model.save_model(model_artifact.path)
```
#### To evaluate the model
We will receive the inputs and create some specific outputs. `Output[ClassificationMetrics]` lets us create rich plots in the UI, and `Output[Metrics]` lets us log arbitrary metrics. We will use `sklearn` to gather the metrics and then convert everything to lists so that the Vertex AI runner can plot them.
```
@component(
packages_to_install = [
"pandas",
"scikit-learn",
"xgboost"
],
)
def eval_model(
test_set: Input[Dataset],
xgb_model: Input[Model],
metrics: Output[ClassificationMetrics],
smetrics: Output[Metrics]
):
from xgboost import XGBClassifier
import pandas as pd
data = pd.read_csv(test_set.path)
model = XGBClassifier()
model.load_model(xgb_model.path)
score = model.score(
data.drop(columns=["target"]),
data.target,
)
from sklearn.metrics import roc_curve
y_scores = model.predict_proba(data.drop(columns=["target"]))[:, 1]
fpr, tpr, thresholds = roc_curve(
y_true=data.target.to_numpy(), y_score=y_scores, pos_label=True
)
metrics.log_roc_curve(fpr.tolist(), tpr.tolist(), thresholds.tolist())
from sklearn.metrics import confusion_matrix
y_pred = model.predict(data.drop(columns=["target"]))
metrics.log_confusion_matrix(
["False", "True"],
confusion_matrix(
data.target, y_pred
).tolist(), # .tolist() to convert np array to list.
)
xgb_model.metadata["test_score"] = float(score)
smetrics.log_metric("score", float(score))
```
#### The final step is to create the Pipeline
Notice that we consume the outputs of previous steps here. We then compile the pipeline into a `.json` file.
```
@dsl.pipeline(
# Default pipeline root. You can override it when submitting the pipeline.
pipeline_root=PIPELINE_ROOT,
# A name for the pipeline. Use to determine the pipeline Context.
name="pipeline-test-1",
)
def pipeline():
dataset_op = get_data()
train_op = train_xgb_model(dataset_op.outputs["dataset_train"])
eval_op = eval_model(
test_set=dataset_op.outputs["dataset_test"],
xgb_model=train_op.outputs["model_artifact"]
)
compiler.Compiler().compile(pipeline_func=pipeline,
package_path='xgb_pipe.json')
```
#### If you are authenticated to GCP and set everything up there
This snippet should create the run and a link for you to get to it.
Also, be sure your Vertex AI API is activated.
```
from kfp.v2.google.client import AIPlatformClient
api_client = AIPlatformClient(
project_id=PROJECT_ID,
region=REGION
)
response = api_client.create_run_from_job_spec(
'xgb_pipe.json',
)
```
# Knowledge Graph Embeddings
Word embeddings aim at capturing the meaning of words based on very large corpora; however, there are decades of experience and approaches that have tried to capture this meaning by structuring knowledge into semantic nets, ontologies and graphs.
| | Neural | Symbolic |
| ------------- |-------------| -----|
| **representation** | vectors | symbols (URIs) |
| **input** | large corpora | human editors (Knowledge engineers) |
| **interpretability** | linked to model and training dataset | requires understanding of schema |
| **alignability** | parallel (annotated) corpora | heuristics + manual |
| **composability** | combine vectors | merge graphs |
| **extensibility** | fixed vocabulary | need to know how to link new nodes |
| **certainty** | fuzzy | exact |
| **debuggability** | 'fix' training data? | edit graph |
In recent years, many new approaches have been proposed to derive 'neural' representations for existing knowledge graphs. Think of this as trying to capture the knowledge encoded in the KG in a form that is easier to use in deep learning models.
- [TransE (2013)](http://papers.nips.cc/paper/5071-translating-embeddings-for-modeling-multi-relational-data.pdf): try to assign an embedding to nodes and relations, so that $h + r$ is close to $t$, where $h$ and $t$ are nodes in the graph and $r$ is an edge. In the RDF world, this is simply an RDF triple where $h$ is the subject $r$ is the property and $t$ is the object of the triple.
- [HolE (2016)](http://arxiv.org/abs/1510.04935): Variant of TransE, but uses a different operator (circular correlation) to represent pairs of entities.
- [RDF2Vec(2016)](https://ub-madoc.bib.uni-mannheim.de/41307/1/Ristoski_RDF2Vec.pdf): applies word2vec to random walks on an RDF graph (essentially paths or sequences of nodes in the graph).
- [Graph convolutions(2018)](http://arxiv.org/abs/1703.06103): apply convolutional operations on graphs to learn the embeddings.
- [Neural message passing(2018)](https://arxiv.org/abs/1704.01212): merges two strands of research on KG embeddings: recurrent and convolutional approaches.
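The translation idea behind TransE above can be sketched in a few lines of numpy. This is only an illustration of the scoring function $\|h + r - t\|$, not the full training procedure (which also needs negative sampling and margin-based ranking); all values here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# toy embeddings for two nodes (h, t) and one relation (r)
h = rng.normal(size=dim)
r = rng.normal(size=dim)
t = h + r + rng.normal(scale=0.01, size=dim)  # t lies close to h + r

def transe_score(h, r, t):
    """TransE plausibility: a smaller distance means a more plausible triple."""
    return np.linalg.norm(h + r - t)

good = transe_score(h, r, t)                      # true triple: small distance
bad = transe_score(h, r, rng.normal(size=dim))    # random, unrelated tail
print(good < bad)  # True
```

During training, the embeddings are adjusted so that observed triples obtain low scores and corrupted triples obtain high ones.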
For more background: [Nickel, M., Murphy, K., Tresp, V., & Gabrilovich, E. (2016). A review of relational machine learning for knowledge graphs. Proceedings of the IEEE, 104(1), 11–33. https://doi.org/10.1109/JPROC.2015.2483592](http://www.dbs.ifi.lmu.de/~tresp/papers/1503.00759v3.pdf) provides a good overview (up to 2016).
# Creating embeddings for WordNet
In this section, we go through the steps of generating word and concept embeddings using WordNet, a lexico-semantic knowledge graph.
0. Choose (or implement) a KG embedding algorithm
1. Convert the KG into format required by embedding algorithm
2. Execute the training
3. Evaluate/inspect results
## Choose embedding algorithm: HolE
We will use an [existing implementation of the `HolE` algorithm available on GitHub](https://github.com/mnick/holographic-embeddings).
### Install `scikit-kge`
The `holographic-embeddings` repo is actually just a wrapper around `scikit-kge` or [SKGE](https://github.com/mnick/scikit-kge), a library that implements a few KG embedding algorithms. First, we need to install `scikit-kge` as a library in our environment. Execute the following cells to clone the repository and install the library.
```
# make sure we are in the right folder to perform the git clone
%cd /content/
!git clone https://github.com/hybridNLP2018/scikit-kge
%cd scikit-kge
# install a dependency of scikit-kge on the colaboratory environment, needed to correctly build scikit-kge
!pip install nose
# now build a source distribution for the project
!python setup.py sdist
```
Executing the previous cell should produce a lot of output as the project is built. Towards the end you should see something like:
```
Writing scikit-kge-0.1/setup.cfg
creating dist
Creating tar archive
```
This should have created a `tar.gz` file in the `dist` subfolder:
```
!ls dist/
```
which we can install on the local environment by using `pip`, the python package manager.
```
!pip install dist/scikit-kge-0.1.tar.gz
%cd /content
```
### Install and inspect `holographic_embeddings` repo
Now that `skge` is installed on this environment, we are ready to clone the [holographic-embeddings](https://github.com/mnick/holographic-embeddings) repository, which will enable us to train `HolE` embeddings.
```
# let's go back to the main /content folder and clone the holE repo
%cd /content/
!git clone https://github.com/mnick/holographic-embeddings
```
If you want, you can browse the contents of this repo on GitHub, or execute the following to see how you can start training embeddings for the WordNet 1.8 knowledge graph. In the following sections we'll go into more detail about how to train embeddings, so there is no need to actually execute this training just yet.
```
%less holographic-embeddings/run_hole_wn18.sh
```
You should see a section on the bottom of the screen with the contents of the `run_hole_wn18.sh` file. The main execution is:
```
python kg/run_hole.py --fin data/wn18.bin \
--test-all 50 --nb 100 --me 500 \
--margin 0.2 --lr 0.1 --ncomp 150
```
which is just executing the `kg/run_hole.py` script on the input data `data/wn18.bin` and passing various arguments to control how to train and produce the embeddings:
* `me`: states the number of epochs to train for (i.e. number of times to go through the input dataset)
* `ncomp`: specifies the dimension of the embeddings, each embedding will be a vector of 150 dimensions
* `nb`: number of batches
* `test-all`: specifies how often to run validation of the intermediate embeddings. In this case, every 50 epochs.
## Convert WordNet KG to required input
### KG Input format required by SKGE
SKGE requires a graph to be represented as a serialized python dictionary with the following structure:
* `relations`: a list of relation names (the named edges in the graph)
* `entities`: a list of entity names (the nodes in the graph),
* `train_subs`: a list of triples of the form `(head_id, tail_id, rel_id)`, where `head_id` and `tail_id` refer to the index in the `entities` list and `rel_id` refers to the index in the `relations` list. This is the list of triples that will be used to train the embeddings.
* `valid_subs`: a list of triples of the same form as `train_subs`. These are used to validate the embeddings during training (and thus to tune hyperparameters).
* `test_subs`: a list of triples of the same form as `train_subs`. These are used to test the learned embeddings.
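As a minimal synthetic illustration of this structure (the entity and relation names here are made up for the example, not taken from WordNet):

```python
import pickle

# two entities, one relation, and a single training triple (head_id, tail_id, rel_id)
toy_graph = {
    'entities': ['dog.n.01', 'canine.n.02'],
    'relations': ['hypernym'],
    'train_subs': [(0, 1, 0)],  # entity 0 --relation 0--> entity 1
    'valid_subs': [],
    'test_subs': [],
}

# serialize the dictionary the same way SKGE expects to read it
with open('toy_graph.bin', 'wb') as fout:
    pickle.dump(toy_graph, fout, protocol=pickle.HIGHEST_PROTOCOL)

with open('toy_graph.bin', 'rb') as fin:
    loaded = pickle.load(fin)
print(loaded['train_subs'])  # [(0, 1, 0)]
```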
The `holographic-embeddings` GitHub repo comes with an example input file: `data/wn18.bin` for WordNet 1.8. In the following executable cell, we show how to read and inspect data:
```
import pickle
import os
with open('holographic-embeddings/data/wn18.bin', 'rb') as fin:
    wn18_data = pickle.load(fin)
for k in wn18_data:
    print(k, type(wn18_data[k]), len(wn18_data[k]), wn18_data[k][-3:])
```
The expected output should be similar to:
```
relations <class 'list'> 18 ['_synset_domain_region_of', '_verb_group', '_similar_to']
train_subs <class 'list'> 141442 [(5395, 37068, 9), (5439, 35322, 11), (28914, 1188, 10)]
entities <class 'list'> 40943 ['01164618', '02371344', '03788703']
test_subs <class 'list'> 5000 [(17206, 33576, 0), (1179, 11861, 0), (30287, 1443, 1)]
valid_subs <class 'list'> 5000 [(351, 25434, 0), (3951, 2114, 7), (756, 14490, 0)]
```
This shows that WordNet 1.8 has been represented as a graph of 40943 nodes (which we assume correspond to the synsets) interlinked using 18 relation types. The full set of triples has been split into 141K triples for training, and 5K triples each for testing and validation.
### Converting WordNet 3.0 into the required input format
WordNet 1.8 is a bit dated, and it is useful to have experience converting your own KG into the required input format. Hence, rather than simply reusing the `wn18.bin` input file, we will generate our own directly from the [NLTK WordNet API](http://www.nltk.org/howto/wordnet.html).
First we need to download WordNet:
```
import nltk
nltk.download('wordnet')
```
#### Explore WordNet API
Now that we have the KG, we can use the WordNet API to explore the graph. Refer to the [howto doc](http://www.nltk.org/howto/wordnet.html) for a more in-depth overview; here we only show a few methods that will be needed to generate our input file.
```
from nltk.corpus import wordnet as wn
```
The main nodes in WordNet are called synsets (synonym sets). These correspond roughly to *concepts*. You can find all the synsets related to a word like this:
```
wn.synsets('dog')
```
The output from the cell above shows how synsets are identified by the NLTK WordNet API. They have the form `<main-lemma>.<POS-code>.<sense-number>`. As far as we are aware, this is a format chosen by the implementors of the NLTK WordNet API and other APIs may choose diverging ways to refer to synsets.
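Since lemma parts of these identifiers can themselves contain dots, a safe way to take them apart, as a small sketch, is to split from the right (the second input string below is illustrative, not necessarily a real synset):

```python
def parse_synset_name(name):
    """Split an NLTK-style synset identifier into (lemma, POS code, sense number)."""
    # rsplit keeps any dots inside the lemma part intact
    lemma, pos, sense = name.rsplit('.', 2)
    return lemma, pos, int(sense)

print(parse_synset_name('dog.n.01'))     # ('dog', 'n', 1)
print(parse_synset_name('calif..n.01'))  # lemma part containing a dot
```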
You can get a list of all the synsets as follows (we only show the first 5):
```
for synset in list(wn.all_synsets())[:5]:
    print(synset.name())
```
Similarly, you can also get a list of all the lemma names (again we only show 5):
```
for lemma in list(wn.all_lemma_names())[5000:5005]:
    print(lemma)
```
For a given synset, you can find related synsets or lemmas by calling the functions for each relation type. Below we provide a couple of examples for the first sense of the adjective *adaxial*. In the first example, we see that this synset belongs to the `topic domain` `biology.n.01`, which is again a synset. In the second example, we see that it has two lemmas, which are relative to the synset. In the third example, we retrieve the lemmas in a form that is not relative to the synset, which is the one we will use later on.
```
wn.synset('adaxial.a.01').topic_domains()
wn.synset('adaxial.a.01').lemmas()
wn.synset('adaxial.a.01').lemma_names()
```
#### Entities and relations to include
The main nodes in WordNet are the synsets; however, lemmas can also be considered nodes in the graph. Hence, you need to decide which nodes to include. Since we are interested in capturing as much information as can be provided by WordNet, we will include both synsets and lemmas.
WordNet defines a large number of relations between synsets and lemmas. Again, you can decide to include all or just some of these. One particularity of WordNet is that many relations are defined twice: e.g. hypernym and hyponym are the exact same relation, but in reverse order. Since this is not really providing additional information, we only include such relations once. The following cell defines all the relations we will be taking into account. We represent these as python dictionaries, where the keys are the name of the relation and the values are functions that accept a `head` entity and produce a list of `tail` entities for that specific relation:
```
syn_relations = {
    'hyponym': lambda syn: syn.hyponyms(),
    'instance_hyponym': lambda syn: syn.instance_hyponyms(),
    'member_meronym': lambda syn: syn.member_meronyms(),
    'has_part': lambda syn: syn.part_meronyms(),
    'topic_domain': lambda syn: syn.topic_domains(),
    'usage_domain': lambda syn: syn.usage_domains(),
    '_member_of_domain_region': lambda syn: syn.region_domains(),
    'attribute': lambda syn: syn.attributes(),
    'entailment': lambda syn: syn.entailments(),
    'cause': lambda syn: syn.causes(),
    'also_see': lambda syn: syn.also_sees(),
    'verb_group': lambda syn: syn.verb_groups(),
    'similar_to': lambda syn: syn.similar_tos()
}
lem_relations = {
    'antonym': lambda lem: lem.antonyms(),
    'derivationally_related_form': lambda lem: lem.derivationally_related_forms(),
    'pertainym': lambda lem: lem.pertainyms()
}
syn2lem_relations = {
    'lemma': lambda syn: syn.lemma_names()
}
#### Triple generation
We are now ready to generate triples by using the WordNet API. Recall that `skge` requires triples of the form `(head_id, tail_id, rel_id)`, hence we will need to have some way of mapping entity (synset and lemma) names and relations types to unique ids. We therefore assume we will have an `entity_id_map` and a `rel_id_map`, which will map the entity name (or relation type) to an id. The following two cells implement functions which will iterate through the synsets and relations to generate the triples:
```
def generate_syn_triples(entity_id_map, rel_id_map):
    result = []
    for synset in list(wn.all_synsets()):
        h_id = entity_id_map.get(synset.name())
        if h_id is None:
            print('No entity id for ', synset)
            continue
        for synrel, srfn in syn_relations.items():
            r_id = rel_id_map.get(synrel)
            if r_id is None:
                print('No rel id for', synrel)
                continue
            for obj in srfn(synset):
                t_id = entity_id_map.get(obj.name())
                if t_id is None:
                    print('No entity id for object', obj)
                    continue
                result.append((h_id, t_id, r_id))
        for rel, fn in syn2lem_relations.items():
            r_id = rel_id_map.get(rel)
            if r_id is None:
                print('No rel id for', rel)
                continue
            for obj in fn(synset):
                lem = obj.lower()
                t_id = entity_id_map.get(lem)
                if t_id is None:
                    print('No entity id for object', obj, 'lowercased:', lem)
                    continue
                result.append((h_id, t_id, r_id))
    return result
def generate_lem_triples(entity_id_map, rel_id_map):
    result = []
    for lemma in list(wn.all_lemma_names()):
        h_id = entity_id_map.get(lemma)
        if h_id is None:
            print('No entity id for lemma', lemma)
            continue
        _lems = wn.lemmas(lemma)
        for lemrel, lrfn in lem_relations.items():
            r_id = rel_id_map.get(lemrel)
            if r_id is None:
                print('No rel id for ', lemrel)
                continue
            for _lem in _lems:
                for obj in lrfn(_lem):
                    t_id = entity_id_map.get(obj.name().lower())
                    if t_id is None:
                        print('No entity id for obj lemma', obj, obj.name())
                        continue
                    result.append((h_id, t_id, r_id))
    return result
```
#### Putting it all together
Now that we have methods for generating lists of triples, we can generate the input dictionary and serialise it. We need to:
* create our lists of entities and relations,
* derive a map from entity and relation names to ids
* generate the triples
* split the triples into training, validation and test subsets
* write the python dict to a serialised file
We implement this in the following method:
```
import random  # for shuffling the list of triples

def wnet30_holE_bin(out):
    """Creates a skge-compatible bin file for training HolE embeddings based on WordNet 3.0"""
    synsets = [synset.name() for synset in wn.all_synsets()]
    lemmas = [lemma for lemma in wn.all_lemma_names()]
    entities = list(synsets + list(set(lemmas)))
    print('Found %s synsets, %s lemmas, hence %s entities' % (len(synsets), len(lemmas), len(entities)))
    entity_id_map = {ent_name: id for id, ent_name in enumerate(entities)}
    n_entity = len(entity_id_map)
    print("N_ENTITY: %d" % n_entity)
    relations = list(list(syn_relations.keys()) + list(lem_relations.keys()) + list(syn2lem_relations.keys()))
    relation_id_map = {rel_name: id for id, rel_name in enumerate(relations)}
    n_rel = len(relation_id_map)
    print("N_REL: %d" % n_rel)
    print('relations', relation_id_map)
    syn_triples = generate_syn_triples(entity_id_map, relation_id_map)
    print("Syn2syn relations", len(syn_triples))
    lem_triples = generate_lem_triples(entity_id_map, relation_id_map)
    print("Lem2lem relations", len(lem_triples))
    all_triples = syn_triples + lem_triples
    print("All triples", len(all_triples))
    random.shuffle(all_triples)
    test_triple = all_triples[:500]
    valid_triple = all_triples[500:1000]
    train_triple = all_triples[1000:]
    to_pickle = {
        "entities": entities,
        "relations": relations,
        "train_subs": train_triple,
        "test_subs": test_triple,
        "valid_subs": valid_triple
    }
    with open(out, 'wb') as handle:
        pickle.dump(to_pickle, handle, protocol=pickle.HIGHEST_PROTOCOL)
    print("wrote to %s" % out)
```
#### Generate `wn30.bin`
Now we are ready to generate the `wn30.bin` file which we can feed to the `HolE` algorithm implementation.
```
out_bin='/content/holographic-embeddings/data/wn30.bin'
wnet30_holE_bin(out_bin)
```
Notice that the resulting dataset now contains 265K entities, compared to 41K in WordNet 1.8 (to be fair, only 118K of the entities are synsets).
## Learn the embeddings
Now, we will use the WordNet 3.0 dataset to learn embeddings for both synsets and lemmas. Since this is fairly slow, we only train for 2 epochs, which can take up to 10 minutes. (In the exercises at the end of this notebook, we provide a link to download pre-computed embeddings which have been trained for 500 epochs.)
```
wn30_holE_out='/content/wn30_holE_2e.bin'
holE_dim=150
num_epochs=2
!python /content/holographic-embeddings/kg/run_hole.py --fin {out_bin} --fout {wn30_holE_out} \
--nb 100 --me {num_epochs} --margin 0.2 --lr 0.1 --ncomp {holE_dim}
```
The output should look similar to:
```
INFO:EX-KG:Fitting model HolE with trainer PairwiseStochasticTrainer and parameters Namespace(afs='sigmoid', fin='/content/holographic-embeddings/data/wn30.bin', fout='/content/wn30_holE_2e.bin', init='nunif', lr=0.1, margin=0.2, me=2, mode='rank', nb=100, ncomp=150, ne=1, no_pairwise=False, rparam=0, sampler='random-mode', test_all=10)
INFO:EX-KG:[ 1] time = 120s, violations = 773683
INFO:EX-KG:[ 2] time = 73s, violations = 334894
INFO:EX-KG:[ 2] time = 73s, violations = 334894
INFO:EX-KG:[ 2] VALID: MRR = 0.11/0.12, Mean Rank = 90012.28/90006.14, Hits@10 = 15.02/15.12
DEBUG:EX-KG:FMRR valid = 0.122450, best = -1.000000
INFO:EX-KG:[ 2] TEST: MRR = 0.11/0.12, Mean Rank = 95344.42/95335.96, Hits@10 = 15.74/15.74
```
## Inspect resulting embeddings
Now that we have trained the model, we can retrieve the embeddings for the entities and inspect them.
### `skge` output file format
The output file is again a pickled serialisation of a python dictionary. It contains the `model` itself, and results for the test and validation runs as well as execution times.
```
with open(wn30_holE_out, 'rb') as fin:
    hole_model = pickle.load(fin)
print(type(hole_model), len(hole_model))
for k in hole_model:
    print(k, type(hole_model[k]))
```
We are interested in the model itself, which is an instance of a `skge.hole.HolE` class and has various parameters. The entity embeddings are stored in parameter `E`, which is essentially a matrix of $n_e \times d$, where $n_e$ is the number of entities and $d$ is the dimension of each vector.
```
model = hole_model['model']
E = model.params['E']
print(type(E), E.shape)
```
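To see how such an embedding matrix can be queried, here is a small sketch of cosine-similarity-based nearest neighbours. A random matrix stands in for `E` so the snippet is self-contained; the same function would work on the trained `model.params['E']`:

```python
import numpy as np

rng = np.random.default_rng(42)
E = rng.normal(size=(1000, 150))  # stand-in for the trained entity matrix

def k_nearest(E, idx, k=5):
    """Indices of the k entities whose embeddings have the highest
    cosine similarity with entity `idx` (excluding itself)."""
    normed = E / np.linalg.norm(E, axis=1, keepdims=True)
    sims = normed @ normed[idx]           # cosine similarity to every row
    order = np.argsort(-sims)             # descending by similarity
    return [i for i in order if i != idx][:k]

print(k_nearest(E, 0))
```

Mapping the returned indices back through the `entities` list gives the neighbouring synset or lemma names.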
### Converting embeddings to more inspectable format
Unfortunately, `skge` does not provide methods for exploring the embedding space (KG embedding libraries are more geared towards relation prediction), so we will convert the embeddings into an easier-to-explore format. We first convert them into a pair of files for the vectors and the vocabulary, and we will then use the `swivel` library to explore the results.
We first read the list of entities; this is our **vocabulary** (i.e. the names of synsets and lemmas for which we have embeddings).
```
with open('/content/holographic-embeddings/data/wn30.bin', 'rb') as fin:
    wn30_data = pickle.load(fin)
entities = wn30_data['entities']
len(entities)
```
Next, we generate a vocab file and a `tsv` file where each line contains the word and a list of $d$ numbers.
```
vec_file = '/content/wn30_holE_2e.tsv'
vocab_file = '/content/wn30_holE_2e.vocab.txt'
with open(vocab_file, 'w', encoding='utf_8') as f:
    for i, w in enumerate(entities):
        word = w.strip()
        print(word, file=f)
with open(vec_file, 'w', encoding='utf_8') as f:
    for i, w in enumerate(entities):
        word = w.strip()
        embedding = E[i]
        print('\t'.join([word] + [str(x) for x in embedding]), file=f)
!wc -l {vec_file}
```
Now that we have these files, we can use `swivel`, which we used in the first notebook, to inspect the embeddings.
#### Download tutorial materials and `swivel` (if necessary)
Download swivel, although you may already have it on your environment if you already executed the first notebook of this tutorial.
```
%cd /content
!git clone https://github.com/HybridNLP2018/tutorial
```
Use the `swivel/text2bin` script to convert the `tsv` embeddings into `swivel`'s binary format.
```
vecbin = '/content/wn30_holE_2e.tsv.bin'
!python /content/tutorial/scripts/swivel/text2bin.py --vocab={vocab_file} --output={vecbin} \
{vec_file}
```
Next, we can load the vectors using `swivel`'s `Vecs` class, which provides easy inspection of neighbors.
```
from tutorial.scripts.swivel import vecs
vectors = vecs.Vecs(vocab_file, vecbin)
```
#### Inspect a few example lemmas and synsets
```
import pandas as pd
pd.DataFrame(vectors.k_neighbors('california'))
wn.synsets('california')
pd.DataFrame(vectors.k_neighbors('california.n.01'))
pd.DataFrame(vectors.k_neighbors('conference'))
pd.DataFrame(vectors.k_neighbors('semantic'))
pd.DataFrame(vectors.k_neighbors('semantic.a.01'))
```
As you can see, the embeddings do not look very good at the moment. In part this is because we only trained the model for 2 epochs. We have pre-calculated a set of HolE embeddings trained for 500 epochs, which you can download and inspect as part of an optional exercise below. Results for these are much better:
| cosine sim | entity |
| ------------- |-------------|
| 1.0000 | lem_california |
| 0.4676 | lem_golden_state |
| 0.4327 | lem_ca |
| 0.4004 | lem_californian |
| 0.3838 | lem_calif. |
| 0.3500 | lem_fade |
| 0.3419 | lem_keystone_state |
| 0.3375 | wn31_antilles.n.01 |
| 0.3356 | wn31_austronesia.n.01 |
| 0.3340 | wn31_overbalance.v.02 |
For the synset for california, we also see 'sensible' results:
| cosine sim | entity |
| ------------- |-------------|
| 1.0000 | wn31_california.n.01 |
| 0.4909 | wn31_nevada.n.01 |
| 0.4673 | wn31_arizona.n.01 |
| 0.4593 | wn31_tennessee.n.01 |
| 0.4587 | wn31_new_hampshire.n.01 |
| 0.4555 | wn31_sierra_nevada.n.02 |
| 0.4073 | wn31_georgia.n.01 |
| 0.4048 | wn31_west_virginia.n.01 |
| 0.3991| wn31_north_carolina.n.01 |
| 0.3977 | wn31_virginia.n.01 |
One thing to notice here is that all of the top 10 closely related entities for `california.n.01` are also synsets. Similarly for lemma `california`, the most closely related entities are also lemmas, although some synsets also made it into the top 10 neighbours. This may indicate a tendency of `HolE` to keep lemmas close to other lemmas and synsets close to other synsets. In general, choices about how nodes in the KG are related will affect how their embeddings are interrelated.
# Conclusion and exercises
In this notebook we provided an overview of recent knowledge graph embedding approaches and showed how to use existing implementations to generate word and concept embeddings for WordNet 3.0.
## Exercise: train embeddings on your own KG
If you have a KG of your own, you can adapt the code shown above to generate a graph representation as expected by `skge` and you can train your embeddings in this way. Popular KGs are Freebase and DBpedia.
## Exercise: inspect pre-calculated WordNet 3.0 embeddings
We have used code similar to the one shown above to train embeddings for 500 epochs using HolE. You can execute the following cells to download and explore these embeddings. The embeddings are about 142MB, so downloading them may take a few minutes.
```
!mkdir /content/vec/
%cd /content/vec/
!wget https://zenodo.org/record/1446214/files/wn-en-3.0-HolE-500e-150d.tar.gz
!tar -xzf wn-en-3.0-HolE-500e-150d.tar.gz
%ls /content/vec
```
The downloaded tar contains a `tsv.bin` and a `vocab` file like the one we created above. We can use it to load the vectors using `swivel`'s `Vecs`:
```
vocab_file = '/content/vec/wn-en-3.1-HolE-500e.vocab.txt'
vecbin = '/content/vec/wn-en-3.1-HolE-500e.tsv.bin'
wnHolE = vecs.Vecs(vocab_file, vecbin)
```
Now you are ready to start exploring. The only thing to note is that we have added a prefix `lem_` to all lemmas and `wn31_` to all synsets, as shown in the following examples:
```
pd.DataFrame(wnHolE.k_neighbors('lem_california'))
pd.DataFrame(wnHolE.k_neighbors('wn31_california.n.01'))
```
.. _nb_repair:
## Repair Operator
The repair operator is mostly problem dependent. Most commonly, it is used to make sure the algorithm only searches the feasible space. It is applied after the offspring have been reproduced. In the following, we use the knapsack problem to demonstrate the repair operator in *pymoo*.
Consider the well-known **Knapsack Problem**, in which a knapsack has to be filled with items without violating the maximum weight constraint. Each item $j$ has a value $b_j \geq 0$ and a weight $w_j \geq 0$ where $j \in \{1, .., m\}$. The binary decision vector $z = (z_1, .., z_m)$ defines whether an item is picked or not. The aim is to maximize the profit $g(z)$:
\begin{eqnarray}
max & & g(z) \\[2mm] \notag
\text{s.t.} & & \sum_{j=1}^m z_j \, w_j \leq Q \\[1mm] \notag
& & z = (z_1, .., z_m) \in \mathbb{B}^m \\[1mm] \notag
g(z) & = & \sum_{j=1}^{m} z_j \, b_j \\[2mm] \notag
\end{eqnarray}
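To make the formulation concrete, the profit and feasibility of a packing plan $z$ can be checked directly. This is a small sketch with made-up item values, weights, and capacity:

```python
import numpy as np

b = np.array([10, 5, 8, 3])  # item values b_j (made up for the example)
w = np.array([4, 2, 5, 1])   # item weights w_j
Q = 7                        # knapsack capacity

def profit(z):
    # g(z) = sum_j z_j * b_j
    return int(z @ b)

def feasible(z):
    # constraint: sum_j z_j * w_j <= Q
    return int(z @ w) <= Q

z = np.array([1, 1, 0, 1])     # pick items 0, 1 and 3
print(profit(z), feasible(z))  # 18 True  (total weight 7 <= 7)
```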
A simple GA will have some infeasible evaluations in the beginning and then concentrate on the feasible space.
```
from pymoo.factory import get_algorithm, get_crossover, get_mutation, get_sampling
from pymoo.optimize import minimize
from pymoo.problems.single.knapsack import create_random_knapsack_problem
problem = create_random_knapsack_problem(30)
algorithm = get_algorithm("ga",
pop_size=200,
sampling=get_sampling("bin_random"),
crossover=get_crossover("bin_hux"),
mutation=get_mutation("bin_bitflip"),
eliminate_duplicates=True)
res = minimize(problem,
algorithm,
termination=('n_gen', 10),
verbose=True)
```
The constraint $\sum_{j=1}^m z_j \, w_j \leq Q$ is fairly easy to satisfy, so we can make sure, before evaluating the objective function, that it is not violated by repairing the individual.
A repair class has to be defined, which is given the population as input and has to return the repaired population.
```
import numpy as np
from pymoo.model.repair import Repair
class ConsiderMaximumWeightRepair(Repair):

    def _do(self, problem, pop, **kwargs):
        # maximum capacity for the problem
        Q = problem.C
        # the packing plan for the whole population (each row one individual)
        Z = pop.get("X")
        # the corresponding weight of each individual
        weights = (Z * problem.W).sum(axis=1)
        # now repair each individual i
        for i in range(len(Z)):
            # the packing plan for i
            z = Z[i]
            # while the maximum capacity violation holds
            while weights[i] > Q:
                # randomly select an item currently picked
                item_to_remove = np.random.choice(np.where(z)[0])
                # and remove it
                z[item_to_remove] = False
                # adjust the weight
                weights[i] -= problem.W[item_to_remove]
        # set the design variables for the population
        pop.set("X", Z)
        return pop
algorithm.repair = ConsiderMaximumWeightRepair()
res = minimize(problem,
algorithm,
termination=('n_gen', 10),
verbose=True)
```
As can be seen, the repair operator makes sure that no infeasible solution is evaluated. Even though this example is quite simple, a repair operator is especially useful for more complex constraints where domain-specific knowledge is available.
# Creation of the Alternative Classification for Modeling
In this notebook, we create a CSV file containing the alternative classification of crimes, in 7 categories.
<br>
We also clean and segment the data according to time, localization and neighborhoods.
# Cleaning of the Data from data_clean.csv
```
# imports used throughout this notebook
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv('data_clean.csv')
data.columns
data
df_data_cat = data[['Incident Datetime', 'Incident Date', 'Incident Time', 'Incident Year',
'Incident Day of Week', 'Report Datetime', 'Row ID', 'Incident ID',
'Incident Number', 'CAD Number', 'Report Type Code',
'Report Type Description', 'Incident Code',
'Incident Category', 'Incident Subcategory', 'Incident Description',
'Resolution', 'Intersection', 'CNN',
'Analysis Neighborhood', 'Latitude', 'Longitude']]
def clean_incident_category(df):
    df['Incident Category'].replace('Offence', 'Offense', regex=True, inplace=True)
    df['Incident Category'].replace('Offenses', 'Offense', regex=True, inplace=True)
    #df['Incident Category'].replace('Offense Against The Family And Children', 'Family Offense', regex=False, inplace=True)
    df['Incident Category'].replace('Human Trafficking (A), Commercial Sex Acts', 'Human Trafficking', regex=False, inplace=True)
    df['Incident Category'].replace('Human Trafficking, Commercial Sex Acts', 'Human Trafficking', regex=False, inplace=True)
    df['Incident Category'].replace('Human Trafficking (B), Involuntary Servitude', 'Human Trafficking', regex=False, inplace=True)
    df['Incident Category'].replace('Motor Vehicle Theft?', 'Motor Vehicle Theft', regex=False, inplace=True)
    df['Incident Category'].replace('Suspicious Occ', 'Suspicious', regex=False, inplace=True)
    return
clean_incident_category(df_data_cat)
df_data_cat['Incident Category'].value_counts()
```
# Categorize to 4 groups
Crimes can be grouped into 4 legal categories:
* Felonies
  - Murder (PC 187)
  - Homicide
  - Manslaughter
  - Rape (PC 261)
  - Assault with a deadly weapon (PC 245(a)(1))
  - Voluntary Manslaughter (PC 192(a))
  - Involuntary Manslaughter (PC 192(b))
  - Aggravated Battery (PC 243(d))
  - Gross Manslaughter while Intoxicated (PC 191.5(a))
  - Negligent Manslaughter while Intoxicated (PC 191.5(b))
  - Sexual battery (PC 243.4)
  - Kidnapping (PC 207)
  - False Imprisonment (PC 236)
  - Hate Crimes
  - Torture (PC 206)
  - Mayhem (PC 203)
  - Aggravated Mayhem (PC 205)
  - Child Pornography (PC 311.11)
  - Fraud
  - Internet Crimes
  - Drug Possession
  - Drug Distribution
  - Three strikes cases
  - Gang Cases
  - Burglary (PC 459)
  - Robbery (PC 211)
  - Carjacking (PC 215)
  - Grand Theft (PC 487)
  - Auto Theft
  - Domestic violence
  - DUI
  - Obstructing justice
  - Perjury (PC 118)
  - Criminal Threats (PC 422)
* Misdemeanors
  - DUI (VC 23152(a))
  - Driving on a suspended license (VC 14601.1(a))
  - Disorderly conduct (PC 415)
  - Public drunkenness (PC 647(f))
  - Petty theft (PC 484/488)
  - Shoplifting (PC 459.5)
  - Soliciting for an act of prostitution (PC 647(b))
  - Probation violations (PC 1203)
  - Domestic violence (PC 273.5)
  - Reckless driving (VC 23103(b))
* Felony-misdemeanors
* Infractions
However, some records, namely 'fire report', 'stolen property', 'warrant', etc., do not fit in any of these categories. We thus add three categories:
* risk_non_criminal, comprising records that are risky for users.
* no_risk, comprising records that pose no risk.
* unsure, comprising records that lack precision.
These last three categories will be used during modeling but dropped for the visualization.
```
felony = ['Arson','Burglary', 'Motor Vehicle Theft', 'Robbery', 'Sex Offense', 'Offense Against The Family And Children', 'Family Offense','Weapons Offense', 'Fraud', 'Homicide', 'Human Trafficking']
#felony = ['Burglary', 'Motor Vehicle Theft', 'Robbery', 'Offense Against The Family And Children', 'Suspicious', 'Rape', 'Human Trafficking', 'Homicide', 'Family Offense (?)']
felony_misdemeanor = ['Weapons Carrying', 'Forgery and Counterfeiting', 'Embezzlement', 'Drug Violation', 'Non-Criminal']
#felony_misdemeanor = ['Larceny Theft', 'Assault', 'Fraud', 'Juvenile Offense (?)']
misdemeanor = ['Disorderly Conduct', 'Liquor Laws', 'Assault', 'Civil Sidewalks', 'Prostitution', 'Gambling', 'Vandalism']
#misdemeanor = ['Gambling', 'Prostitution' ]
infractions = ['Traffic Violation Arrest', 'Malicious Mischief', 'Suspicious', 'Other Offense', 'Stolen Property', 'Forgery and Counterfeiting', 'Traffic Collision', 'Juvenile Offense']
#infractions = ['Liquor Laws', 'Drug Violation', 'Drug Offense', 'Embezzlement', 'Vandalism']
risk_non_criminal = ['Fire Report', 'Stolen Property']
no_risk = ['Warrant', 'Recovered Vehicle', 'Lost Property','Vehicle Misplaced', 'Suicide', 'Vehicle Impounded', 'Case Closure', 'Courtesy Report']
unsure = ['Missing Person', 'Other Miscellaneous', 'Miscellaneous Investigation', 'Other']
groups = [felony, felony_misdemeanor, misdemeanor, infractions, risk_non_criminal, no_risk, unsure]
def categorize_incident(x, groups):
    # return the index of the group the incident category belongs to
    for i in range(len(groups)):
        if x in groups[i]:
            return i
df_data_cat.loc[:, ['Incident Level']] = df_data_cat['Incident Category'].apply(lambda x: categorize_incident(x, groups))
df_data_cat['Incident Level'].value_counts(normalize=True)
```
# Localize
```
def round_nearest(x):
    a = 0.0025
    return round(x / a) * a
df_data_cat['NewLat'] = round_nearest(df_data_cat['Latitude'])
df_data_cat['NewLon'] = round_nearest(df_data_cat['Longitude'])
df_data_cat.head()
```
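The rounding above snaps each coordinate onto a grid with a step of 0.0025 degrees (roughly 280 m in latitude), so nearby incidents collapse into the same cell. A standalone sketch of this behavior:

```python
def round_nearest(x, a=0.0025):
    # snap a coordinate to the nearest multiple of the grid step a
    return round(x / a) * a

# two nearby points end up in the same grid cell
print(round_nearest(37.7749) == round_nearest(37.7751))  # True
```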
# Add time segments
```
# visualize counts of crimes in 24 hr
df_data_cat['Whole Time'] = df_data_cat['Incident Time'].apply(lambda x: x[:2])
plt.plot(df_data_cat['Whole Time'].value_counts().sort_index())
df_clean = df_data_cat
# visualize counts of crimes in 24 hr, ordered by counts
plt.plot(df_clean['Whole Time'].value_counts())
# 00 seems a little inconsistent because both 20-23 and 1-2 have much lower counts
# visualize 00 to see that almost all the crimes "happened" at 00:00
# guess it might be reported like this for simplicity
plt.figure(figsize=(20,5))
plt.plot(df_clean[df_clean['Whole Time']=='00']['Incident Time'].value_counts())
plt.xticks(rotation = 90)
plt.show()
# based on the visualization, categorize times into 4 equal-length periods
morning = ['08','09','10','11','12','13'] # med
afternoon = ['14','15','16','17','18','19'] # high
evening = ['20','21','22','23','00','01'] # med
night = ['02','03','04','05','06','07'] # low
times = [morning, afternoon, evening, night]
def categorize_time(x, times):
if x in times[0]:
return 'Morning'
if x in times[1]:
return 'Afternoon'
if x in times[2]:
return 'Evening'
if x in times[3]:
return 'Night'
df_clean.loc[:, ['Time Seg']] = df_clean['Whole Time'].apply(lambda x: categorize_time(x, times))
df_clean.head()
```
# Add neighborhood
```
def find_neighborhood(x):
l = x.value_counts(normalize=True).index.values
if len(l)==0:
return np.nan
else:
return l[0]
df_nb = df_data_cat.groupby(['NewLat','NewLon'])['Analysis Neighborhood'].apply(lambda x: find_neighborhood(x)).reset_index()
df_nb
```
# Add together
```
df_final = df_data_cat.groupby(['NewLat','NewLon','Time Seg','Incident Level']
).count().sort_values('Incident ID', ascending=False)['Incident ID'].reset_index()
df_final = df_final.pivot(index=['NewLat','NewLon','Time Seg'], columns='Incident Level', values='Incident ID')
df_final = pd.DataFrame(df_final.to_records()).fillna(0)
#df_final['Total'] = df_final.iloc[:, 3:].sum(axis=1)
#df_final['Weighted'] = df_final['0.0']*16 + df_final['1.0']*8 + df_final['2.0']*4 + df_final['3.0']*2 + df_final['4.0']*1 + df_final['5.0']*0
df_final = df_final.merge(df_nb, how='left', on=['NewLat','NewLon'])
df_final
df_final[df_final['Analysis Neighborhood'].isna()]
df_final['Analysis Neighborhood'].fillna('Oceanview/Merced/Ingleside', inplace=True)
df_final.to_csv('alternative_classification_data_localized.csv')
from google.colab import files
files.download('alternative_classification_data_localized.csv')
file = '/content/drive/MyDrive/NavSafe/Copy of data_localized.csv'
df_data = pd.read_csv(file, index_col=0).drop(['Total','Weighted'],axis=1)
df_data.head()
df_data.sample(n=5, random_state=10)
neighborhood_file = '/content/drive/MyDrive/NavSafe/Copy of data_neighborhood_safety.csv'
neighborhood = pd.read_csv(neighborhood_file)
neighborhood.head()
df_all = df_data.merge(neighborhood, how='left', left_on='Analysis Neighborhood', right_on='Neighborhood').drop(['Analysis Neighborhood','Neighborhood'],axis=1)
df_all.head()
def safety_calc(row):
if row['Time Seg'] == 'Morning':
return row['Average of safe_day']
elif row['Time Seg'] == 'Afternoon':
return row['Average of safe_rate']
else:
return row['Average of safe_night']
df_all['Safe'] = df_all.apply(lambda row: safety_calc(row), axis=1)
df_all = df_all.drop(['Average of safe_day','Average of safe_night','Average of safe_rate'],axis=1)
df_all.head()
group_data = df_all[['1.0','2.0','3.0','4.0','5.0','6.0']]
group_data.describe()
# df_all.loc[(df_all['Average of safe_rate']<3.67) & (df_all['1.0']>10)]
df_all.loc[(df_all['1.0']>50) | (df_all['2.0']>100) | (df_all['3.0']>150)]
df_all['Avoid'] = 0
# df_all.loc[(df_all['Average of safe_rate']<3.67) & (df_all['1.0']>10), 'Avoid'] = 1
df_all.loc[(df_all['1.0']>75) | (df_all['2.0']>100) | (df_all['3.0']>200), 'Avoid'] = 1
df_all.head()
time = pd.get_dummies(df_all['Time Seg'],drop_first=True)
df_train = pd.concat([time, df_all.drop(['NewLat','NewLon','Time Seg'],axis=1)], axis=1)
# df_train[['NewLat','NewLon','Evening','Morning','Night','1.0','2.0','3.0','4.0','5.0','6.0','Safe','Avoid']].head()
df_train.head()
x_train, x_test, y_train, y_test = train_test_split(df_train.drop('Avoid',axis=1), df_train['Avoid'], test_size=0.3, random_state=10)
```
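A quick illustration of what `pd.get_dummies(..., drop_first=True)` does to the time segments: categories are ordered alphabetically, so `'Afternoon'` is dropped and becomes the implicit baseline, which is why the remaining indicator columns are `Evening`, `Morning`, and `Night`:

```python
import pandas as pd

# drop_first=True drops the first category in sorted order ('Afternoon'),
# which becomes the implicit baseline for the regression.
seg = pd.Series(['Morning', 'Afternoon', 'Evening', 'Night'])
dummies = pd.get_dummies(seg, drop_first=True)
print(list(dummies.columns))  # ['Evening', 'Morning', 'Night']
```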
## Logistic Regression with Cross Validation
```
def plot_cv_curve(hyperparm_grid,train_scores,val_scores):
ax = plt.subplot(111)
ax.errorbar(hyperparm_grid,np.mean(train_scores,axis=1),yerr=np.std(train_scores,axis=1),label="train")
ax.errorbar(hyperparm_grid,np.mean(val_scores,axis=1),yerr=np.std(val_scores,axis=1),label="validation")
ax.set_xlabel('Hyperparameter')
ax.set_ylabel('Score')
ax.legend()
ax.grid()
return ax
kf = KFold(5, shuffle=True, random_state=10)
C_grid = np.logspace(-2,2,10)
features = ['1.0','2.0','3.0','4.0','5.0','6.0']
logit_pipe = Pipeline([('columns', ColumnTransformer([('keep', StandardScaler(with_mean=False), features)],
remainder='passthrough')),
('logit', LogisticRegression(max_iter=5000, solver='newton-cg'))])
train_scores, val_scores = validation_curve(logit_pipe, x_train, y_train,
param_name='logit__C', param_range=C_grid, cv=kf)
ax = plot_cv_curve(C_grid,train_scores,val_scores)
ax.set_xlabel('C')
ax.set_ylabel('Accuracy')
ax.set_xscale('log')
logit_final = Pipeline([('columns', ColumnTransformer([('keep', StandardScaler(with_mean=False), features)], remainder='passthrough')),
('logit', LogisticRegression(max_iter=5000, solver='newton-cg', C=10))])
logit_final.fit(x_train, y_train)
pred = logit_final.predict_proba(x_test)[:,1]
y_pred = [1 if i >=0.5 else 0 for i in pred]
cm = confusion_matrix(y_test, y_pred)
tn, fp, fn, tp = cm.ravel()
cm
print ("\nPrecision:", tp/(tp+fp))
print ("\nRecall:", tp/(tp+fn))
!pip install gmaps
!pip install ipywidgets
!pip install widgetsnbextension
import gmaps
import ipywidgets as widgets
from ipywidgets.embed import embed_minimal_html
import IPython
gmaps.configure(api_key='YOUR_API_KEY')  # replace with your own Google Maps API key
```
## Map Visualization
```
# x = df_group1[['Latitude', 'Longitude']].copy()
# kmeans = KMeans(n_clusters=5, random_state=0).fit(x)
# kmeans.cluster_centers_
# x['label'] = kmeans.labels_
# x
# #filter rows of original data
# filtered_label1 = x[x['label'] == 1]
# filtered_label2 = x[x['label'] == 2]
# filtered_label3 = x[x['label'] == 3]
# filtered_label4 = x[x['label'] == 4]
# filtered_label5 = x[x['label'] == 5]
# #Plotting the results
# plt.scatter(filtered_label1['Latitude'] , filtered_label1['Longitude'] , color = 'red')
# plt.scatter(filtered_label2['Latitude'] , filtered_label2['Longitude'] , color = 'black')
# plt.scatter(filtered_label3['Latitude'] , filtered_label3['Longitude'] , color = 'green')
# plt.scatter(filtered_label4['Latitude'] , filtered_label4['Longitude'] , color = 'yellow')
# plt.scatter(filtered_label5['Latitude'] , filtered_label5['Longitude'] , color = 'blue')
# plt.show()
centers = df_data.groupby(['NewLat','NewLon','Incident Level']).count().sort_values('Incident ID', ascending=False)['Incident ID'].reset_index()
centers['Level Weight'] = 7 - centers['Incident Level']
centers['Weight'] = centers['Level Weight'] * centers['Incident ID']
centers = centers.groupby(['NewLat','NewLon']).sum().reset_index()[['NewLat','NewLon','Weight']]
centers.sort_values('Weight', ascending=False).head(20)
locations = centers[['NewLat', 'NewLon']]
weights = centers['Weight']
fig = gmaps.figure()
heatmap_layer = gmaps.heatmap_layer(locations, weights=weights)
fig.add_layer(gmaps.heatmap_layer(locations, weights=weights))
embed_minimal_html('export.html', views=[fig])
IPython.display.HTML(filename="export.html")
```
## Areas to Avoid
```
centers.head()
centers = df_all[(df_all['Avoid']==1) & (df_all['Time Seg']=='Afternoon')][['NewLat','NewLon']].drop_duplicates()
centers['Weight'] = 100
locations = centers[['NewLat', 'NewLon']]
weights = centers['Weight']
fig = gmaps.figure()
heatmap_layer = gmaps.heatmap_layer(locations, weights=weights)
fig.add_layer(gmaps.heatmap_layer(locations, weights=weights))
embed_minimal_html('export.html', views=[fig])
IPython.display.HTML(filename="export.html")
# plot time segment points
```
Training and Testing Data
=====================================
To evaluate how well our supervised models generalize, we can split our data into a training and a test set:
<img src="../images/train_test_split.svg" width="80%">
```
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
iris = load_iris()
X, y = iris.data, iris.target
classifier = KNeighborsClassifier()
```
* Thinking about how machine learning is normally performed, the idea of a train/test split makes sense.
* Real world systems train on the data they have, and as other data comes in (from customers, sensors, or other sources) the classifier that was trained must predict on fundamentally *new* data.
* We can simulate this during training using a train/test split - the test data is a simulation of "future data" which will come into the system during production.
Specifically for iris, the labels are sorted, which means that if we simply took the first portion of the data as our training set, it would contain only some labels (0 and 1) and little or none of another (2). We want to split as illustrated above, but *after* the data has been randomly shuffled.
```
y
```
To get an accurate simulation of the real world, we will shuffle our data then split.
```
import numpy as np
rng = np.random.RandomState(0)
permutation = rng.permutation(len(X))
X, y = X[permutation], y[permutation]
print(y)
```
* Now we need to split the data into training and testing.
* Luckily, this is a common pattern in machine learning and scikit-learn has a prebuilt function to split data into training and testing for you.
* Here we use 50% of the data as training, and 50% testing. 80% and 20% is another common split, but there are no hard and fast rules.
* The most important thing is to fairly evaluate your system on data it *has not* seen during training!
```
from sklearn.model_selection import train_test_split
train_X, test_X, train_y, test_y = train_test_split(X, y, test_size=0.5, random_state=1999)
print("Labels for training and testing data")
print(train_y)
print(test_y)
```
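Shuffling before splitting is one safeguard; `train_test_split` can also *stratify* the split so that each class keeps exactly the same proportion in both halves. A small sketch (not part of the original workflow):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
import numpy as np

# stratify=y keeps the 50/50/50 class balance intact in both halves
iris = load_iris()
X, y = iris.data, iris.target
tr_X, te_X, tr_y, te_y = train_test_split(X, y, test_size=0.5,
                                          stratify=y, random_state=0)
print(np.bincount(tr_y))  # [25 25 25]
```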
By evaluating our classifier performance on data that has been seen during training, we could get false confidence in the power of our system.
This might lead to putting a system into production which *fails* at predicting new data! It is much better to use a train/test split in order to properly see how your trained model is doing on new data.
```
classifier.fit(train_X, train_y)
pred_y = classifier.predict(test_X)
print("Fraction Correct")
print(np.sum(pred_y == test_y) / float(len(test_y)))
```
We can also visualize the correct and failed predictions
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
correct_idx = np.where(pred_y == test_y)[0]
print(correct_idx)
incorrect_idx = np.where(pred_y != test_y)[0]
print(incorrect_idx)
# Plot two dimensions
from itertools import combinations
colors = ["darkblue", "darkgreen", "gray"]
for x, y in combinations(range(4), 2):
for n, color in enumerate(colors):
idx = np.where(test_y == n)[0]
plt.scatter(test_X[idx, x], test_X[idx, y], color=color,
label="Class %s" % str(n))
plt.scatter(test_X[incorrect_idx, x], test_X[incorrect_idx, y],
color="darkred")
# Make xlim larger to accommodate legend
plt.xlim(3, 9)
plt.legend(loc='best')
plt.title("Iris Classification results")
plt.show()
```
We can see that the errors occur in the area where green (class 1) and gray (class 2) overlap. This gives us insight about what features to add - any feature which helps separate class 1 and class 2 should improve classifier performance.
# SVM classification/SMOTE oversampling for an imbalanced data set
Date created: Oct 14, 2016
Last modified: Nov 16, 2016
Tags: SVM, SMOTE, ROC/AUC, oversampling, imbalanced data set, semiconductor data
About: Rebalance an imbalanced semiconductor manufacturing dataset by oversampling the minority class using SMOTE. Classify using SVM. Assess the value of oversampling using ROC/AUC.
<h3>I. Introduction</h3>
The [SECOM dataset](http://archive.ics.uci.edu/ml/datasets/SECOM) in the [UCI Machine Learning Repository](http://archive.ics.uci.edu/ml) is semiconductor manufacturing data. There are 1567 records, 590 anonymized features and 104 fails. This makes it an imbalanced dataset with a 14:1 ratio of pass to fails. The process yield has a simple pass/fail response (encoded -1/1).
<h4>Objective</h4>
We consider some of the different approaches to classify imbalanced data. In the [previous example](https://github.com/Meena-Mani/SECOM_class_imbalance/blob/master/secomdata_ocsvm.ipynb) we looked at one-class SVM.
Another strategy is to rebalance the dataset by oversampling the minority class and/or undersampling the majority class. This is done to improve the sensitivity (i.e the true positive rate) of the minority class. For this exercise, we will look at:
- rebalancing the dataset using SMOTE (which oversamples the minority class)
- ROC curves for different oversampling ratios
<h4>Methodology</h4>
The scikit-learn-contrib [imblearn toolbox](http://contrib.scikit-learn.org/imbalanced-learn/index.html) has many methods for oversampling/undersampling. We will use the SMOTE (Synthetic Minority Over-sampling Technique) method introduced in 2002 by Chawla et al. <a href="#ref1">[1]</a>, <a href="#ref2">[2]</a>. With SMOTE, synthetic examples are interpolated along the line segments joining some/all of the <i>k</i> minority class nearest neighbors.
In the experiment, the oversampling rate is varied between 10-70%, in 10% increments. The percentage represents the final minority class fraction after oversampling: if the majority class has 1000 data points (and the minority class 50), at 10% the minority class will have 100 data points after oversampling (not 5 or 50+5 = 55).
The rebalanced data is classified using an SVM. The *imblearn* toolbox has a *pipeline* method which will be used to chain all the steps. The SMOTE+SVM method is evaluated by the area under the Receiver Operating Characteristic curve (AUC).
<h4>Preprocessing</h4>
The data represents measurements from a large number of processes or sensors and many of the records are missing. In addition some measurements are identical/constant and so not useful for prediction. We will remove those columns with high missing count or constant values.
The Random Forest variable importance is used to rank the variables in terms of their importance. For the random forest, we will impute the remaining missing values with the median for the column.
We will additionally scale the data that is applied to the SVM. We will use the <i>sklearn preprocessing</i> module for both imputing and scaling.
These are the same steps used for the [one-class SVM](https://github.com/Meena-Mani/SECOM_class_imbalance/blob/master/secomdata_ocsvm.ipynb) and a more detailed explanation can be seen there.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from sklearn.preprocessing import Imputer
from sklearn.preprocessing import StandardScaler
from sklearn.cross_validation import train_test_split as tts
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.metrics import roc_curve, auc
from __future__ import division
import warnings
warnings.filterwarnings("ignore")
url = "http://archive.ics.uci.edu/ml/machine-learning-databases/secom/secom.data"
secom = pd.read_table(url, header=None, delim_whitespace=True)
url = "http://archive.ics.uci.edu/ml/machine-learning-databases/secom/secom_labels.data"
y = pd.read_table(url, header=None, usecols=[0], squeeze=True, delim_whitespace=True)
print 'The dataset has {} observations/rows and {} variables/columns.' \
.format(secom.shape[0], secom.shape[1])
print 'The ratio of majority class to minority class is {}:1.' \
.format(int(y[y == -1].size/y[y == 1].size))
```
<h3>II. Preprocessing </h3>
We process the missing values first, dropping columns which have a large number of missing values and imputing values for those that have only a few missing values.
The Random Forest variable importance is used to rank the variables in terms of their importance. The [one-class SVM](https://github.com/Meena-Mani/SECOM_class_imbalance/blob/master/secomdata_ocsvm.ipynb) exercise has a more detailed version of these steps.
```
# dropping columns which have large number of missing entries
m = map(lambda x: sum(secom[x].isnull()), xrange(secom.shape[1]))
m_200thresh = filter(lambda i: (m[i] > 200), xrange(secom.shape[1]))
secom_drop_200thresh = secom.dropna(subset=[m_200thresh], axis=1)
dropthese = [x for x in secom_drop_200thresh.columns.values if \
secom_drop_200thresh[x].std() == 0]
secom_drop_200thresh.drop(dropthese, axis=1, inplace=True)
print 'The SECOM data set now has {} variables.'\
.format(secom_drop_200thresh.shape[1])
# imputing missing values for the random forest
imp = Imputer(missing_values='NaN', strategy='median', axis=0)
secom_imp = pd.DataFrame(imp.fit_transform(secom_drop_200thresh))
# use Random Forest to assess variable importance
rf = RandomForestClassifier(n_estimators=100, random_state=7)
rf.fit(secom_imp, y)
# sorting features according to their rank
importance = rf.feature_importances_
ranked_indices = np.argsort(importance)[::-1]
```
<h3>III. SVM Classification </h3>
<h4> Preprocessing </h4>
The SVM is sensitive to feature scale so the first step is to center and normalize the data. The train and test sets are scaled separately using the mean and variance computed from the training data. This is done to estimate the ability of the model to generalize.
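The essential point is that the scaler's mean and variance come from the training split only, and are then reused unchanged on the holdout split. A numpy-only sketch of the same idea:

```python
import numpy as np

# Fit scaling parameters on the training split only, then reuse them on the
# holdout split -- the holdout data must not influence the scaler.
train = np.array([[1.0], [2.0], [3.0]])
test = np.array([[4.0]])

mu, sigma = train.mean(axis=0), train.std(axis=0)
train_scaled = (train - mu) / sigma
test_scaled = (test - mu) / sigma

print(test_scaled)  # uses train's mu=2, sigma~0.816 -> ~[[2.449]]
```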
```
# split data into train and holdout sets
# stratify the sample used for modeling to preserve the class proportions
X_train, X_holdout, y_train, y_holdout = tts(secom_imp[ranked_indices[:40]], y, \
test_size=0.2, stratify=y, random_state=5)
print 'Train data: The majority/minority class have {} and {} elements respectively.'\
.format(y_train[y_train == -1].size, y_train[y_train == 1].size)
print 'The maj/min class ratio is: {0:2.0f}' \
.format(round(y_train[y_train == -1].size/y_train[y_train == 1].size))
print 'Holdout data: The majority/minority class have {} and {} elements respectively.'\
.format(y_holdout[y_holdout == -1].size, y_holdout[y_holdout == 1].size)
print 'The maj/min class ratio for the holdout set is: {0:2.0f}' \
.format(round(y_holdout[y_holdout == -1].size/y_holdout[y_holdout == 1].size))
# scaling the split data. The holdout data uses scaling parameters
# computed from the training data
standard_scaler = StandardScaler()
X_train_scaled = pd.DataFrame(standard_scaler.fit_transform(X_train), \
index=X_train.index)
X_holdout_scaled = pd.DataFrame(standard_scaler.transform(X_holdout))
# Note: we convert to a DataFrame because the plot functions
# we will use need DataFrame inputs.
```
<h4> Finding parameters </h4>
The usual way to select parameters is via grid-search and cross-validation (CV). The scoring is based on the accuracy. When the classes are imbalanced, the true positive of the majority class dominates. Often, there is a high cost associated with the misclassification of the minority class, and in those cases alternative [scoring measures](http://scikit-learn.org/stable/modules/model_evaluation.html) such as the [F1](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) and [$F_{\beta}$](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.fbeta_score.html) scores or the [Matthews Correlation Coefficient](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.matthews_corrcoef.html) (which uses all four values of the confusion matrix) are used.
In CV experiments on this data, the majority class still dominates so that for the best CV F1-scores, the True Negative Rate (TNR - the rate at which the minority class is correctly classified) is zero.
Instead of automating the selection of hyperparameters, I have manually selected <i>C</i> and $\gamma$ values for which the precision/recall/F1 values as well as the TNR are high.
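The Matthews Correlation Coefficient mentioned above combines all four confusion-matrix cells into a single score in [-1, 1]; a pure-Python sketch of the formula:

```python
import math

# Matthews correlation coefficient from the four confusion-matrix cells.
def mcc(tp, tn, fp, fn):
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# A perfect classifier scores 1; uninformative output scores 0.
print(mcc(50, 50, 0, 0))    # 1.0
print(mcc(25, 25, 25, 25))  # 0.0
```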
An example is shown below.
```
# oversampling
ratio = 0.5
smote = SMOTE(ratio = ratio, kind='regular')
smox, smoy = smote.fit_sample(X_train_scaled, y_train)
print 'Before resampling: \n\
The majority/minority class have {} and {} elements respectively.'\
.format(y_train[y_train == -1].size, y_train[y_train == 1].size)
print 'After oversampling at a ratio of {}: \n\
The majority/minority class have {} and {} elements respectively.'\
.format(ratio, smoy[smoy == -1].size, smoy[smoy == 1].size)
# plotting minority class distribution after SMOTE
# column 4 displayed
from IPython.html.widgets import interact
@interact(ratio=[0.1,1.0])
def plot_dist(ratio):
sns.set(style="white", font_scale=1.3)
fig, ax = plt.subplots(figsize=(7,5))
smote = SMOTE(ratio = ratio, kind='regular')
smox, smoy = smote.fit_sample(X_train_scaled, y_train)
smox_df = pd.DataFrame(smox)
ax = sns.distplot(smox_df[4][smoy == 1], color='b', \
kde=False, label='after')
ax = sns.distplot(X_train_scaled[4][y_train == 1], color='r', \
kde=False, label='before')
ax.set_ylim([0, 130])
ax.set(xlabel='')
ax.legend(title='Ratio = {}'.format(ratio))
plt.title('Minority class distribution before and after oversampling')
plt.show()
# classification results
from sklearn.metrics import confusion_matrix, matthews_corrcoef,\
classification_report, roc_auc_score, accuracy_score
# manually selected parameters
clf = SVC(C = 2, gamma = .0008)
clf.fit(smox, smoy)
y_predicted = clf.predict(X_holdout_scaled)
print 'The accuracy is: {0:4.2} \n' \
.format(accuracy_score(y_holdout, y_predicted))
print 'The confusion matrix: '
cm = confusion_matrix(y_holdout, y_predicted)
print cm
print '\nThe True Negative rate is: {0:4.2}' \
.format(float(cm[1][1])/np.sum(cm[1]))
print '\nThe Matthews correlation coefficient: {0:4.2f} \n' \
.format(matthews_corrcoef(y_holdout, y_predicted))
print(classification_report(y_holdout, y_predicted))
print 'The AUC is: {0:4.2}'\
.format(roc_auc_score(y_holdout, y_predicted))
```
For these manually selected parameters, the TNR is 0.38, the Matthews correlation coefficient is 0.21 and the precision/recall/F1 is in the 0.86 - 0.90 range. Selecting the best CV score (usually in the 0.90 range), on the other hand, would have given a TNR of 0 for all the scoring metrics I looked at.
<h4>The Pipeline -- Oversampling, classification and ROC computations </h4>
The *imblearn* package includes a [pipeline](http://contrib.scikit-learn.org/imbalanced-learn/generated/imblearn.pipeline.Pipeline.html#imblearn.pipeline.Pipeline) module which allows one to chain transformers, resamplers and estimators. For each oversampling ratio (and its corresponding hyperparameters C and gamma) we compute an ROC curve, using the pipeline to oversample with SMOTE and then classify with the SVM.
```
# oversampling, classification and computing ROC values
fpr = dict()
tpr = dict()
roc_auc = dict()
ratio = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
C = [3, 3, 3, 2, 2, 2, 2]
gamma = [.02, .009, .009, .005, .0008, .0009, .0007]
estimators = [('smt', SMOTE(random_state=42)),
('clf', SVC(probability=True, random_state=42))]
pipe = Pipeline(estimators)
print pipe
for i, ratio, C, gamma in zip(range(7), ratio, C, gamma):
pipe.set_params(smt__ratio = ratio, clf__C = C, clf__gamma = gamma)
probas_ = pipe.fit(X_train_scaled, y_train).predict_proba(X_holdout_scaled)
fpr[i], tpr[i], _ = roc_curve(y_holdout, probas_[:,1])
roc_auc[i] = auc(fpr[i], tpr[i])
# plotting the ROC curves
def plot_roc(fpr, tpr, roc_auc):
colors = ['darkorange', 'deeppink', 'red', 'aqua', 'cornflowerblue','navy', 'blue']
plt.figure(figsize=(10,8.5))
for i, color in zip(range(7), colors):
plt.plot(fpr[i], tpr[i], color=color, lw=2, linestyle=':',
label='{0} (area = {1:0.2f})'
''.format((i+1)/10, roc_auc[i]))
plt.plot([0, 1], [0, 1], 'k--', lw=1)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curves: SMOTE oversampled minority class', fontsize=14)
    plt.legend(title='Class ratio after oversampling', loc="lower right")
    plt.savefig('ROC_oversampling.png')  # save before show(), which clears the current figure
    plt.show()
plot_roc(fpr, tpr, roc_auc)
```
<h3>IV. Discussion</h3>
There is a trend in the ROC curves (the ROC convex hull) in the figure above with the higher oversampling ratios --0.7 vs 0.1-- having a higher AUC. An obvious question then is whether increasing the oversampling ratio to get a balanced data set would give the best results.
In the experiment, no significant improvements were seen in the 0.5 - 0.8 (0.8 not plotted) regime. Oversampling with SMOTE broadens the decision region around the minority points (so we would expect better results) but the coverage may exceed the decision surface<sup>**</sup>. The level of oversampling therefore needs to be experimentally determined.
Another strategy to balance the classes is to combine oversampling (the minority class) with undersampling (the majority class). Chawla et al. had reported <a href="#ref1">[1]</a> that a combination of oversampling and undersampling gave the best results. We will experiment with this combination in a future exercise.
<sup>**</sup>It should also be noted that oversampling results in a significant increase (and bias) in the minority class. For instance, for a 0.5 ratio, the minority class is increased seven-fold (from 83 to 585). A completely balanced data set would involve a fourteen-fold increase in the minority class and this would alter the decision surface.
<h3>V. References and Further Reading </h3>
<a name="ref1"></a>[1] [Nitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, and W. Philip Kegelmeyer. SMOTE: synthetic minority over-sampling technique. J. Artif. Int. Res. 16, 1 (June 2002), 321-357. ](https://www.cs.cmu.edu/afs/cs/project/jair/pub/volume16/chawla02a-html/chawla2002.html)
<a name="ref2"></a>[2] [Chawla, Nitesh V. Data Mining for Imbalanced Datasets: An Overview. In: Maimon, Oded; Rokach, Lior (Eds) Data Mining and Knowledge Discovery Handbook, Springer, (2010), 875-886.](http://www3.nd.edu/~dial/publications/chawla2005data.pdf)
<a name="ref3"></a>[3] [Altini, Marco. "Dealing with Imbalanced Data: Undersampling, Oversampling and Proper Cross-validation." Web log post. Marco Altini Blog. N.p., 17 Aug. 2015. Web.](http://www.marcoaltini.com/blog/dealing-with-imbalanced-data-undersampling-oversampling-and-proper-cross-validation)
<div style="background-color: #FAAC58; margin-left: 0px; margin-right: 20px; padding-bottom: 8px; padding-left: 8px; padding-right: 8px; padding-top: 8px;">
Author: Meena Mani <br>
email: meenas.mailbag@gmail.com <br>
twitter: @meena_uvaca <br>
</div>
```
import pandas as pd
import numpy as np
import catboost as cat
def reduce_mem_usage(df):
""" iterate through all the columns of a dataframe and modify the data type
to reduce memory usage.
"""
start_mem = df.memory_usage().sum()
for col in df.columns:
col_type = df[col].dtype
if col_type != object:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
else:
df[col] = df[col].astype('category')
end_mem = df.memory_usage().sum()
return df
def load_data(path):
user = reduce_mem_usage(pd.read_csv(path + 'user.csv',header=None, engine='c'))
item = reduce_mem_usage(pd.read_csv(path + 'item.csv',header=None, engine='c'))
data = pd.read_csv(path + 'user_behavior.csv',header=None, engine='c')
data.columns = ['userID','itemID','behavior','timestamp']
data['day'] = data['timestamp'] // 86400
data['hour'] = data['timestamp'] // 3600 % 24
    ## generate one-hot columns for behavior
for i in ['pv','fav','cart','buy']:
data[i] = 0
data.loc[data['behavior'] == i, i] = 1
    ## generate a time-weighted behavior score
data['day_hour'] = data['day'] + data['hour'] / float(24)
data.loc[data['behavior']=='pv','behavior'] = 1
data.loc[data['behavior']=='fav','behavior'] = 2
data.loc[data['behavior']=='cart','behavior'] = 3
    data.loc[data['behavior']=='buy','behavior'] = 4  # buy weighted highest (pv=1, fav=2, cart=3, buy=4)
max_day = max(data['day'])
min_day = min(data['day'])
data['behavior'] = (1 - (max_day-data['day_hour']+2)/(max_day-min_day+2)) * data['behavior']
item.columns = ['itemID','category','shop','brand']
user.columns = ['userID','sex','age','ability']
data = reduce_mem_usage(data)
data = pd.merge(left=data, right=item, on='itemID',how='left', sort=False)
data = pd.merge(left=data, right=user, on='userID',how='left', sort=False)
return user, item, data
user, item, data = load_data(path = '../ECommAI_EUIR_round2_train_20190816/')
user['age'] = user['age'] // 10
data['age'] = data['age'] // 10
######### TODO: needs modification
### TODO: update the path as well
recall_train_list = []
for i in range(7):
recall_train_list.append(
reduce_mem_usage(pd.read_csv(str(i) + 'recall_list_round2_15day_300lenth-Copy1.csv', engine='c')))
recall_train = pd.concat(recall_train_list, sort=False)
recall_train = recall_train.fillna(0)
def downsample(df, percent=10):
'''
    percent: ratio of sampled majority-class rows to minority-class rows
'''
data1 = df[df['label'] != 0]
data0 = df[df['label'] == 0]
index = np.random.randint(len(data0), size = percent * len(data1))
lower_data0 = data0.iloc[list(index)]
return(pd.concat([lower_data0, data1]))
recall_train = downsample(recall_train, 10)
recall_train = pd.merge(left=recall_train, right=item, on='itemID',how='left', sort=False)
recall_train = pd.merge(left=recall_train, right=user, on='userID',how='left', sort=False)
feature_path = '../Step2 Generate_feature_for_Ranking/'
underline_features_files = [
'brand_count.csv',
'brand_sum.csv',
'category_count.csv',
'category_sum.csv',
'itemID_count.csv',
'itemID_sum.csv',
'shop_count.csv',
'shop_sum.csv',
'category_lower.csv',
'item_rank.csv',
'category_higher.csv',
'itemID_higher.csv',
]
underline_features = []
for f in underline_features_files:
underline_features.append(pd.read_csv(feature_path+f, engine='c'))
for f in underline_features:
recall_train = pd.merge(left=recall_train, right=f, on=f.columns[0], how='left', sort=False)
## note: for offline training, use the `underline` (offline) feature files
double_underline_features_files = [
'item_to_ability_count_underline.csv',
'item_to_sex_count_underline.csv',
'item_to_age_count_underline.csv',
]
double_underline_features = []
for f in double_underline_features_files:
double_underline_features.append(pd.read_csv(feature_path+f, engine='c'))
for f in double_underline_features:
recall_train = pd.merge(left=recall_train, right=f, on=list(f.columns[0: 2]), how='left', sort=False)
## note: for offline training, use the `underline` (offline) feature files
time_features_files = [
'itemID_last_time_underline.csv',
'brand_last_time_underline.csv',
'shop_last_time_underline.csv'
]
time_features = []
for f in time_features_files:
time_features.append(pd.read_csv(feature_path+f, engine='c'))
for f in time_features:
recall_train = pd.merge(left=recall_train, right=f, on=f.columns[0], how='left', sort=False)
online_features_files = ['user_to_brand_count.csv',
'user_to_brand_sum.csv',
'user_to_category_count.csv',
'user_to_category_sum.csv',
'user_to_shop_count.csv',
'user_to_shop_sum.csv',]
online2 = ['user_to_category_count_pv.csv',
'user_to_category_count_buy.csv',
'user_to_shop_count_pv.csv',
'user_to_shop_count_buy.csv',
'user_to_brand_count_pv.csv',
'user_to_brand_count_buy.csv']
online3 = ['user_to_category_count_yestday.csv',
'user_to_category_count_pv_yestday.csv',
'user_to_category_count_buy_yestday.csv',
'user_to_shop_count_pv_yestday.csv',
'user_to_shop_count_buy_yestday.csv',
'user_to_brand_count_pv_yestday.csv',
'user_to_brand_count_buy_yestday.csv']
online4 = [
'user_to_category_count_5days.csv',
'user_to_category_count_pv_5days.csv',
'user_to_category_count_buy_5days.csv',
'user_to_shop_count_pv_5days.csv',
'user_to_shop_count_buy_5days.csv',
'user_to_brand_count_pv_5days.csv',
'user_to_brand_count_buy_5days.csv']
online5 = [
'user_to_shop_lasttime.csv',
'user_to_category_lasttime.csv',
'user_to_brand_lasttime.csv' ,
]
online_features_files = online_features_files + online2 + online3 + online4 + online5
online_features = []
for f in online_features_files:
online_features.append(pd.read_csv(feature_path+f, engine='c'))
for f in online_features:
recall_train = pd.merge(left=recall_train, right=f, on=list(f.columns[0: 2]), how='left', sort=False)
def transfer_label(x):
if x == 0:
return 0
else:
return 1
recall_train['label'] = recall_train['label'].apply(transfer_label)
features = [x for x in recall_train.columns if x not in ['itemID','userID','category','shop','brand','label','apriori_rank','apriori_top']]
cbt_model = cat.CatBoostClassifier(iterations=300,learning_rate=0.1,depth=5,verbose=True,thread_count=12
,random_seed=1024)
cbt_model.fit(recall_train[features], recall_train['label'])
cbt_model.save_model('model0924_base.file')
importance = dict(zip(features,
cbt_model.feature_importances_))
sorted(importance.items(), key=lambda x:x[1], reverse=True)
##### TODO: add the LightGBM and model-ensembling (blending) code here
```
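The loops above repeatedly left-join each feature table onto the recall set, keyed on the first one or two columns of the feature file. A minimal pure-Python sketch of that left-join pattern, on made-up rows (the `itemID` and `itemID_count` names here are illustrative):

```python
# Left join: keep every recall row; attach the feature value when the key
# matches, otherwise fill with None (pandas' how='left' behaviour).
recall = [{"itemID": 1}, {"itemID": 2}, {"itemID": 3}]
feature = {1: {"itemID_count": 10}, 3: {"itemID_count": 7}}  # keyed on itemID

joined = []
for row in recall:
    extra = feature.get(row["itemID"], {"itemID_count": None})
    joined.append({**row, **extra})

print(joined[1])  # itemID 2 has no feature row, so its count stays None
```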
----
# Find when a piece of text appears in an archived web page
This notebook helps you find when a particular piece of text appears in, or disappears from, a web page. Using Memento Timemaps, it gets a list of available captures from the selected web archive. It then searches each capture for the desired text, displaying the results.
You can select the direction in which the notebook searches:
* **First occurrence** – find the first capture in which the text appears (start from the first capture and come forward in time)
* **Last occurrence** – find the last capture in which the text appears (start from present and go backwards in time)
* **All occurrences** – find all matches (start from the first capture and continue until the last)
If you select 'All occurrences' the notebook will generate a simple chart showing how the number of matches changes over time.
By default, the notebook displays possible or 'fuzzy' matches as well as exact matches, but these are not counted in the totals.
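Internet Archive Timemaps arrive as a JSON array of arrays — a header row of keys followed by one row per capture — which the notebook's `convert_lists_to_dicts` normalises into dictionaries with standardised key names. A minimal sketch of that normalisation, using made-up capture rows:

```python
# IA-style Timemap: the first row is the header, later rows are captures.
raw = [
    ["timestamp", "original", "statuscode", "mimetype"],
    ["20090101000000", "http://example.com/", "200", "text/html"],
    ["20100101000000", "http://example.com/", "404", "text/html"],
]

keys = raw[0]
captures = [dict(zip(keys, row)) for row in raw[1:]]

# Rename keys so IA results match the other archives' Timemaps.
for c in captures:
    c["status"] = c.pop("statuscode")
    c["mime"] = c.pop("mimetype")
    c["url"] = c.pop("original")

print(captures[0]["status"])  # '200'
```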
```
import requests
from IPython.display import display, HTML
import re
import arrow
from bs4 import BeautifulSoup, Tag
import ipywidgets as widgets
import json
import time
from fuzzysearch import find_near_matches
import altair as alt
import pandas as pd
# This is to restyle the standard html table output from difflib
HTML('<style>.x-match {background-color: #ccffcc;} .p-match {background-color: #ffffcc;}</style>')
%%javascript
// This is necessary in Jupyter notebook to stop the output area folding up
// Will give an error in Jupyter Lab
IPython.OutputArea.prototype._should_scroll = function(lines) {return false}
# Default list of repositories -- you could add to this
TIMEGATES = {
'nla': 'https://web.archive.org.au/awa/',
'nlnz': 'https://ndhadeliver.natlib.govt.nz/webarchive/wayback/',
'bl': 'https://www.webarchive.org.uk/wayback/archive/',
'ia': 'https://web.archive.org/web/'
}
def get_html(url):
'''
Get html from a capture url.
'''
response = requests.get(url)
# Sometimes the Mementos don't go to captures?!
# Eg https://web.archive.org.au/awa/20090912180610id_/http://www.discontents.com.au/
try:
timestamp = re.search(r'/(\d{14})id_/', response.url).group(1)
except AttributeError:
return None
return {'url': response.url, 'html': response.content}
def format_date(url):
'''
Extract timestamp from url and format in a human readable way.
'''
timestamp = re.search(r'/(\d{14})id_/', url).group(1)
return arrow.get(timestamp, 'YYYYMMDDHHmmss').format('D MMMM YYYY')
def format_date_as_iso(url):
'''
Extract timestamp from url and format as ISO.
'''
timestamp = re.search(r'/(\d{14})id_/', url).group(1)
return arrow.get(timestamp, 'YYYYMMDDHHmmss').format('YYYY-MM-DD')
def convert_lists_to_dicts(results):
'''
Converts IA style timemap (a JSON array of arrays) to a list of dictionaries.
Renames keys to standardise IA with other Timemaps.
'''
if results:
keys = results[0]
results_as_dicts = [dict(zip(keys, v)) for v in results[1:]]
else:
results_as_dicts = results
# Rename keys
for d in results_as_dicts:
d['status'] = d.pop('statuscode')
d['mime'] = d.pop('mimetype')
d['url'] = d.pop('original')
return results_as_dicts
def get_capture_data_from_memento(url, request_type='head'):
'''
    For OpenWayback systems, this can fetch some extra capture info to insert into Timemaps.
'''
if request_type == 'head':
response = requests.head(url)
else:
response = requests.get(url)
headers = response.headers
length = headers.get('x-archive-orig-content-length')
status = headers.get('x-archive-orig-status')
status = status.split(' ')[0] if status else None
mime = headers.get('x-archive-orig-content-type')
mime = mime.split(';')[0] if mime else None
return {'length': length, 'status': status, 'mime': mime}
def convert_link_to_json(results, enrich_data=False):
'''
Converts link formatted Timemap to JSON.
'''
data = []
for line in results.splitlines():
parts = line.split('; ')
if len(parts) > 1:
link_type = re.search(r'rel="(original|self|timegate|first memento|last memento|memento)"', parts[1]).group(1)
if link_type == 'memento':
link = parts[0].strip('<>')
timestamp, original = re.search(r'/(\d{14})/(.*)$', link).groups()
capture = {'timestamp': timestamp, 'url': original}
if enrich_data:
capture.update(get_capture_data_from_memento(link))
data.append(capture)
return data
def get_timemap_as_json(timegate, url):
'''
Get a Timemap then normalise results (if necessary) to return a list of dicts.
'''
tg_url = f'{TIMEGATES[timegate]}timemap/json/{url}/'
response = requests.get(tg_url)
response_type = response.headers['content-type']
# pywb style Timemap
if response_type == 'text/x-ndjson':
data = [json.loads(line) for line in response.text.splitlines()]
    # IA Wayback style Timemap
elif response_type == 'application/json':
data = convert_lists_to_dicts(response.json())
# Link style Timemap (OpenWayback)
elif response_type in ['application/link-format', 'text/html;charset=utf-8']:
data = convert_link_to_json(response.text)
return data
def display_chart(matches):
'''
Visualise matches over time.
'''
df = pd.DataFrame(matches)
chart = alt.Chart(df).mark_line(point=True).encode(
x = 'date:T',
y = 'matches:Q',
tooltip = ['date:T', 'matches:Q']
)
with chart_display:
display(chart)
def process_text(html):
'''
Extract text from an HTML page and return it as a list of lines.
Removes blank lines.
'''
    lines = [l for l in BeautifulSoup(html, 'html.parser').get_text().splitlines() if not re.match(r'^\s*$', l)]
return lines
def format_date_link(url):
'''
Extract date from url, format, and display as link.
'''
date = format_date(url)
return f'<a href="{url.replace("id_", "")}">{date}</a>'
def format_context(text, match):
'''
Extract, markup, and format context around a match.
'''
style = 'p-match' if match.dist > 0 else 'x-match'
marked_up = f'{text[:match.start]}<span class="{style}">{text[match.start:match.end]}</span>{text[match.end:]}'
result_string = marked_up[max(0, match.start - 40):match.end + 40 + 22 + 7]
result_string = result_string[result_string.index(' '):result_string.rindex(' ')].strip()
return f'...{result_string}...'
def search_page(capture_data, pattern):
'''
Search for a text string in the html of a page.
'''
found = 0
    text = BeautifulSoup(capture_data['html'], 'html.parser').get_text()
date = format_date_link(capture_data['url'])
matches = find_near_matches(pattern.casefold(), text.casefold(), max_l_dist=1)
if matches:
results = f'<h4><a href="{capture_data["url"]}">{date}</a></h4><ul>'
for match in matches:
results += f'<li>\'{format_context(text, match)}\'</li>'
if match.dist == 0:
found += 1
results += '</ul>'
with out:
display(HTML(results))
return found
def update_status(i, total_matches):
'''
Display numbers of documents processed and matches found.
'''
with status:
status.clear_output(wait=True)
display(HTML(f'Captures processed: {i + 1}'))
display(HTML(f'Exact matches found: {total_matches}'))
def find_text(timegate, url, pattern, direction):
'''
Get all captures for a page from a Timemap, then search for requested text in each page,
aggregating the results.
'''
total_matches = 0
matches = []
with out:
key = '<b>Key</b><ul><li><span class="x-match">exact match</li><li><span class="p-match">possible match</span></li></ul>'
display(HTML(key))
timemap = get_timemap_as_json(timegate, url)
if direction == 'last':
timemap.reverse()
for i, capture in enumerate(timemap):
capture_url = f'{TIMEGATES[timegate]}{capture["timestamp"]}id_/{capture["url"]}'
if timegate == 'nlnz' or (capture['digest'] != timemap[i-1]['digest'] and capture['status'] == '200'):
capture_data = get_html(capture_url)
if capture_data:
found = search_page(capture_data, pattern)
total_matches += found
if found > 0:
matches.append({'date': format_date_as_iso(capture_url), 'matches': found})
if direction in ['first', 'last']:
break
update_status(i, total_matches)
if direction in ['first', 'last']:
update_status(i, total_matches)
else:
display_chart(matches)
def start(e):
    clear(e)
find_text(repository.value, target_url.value, search_string.value, search_direction.value)
def clear(e):
status.clear_output()
chart_display.clear_output()
out.clear_output()
out = widgets.Output()
status = widgets.Output()
chart_display = widgets.Output()
repository = widgets.Dropdown(
options=[('---', ''), ('UK Web Archive', 'bl'), ('National Library of Australia', 'nla'), ('National Library of New Zealand', 'nlnz'), ('Internet Archive', 'ia')],
description='Archive:',
disabled=False,
value=''
)
search_direction = widgets.Dropdown(
options=[('First occurrence', 'first'), ('Last occurrence', 'last'), ('All occurrences', 'all')],
description='Find:',
disabled=False,
value='first'
)
target_url = widgets.Text(description='URL:')
search_string = widgets.Text(description='Search text:')
tc_button = widgets.Button(description='Find text', button_style='primary')
tc_button.on_click(start)
clear_button = widgets.Button(description='Clear all')
clear_button.on_click(clear)
display(widgets.HBox([repository, target_url], layout=widgets.Layout(padding='10px')))
display(widgets.HBox([search_string, search_direction], layout=widgets.Layout(padding='10px')))
display(widgets.HBox([tc_button, clear_button], layout=widgets.Layout(padding='10px')))
display(status)
display(chart_display)
display(out)
```
----
Created by [Tim Sherratt](https://timsherratt.org) for the [GLAM Workbench](https://glam-workbench.github.io).
Work on this notebook was supported by the [IIPC Discretionary Funding Programme 2019-2020](http://netpreserve.org/projects/)
----
Objectives
- Order the rows of a table using a chosen column
- Convert to long format to plot multiple columns at the same time
- Switch between short/long table format
Content to cover
- sort_values
- pivot, pivot_table
- melt
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
titanic = pd.read_csv("../data/titanic.csv")
titanic.head()
```
Air quality data about $NO_2$ and Particulate matter less than 2.5 micrometers is used, made available by [openaq](https://openaq.org) and using the [py-openaq](http://dhhagan.github.io/py-openaq/index.html) package. The `air_quality_long.csv` data set provides $NO_2$ and $pm25$ values for the measurement stations _FR04014_, _BETR801_ and _London Westminster_ in respectively Paris, Antwerp and London. In this case, the data set is provided in a so-called long data format representation.
```
air_quality = pd.read_csv("../data/air_quality_long.csv", index_col="date.utc", parse_dates=True)
air_quality.head()
```
The `no2` data set contains only the measurements of $NO_2$:
```
no2 = air_quality[air_quality["parameter"] == "no2"]
```
## Reshape the layout of tables
### Sort table rows
> I want to arrange the titanic data according to the age of the passengers.
```
titanic.sort_values(by="Age").head()
```
> I want to arrange the titanic data according to the cabin class and age in descending order.
```
titanic.sort_values(by=['Pclass', 'Age'], ascending=False).head()
```
With `sort_values`, the rows in the table are sorted according to the defined column(s). The index will follow the row order. Sorting is also possible according to the index labels or a combination of the values and index.
__To user guide:__ More details about sorting of tables are provided in :ref:`basics.sorting`.
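For intuition, multi-column sorting behaves like Python's built-in stable `sorted` with a composite key; a rough stdlib analogue of `sort_values(by=['Pclass', 'Age'], ascending=False)` on toy rows (not the actual Titanic data):

```python
rows = [
    {"Pclass": 3, "Age": 22.0},
    {"Pclass": 1, "Age": 38.0},
    {"Pclass": 3, "Age": 4.0},
    {"Pclass": 2, "Age": 27.0},
]

# Sort by class, then age, both descending -- the same lexicographic
# ordering pandas applies across the listed columns.
ordered = sorted(rows, key=lambda r: (r["Pclass"], r["Age"]), reverse=True)
print(ordered[0])  # {'Pclass': 3, 'Age': 22.0}
```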
### Long to wide table format
Let's use a small subset of the air quality data set, for each location the first two measurements (i.e. the head of each group):
```
no2_subset = no2.sort_index().groupby(["location"]).head(2)
no2_subset
```

> I want the values for the three stations as separate columns next to each other to plot them together
```
no2_subset.pivot(columns="location", values="value")
```
The `pivot` function purely restructures the data: a single value for each index/column combination is required.
As pandas supports plotting of multiple columns (see [plotting tutorial](./4_plotting.ipynb)) out of the box, the conversion from long to wide format enables the plotting of the different time series at the same time:
```
no2.head()
no2.pivot(columns="location", values="value").plot()
```
<div class="alert alert-info">
__Note__: When the `index` parameter is not defined, the existing index (row labels) is used.
</div>
__To user guide:__ For more information about `pivot`, see :ref:`reshaping.reshaping`
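Because `pivot` only rearranges values, there must be exactly one value per index/column pair; with duplicates, pandas raises an error and an aggregating `pivot_table` is needed instead. A rough pure-Python sketch of that constraint, on illustrative records:

```python
# (date, location) plays the role of the index/column pair.
long_records = [
    ("2019-05-07", "BETR801", 50.5),
    ("2019-05-07", "FR04014", 25.0),
    ("2019-05-08", "BETR801", 45.0),
]

wide = {}
for date, location, value in long_records:
    cell = (date, location)
    if cell in wide:
        # pivot has no way to combine two values for one cell.
        raise ValueError(f"duplicate entry for {cell}")
    wide[cell] = value

print(wide[("2019-05-07", "BETR801")])  # 50.5
```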
### Pivot table

> I want the mean concentrations for $NO_2$ and $PM_{2.5}$ in each of the stations in table form
```
air_quality.pivot_table(values="value", index="location",
columns="parameter", aggfunc="mean")
```
In the case of `pivot`, the data is only rearranged. When multiple values need to be aggregated (in this specific case, the values on different time steps), `pivot_table` can be used, providing an aggregation function (e.g. mean) on how to combine these values.
Pivot tables are a well-known concept in spreadsheet software. If you are also interested in summary columns for each variable separately, set the `margins` parameter to `True`:
```
air_quality.pivot_table(values="value", index="location",
columns="parameter", aggfunc="mean",
margins=True)
```
__To user guide:__ For more information about `pivot_table`, see :ref:`reshaping.pivot`
<div class="alert alert-info">
__Note__: If you're wondering, `pivot_table` is indeed directly linked to `groupby`. The same values can be calculated by grouping on both `parameter` and `location`:
air_quality.groupby(["parameter", "location"]).mean()
__To user guide:__ Have a look at `groupby` in combination with `unstack` at [:ref:`TODO LABEL`](https://pandas.pydata.org/pandas-docs/stable/user_guide/reshaping.html#combining-with-stats-and-groupby)
</div>
### Wide to long format
Starting again from the wide format table created in the previous section:
```
no2_pivoted = no2.pivot(columns="location", values="value").reset_index()
no2_pivoted.head()
```

> I want to collect all air quality $NO_2$ measurements in a single column (long format)
```
no_2 = no2_pivoted.melt(id_vars="date.utc")
no_2.head()
```
The solution is the short version on how to apply `melt`. The method will _melt_ all columns NOT mentioned in `id_vars` together into two columns: a column with the column header names and a column with the values themselves. The latter column gets the name `value` by default.
The `melt` method can be defined in more detail:
```
no_2 = no2_pivoted.melt(id_vars="date.utc",
value_vars=["BETR801", "FR04014", "London Westminster"],
value_name="NO_2",
var_name="id_location")
no_2.head()
```
The result is the same, but defined in more detail:
- `value_vars` defines explicitly which columns to _melt_ together
- `value_name` provides a custom column name for the values column instead of the default column name `value`
- `var_name` provides a custom column name for the column collecting the column header names. Otherwise it takes the index name or the default `variable`
Hence, the arguments `value_name` and `var_name` are just user-defined names for the two generated columns. The columns to melt are defined by `id_vars` and `value_vars`.
<div class="alert alert-info">
__Note__: The long format is also referred to as [_tidy_ data format](https://www.jstatsoft.org/article/view/v059i10). The representation defines that each observation is on a separate line and each variable a separate column.
</div>
__To user guide:__ Conversion from wide to long format with `melt` is explained in :ref:`reshaping.melt`.
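Conceptually, `melt` walks each wide row and emits one long record per non-id column; a stdlib sketch of that unpivot, with toy station columns:

```python
wide_rows = [
    {"date.utc": "2019-05-07", "BETR801": 50.5, "FR04014": 25.0},
    {"date.utc": "2019-05-08", "BETR801": 45.0, "FR04014": 27.5},
]

long_rows = []
for row in wide_rows:
    for column, value in row.items():
        if column == "date.utc":          # id_vars stay on every record
            continue
        long_rows.append({"date.utc": row["date.utc"],
                          "variable": column,   # default var_name
                          "value": value})      # default value_name

print(len(long_rows))  # 2 rows x 2 melted columns = 4 records
```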
## REMEMBER
- Sorting by one or more columns is supported by `sort_values`
- The `pivot` function purely restructures the data; `pivot_table` supports aggregations
- The reverse of `pivot` (long to wide format) is `melt` (wide to long format)
__To user guide:__ More information on reshaping and pivoting is provided in :ref:`reshaping`.
----
# Morisita-Horn similarity calculation
```
from __future__ import print_function
from collections import Counter
from datetime import datetime
import itertools
import multiprocessing as mp
import os
import subprocess as sp
import sys
import tempfile
import time
import numpy as np
import pandas as pd
from abutils.utils.jobs import monitor_mp_jobs
from abutils.utils.pipeline import list_files, make_dir
from abutils.utils.progbar import progress_bar
```
### User-defined options
By default, the size of the bootstrap samples will increase exponentially, as the similarity plots will be drawn with a logarithmic x-axis. However, the option is also given to draw bootstrap samples that increase linearly in size rather than exponentially. The following options may be adjusted depending on the desired output:
* `iterations` is the number of replicate samplings for each subsample size. Default is `10`.
* `max_power_of_10` is the highest exponent of 10 for which subsamples will be drawn. For example, the default value of `7` means that the largest bootstrap sample will be `10^7`, or 10 million, sequences. The lowest acceptable value is `2`, as subsampling fewer than 100 sequences is not especially useful.
* `subsample_fraction` is the fraction of each `power_of_10` multiple at which the bootstrap sample size increases. For example, the default value of `0.3` results in the following multipliers: `[1.0, 1.3, 1.6, 1.9]`. For a `power_of_10` of 10^6, a `subsample_fraction` of `0.3` would therefore result in the following bootstrap sample sizes: `1.0x10^6, 1.3x10^6, 1.6x10^6, and 1.9x10^6`.
* `subsample_size` is the size multiple for each subsample. By default (if `subsample_fraction` is provided), this will not be used. This option is only provided in case you would prefer the subsample size pools to increase in linear fashion, rather than exponentially.
Note that the data directory (`'./data/techrep-merged_vj-cdr3len_no-header/'`) is not present in this Github repo, as the size of the files far exceeds what is allowed by Github. You can download a compressed archive containing the appropriate data files [**here**](http://burtonlab.s3.amazonaws.com/GRP_github_data/techrep-merged_vj-cdr3len_no-header.tar.gz). Decompressing the archive inside of `./data` (the "data" directory found in the same parent directory as this notebook) should allow you to run the following code without alteration.
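Per the description above, the schedule of bootstrap sample sizes applies the `subsample_fraction` multipliers within each power of 10; a small sketch of that schedule for one decade (the notebook's own `get_subsample_sizes` builds the full list across all powers):

```python
# Multipliers for subsample_fraction = 0.3: [1.0, 1.3, 1.6, 1.9]
subsample_fraction = 0.3
n_steps = int(1.0 / subsample_fraction) + 1
multipliers = [1.0 + i * subsample_fraction for i in range(n_steps)]

power = 6  # sizes for the 10**6 decade
sizes = [int(round(m * 10 ** power)) for m in multipliers]
print(sizes)  # [1000000, 1300000, 1600000, 1900000]
```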
```
iterations = 10
max_power_of_10 = 7
subsample_fraction = 0.3
subsample_size = 25000
data_dir = './data/techrep-merged_vj-cdr3len_no-header/'
temp_dir = './data/temp/'
output_dir = './data/user-calculated_mh_similarity/'
```
### Subjects and directories
```
individual_files_dir = os.path.join(output_dir, 'individual_comparisons')
make_dir(temp_dir)
make_dir(output_dir)
make_dir(individual_files_dir)
with open('../data_processing/data/subjects.txt') as f:
subjects = sorted(f.read().split())
```
### Morisita-Horn similarity
```
def mh_similarity(sample1, sample2):
'''
    Calculates the Morisita-Horn similarity for two samples.
.. note:
sample1 and sample2 should be the same length, and
the sum of each sample should be greater than 0.
Args:
sample1 (list): list of frequencies for sample 1
sample2 (list): list of frequencies for sample 2
Returns:
        float: Morisita-Horn similarity (between 0 and 1)
'''
X = sum(sample1)
Y = sum(sample2)
XY = X * Y
sumXiYi = 0
sumXiSq = 0
sumYiSq = 0
for x, y in zip(sample1, sample2):
sumXiYi += x * y
sumXiSq += x * x
sumYiSq += y * y
num = 2 * sumXiYi
denom = (float(sumXiSq) / (X * X) + float(sumYiSq) / (Y * Y)) * XY
return 1. * num / denom
def load_data(files):
'''
Loads VJ-CDR3len data from a list of files corresponding to a single subject.
Args:
files (list): a list of files containing data to load. Data will be
loaded for all files and returned as a single list.
Returns:
list: a combined list of data from all of the files
'''
data = []
for f in files:
with open(f) as of:
for line in of:
_d = '_'.join(line.strip().split())
if _d:
data.append(_d)
return data
def get_subsample_sizes():
'''
Returns a list of subsample sizes, based on user-defined subsampling options.
'''
if subsample_fraction is not None:
sizes = []
for mpt in range(2, max_power_of_10 + 1):
start = 10**(mpt - 1)
end = 10**mpt
step = int(10**mpt * float(subsample_fraction))
sizes += list(range(start, end, step))
sizes.append(10**mpt)
else:
        sizes = range(subsample_size, 10 ** max_power_of_10, subsample_size)
return sizes
def compute_frequencies(data, iterations, size):
'''
Subsamples a dataset (with replacement) and computes the VJ-CDR3len
frequency for each bootstrap sample.
Args:
data (list): a list of antibody sequences collapsed to just VJ-CDR3len
iterations (int): the number of bootstrap samplings to be performed
size (int): the size (in sequences) of each bootstrap sample
Returns:
list(Counter): a list of VJ-CDR3len frequencies (as Counter objects)
'''
subsamples = np.random.choice(data, size=(iterations, size), replace=True)
freqs = []
for subsample in subsamples:
freqs.append(Counter(subsample))
return freqs
def compute_similarity_for_single_size(sub1_data, sub2_data, iterations, size):
'''
For a single bootstrap sampling size, computes Morisita-Horn similarity of
two datasets.
Args:
sub1_data (list): a list of VJ-CDR3len values
sub2_data (list): a list of VJ-CDR3len values
iterations (int): the number of iterations to be performed
size (int): size (in sequences) of each bootstrap sampling
Returns:
int: the size of each bootstrap sampling
list: a list of Morisita-Horn similarities, of length `iterations`
'''
similarities = []
    sub1_freqs = compute_frequencies(sub1_data, iterations, size)
    sub2_freqs = compute_frequencies(sub2_data, iterations, size)
for s1, s2 in zip(sub1_freqs, sub2_freqs):
freq_df = pd.DataFrame({'sub1': s1, 'sub2': s2}).fillna(0)
similarities.append(mh_similarity(freq_df['sub1'], freq_df['sub2']))
return size, similarities
def calculate_similarities(subject1, subject2, iterations, sizes):
'''
Performs Morisita-Horn similarity calculations on VJ-CDR3len data for two subjects.
Args:
subject1 (str): name of subject 1
subject2 (str): name of subject 2
iterations (int): number of iterations to be performed for each bootstrap sample size
sizes (list(int)): a list of bootstrap sample sizes
Returns:
sub_header (str): a header line containing subject information
similarities (dict): similarity scores, with the dict keys being sample sizes
and values being lists of similarity scores of length `iterations`
'''
sub_header = '#{} {}'.format(subject1, subject2)
sub1_dir = os.path.join(data_dir, subject1)
sub2_dir = os.path.join(data_dir, subject2)
sub1_files = list_files(sub1_dir)
sub2_files = list_files(sub2_dir)
similarities = {}
output_data = [sub_header, ]
output_file = os.path.join(individual_files_dir, '{}-{}'.format(subject1, subject2))
# load all of the files into memory
sub1 = os.path.basename(os.path.dirname(sub1_files[0]))
sub1_data = load_data(sub1_files)
sub2 = os.path.basename(os.path.dirname(sub2_files[0]))
sub2_data = load_data(sub2_files)
for size in sizes:
similarities[size] = []
sub1_freqs = compute_frequencies(sub1_data, iterations, size)
sub2_freqs = compute_frequencies(sub2_data, iterations, size)
for s1, s2 in zip(sub1_freqs, sub2_freqs):
freq_df = pd.DataFrame({'sub1': s1, 'sub2': s2}).fillna(0)
similarities[size].append(mh_similarity(freq_df['sub1'], freq_df['sub2']))
output_data.append(' '.join([str(v) for v in [size] + similarities[size]]))
with open(output_file, 'w') as f:
f.write('\n'.join(output_data))
return sub_header, similarities
```
### Calculate similarity
Morisita-Horn similarity will be calculated for each pairwise combination of subjects (including self-comparisons). The work is distributed across a multiprocessing pool, with one pairwise comparison per task; the 10 subjects yield a total of 55 comparisons.
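The 55-comparison count follows from combinations with replacement over 10 subjects: 45 distinct pairs plus 10 self-comparisons. A quick check using the same itertools call as the notebook (the subject names here are placeholders):

```python
import itertools

subjects = [f"subject_{i}" for i in range(1, 11)]  # 10 illustrative subjects
combinations = list(itertools.combinations_with_replacement(subjects, 2))

# 45 distinct pairs + 10 self-comparisons = 55 pool jobs
print(len(combinations))  # 55
```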
```
# get a list of subsample sizes, based on user-defined options
sizes = get_subsample_sizes()
# get a list of all pairwise combinations of subjects (including self-comparison)
combinations = list(itertools.combinations_with_replacement(subjects, 2))
p = mp.Pool(processes=7, maxtasksperchild=1)
start = datetime.now()
async_results = []
# initialize the progress bar
jobs = len(combinations)
progress_bar(0, jobs, start_time=start)
# calculate the similarity score for each pairwise combination of subjects
for subject1, subject2 in combinations:
async_results.append(p.apply_async(calculate_similarities, args=(subject1, subject2, iterations, sizes)))
monitor_mp_jobs(async_results, start_time=start)
results = [ar.get() for ar in async_results]
p.close()
p.join()
```
### Combine similarity files
Each pairwise comparison resulted in a separate output file. Here we combine them into a single similarities file.
```
combined_output_file = os.path.join(output_dir, 'mh-similarities_combined.txt')
individual_files = list_files(individual_files_dir)
with open(combined_output_file, 'w') as f:
f.write('')
cat_cmd = 'for f in {}/*; do (cat "${{f}}"; echo) >> {}; done'.format(individual_files_dir.rstrip('/'), combined_output_file)
p = sp.Popen(cat_cmd, stdout=sp.PIPE, stderr=sp.PIPE, shell=True)
stdout, stderr = p.communicate()
for s in subjects:
print(s)
```
----
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
sns.set_style('whitegrid')
%matplotlib inline
default = pd.read_csv('../data/credit_card_default.csv')
default.rename(columns=lambda x: x.lower(), inplace=True)
default.rename(columns={'pay_0':'pay_1','default payment next month':'default'}, inplace=True)
default['male'] = (default['sex']==1).astype('int')
default.drop('sex', axis=1, inplace=True)
default['married'] = (default['marriage'] == 1).astype('int')
default.drop('marriage', axis=1, inplace=True)
# For pay_n features if >0 then it means the customer was delayed on that month
pay_features = ['pay_' + str(i) for i in range(1,7)]
for p in pay_features:
default[p] = (default[p] > 0).astype(int)
def transform_education(x):
if x==1: # 1==graduate school, give it a 2
return 2
elif x==2: # 2==university, give it a 1
return 1
else:
return -1 # give a negative value to all other levels of education
default['education'] = default['education'].apply(transform_education)
default.groupby(['married','male'])['default'].mean().unstack()
for i in range(1,7):
i = str(i)
new_var_name = 'bill_minus_pay' + i
default[new_var_name] = default['bill_amt'+i] - default['pay_amt'+i]
bill_minus_pay_features = ['bill_minus_pay'+str(i) for i in range(1,7)]
default[bill_minus_pay_features].hist(figsize=(11,5), layout=(2,3), bins=30);
from sklearn.decomposition import PCA
bill_amt_features = ['bill_amt'+str(i) for i in range(1,7)]
bill_amt_pca = PCA(n_components=1)
default['bill_amt_new_feat'] = bill_amt_pca.fit_transform(default[bill_amt_features])[:,0]
pay_features = ['pay_'+str(i) for i in range(2,7)]
pay_features_pca = PCA().fit(default[pay_features])
pay_features_pca.explained_variance_ratio_
pay_features_pca = PCA(n_components=2).fit_transform(default[pay_features])
default['new_pay1'] = pay_features_pca[:,0]
default['new_pay2'] = pay_features_pca[:,1]
# importing data
data_path= '../data/diamonds.csv'
diamonds = pd.read_csv(data_path)
diamonds = pd.concat([diamonds, pd.get_dummies(diamonds['cut'], prefix='cut', drop_first=True)],axis=1)
diamonds = pd.concat([diamonds, pd.get_dummies(diamonds['color'], prefix='color', drop_first=True)],axis=1)
diamonds = pd.concat([diamonds, pd.get_dummies(diamonds['clarity'], prefix='clarity', drop_first=True)],axis=1)
diamonds.drop(['cut','color','clarity'], axis=1, inplace=True)
diamonds.head()
sns.pairplot(diamonds[['x','y','z','price']], plot_kws={'s':2});
diamonds['volume'] = diamonds['x']*diamonds['y']*diamonds['z']
diamonds['density'] = diamonds['carat']/diamonds['volume']
diamonds[diamonds['volume']<600].plot.scatter(x='volume', y='price', s=1, alpha=0.1);
fig, ax = plt.subplots(figsize=(8,5))
diamonds.plot.scatter(x='density', y='price', s=2, alpha=0.1, ax=ax)
ax.set_xlim(0,0.015);
diamonds[['price','carat','volume','density']].corr()
```
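The final cell reports pairwise Pearson correlations via `corr()`; for reference, the coefficient computed for each pair of columns is, in plain Python:

```python
import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / math.sqrt(var_x * var_y)

# Perfectly linear columns correlate at exactly 1.0
print(pearson([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```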
----
```
import matplotlib.pyplot as plt
import numpy as np
from qiskit import QuantumCircuit, Aer, transpile, assemble
from qiskit.visualization import plot_histogram
from math import gcd
from numpy.random import randint
import pandas as pd
from fractions import Fraction
print("Imports Successful")
def c_amod15(a, power):
"""Controlled multiplication by a mod 15"""
if a not in [2,7,8,11,13]:
raise ValueError("'a' must be 2,7,8,11 or 13")
U = QuantumCircuit(4)
for iteration in range(power):
if a in [2,13]:
U.swap(0,1)
U.swap(1,2)
U.swap(2,3)
if a in [7,8]:
U.swap(2,3)
U.swap(1,2)
U.swap(0,1)
if a == 11:
U.swap(1,3)
U.swap(0,2)
if a in [7,11,13]:
for q in range(4):
U.x(q)
U = U.to_gate()
U.name = "%i^%i mod 15" % (a, power)
c_U = U.control()
return c_U
# Specify variables
n_count = 8 # number of counting qubits
a = 7
def qft_dagger(n):
"""n-qubit QFTdagger the first n qubits in circ"""
qc = QuantumCircuit(n)
# Don't forget the Swaps!
for qubit in range(n//2):
qc.swap(qubit, n-qubit-1)
for j in range(n):
for m in range(j):
qc.cp(-np.pi/float(2**(j-m)), m, j)
qc.h(j)
qc.name = "QFT†"
return qc
# Create QuantumCircuit with n_count counting qubits
# plus 4 qubits for U to act on
qc = QuantumCircuit(n_count + 4, n_count)
# Initialise counting qubits
# in state |+>
for q in range(n_count):
qc.h(q)
# And auxiliary register in state |1>
qc.x(3+n_count)
# Do controlled-U operations
for q in range(n_count):
qc.append(c_amod15(a, 2**q),
[q] + [i+n_count for i in range(4)])
# Do inverse-QFT
qc.append(qft_dagger(n_count), range(n_count))
# Measure circuit
qc.measure(range(n_count), range(n_count))
qc.draw(fold=-1) # -1 means 'do not fold'
qasm_sim = Aer.get_backend('qasm_simulator')
t_qc = transpile(qc, qasm_sim)
qobj = assemble(t_qc)
results = qasm_sim.run(qobj).result()
counts = results.get_counts()
plot_histogram(counts)
rows, measured_phases = [], []
for output in counts:
decimal = int(output, 2) # Convert (base 2) string to decimal
phase = decimal/(2**n_count) # Find corresponding eigenvalue
measured_phases.append(phase)
# Add these values to the rows in our table:
rows.append([f"{output}(bin) = {decimal:>3}(dec)",
f"{decimal}/{2**n_count} = {phase:.2f}"])
# Print the rows in a table
headers=["Register Output", "Phase"]
df = pd.DataFrame(rows, columns=headers)
print(df)
Fraction(0.666)
# Get fraction that most closely resembles 0.666
# with denominator < 15
Fraction(0.666).limit_denominator(15)
rows = []
for phase in measured_phases:
frac = Fraction(phase).limit_denominator(15)
rows.append([phase, f"{frac.numerator}/{frac.denominator}", frac.denominator])
# Print as a table
headers=["Phase", "Fraction", "Guess for r"]
df = pd.DataFrame(rows, columns=headers)
print(df)
def a2jmodN(a, j, N):
"""Compute a^{2^j} (mod N) by repeated squaring"""
for i in range(j):
a = np.mod(a**2, N)
return a
a2jmodN(7, 2049, 53)
N = int(input("Enter an integer to check if it's prime or not: "))
np.random.seed(1) # This is to make sure we get reproducible results
a = np.random.randint(2, 15)
from math import gcd # greatest common divisor
# gcd(a, N)
def qpe_amod15(a):
n_count = 8
qc = QuantumCircuit(4+n_count, n_count)
for q in range(n_count):
qc.h(q) # Initialise counting qubits in state |+>
qc.x(3+n_count) # And auxiliary register in state |1>
for q in range(n_count): # Do controlled-U operations
qc.append(c_amod15(a, 2**q),
[q] + [i+n_count for i in range(4)])
qc.append(qft_dagger(n_count), range(n_count)) # Do inverse-QFT
qc.measure(range(n_count), range(n_count))
# Simulate Results
qasm_sim = Aer.get_backend('qasm_simulator')
# Setting memory=True below allows us to see a list of each sequential reading
t_qc = transpile(qc, qasm_sim)
    qobj = assemble(t_qc, shots=1)
result = qasm_sim.run(qobj, memory=True).result()
readings = result.get_memory()
print("Register Reading: " + readings[0])
phase = int(readings[0],2)/(2**n_count)
print("Corresponding Phase: %f" % phase)
return phase
phase = qpe_amod15(a) # Phase = s/r
Fraction(phase).limit_denominator(15) # Denominator should (hopefully!) tell us r
frac = Fraction(phase).limit_denominator(15)
s, r = frac.numerator, frac.denominator
guesses = [gcd(a**(r//2)-1, N), gcd(a**(r//2)+1, N)]
a = 7
factor_found = False
attempt = 0
if 1 not in guesses:
while not factor_found:
attempt += 1
print("\nAttempt %i:" % attempt)
phase = qpe_amod15(a) # Phase = s/r
frac = Fraction(phase).limit_denominator(N) # Denominator should (hopefully!) tell us r
r = frac.denominator
print("Result: r = %i" % r)
if phase != 0:
            # Guesses for factors are gcd(a^{r/2} ± 1, N)
guesses = [gcd(a**(r//2)-1, N), gcd(a**(r//2)+1, N)]
print("Guessed Factors: %i and %i" % (guesses[0], guesses[1]))
for guess in guesses:
if guess not in [1,N] and (N % guess) == 0: # Check to see if guess is a factor
print("*** Non-trivial factor found: %i ***" % guess)
factor_found = True
else:
print(N,"is a prime number")
```
| github_jupyter |
# Import library
```
import os, csv
import pandas as pd
from os import path
import plotly.graph_objs as go
from plotly.offline import plot, init_notebook_mode, iplot
%matplotlib inline
```
# Configure directory
```
userhome = os.path.expanduser('~')
txt_file = open(userhome + r"/DifferentDiffAlgorithms/SZZ/code_document/project_identity.txt", "r")
pid = txt_file.read().split('\n')
project = pid[0]
bugidentifier = pid[1]
proj = project.upper()
analyze_dir = userhome + r'/DifferentDiffAlgorithms/SZZ/projects_analyses/' + project + '/'
print ("Project name = %s" % project)
print ("Project key = %s" % bugidentifier)
```
# Load Dataset
```
mcolumns = ['bug_id','bugfix_commitID','parent_id','filepath','myers_#validbugline']
hcolumns = ['bug_id','bugfix_commitID','parent_id','filepath','histogram_#validbugline']
ds_myers = pd.read_csv(analyze_dir + "05_validation/02_validfiles/myers_valid_files.csv")
ds_myers = ds_myers[mcolumns]
ds_hist = pd.read_csv(analyze_dir + "05_validation/02_validfiles/histogram_valid_files.csv")
ds_hist = ds_hist[hcolumns]
ds_hist
ds_myers
```
# Merge the two datasets
```
data_merge = ds_hist.merge(ds_myers, on=['bug_id','bugfix_commitID','parent_id','filepath'], how='outer')
data_merge.fillna(0, inplace=True)
data_merge = data_merge.reset_index(drop=True)
data_merge
```
# Capture only different data
Number of valid files having different number of bug-related lines
```
df_diffcid = data_merge[data_merge.iloc[:,-2:].nunique(1).gt(1)]
#Save to CSV file
df_diffcid.to_csv(analyze_dir + '05_validation/02_validfiles/validfiles_with_different_numberofbugline.csv', index=False)
df_diffcid
```
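The selection above relies on a row-wise `nunique` trick: for each row, count the distinct values in the last two columns and keep the rows where that count exceeds one, i.e. where the two diff algorithms disagree. A minimal, self-contained illustration with hypothetical values:

```python
import pandas as pd

# Toy frame with hypothetical counts; only 'b.c' has differing line counts.
df = pd.DataFrame({
    'filepath': ['a.c', 'b.c', 'c.c'],
    'histogram_#validbugline': [3, 2, 5],
    'myers_#validbugline': [3, 4, 5],
})

# Keep rows whose last two columns hold more than one distinct value.
diff = df[df.iloc[:, -2:].nunique(axis=1).gt(1)]
print(diff['filepath'].tolist())  # ['b.c']
```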
Number of bug-fix commit ID having different number of bug-related lines
```
df_bugfix = data_merge.groupby('bugfix_commitID', as_index=False).agg({"histogram_#validbugline":"sum",
"myers_#validbugline":"sum"})
df_bugfix
df_diffbugfix = df_bugfix[df_bugfix.iloc[:,-2:].nunique(1).gt(1)]
#Save to CSV file
df_diffbugfix.to_csv(analyze_dir + '05_validation/03_validbugfixcommitid/validbugfixcid_with_different_numberofbugline.csv', index=False)
df_diffbugfix
```
# Counting the percentage and visualizing the result
```
percent = (len(df_diffbugfix)/len(df_bugfix))*100
rest = 100 - percent
print ("Project name: {}".format(proj))
print ("Number of valid bug-fix commit ids having different result: {}".format(len(df_diffbugfix)))
print ("Number of valid bug-fix commit ids having same result: {}".format(len(df_bugfix)-len(df_diffbugfix)))
print ("Total valid bug-fix commit ids: {}".format(len(df_bugfix)))
print ("The percentage of different valid bug-fix commit id: {0:.2f}%".format(percent))
labels = ['different number of valid bug-fix commit ids','same number of valid bug-fix commit ids']
values = [percent, rest]
colors = ['#E1396C','#96D38C']
trace = go.Pie(
labels=labels,
values=values,
hoverinfo='label+percent', textinfo='value',
textfont=dict(size=15),
marker=dict(colors=colors,
line=dict(color='#000000', width=2))
)
data = [trace]
layout = go.Layout(
title = "The percentage of valid bug-fix commit id based on the different number of diffs in " + proj + " Project"
)
init_notebook_mode(connected=True)
fig = go.Figure(data=data, layout=layout)
iplot(fig, show_link=False)
```
| github_jupyter |
```
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
#load operation
img = Image.open('mouse.png').convert('RGB')
np.shape(img)
# resize operation
img = img.resize((14, 14))
plt.imshow(img)
img.size
# open my output file (should be empty)
f = open('mouse_array.c', 'w')
# fill loop
for y in range(np.shape(img)[0]):
for x in range(np.shape(img)[1]):
r, g, b = np.array(img)[y, x, :]
# shift value for 16 bit color
data = ((r >> 3) << 11) + ((g >> 2) << 5) + (b >> 3)
#change it to hex
f.write('%s,' % hex(data))
#change line (can be ignored)
if(24 == x % 30):
f.write('\n')
f.write('\n')
f.close()
img = Image.open('min.png').convert('RGB')
np.shape(img)
img = img.resize((16, 16))
plt.imshow(img)
# open my output file (should be empty)
f = open('min_array.c', 'w')
# fill loop
for y in range(np.shape(img)[0]):
for x in range(np.shape(img)[1]):
r, g, b = np.array(img)[y, x, :]
# shift value for 16 bit color
data = ((r >> 3) << 11) + ((g >> 2) << 5) + (b >> 3)
#change it to hex
f.write('%s,' % hex(data))
#change line (can be ignored)
if(24 == x % 30):
f.write('\n')
f.write('\n')
f.close()
img = Image.open('terminal-icon.png').convert('RGB')
img = img.resize((14, 14))
plt.imshow(img)
f = open('terminal_array.c', 'w')
# fill loop
for y in range(np.shape(img)[0]):
for x in range(np.shape(img)[1]):
r, g, b = np.array(img)[y, x, :]
# shift value for 16 bit color
data = ((r >> 3) << 11) + ((g >> 2) << 5) + (b >> 3)
#change it to hex
f.write('%s,' % hex(data))
#change line (can be ignored)
if(24 == x % 30):
f.write('\n')
f.write('\n')
f.close()
r, g, b = 100,100,100
print("%x"%(((r >> 3) << 11) + ((g >> 2) << 5) + (b >> 3)))
img = Image.open('terminal_icon_ver2.png').convert('RGB')
img = img.resize((12, 12))
plt.imshow(img)
f = open('terminal_array_ver2.c', 'w')
# fill loop
for y in range(np.shape(img)[0]):
for x in range(np.shape(img)[1]):
r, g, b = np.array(img)[y, x, :]
# shift value for 16 bit color
data = ((r >> 3) << 11) + ((g >> 2) << 5) + (b >> 3)
#change it to hex
f.write('%s,' % hex(data))
#change line (can be ignored)
if(24 == x % 30):
f.write('\n')
f.write('\n')
f.close()
img = Image.open('pokemon.jpg').convert('RGB')
img = img.resize((720, 400))
plt.imshow(img)
f = open('pokemon_data.c', 'w')
# fill loop
for y in range(np.shape(img)[0]):
for x in range(np.shape(img)[1]):
r, g, b = np.array(img)[y, x, :]
# shift value for 16 bit color
data = ((r >> 3) << 11) + ((g >> 2) << 5) + (b >> 3)
#change it to hex
f.write('%s,' % hex(data))
#change line (can be ignored)
if(24 == x % 30):
f.write('\n')
f.write('\n')
f.close()
```
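The bit-shifting inside the loops above packs 24-bit RGB into 16-bit RGB565: the top 5 bits of red, the top 6 of green, and the top 5 of blue. A small standalone sketch of the same conversion:

```python
def rgb888_to_rgb565(r, g, b):
    """Pack 8-bit-per-channel RGB into a 16-bit RGB565 value."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

print(hex(rgb888_to_rgb565(255, 255, 255)))  # 0xffff (white)
print(hex(rgb888_to_rgb565(100, 100, 100)))  # 0x632c (matches the check above)
```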
| github_jupyter |
## Haensel AMS Homework
#### Paul Teehan
#### June 6, 2016
We are asked to solve the following task:
* Two listings of product-session pairs are provided; one for search results, and one for viewings.
* For each product that was viewed, find which three products are most often viewed or displayed in the same session.
I interpret this as follows. Suppose a product appears in $m$ sessions across both datasets. For each session, we should collect the list of other products that also appeared in that session, either viewed or displayed, so that we have a full set of products associated with that session. Then, across these $m$ sessions, we should identify the three products that appeared in the greatest number of them.
I approach this problem by constructing a similarity matrix where the value of each cell $(i,j)$ is the number of times product $i$ appeared in a session that also included product $j$. The matrix is square with each dimension equal to the total number of unique products that were either viewed or displayed. For example, consider a scenario with four products 'a', 'b', 'c', and 'd', across three sessions, as follows:
- Session 1: 'a', 'b', 'c'
- Session 2: 'b', 'c', 'd'
- Session 3: 'c', 'd'
The similarity matrix is 4 x 4 and has the following values:
    [1 1 1 0
     1 2 2 1
     1 2 3 2
     0 1 2 2]
The diagonal entries record the total number of times each product appears. The most frequently co-occurring products can be identified by reading off each row. For example, the third row corresponds to product 'c' and has entries [1, 2, 3, 2]. This indicates that across all sessions that included 'c', 'a' appeared once, 'b' twice, 'c' three times, and 'd' twice. To solve the problem, we need to construct the full similarity matrix, and then for each product, identify the three most frequently co-occurring products.
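The construction above can be sketched with an incidence matrix: one row per session, one column per product, so the similarity matrix is the product of its transpose with itself. A minimal NumPy version of the worked example:

```python
import numpy as np

products = ['a', 'b', 'c', 'd']
sessions = [['a', 'b', 'c'], ['b', 'c', 'd'], ['c', 'd']]
col = {p: i for i, p in enumerate(products)}

# Incidence matrix: inc[s, p] = 1 if product p appears in session s.
inc = np.zeros((len(sessions), len(products)), dtype=int)
for s, prods in enumerate(sessions):
    for p in prods:
        inc[s, col[p]] = 1

# sim[i, j] = number of sessions containing both product i and product j.
sim = inc.T @ inc
print(sim)
```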
I wrote a class to handle constructing and populating the matrix and returning the output in a useful form. Here is the example output for product 16:
```
import pickle
import product_similarity
from product_similarity import SimilarityTable
table = pickle.load(open('similarity_table.p', 'rb'))
table._export_product(16, max_rank=3)
```
In this case, product 16 appeared in seven sessions. The most similar product, 3014, appeared in four of those sessions. There were six products that tied for second with three appearances. Ties are common, especially for less common products with small numbers of appearances. Where ties result in more than three products occupying the top three ranks, I have reported the complete set including ties; whoever consumes the output can reduce to three products using the method of their choice. The full results for all products are in task3_output.csv.
As a final brief analysis, here are the ten most similar product pairs among all products that appeared at least ten times. For the reference product 171, five products each appeared in a remarkable 90 of its 91 sessions, indicating very high similarity.
```
import pandas as pd
output = pd.read_csv('task3_output.csv')
output[output['appearances'] > 10].sort_values('appearances_pct', ascending=False)[0:10]
```
| github_jupyter |
```
import cirq
import numpy as np
import tensorflow as tf
import tensorflow_quantum as tfq
import pandas as pd
from qite import QITE
from qbm import QBM
from circuit import build_ansatz, initialize_ansatz_symbols
from problem import build_ising_model_hamiltonian
from hamiltonian import Hamiltonian
from utils import evaluate_exact_state, plot_density_matrix_heatmap, get_ancillary_qubits, save_circuit_to_svg, circuit_to_state
from dataset import bars_and_stripes_probability, bars_and_stripes, samples_from_distribution, plot_dataset
```
## Define dataset
Define the dataset: bars and stripes on a 2×2 grid (simplified by treating the all-empty and all-filled patterns as invalid).
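For reference, the valid 2×2 patterns under this simplification are just the two single rows and two single columns. A small sketch enumerating them (an assumption about the pattern set the local `bars_and_stripes` helper samples from):

```python
import numpy as np

# Enumerate valid 2x2 bars-and-stripes patterns, excluding the all-empty
# and all-filled grids (treated as invalid above).
def valid_bas_2x2():
    patterns = set()
    for mask in range(4):
        rows = [i for i in range(2) if mask & (1 << i)]
        stripes = np.zeros((2, 2), dtype=int); stripes[rows, :] = 1
        bars = np.zeros((2, 2), dtype=int); bars[:, rows] = 1
        patterns.update({tuple(stripes.ravel()), tuple(bars.ravel())})
    patterns -= {(0, 0, 0, 0), (1, 1, 1, 1)}
    return sorted(patterns)

print(valid_bas_2x2())  # 4 valid patterns
```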
```
p_data = tf.convert_to_tensor(
bars_and_stripes_probability(bars_and_stripes(n=100000, no_fills_or_empties=True))
)
print(np.around(p_data.numpy(),2))
```
## Define Hamiltonian and qubits
Define the Hamiltonian whose coefficients are to be trained so that the thermal state's sampling probabilities are as close as possible to the data distribution.
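For reference, an Ising Hamiltonian of this type (without a transverse field, as `transverse=None` suggests) typically takes the form below; this is an assumption about what `build_ising_model_hamiltonian` constructs, with the coefficients $J_{ij}$ and $h_i$ being the trainable parameters:

```latex
H \;=\; \sum_{i<j} J_{ij}\, Z_i Z_j \;+\; \sum_i h_i\, Z_i
```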
```
ising_model, problem_qubits = build_ising_model_hamiltonian(
[4], transverse=None
)
ancillary_qubits = get_ancillary_qubits(problem_qubits)
qubits = [*problem_qubits, *ancillary_qubits]
initial_coefficients = tf.random.uniform(
[len(ising_model)], minval=-1, maxval=1
)
hamiltonian = Hamiltonian(
ising_model,
coefficients=tf.Variable(
tf.identity(initial_coefficients)
),
)
```
## Define the quantum circuit
Define the quantum circuit that is used for VarQITE and the preparation of the thermal state
```
n_layers = 3
circuit, symbol_names = build_ansatz(qubits, n_layers=n_layers)
initial_symbol_values = initialize_ansatz_symbols(
len(qubits),
n_layers=n_layers
)
```
## Setup VarQBM model
Set up a VarQBM model using the defaults, providing the circuit, the Hamiltonian, the circuit symbols and their initial values, and the number of time steps into which the VarQITE evolution is split as part of VarQBM (a higher value is generally more accurate but takes more time).
```
qbm = QBM(
circuit,
symbol_names,
initial_symbol_values,
hamiltonian,
n_timesteps=40,
verbose_qite=3,
)
```
## Run VarQBM training
Running the VarQBM training gives us the Hamiltonian with trained coefficients, the final trained state in density-matrix form, the trained circuit symbol values, and metrics from the training process. (This would likely need a redesign for actual hardware; instead of a density matrix, one could consider returning samples directly, if possible.)
```
# trained_hamiltonian, trained_state, trained_symbol_values, metrics = qbm.train(p_data, epochs=40)
```
(To avoid running for several hours, here's the result of one "successful" run)
```
trained_hamiltonian_coefficients = [
-0.13697385787963867,
-0.15982627868652344,
1.1702172756195068,
2.6191375255584717,
-0.29886317253112793,
-0.29584717750549316,
-0.07853007316589355,
-0.2576768398284912,
-0.2590906620025635,
-0.014436483383178711
]
trained_hamiltonian = Hamiltonian(ising_model, trained_hamiltonian_coefficients)
# One can prepare the thermal state of the trained Hamiltonian again by running VarQITE
#trained_symbol_values, state = qbm.run_qite(trained_hamiltonian, skip_metrics=True)
trained_symbol_values = [
-0.014642548747360706,
-3.441415117322322e-07,
-0.01728099398314953,
-8.662656000524294e-06,
-0.004501198884099722,
1.1648027793853544e-05,
0.2876802682876587,
-0.00012912784586660564,
-1.3544297627898771e-11,
-6.415218933852884e-08,
-1.2945443328415962e-11,
-3.1357657803710026e-08,
-1.0079138912377772e-11,
-7.583848571357521e-08,
-1.781818442792016e-11,
-2.060415482674216e-07,
-0.014686104841530323,
-9.479756045038812e-06,
-0.016323236748576164,
1.5828969480935484e-05,
0.041991300880908966,
-0.00016045317170210183,
0.07910387963056564,
-0.0002838200598489493,
-3.075627599824493e-11,
-3.1359231655869735e-08,
-1.2116604594658575e-11,
-7.584274896998977e-08,
-1.7822255823918276e-11,
-2.5283253535235417e-07,
-1.7819704045685114e-11,
-2.0604063877271983e-07,
-0.016164422035217285,
2.3528562451247126e-05,
0.009831184521317482,
-0.0001395646104356274,
-1.051080584526062,
-0.00016350357327610254,
0.015614356845617294,
-0.00030664922087453306,
2.5852771312617762e-11,
-1.4341814846829948e-07,
1.8851156746713116e-11,
-1.1220399187550356e-07,
1.2182884909228697e-11,
1.3185865554987686e-07,
1.2183080932981483e-11,
1.3185872660415043e-07,
1.578497290611267,
-8.13862470749882e-07,
1.565839171409607,
-7.825872307876125e-05,
1.509065866470337,
-0.00017533988284412771,
1.5611470937728882,
-4.3912779801758006e-05,
-5.514004183804211e-12,
3.741260456990858e-08,
3.3556379896992894e-11,
1.1650548259467541e-07,
4.171089515447868e-11,
-5.5527948461531196e-08,
4.2645778401684264e-12,
-5.552934823072064e-08,
0.999977171421051,
0.9999960064888,
0.999805212020874,
1.0,
1.0,
1.0,
1.0,
0.9999657273292542,
0.9997722506523132,
0.999398410320282,
1.0,
1.0,
1.0,
1.0,
0.9995574355125427,
0.9996995329856873,
0.999622642993927,
1.0,
1.0,
1.0,
1.0
]
```
## Pulling samples
Cirq and TFQ provide many different ways to get samples given the circuit, its symbols, and their values. One example, shown below, is easy but potentially relies on non-optimized features.
### TFQ State layer to get probability distribution
Use the ``circuit_to_state`` utility function, which internally uses TFQ's State layer to get the density matrix, from which the probability distribution can be easily acquired. Another (potentially more "realistic") way is to use Cirq's simulator to get samples from the problem qubits (i.e. tracing out the ancillary qubits).
```
state = circuit_to_state(
circuit,
symbol_names,
trained_symbol_values,
list(range(len(problem_qubits))),
)
p_model = np.diag(state.numpy().real)
```
Draw samples from probability distribution
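A minimal stand-in for the local `samples_from_distribution` helper (an assumption: it draws bitstrings with index probabilities given by the distribution vector) could look like:

```python
import numpy as np

def samples_from_distribution(p, n, seed=0):
    """Draw n bitstring samples from a probability vector p over 2^k basis states."""
    rng = np.random.default_rng(seed)
    n_bits = int(np.log2(len(p)))
    draws = rng.choice(len(p), size=n, p=np.asarray(p) / np.sum(p))
    return np.array([[(d >> b) & 1 for b in reversed(range(n_bits))] for d in draws])

# Distribution concentrated on |01> and |10>:
p_model = np.array([0.0, 0.5, 0.5, 0.0])
print(samples_from_distribution(p_model, n=5))
```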
```
plot_dataset(samples_from_distribution(p_model, n = 10), file_format="png", size = 2, save=False, sort=False)
plot_dataset(samples_from_distribution(p_model, n = 10), file_format="png", size = 2, save=False, sort=False)
```
As is visible, the model generates valid stripes, but it also produces quite a lot of samples that do not belong to the bars-and-stripes set. This clearly indicates that the model did not perform well enough, given that the dataset is rather simple.
| github_jupyter |
# Exercise: putting everything together
In this exercise you will write code for a model that learns to classify MNIST digits. You will use TensorFlow, tracking training progress with matplotlib.
For each sub-exercise, you have seen an example solution for it in one of the colabs leading up to this one.
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import random
import seaborn as sns
import numpy as np
import tensorflow as tf
import datetime
from matplotlib import pyplot as plt
from google.colab import files
from scipy.stats import multivariate_normal
from IPython.display import clear_output, Image, display, HTML
sns.set_style('ticks')
tf.reset_default_graph()
# Fetch the mnist data from tf.keras.datasets.mnist.
mnist_train, mnist_test = tf.keras.datasets.mnist.load_data()
# Check what the data is like:
print('Training dataset:')
train_input, train_label = mnist_train
print('* input shape:', train_input.shape)
print('* input min, mean, max:', train_input.min(), train_input.mean(), train_input.max())
print('* input dtype:', train_input.dtype)
print('* label shape:', train_label.shape)
print('* label min, mean, max:', train_label.min(), train_label.mean(), train_label.max())
print('* label dtype:', train_label.dtype)
test_input, test_label = mnist_test
print('Number of test examples:', test_input.shape[0])
```
Normalize the data into the \[0, 1\] interval. It's also a good idea to check the class distribution, but here we know that this is OK.
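One possible solution for the normalization step (a random stand-in array is used here in place of the uint8 MNIST arrays loaded above):

```python
import numpy as np

# Stand-in for the uint8 MNIST arrays; substitute train_input/test_input.
raw = np.random.randint(0, 256, size=(8, 28, 28)).astype(np.uint8)
labels = np.random.randint(0, 10, size=(8,)).astype(np.uint8)

# Scale pixel values into [0, 1] and cast to the required dtypes.
inputs = (raw / 255.0).astype(np.float32)
labels = labels.astype(np.int32)

print(inputs.dtype, labels.dtype, inputs.min() >= 0.0, inputs.max() <= 1.0)
```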
```
# Normalize both train_input and test_input so that it is in [0, 1].
#
# Also ensure the following data types:
#
# * train_input and test_input need to be np.float32.
# * the labels need to be converted to np.int32.
# We can visualize the first few training examples using matplotlib.imshow()
# in combination with the gallery function we defined.
#
# Copy the gallery function in this cell.
# Show the first 6 training images on a 1x6 grid.
# Remember to use grayscale plotting.
# Also print their corresponding labels in the same order.
# Write a function that turns the data into tensorflow datasets and into
# tensors corresponding to batches of examples, returning these tensors.
#
# The train data should be
#
# * shuffled across the full dataset
# * repeated indefinitely
# * batched at size 64.
#
# Simply batch the test data.
#
# IMPORTANT: Add a final (singleton) axis to the inputs; the conv nets that
# we will use will expect this.
# Create a function that returns a network with the following structure:
#
# 1. Conv2D with 16 filters, kernel shape 3, stride 1, padding 'SAME'
# 2. max pooling with window_shape [3, 3], strides [2, 2], padding 'SAME'
# 3. ReLU
# 4. Conv2D with 16 filters, kernel shape 3, stride 1, padding 'SAME'
# 5. Flatten the final conv features using snt.BatchFlatten
# 6. A Dense layer with output_size = 10, the number of classes.
#
# Make sure you use variable scoping to be able to share the underlying
# variables.
tf.reset_default_graph()
(train_inputs, train_labels), (test_inputs, test_labels) = get_tf_data()
# * Get the output of the network on the training data,
# * and the output of the *same* network with same weights on the test data.
# * Use the `tf.nn.sparse_softmax_cross_entropy_with_logits` op to define the loss
# * Define the train_op that minimizes the loss (averaged over the batch)
# using the `GradientDescentOptimizer`. Set the learning rate to 0.01.
# * Get the initialization op.
# Write a function that takes a list of losses and plots them.
# Run the training loop, keeping track of losses and potentially the accuracy
# on the training set. Plot the loss curve intermittently.
#
# The simplest solution would add a new plot with each plotting call. You
# can play with the frequency of plotting (and recording) a bit in order
# to find something that works.
#
# Based on the loss curves, decide how to set your total number of training
# iterations. Once you are satisfied, add some code that evaluates your
# prediction accuracy (not loss!) on the test set.
#
# Note that the outputs from the network are logits; for prediction accuracy
# we can pick the most likely label and see if it is correct.
# The accuracy (on the training set) you should expect:
#
# * Roughly 90% after 1000 training steps.
# * 96-97% after 8k training steps.
#
# First iterate with 1k steps, if that works, train for 8k. 8k steps will
# be roughly 8 minutes on CPU.
#
# The final test accuracy should also be ~96%.
```
| github_jupyter |
```
import cv2
import numpy as np
import math
import random
import time
import sys
import operator
import os
from numpy import zeros, newaxis
import re
import matplotlib.pyplot as plt
import glob
import skimage
import skimage.io
import scipy.io as scp
from sklearn.utils import shuffle
from __future__ import print_function
from util.Generate_pm_pa import *
from util.UAV_subfunctions import *
from util.Extract_Patch import *
from util.Detect_Patch import *
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.externals import joblib
import pandas
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
from keras.models import load_model
from keras.callbacks import ReduceLROnPlateau,EarlyStopping, ModelCheckpoint
videoPath = './Data/'
app_model_path = './models/max500_1_10_threelayers/'
app_model_path_track = './models/Appearance_OriImage/'
mvmodel_path = './models/motion/'
bimodel_path = './models/Adaboost/'
bimodel_path_track = './models/Adaboost_track/'
# default is 15, 2
trackwinS =15
video_savePath_features = './Experiment_Results/Final/Video/'
if not os.path.exists(video_savePath_features):
print ("path doesn't exist. trying to make")
os.makedirs(video_savePath_features)
video_savePath_Detection = './Experiment_Results/Final/txt/'
if not os.path.exists(video_savePath_Detection):
print ("path doesn't exist. trying to make")
os.makedirs(video_savePath_Detection)
par = []
par.append([0.001,40])
index= list(range(50))
print(index)
from sklearn.utils import shuffle
index = shuffle(index, random_state=42)
print("complete shuffling")
print(index)
maxD=4
for ind in range(1,2):
appmodel=load_model(app_model_path+str(ind)+'.h5')
appmodel.summary()
appmodel_track=load_model(app_model_path_track+str(ind)+'.h5')
appmodel_track.summary()
mvmodel = load_model(mvmodel_path+str(ind)+'.h5')
mvmodel.summary()
combinemodel = joblib.load(bimodel_path+'fold'+str(ind)+'.pkl')
combinemodel_track = joblib.load(bimodel_path_track+'fold'+str(ind)+'.pkl')
a = 0.001
b = 50
test_ind = index[10*(ind-1):10*(ind-1)+10]
objNo = 0
dtNo = 0
htNo = 0
allFA = 0
for i in test_ind:
all_params = dict(videoName = str(i+1),
downrightx = 350,
upleftx = 0,
downrighty = 780,
uplefty = 510,
fileName = 'supervised_SVM_'+str(i),
debug = 1,
qualityini = 0.005,#np.float32(sys.argv[4]),#1.5#,
K = 1,# number of previous frames before Xt-1
MaxCorns = 600,#np.int16(sys.argv[5]),#200,#600/(resizeFactor*resizeFactor),
mindist1 = 25,#np.int16(sys.argv[6]),#15,
quality = a,#np.float32(sys.argv[7]),#0.001 #,
maxcorners = 1000,#np.int16(sys.argv[8]),#100,#/(resizeFactor*resizeFactor),
mindist = b,#15,#np.int16(sys.argv[9]),#1,
use_ransac = True,
track_len = 10,# track_len: maximum number of points recorded in the track
lamda = 0,#taking average if bidirectional error
detect_interval = 6,
)
print ('Video:',all_params['videoName'])
videoName = all_params['videoName']
uplefty = all_params['uplefty']
downrighty = all_params['downrighty']
upleftx = all_params['upleftx']
downrightx = all_params['downrightx']
fileName = all_params['fileName']
debug = all_params['debug']
K = all_params['K']
qualityini = all_params['qualityini']
MaxCorns = all_params['MaxCorns']
mindist1 = all_params['mindist1']
use_ransac = all_params['use_ransac']
track_len = all_params['track_len']
lamda = all_params['lamda']
quality = all_params['quality']
maxcorners = all_params['maxcorners']
mindist = all_params['mindist']
detect_interval = all_params['detect_interval']
## parameter for feature detection(original image) and tracking[for background subtraction]
feature_params = dict( maxCorners = MaxCorns,
qualityLevel = qualityini,
minDistance = mindist1,
blockSize = 5 )
## parameter for feature tracking(original image)
lk_params = dict( winSize = (19, 19),
maxLevel = 2,
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 20, 0.03))
## parameter for feature detection(error image) and tracking[for feature extraction and tracking]
print (maxcorners, quality, mindist)
feature_params_track = dict( maxCorners = 500,
qualityLevel = quality/20.0,
minDistance = mindist,
blockSize = 9 )
feature_params_track_oriImg = feature_params_track
lk_params_track = dict( winSize = (19, 19),
maxLevel = 2,
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.03),
minEigThreshold=1e-4)
lk_params_track_ori = dict( winSize = (25, 25),
maxLevel = 3,
flags = cv2.OPTFLOW_USE_INITIAL_FLOW,
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.03),
minEigThreshold=1e-4)
feature_params_Detect = dict( maxCorners = 10,
qualityLevel = 0.00000015,
minDistance = 0,
blockSize = 3 )
#cam_gt=cv2.VideoCapture(videoPath+ 'shor_clip_gtVideo/uav_Video_'+videoName+'_gt.mov')
cam=cv2.VideoCapture(videoPath+ 'Videos/Clip_'+videoName+'.mov')
gt_text = open(videoPath+ 'Annotation_update_180925/Video_'+videoName+'_gt.txt',"r")
f_txt = open(video_savePath_Detection+ videoName+'_dt.txt','w')
# read in one frame in order to obtain the video size
frameidx = 1
color=cam.read()[1]
color_gt=color.copy()#cam_gt.read()[1]
prepreFrame = np.float32(cv2.cvtColor(color, cv2.COLOR_RGB2GRAY))
h,w,channel = color.shape
groundtruth = gt_text.readline()
outputFeature = "time_layer: "+ str(frameidx)+" detections: "
f_txt.write(outputFeature+"\n")
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
FPS = cam.get(cv2.CAP_PROP_FPS)
video_PosPatch = cv2.VideoWriter(video_savePath_features+videoName+'.mov', fourcc,FPS,(w,h))
# initialize feature points
pImg = None
#initialize H_back
H_back = None
# read in Xt-1
color=cam.read()[1]
color_gt =color.copy()#cam_gt.read()[1]
groundtruth = gt_text.readline()
frameidx+=1
outputFeature = "time_layer: "+ str(frameidx)+" detections: "
#f_txt.write(outputFeature+"\n")
Xtminus1 = np.float32(cv2.cvtColor(color, cv2.COLOR_RGB2GRAY))
# blocks is 1 except for the pitot tube region(0)
blocks = np.ones((h,w), dtype='float32')
#blocks[uplefty:downrighty,upleftx:downrightx ] = 0
# parameter for groundtruth dilation
dt_d = 4
radius = 10
Dotft = []
Patchft=[]
maxPatchId = 0
while True:
#print(frameidx)
##############Start Detection Part############
######Background Subtraction #################
gray = Xtminus1.copy()
# read in current frame Xt
future_color = cam.read()[1]
if future_color is None:
frameidx+=1
outputFeature = "time_layer: "+ str(frameidx)+" detections: "
f_txt.write(outputFeature)
break
frameidx+=1
Xt = np.float32(cv2.cvtColor(future_color, cv2.COLOR_RGB2GRAY))
## generate groundtruth mask
gt_split = groundtruth.split()
length = len(gt_split)
gt_index = 3
gt_mask = np.zeros_like(Xt)
gt_ft_maske = np.zeros_like(Xt)
bbox_index = 0
centers = []
color_gt = color.copy()
while gt_index< length:
bbox_index+=1
uplefty = gt_split[gt_index]
uplefty = int(uplefty[1:-1])
upleftx = gt_split[gt_index+1]
upleftx = int(upleftx[0:-1])
downrighty = gt_split[gt_index+2]
downrighty = int(downrighty[0:-1])
downrightx = gt_split[gt_index+3]
downrightx = int(downrightx[0:-2])
downrighty = np.min([downrighty+dt_d, h-1])
downrightx = np.min([downrightx+dt_d, w-1])
uplefty = np.max([uplefty-dt_d,0])
upleftx = np.max([upleftx-dt_d,0])
cv2.rectangle(color_gt,(np.int16(upleftx),np.int16(uplefty)),(np.int16(downrightx),np.int16(downrighty)), (255,0,0),1)
gt_mask[uplefty:downrighty, upleftx:downrightx] = bbox_index#255
gt_ft_maske[uplefty:downrighty, upleftx:downrightx] = 255
centers.append([(upleftx+downrightx)/2,(uplefty+downrighty)/2])
gt_index += 4
oriImage = color_gt.copy()
oriImage_1 = color_gt.copy()
# extract feature points for previous frame gray = Xt-1. By using maskOut function, only keep features outside the pitot tube area
if pImg is None or frameidx % track_len == 0:
pImg = cv2.goodFeaturesToTrack(np.uint8(gray), **feature_params)
pImg = maskOut(blocks, pImg)
# compute onedirectional error Et-1 using backward transform to save computational time
if (frameidx) % detect_interval == 0:
weightedError,H_back,pImg = backgroundsubtraction(gray,prepreFrame, Xt,pImg,blocks,lamda, lk_params,use_ransac)
else:
H_back,pImg = backgroundMotion(gray,prepreFrame, Xt,pImg,blocks,lamda, lk_params, use_ransac)
###########################Part I.b Feature Extraction on Background Subtracted Image for Every Other 20 Frames###############################
#print('Start:', len(Dotft))
#start_time = time.time()
if len(Dotft)>0:
#print(frameidx, 'previous:', len(Dotft))
d1, d, p1, pPers, p0, st1, ft_mv, ft_app, gt_labels = generatePatches_MV_trackV1(Dotft, gray, Xt, H_back, lk_params_track_ori, radius,w, h, color, gt_ft_maske)
#print(frameidx, 'after:', len(Dotft))
score_mv = mvmodel.predict(ft_mv, batch_size = 1000000)
#print("--- %s seconds(deeplearning_motion) ---" % (time.time() - start_time))
#start_time1 = time.time()
score_app = appmodel_track.predict(ft_app,batch_size= 2560)
#print("--- %s seconds(deeplearning_app) ---" % (time.time() - start_time1))
#start_time1 = time.time()
bifeature = np.hstack([score_app[:,0].reshape(-1,1),score_mv[:,0].reshape(-1,1)])
trst = combinemodel_track.predict(bifeature)
#print("--- %s seconds(adaboost) ---" % (time.time() - start_time1))
#start_time1 = time.time()
#Dotft, indrm = prunddt(Dotft, Patchft)
Dotft,indrm = dotupdate(Dotft, Patchft)
#print("--- %s seconds(dotupdate) ---" % (time.time() - start_time1))
oriImage = visDotft(oriImage, Dotft,w, h)
#start_time1 = time.time()
Dotft = dottrack_detect(Dotft, p1[indrm], pPers[indrm], trst[indrm], st1[indrm], d1[indrm], d[indrm], Patchft)#updatetr(p1, st1)
#Dotft = dottrack_detect(Dotft, p1, pPers, trst, st1, d1, d, Patchft)#updatetr(p1, st1)
oriImage = visPtV1(oriImage, p0, st1, d1)
#print("--- %s seconds(dottrack_detect) ---" % (time.time() - start_time1))
#start_time1 = time.time()
#print("--- %s seconds(visPtV1) ---" % (time.time() - start_time1))
#print("--- %s seconds(updateDot) ---" % (time.time() - start_time))
#print('Midd1:', len(Dotft))
#start_time = time.time()
if len(Patchft)>0:
#print('hahha')
#print('before:', len(Patchft))
oriImage = visDetect_Kalman(Patchft, oriImage, radius, w, h)
#print("--- %s seconds(visDetect_Kalman) ---" % (time.time() - start_time))
#start_time1 = time.time()
outputFeature = writeDetect(outputFeature, radius, Patchft, w, h)
#print("--- %s seconds(writeDetect) ---" % (time.time() - start_time1))
#start_time2 = time.time()
Patchft = patch_KalmanTracking(Dotft, Patchft, H_back, w, h)
#print("--- %s seconds(patch_KalmanTracking) ---" % (time.time() - start_time2))
#print('after:', len(Patchft))
#print('Midd2:', len(Patchft))
#print("--- %s seconds(updatePatch) ---" % (time.time() - start_time))
#start_time = time.time()
if (frameidx) % detect_interval == 0:
#p_pos, p_pos_errImg, p_neg, p_neg_errImg, p_pos_gt, p_pos_gt_errImg, hit,ftNo,FAno, vis_points = generatePatches(frameidx, gray, Xt, weightedError, centers, H_back, feature_params_track, feature_params_track_oriImg, lk_params_track, radius, color, future_color, gt_mask, oriImage)
mv, detectedPatches, errorPatches, gt_labels, detectedLocs, curLocslll, hit, ftNo, FAno = Extract_Patch(frameidx, gray, Xt, weightedError, centers, H_back, feature_params_track, feature_params_track_oriImg, lk_params_track, radius, color, future_color, gt_mask, oriImage)
#errorPatches, detectedPatches, detectedLocs, Locs_next, Locs_pers, d_match = generatePatches_online(gray, Xt, weightedError, feature_params_track, color, radius,lk_params_track,H_back)
if mv.shape[0]>0:
errorPatches = errorPatches[:,:,:, newaxis]
mv = np.hstack([mv[:,4:6],mv[:,10:]])
#print(detectedPatches.shape, errorPatches.shape,errorPatche_.shape)
data_np_test = np.concatenate([detectedPatches/255.0 ,errorPatches/255.0,errorPatches/255.0,errorPatches/255.0], axis=3)#errorPatches/255.0
test_output_app = appmodel.predict(data_np_test,batch_size= 2560)
#pred_y = np.argmax(test_output, 1)
test_output_mv = mvmodel.predict(mv, batch_size = 1000000)
mvmafeature = np.hstack([test_output_app[:,0].reshape(-1,1),test_output_mv[:,0].reshape(-1,1)])
dt_lable = combinemodel.predict(mvmafeature)
oriImage = visPosPatch_Kalman(dt_lable, gt_labels, detectedLocs, oriImage, radius)
#print('frameidx', frameidx)
oriImage, Dotft, Patchft, maxPatchId = DetectOnX_V2(maxD, maxPatchId, oriImage, gray, Xt, lk_params_track_ori, H_back, detectedLocs, curLocslll, dt_lable, detectedPatches,feature_params_Detect, radius, Dotft, Patchft)
#print("--- %s seconds(newDetection) ---" % (time.time() - start_time))
#strat_time = time.time()
draw_str(oriImage, 20, 60, 'frame ID: %d' % (frameidx-1))
video_PosPatch.write(oriImage)
prepreFrame = Xtminus1.copy()
color = future_color.copy()
Xtminus1 = Xt.copy()
future_color_gt = future_color.copy()#cam_gt.read()[1]
f_txt.write(outputFeature+"\n")
groundtruth = gt_text.readline()
outputFeature = "time_layer: "+ str(frameidx)+" detections: "
#print("--- %s seconds(writeout Results) ---" % (time.time() - start_time))
video_PosPatch.release()
f_txt.close()
```
```
%matplotlib inline
import sys
import numpy as np
import numpy.random as rnd
import time
import GPflow
import tensorflow as tf
import matplotlib
import matplotlib.pyplot as plt
plt.style.use('ggplot')
M = 50
```
# Create a dataset and initialise model
```
def func(x):
return np.sin(x * 3*3.14) + 0.3*np.cos(x * 9*3.14) + 0.5 * np.sin(x * 7*3.14)
X = rnd.rand(10000, 1) * 2 - 1
Y = func(X) + rnd.randn(10000, 1) * 0.2
plt.plot(X, Y, 'x')
D = X.shape[1]
Xt = np.linspace(-1.1, 1.1, 100)[:, None]
Yt = func(Xt)
def init():
kern = GPflow.kernels.RBF(D, 1)
Z = X[:M, :].copy()
m = GPflow.svgp.SVGP(X, Y, kern, GPflow.likelihoods.Gaussian(), Z, minibatch_size=len(X))
return m
m = init()
```
# Stochastically calculate bound and show noise
The minibatch estimate should be an unbiased estimator of the `ground_truth`. Here we show a histogram of the values from repeated evaluations, together with their mean and the ground truth. The small difference between the mean of the minibatch estimates and the ground truth shows that the minibatch estimator is working as expected.
```
ground_truth = m.compute_log_likelihood()
m.X.minibatch_size = 100
m.Y.minibatch_size = 100
evals = [m.compute_log_likelihood() for _ in range(100)]
plt.hist(evals)
plt.axvline(ground_truth)
```
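The same unbiasedness argument can be checked outside GPflow with plain NumPy. This sketch (not part of the original notebook) treats a fixed vector as the per-datapoint terms of an objective and rescales each minibatch sum by `N / B`, which is what makes the estimator unbiased; the array sizes and seed here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, B = 10000, 100            # dataset size, minibatch size
terms = rng.normal(size=N)   # stand-in for per-datapoint objective terms

ground_truth = terms.sum()

# A minibatch estimate rescales the minibatch sum by N / B, so that
# E[estimate] = ground_truth regardless of the minibatch size.
estimates = [terms[rng.choice(N, B, replace=False)].sum() * N / B
             for _ in range(2000)]

print(ground_truth, np.mean(estimates))  # the two should be close
```

Smaller minibatches keep the estimator unbiased but increase its variance, which is exactly the trade-off plotted in the next section.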
# Show that minibatches speed up computation
The point of using minibatches is that it decreases the time needed to make an optimisation step, since estimating the objective is cheaper. Here we plot how the time required changes with the size of the minibatch. We see that smaller minibatches result in a cheaper estimate of the objective.
```
mbps = np.logspace(-2, 0, 10)
times = []
objs = []
for mbp in mbps:
m.X.minibatch_size = m.Y.minibatch_size = int(len(X) * mbp)
start_time = time.time()
objs.append([m.compute_log_likelihood() for _ in range(20)])
# plt.hist(objs, bins = 100)
# plt.axvline(ground_truth, color='r')
times.append(time.time() - start_time)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 6))
ax1.plot(mbps, times, 'x-')
ax1.set_xlabel("Minibatch proportion")
ax1.set_ylabel("Time taken")
ax2.plot(mbps, np.array(objs), 'kx')
ax2.set_xlabel("Minibatch proportion")
ax2.set_ylabel("ELBO estimates")
```
# Show actual stochastic optimization
```
def plot():
pX = np.linspace(-1, 1, 100)[:, None]
pY, pYv = m.predict_y(pX)
plt.plot(X, Y, 'x')
line, = plt.plot(pX, pY, lw=1.5)
col = line.get_color()
plt.plot(pX, pY+2*pYv**0.5, col, lw=1.5)
plt.plot(pX, pY-2*pYv**0.5, col, lw=1.5)
plt.plot(m.Z.value, np.zeros(m.Z.value.shape), 'k|', mew=2)
plot()
plt.title("Predictions before training")
st = time.time()
logt = []
logx = []
logf = []
def logger(x):
if (logger.i % 10) == 0:
logx.append(x)
logf.append(m._objective(x)[0])
logt.append(time.time() - st)
logger.i+=1
logger.i = 1
m.X.minibatch_size = 100
m.Y.minibatch_size = 100
m.Z.fixed = True
m.optimize(method=tf.train.AdamOptimizer(), max_iters=np.inf, callback=logger)
plt.plot(-np.array(logf))
plt.xlabel('iteration')
plt.ylabel('ELBO')
plot()
plt.title("Predictions after training")
```
# Train and deploy on Kubeflow from Notebooks
This notebook introduces you to using Kubeflow Fairing to train and deploy a model to Kubeflow on Google Kubernetes Engine (GKE) and Google Cloud ML Engine. This notebook demonstrates how to:
* Train an XGBoost model in a local notebook,
* Use Kubeflow Fairing to train an XGBoost model remotely on Kubeflow,
  * Data is read from a PVC
  * The append builder is used to rapidly build a docker image
* Use Kubeflow Fairing to deploy a trained model to Kubeflow, and
* Call the deployed endpoint for predictions.
To learn more about how to run this notebook locally, see the guide to [training and deploying on GCP from a local notebook][gcp-local-notebook].
[gcp-local-notebook]: https://kubeflow.org/docs/fairing/gcp-local-notebook/
## Set up your notebook for training an XGBoost model
Import the libraries required to train this model.
```
import demo_util
from pathlib import Path
import os
fairing_code = os.path.join(Path.home(), "git_jlewi-kubecon-demo", "fairing")
demo_util.notebook_setup(fairing_code)
# fairing:include-cell
import ames
import fire
import joblib
import logging
import nbconvert
import os
import pathlib
import sys
from pathlib import Path
import pandas as pd
import pprint
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.impute import SimpleImputer
from xgboost import XGBRegressor
from importlib import reload
# Imports not to be included in the built docker image
import kfp
import kfp.components as comp
import kfp.gcp as gcp
import kfp.dsl as dsl
import kfp.compiler as compiler
from kubernetes import client as k8s_client
import fairing
from fairing.builders import append
from fairing.deployers import job
import fairing_util
```
Define various constants
```
nfs_path = os.path.join("/mnt/kubeflow-gcfs/data/ames_dataset")
model_dir = os.path.join("/mnt/kubeflow-gcfs/models")
train_data = "/mnt/kubeflow-gcfs/data/ames_dataset/train.csv"
model_file = os.path.join(model_dir, "trained_ames_model.dat")
# Base image is built from the Dockerfile in the repo
# Can be the same image as your notebook
base_image = "gcr.io/code-search-demo/kubecon-demo/notebook:v20190518-2d04328-dirty-a8c2a5"
# Copy data to nfs
demo_util.copy_data_to_nfs(nfs_path, model_dir)
```
## Define Train and Predict functions
```
# fairing:include-cell
class HousingServe(object):
def __init__(self, model_file=None):
self.n_estimators = 50
self.learning_rate = 0.1
if not model_file:
print("model_file not supplied; checking environment variable")
model_file = os.getenv("MODEL_FILE")
self.model_file = model_file
print("model_file={0}".format(self.model_file))
self.model = None
def train(self, train_input, model_file):
(train_X, train_y), (test_X, test_y) = ames.read_input(train_input)
model = ames.train_model(train_X,
train_y,
test_X,
test_y,
self.n_estimators,
self.learning_rate)
ames.eval_model(model, test_X, test_y)
ames.save_model(model, model_file)
def predict(self, X, feature_names):
"""Predict using the model for given ndarray."""
if not self.model:
print("Loading model {0}".format(self.model_file))
self.model = ames.load_model(self.model_file)
# Do any preprocessing
prediction = self.model.predict(data=X)
# Do any postprocessing
return [[prediction.item(0), prediction.item(1)]]
def create_pr_to_update_model(self, job_spec_file, new_model):
ames.create_pr_to_update_model(job_spec_file, new_model)
def deploy_model(self, model_file):
ames.deploy_model(model_file)
def validate_model(self, endpoint):
ames.validate_model(endpoint)
```
## Train your Model Locally
* Train your model locally inside your notebook
```
local_model_file = "/tmp/trained_model.dat"
housing = HousingServe(local_model_file)
housing.train(train_data, local_model_file)
```
## Predict locally
* Run prediction inside the notebook using the newly trained model
```
(train_X, train_y), (test_X, test_y) = ames.read_input("ames_dataset/train.csv")
housing.predict(test_X, None)
```
## Use Fairing to Launch a K8s Job to train your model
### Set up Kubeflow Fairing for training and predictions
Import the `fairing` library and configure the environment that your training or prediction job will run in.
```
# Setting up google container repositories (GCR) for storing output containers
# You can use any docker container registry instead of GCR
GCP_PROJECT = fairing.cloud.gcp.guess_project_name()
DOCKER_REGISTRY = 'gcr.io/{}/fairing-job'.format(GCP_PROJECT)
PY_VERSION = ".".join([str(x) for x in sys.version_info[0:3]])
BASE_IMAGE = 'python:{}'.format(PY_VERSION)
```
## Use fairing to build the docker image
* This uses the append builder to rapidly build docker images
```
preprocessor = fairing_util.ConvertNotebookPreprocessorWithFire("HousingServe")
if not preprocessor.input_files:
preprocessor.input_files = set()
input_files=["ames.py", "deployment/update_model_job.yaml", "update_model.py"]
preprocessor.input_files = set([os.path.normpath(f) for f in input_files])
preprocessor.preprocess()
builder = append.append.AppendBuilder(registry=DOCKER_REGISTRY,
base_image=base_image, preprocessor=preprocessor)
builder.build()
```
## Launch the K8s Job
* Use pod mutators to attach a PVC and credentials to the pod
```
pod_spec = builder.generate_pod_spec()
pvc_mutator = fairing_util.add_pvc_mutator("kubeflow-gcfs", "/mnt/kubeflow-gcfs")
train_deployer = job.job.Job(namespace="kubeflow",
cleanup=False,
pod_spec_mutators=[
fairing.cloud.gcp.add_gcp_credentials_if_exists, pvc_mutator])
# Add command line arguments
pod_spec.containers[0].command.extend(["train", train_data, model_file])
result = train_deployer.deploy(pod_spec)
!kubectl get jobs -l fairing-id={train_deployer.job_id} -o yaml
```
## Deploy the trained model to Kubeflow for predictions
```
from fairing.deployers import serving
import fairing_util
pod_spec = builder.generate_pod_spec()
pvc_mutator = fairing_util.add_pvc_mutator("kubeflow-gcfs", "/mnt/kubeflow-gcfs")
module_name = os.path.splitext(preprocessor.executable.name)[0]
deployer = serving.serving.Serving(module_name + ".HousingServe",
service_type="ClusterIP",
labels={"app": "ames"})
pvc_mutator(None, pod_spec, deployer.namespace)
pod_spec.containers[0].env.append({"name": "MODEL_FILE", "value": model_file})
url = deployer.deploy(pod_spec)
!kubectl get deploy -o yaml {deployer.deployment.metadata.name}
```
## Call the prediction endpoint
Create a test dataset, then call the endpoint on Kubeflow for predictions.
```
(train_X, train_y), (test_X, test_y) = ames.read_input("ames_dataset/train.csv")
full_url = url + ":5000/predict"
result = fairing_util.predict_nparray(full_url, test_X)
pprint.pprint(result.content)
```
## Clean up the prediction endpoint
Delete the prediction endpoint created by this notebook.
```
# !kubectl delete service -l app=ames
# !kubectl delete deploy -l app=ames
```
## Build a simple 1 step pipeline
```
EXPERIMENT_NAME = 'Ames'
```
#### Define the pipeline
The pipeline function has to be decorated with the `@dsl.pipeline` decorator.
```
@dsl.pipeline(
name='Training pipeline',
description='A pipeline that trains an xgboost model for the Ames dataset.'
)
def train_pipeline(
train_data="gs://code-search-demo_ames/data/ames_dataset/train.csv",
model_file="gs://code-search-demo_ames/output/hello-world1.txt",
):
command=["python", preprocessor.executable.name, "train", train_data, model_file]
train_op = dsl.ContainerOp(
name="train",
image=builder.image_tag,
command=command,
).apply(
gcp.use_gcp_secret('user-gcp-sa'),
)
train_op.container.working_dir = "/app"
```
#### Compile the pipeline
```
pipeline_func = train_pipeline
pipeline_filename = pipeline_func.__name__ + '.pipeline.zip'
compiler.Compiler().compile(pipeline_func, pipeline_filename)
```
#### Submit the pipeline for execution
```
#Specify pipeline argument values
arguments = {"train_data": "gs://code-search-demo_ames/data/ames_dataset/train.csv",
"model_file": "gs://code-search-demo_ames/output/hello-world1.txt"}
# Get or create an experiment and submit a pipeline run
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
# The link in the cell output below leads to the run information page. (Note: There is a bug in JupyterLab that modifies the URL and makes the link stop working)
```
## Define a pipeline for CI/CD
* Define a pipeline that trains the model
* Then deploy the model and verify that it works
* If the model is good, we create a PR updating the model in the deployment
```
CICD_EXPERIMENT_NAME = 'Ames CICD'
@dsl.pipeline(
name='Ames CICD pipeline',
description='A pipeline that trains an xgboost model for the Ames dataset and updates it.'
)
def cicd_pipeline(
train_data="gs://code-search-demo_ames/data/ames_dataset/train.csv",
model_file="gs://code-search-demo_ames/output/default.txt",
):
command=["python", preprocessor.executable.name, "train", train_data, model_file]
train_op = dsl.ContainerOp(
name="train",
image=builder.image_tag,
command=command,
).apply(
gcp.use_gcp_secret('user-gcp-sa'),
)
train_op.container.working_dir = "/app"
command=["python3", preprocessor.executable.name, "deploy-model",
model_file]
deploy_op = dsl.ContainerOp(
name="deploy-model",
image=builder.image_tag,
command=command,
)
deploy_op.container.working_dir = "/app"
deploy_op.after(train_op)
command=["python3", preprocessor.executable.name, "validate-model",
model_file]
validate_op = dsl.ContainerOp(
name="validate-model",
image=builder.image_tag,
command=command,
)
validate_op.container.working_dir = "/app"
validate_op.after(deploy_op)
command=["python3", preprocessor.executable.name, "create-pr-to-update-model",
"deployment/update_model_job.yaml", model_file]
pr_op = dsl.ContainerOp(
name="create-pr",
image=builder.image_tag,
command=command,
)
pr_op.container.working_dir = "/app"
pr_op.after(validate_op)
pipeline_func = cicd_pipeline
pipeline_filename = pipeline_func.__name__ + '.pipeline.zip'
compiler.Compiler().compile(pipeline_func, pipeline_filename)
import datetime
gcs_model_file = "gs://code-search-demo_ames/models/" + datetime.datetime.now().strftime("%y%m%d_%H%M%S")
#Specify pipeline argument values
arguments = {"train_data": "gs://code-search-demo_ames/data/ames_dataset/train.csv",
"model_file": gcs_model_file}
# Get or create an experiment and submit a pipeline run
client = kfp.Client()
experiment = client.create_experiment(CICD_EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)
```
# Run fitsverify
```
import os
import sys
import re
import shutil
import subprocess as sp
from configparser import ConfigParser
from random import choice
specprod = 'everest'
specprod_path = os.path.join(os.environ['DESI_SPECTRO_REDUX'], specprod)
```
## Create input file
```
fits_files = os.path.join(os.environ['CSCRATCH'], f'{specprod}_fits.txt')
if not os.path.exists(fits_files):
# os.chdir(specprod_path)
with open(fits_files, 'w') as out:
command = ['find', 'calibnight', 'exposures', 'healpix', 'preproc', 'tiles', 'zcatalog', '-type', 'f', '-name', '*.fits']
proc = sp.Popen(command, stdout=out, stderr=sp.DEVNULL, cwd=specprod_path)
status = proc.wait()
```
## List of Regular Expressions
```
parser = ConfigParser()
parser.read_string("""
[top]
exposures = exposures-everest\.fits;exposures-SPECPROD.rst
[calibnight]
fiberflatnight = calibnight/[0-9]{8}/fiberflatnight-[brz][0-9]-[0-9]{8}\.fits;calibnight/NIGHT/fiberflatnight-CAMERA-NIGHT.rst
psfnight = calibnight/[0-9]{8}/psfnight-[brz][0-9]-[0-9]{8}\.fits;calibnight/NIGHT/psfnight-CAMERA-NIGHT.rst
[exposures]
cframe = exposures/[0-9]{8}/[0-9]{8}/cframe-[brz][0-9]-[0-9]{8}\.fits;exposures/NIGHT/EXPID/cframe-CAMERA-EXPID.rst
exposure-qa = exposures/[0-9]{8}/[0-9]{8}/exposure-qa-[0-9]{8}\.fits;exposures/NIGHT/EXPID/exposure-qa-EXPID.rst
fiberflat = exposures/[0-9]{8}/[0-9]{8}/fiberflat-[brz][0-9]-[0-9]{8}\.fits;exposures/NIGHT/EXPID/fiberflat-CAMERA-EXPID.rst
fit-psf = exposures/[0-9]{8}/[0-9]{8}/fit-psf-[brz][0-9]-[0-9]{8}\.fits;exposures/NIGHT/EXPID/fit-psf-CAMERA-EXPID.rst
fit-psf-before-blacklisted = exposures/[0-9]{8}/[0-9]{8}/fit-psf-before-blacklisted-[brz][0-9]-[0-9]{8}\.fits;exposures/NIGHT/EXPID/fit-psf-before-blacklisted-CAMERA-EXPID.rst
fit-psf-before-blacklisted-fix = exposures/[0-9]{8}/[0-9]{8}/fit-psf-before-blacklisted-fix-[brz][0-9]-[0-9]{8}\.fits;exposures/NIGHT/EXPID/fit-psf-before-blacklisted-fix-CAMERA-EXPID.rst
fit-psf-fixed-blacklisted = exposures/[0-9]{8}/[0-9]{8}/fit-psf-fixed-blacklisted-[brz][0-9]-[0-9]{8}\.fits;exposures/NIGHT/EXPID/fit-psf-fixed-blacklisted-CAMERA-EXPID.rst
fluxcalib = exposures/[0-9]{8}/[0-9]{8}/fluxcalib-[brz][0-9]-[0-9]{8}\.fits;exposures/NIGHT/EXPID/fluxcalib-CAMERA-EXPID.rst
frame = exposures/[0-9]{8}/[0-9]{8}/frame-[brz][0-9]-[0-9]{8}\.fits;exposures/NIGHT/EXPID/frame-CAMERA-EXPID.rst
psf = exposures/[0-9]{8}/[0-9]{8}/psf-[brz][0-9]-[0-9]{8}\.fits;exposures/NIGHT/EXPID/psf-CAMERA-EXPID.rst
sframe = exposures/[0-9]{8}/[0-9]{8}/sframe-[brz][0-9]-[0-9]{8}\.fits;exposures/NIGHT/EXPID/sframe-CAMERA-EXPID.rst
shifted-input-psf = exposures/[0-9]{8}/[0-9]{8}/shifted-input-psf-[brz][0-9]-[0-9]{8}\.fits;exposures/NIGHT/EXPID/shifted-input-psf-CAMERA-EXPID.rst
sky = exposures/[0-9]{8}/[0-9]{8}/sky-[brz][0-9]-[0-9]{8}\.fits;exposures/NIGHT/EXPID/sky-CAMERA-EXPID.rst
stdstars = exposures/[0-9]{8}/[0-9]{8}/stdstars-[0-9]-[0-9]{8}\.fits;exposures/NIGHT/EXPID/stdstars-SPECTROGRAPH-EXPID.rst
[healpix]
coadd = healpix/(sv1|sv2|sv3|main)/(backup|bright|dark|other)/[0-9]+/[0-9]+/coadd-(sv1|sv2|sv3|main)-(backup|bright|dark|other)-[0-9]+\.fits;healpix/SURVEY/PROGRAM/PIXGROUP/PIXNUM/coadd-SURVEY-PROGRAM-PIXNUM.rst
qso_mgii = healpix/(sv1|sv2|sv3|main)/(backup|bright|dark|other)/[0-9]+/[0-9]+/qso_mgii-(sv1|sv2|sv3|main)-(backup|bright|dark|other)-[0-9]+\.fits;healpix/SURVEY/PROGRAM/PIXGROUP/PIXNUM/qso_mgii-SURVEY-PROGRAM-PIXNUM.rst
qso_qn = healpix/(sv1|sv2|sv3|main)/(backup|bright|dark|other)/[0-9]+/[0-9]+/qso_qn-(sv1|sv2|sv3|main)-(backup|bright|dark|other)-[0-9]+\.fits;healpix/SURVEY/PROGRAM/PIXGROUP/PIXNUM/qso_qn-SURVEY-PROGRAM-PIXNUM.rst
redrock = healpix/(sv1|sv2|sv3|main)/(backup|bright|dark|other)/[0-9]+/[0-9]+/redrock-(sv1|sv2|sv3|main)-(backup|bright|dark|other)-[0-9]+\.fits;healpix/SURVEY/PROGRAM/PIXGROUP/PIXNUM/redrock-SURVEY-PROGRAM-PIXNUM.rst
spectra = healpix/(sv1|sv2|sv3|main)/(backup|bright|dark|other)/[0-9]+/[0-9]+/spectra-(sv1|sv2|sv3|main)-(backup|bright|dark|other)-[0-9]+\.fits;healpix/SURVEY/PROGRAM/PIXGROUP/PIXNUM/spectra-SURVEY-PROGRAM-PIXNUM.rst
tilepix = healpix/tilepix\.fits;healpix/tilepix.rst
[preproc]
fibermap = preproc/[0-9]{8}/[0-9]{8}/fibermap-[0-9]{8}\.fits;preproc/NIGHT/EXPID/fibermap-EXPID.rst
preproc = preproc/[0-9]{8}/[0-9]{8}/preproc-[brz][0-9]-[0-9]{8}\.fits;preproc/NIGHT/EXPID/preproc-CAMERA-EXPID.rst
[tiles]
coadd = tiles/(cumulative|perexp|pernight)/[0-9]+/[0-9]{8}/coadd-[0-9]-[0-9]+-(thru|exp|)[0-9]{8}\.fits;tiles/TILETYPE/TILEID/NIGHT/coadd-SPECTROGRAPH-NIGHT-EXPID.rst
qso_mgii = tiles/(cumulative|perexp|pernight)/[0-9]+/[0-9]{8}/qso_mgii-[0-9]-[0-9]+-(thru|exp|)[0-9]{8}\.fits;tiles/TILETYPE/TILEID/NIGHT/qso_mgii-SPECTROGRAPH-NIGHT-EXPID.rst
qso_qn = tiles/(cumulative|perexp|pernight)/[0-9]+/[0-9]{8}/qso_qn-[0-9]-[0-9]+-(thru|exp|)[0-9]{8}\.fits;tiles/TILETYPE/TILEID/NIGHT/qso_qn-SPECTROGRAPH-NIGHT-EXPID.rst
redrock = tiles/(cumulative|perexp|pernight)/[0-9]+/[0-9]{8}/redrock-[0-9]-[0-9]+-(thru|exp|)[0-9]{8}\.fits;tiles/TILETYPE/TILEID/NIGHT/redrock-SPECTROGRAPH-NIGHT-EXPID.rst
spectra = tiles/(cumulative|perexp|pernight)/[0-9]+/[0-9]{8}/spectra-[0-9]-[0-9]+-(thru|exp|)[0-9]{8}\.fits;tiles/TILETYPE/TILEID/NIGHT/spectra-SPECTROGRAPH-NIGHT-EXPID.rst
tile-qa = tiles/(cumulative|perexp|pernight)/[0-9]+/[0-9]{8}/tile-qa-[0-9]+-(thru|exp|)[0-9]{8}\.fits;tiles/TILETYPE/TILEID/NIGHT/tile-qa-NIGHT-EXPID.rst
[tiles:depth]
coadd = tiles/[14]x_depth/[0-9]+/[0-9]/coadd-[0-9]-[0-9]+-[14]xsubset[1-6]\.fits;tiles/DEPTH/TILEID/SPECTROGRAPH/coadd-SPECTROGRAPH-TILEID-EXPID.rst
qso_mgii = tiles/[14]x_depth/[0-9]+/[0-9]/qso_mgii-[0-9]-[0-9]+-[14]xsubset[1-6]\.fits;tiles/DEPTH/TILEID/SPECTROGRAPH/qso_mgii-SPECTROGRAPH-TILEID-EXPID.rst
qso_qn = tiles/[14]x_depth/[0-9]+/[0-9]/qso_qn-[0-9]-[0-9]+-[14]xsubset[1-6]\.fits;tiles/DEPTH/TILEID/SPECTROGRAPH/qso_qn-SPECTROGRAPH-TILEID-EXPID.rst
redrock = tiles/[14]x_depth/[0-9]+/[0-9]/redrock-[0-9]-[0-9]+-[14]xsubset[1-6]\.fits;tiles/DEPTH/TILEID/SPECTROGRAPH/redrock-SPECTROGRAPH-TILEID-EXPID.rst
spectra = tiles/[14]x_depth/[0-9]+/[0-9]/spectra-[0-9]-[0-9]+-[14]xsubset[1-6]\.fits;tiles/DEPTH/TILEID/SPECTROGRAPH/spectra-SPECTROGRAPH-TILEID-EXPID.rst
[zcatalog]
zpix = zcatalog/zpix-(sv1|sv2|sv3|main)-(backup|bright|dark|other)\.fits;zcatalog/zpix-SURVEY-PROGRAM.rst
ztile = zcatalog/ztile-(sv1|sv2|sv3|main)-(backup|bright|dark|other)-(cumulative|pernight)\.fits;zcatalog/ztile-SURVEY-PROGRAM-TILETYPE.rst
[exclude]
calibnight = calibnight/[0-9]{8}/tmp/*;dummy
exposures = exposures/[0-9]{8}/old/*;dummy
preproc = preproc/[0-9]{8}/old/*;dummy
"""
)
```
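Each value in the config above packs a filename regular expression and a datamodel stub path into a single string, separated by a semicolon. The sketch below (illustrative only, using the `cframe` entry) shows how one entry is unpacked and matched; the sample filenames are made up.

```python
import re

# One config value: "filename-regex;datamodel-stub-path", split on ';'.
value = (r"exposures/[0-9]{8}/[0-9]{8}/cframe-[brz][0-9]-[0-9]{8}\.fits"
         ";exposures/NIGHT/EXPID/cframe-CAMERA-EXPID.rst")
pattern, stub = value.split(';')
regex = re.compile(pattern)

# A well-formed cframe path matches; any other file type does not.
print(regex.match("exposures/20210101/00012345/cframe-b0-00012345.fits") is not None)  # True
print(regex.match("exposures/20210101/00012345/sky-b0-00012345.fits") is not None)     # False
print(stub)
```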
## Precompile Regular Expressions
```
r = dict()
structure = dict()
for s in parser.sections():
r[s] = dict()
structure[s] = dict()
for key, value in parser.items(s):
v = value.split(';')
r[s][key] = re.compile(v[0])
structure[s][key] = v[1]
```
## Scan the list of files
```
with open(fits_files) as e:
data = e.readlines()
data.append(f'exposures-{specprod}.fits\n')
scanable = dict()
for file in data:
ff = file.strip()
f = ff.split('/')
if len(f) == 1:
section = 'top'
elif f[1] == '1x_depth' or f[1] == '4x_depth':
section = 'tiles:depth'
else:
section = f[0]
if section not in scanable:
scanable[section] = dict()
excluded = False
for key in r['exclude']:
m = r['exclude'][key].match(ff)
if m is not None:
excluded = True
if excluded:
continue
matched = False
for key in r[section]:
m = r[section][key].match(ff)
if m is not None:
matched = True
if key in scanable[section]:
scanable[section][key].append(ff)
else:
scanable[section][key] = [ff]
if matched:
continue
print("ERROR: Could not match {0}!".format(ff))
```
## From the list of all file types, pick one at random
```
scan = dict()
for section in scanable:
scan[section] = dict()
for ftype in scanable[section]:
scan[section][ftype] = choice(scanable[section][ftype])
scan
```
## Run fitsverify on the chosen files
```
for section in scan:
for key in scan[section]:
command = ['fitsverify', '-l', scan[section][key]]
proc = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE, cwd=specprod_path)
out, err = proc.communicate()
# print(section, key, out.decode('ascii').split('\n')[-2])
result = out.decode('ascii').split('\n')[-2]
if result == "**** Verification found 0 warning(s) and 0 error(s). ****":
pass
# print(section, key, "No problems detected.")
else:
print(section, key, "Problems detected!")
print(out.decode('ascii'))
if err:
print(err.decode('ascii'))
```
## Generate datamodel stub files for all chosen files
```
for section in scan:
for key in scan[section]:
command = ['generate_model', os.path.join(specprod_path, scan[section][key])]
print(' '.join(command))
proc = sp.Popen(command, stdout=sp.PIPE, stderr=sp.PIPE)
out, err = proc.communicate()
if proc.returncode != 0:
print(out.decode('ascii'))
print(err.decode('ascii'))
if key == 'shifted-input-psf':
src = 'shifted'
elif key == 'fit-psf' or key == 'fit-psf-fixed-blacklisted' or key == 'fit-psf-before-blacklisted-fix':
src = 'fit'
elif key == 'exposure-qa':
src = 'exposure'
elif key == 'tile-qa':
src = 'tile'
else:
src = key
if os.path.exists(f'{src}.rst'):
d = os.path.dirname(structure[section][key])
if d and not os.path.isdir(d):
print("os.makedirs('{0}')".format(d))
os.makedirs(d)
print("shutil.move('{0}.rst', '{1}'".format(src, structure[section][key]))
shutil.move('{0}.rst'.format(src), structure[section][key])
else:
print("ERROR: Could not find file corresponding to {0}.rst -> {1}!".format(src, scan[section][key]))
```
```
# Putting the initialisation at the top now!
import veneer
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
v = veneer.Veneer(port=9876)
```
# Session 6 - Model Setup and Reconfiguration
This session covers functionality in Veneer and veneer-py for making larger changes to model setup, including structural changes.
Using this functionality, it is possible to:
* Create (and remove) nodes and links
* Change model algorithms, such as changing links from Straight Through Routing to Storage Routing
* Assign input time series to model variables
* Query and modify parameters across similar nodes/links/catchments/functional-units
## Overview
- (This is a Big topic)
- Strengths and limitations of configuring from outside
- +ve repeatability
- +ve clarity around common elements - e.g. do one thing everywhere, parameterised by spatial data
- -ve feedback - need to query the system to find out what you need to do vs a GUI that displays it
- Obvious and compelling use cases
- Catchments: Applying a constituent model everywhere and assigning parameters using spatial data
- Catchments: Climate data
- How it works:
- The Python <-> IronPython bridge
- What’s happening under the hood
- Layers of helper functions
- How to discover parameters
- Harder examples (not fully worked)
- Creating and configuring a storage from scratch
- Extending the system
## Which Model?
**Note:** This session uses `ExampleProject/RiverModel2.rsproj`. You are welcome to work with your own model instead, however you will need to change the notebook text at certain points to reflect the names of nodes, links and functions in your model file.
## Warning: Big Topic
This is a big topic and the material in this session will only touch on some of the possibilities.
Furthermore, it's an evolving area: while there is general-purpose functionality that is quite stable, making that functionality easy to use for particular tasks has been tackled case by case, on an as-needed basis. **There are lots of gaps!**
## Motivations, Strengths and Limitations of Scripting configuration
There are various motivations for the type of automation of Source model setup described here. Some of these motivations are more practical to achieve than others!
### Automatically build a model from scratch, using an executable 'recipe'
Could you build a complete Source model from scratch using a script?
In theory, yes you could. However it is not practical at this point in time using Veneer. (Though the idea of building a catchments-style model is more foreseeable than building a complex river model).
For some people, building a model from script would be desirable as it would have some similarities to configuring models in text files as was done with the previous generation of river models. A script would be more powerful though, because it has the ability to bring in adhoc data sources (GIS layers, CSV files, etc) to define the model structure. The scripting approach presented here wouldn't be the most convenient way to describe a model node-by-node, link-by-link - it would be quite cumbersome. However it would be possible to build a domain-specific language for describing models that makes use of the Python scripting.
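As a purely illustrative sketch of that idea, a declarative model "recipe" could be translated into veneer-py calls by a small planner. None of the node/link type names or the `plan_build` helper below exist in veneer-py; they are placeholders for what a domain-specific language might look like.

```python
# A hypothetical declarative recipe; the type names are invented.
recipe = {
    "nodes": [
        {"name": "Inflow 1", "type": "InflowNodeModel"},
        {"name": "Gauge 1",  "type": "GaugeNodeModel"},
    ],
    "links": [
        {"from": "Inflow 1", "to": "Gauge 1", "routing": "StorageRouting"},
    ],
}

def plan_build(recipe):
    """Translate the recipe into an ordered list of planned actions.
    A real implementation would issue the corresponding veneer-py
    node/link creation calls instead of recording tuples."""
    actions = []
    for node in recipe["nodes"]:
        actions.append(("create_node", node["name"], node["type"]))
    for link in recipe["links"]:
        actions.append(("create_link", link["from"], link["to"], link["routing"]))
    return actions

for action in plan_build(recipe):
    print(action)
```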
### Automate bulk changes to a model
Most of the practical examples to date have involved applying some change across a model (whether that model is a catchments-style geographic model or a schematic style network). Examples include:
* **Apply a new constituent generation model:** A new generation model was being tested and needed to be applied to every catchment in the model. Some of the parameters would subsequently be calibrated (using PEST), but others needed to be derived from spatial data.
* **Add and configure nodes for point source inputs:** A series of point sources needed to be represented in the models. This involved adding inflow nodes for each point source, connecting those inflows to the most appropriate (and available) downstream node and computing and configuring time series inputs for the inflows.
* **Bulk rename nodes and links based on a CSV file:** A complex model needed a large number of nodes and links renamed to introduce naming conventions that would allow automatic post-processing and visualisation. A CSV was created with old node/link names (extracted from Source using veneer-py). A second column in the CSV was then populated (by hand) with new node/link names. This CSV file was read into Python and used to apply new names to affected nodes/links.
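The bulk-rename workflow described above can be sketched with the standard library alone. The CSV column names and the commented-out rename call are assumptions for illustration, not the exact veneer-py API that was used.

```python
import csv
import io

# Stand-in for the hand-edited CSV of old/new names (column names assumed).
csv_text = """old_name,new_name
Default Link #1,Link_US_Gauge1
Default Link #2,Link_DS_Gauge1
"""

mapping = {row["old_name"]: row["new_name"]
           for row in csv.DictReader(io.StringIO(csv_text))}

# With a live Veneer connection, each affected element would then be
# renamed, e.g. with a (hypothetical) call shape like:
# for old, new in mapping.items():
#     v.model.link.set_param_values('Name', new, links=old)
print(mapping)
```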
### Change multiple models in a consistent way
* **Testing a plugin in multiple catchments:** A new plugin model was being tested across multiple catchments models, including calibration. A notebook was written to apply the plugin to a running Source model, parameterise the plugin and configure PEST. This notebook was then applied to each distinct Source model in turn.
### Change a model without making the changes permanent
There are several reasons for making changes to the Source model without wanting the changes to be permanently saved in the model.
1. Testing an alternative setup, such as a different set of routing parameters. Automating the application of new parameters means you can test, and then re-test at a later date, without needing manual rework.
2. Maintaining a single point-of-truth for a core model that needs to support different purposes and users.
3. Persistence not available. In the earlier examples of testing new plugin models, the automated application of model setup allowed sophisticated testing, including calibration by PEST, to take place before the plugin was stable enough to be persisted using the Source data management system.
## Example - Switching routing methods and configuring
This example uses the earlier `RiverModel.rsproj` example file, although it will work with other models.
Here, we will convert all links to use Storage Routing except for links that lead to a water user.
**Note:** To work through this example (and the others that follow), you will need to ensure the 'Allow Scripts' option is enabled in the Web Server Monitoring window.
### The `v.model` namespace
Most of our work in this session will involve the `v.model` namespace. This namespace contains functionality that provides query and modification of the model structure. Everything in `v.model` relies on the 'Allow Scripts' option.
As with other parts of veneer-py (and Python packages in general), you can use `<tab>` completion to explore the available functions and the `help()` function (or the `?` suffix) to get help.
### Finding the current routing type
We can use `v.model.link.routing.get_models()` to find the routing models used on each link
```
existing_models = v.model.link.routing.get_models()
existing_models
```
**Note:**
* The `get_models()` function is available in various places throughout the `v.model` namespace. For example, `v.model.catchments.runoff.get_models()` queries the rainfall runoff models in subcatchments (actually in functional units). There are other such methods, available in multiple places, including:
* `set_models`
* `get_param_values`
* `set_param_values`
* These functions are all *bulk* functions - that is they operate across *all* matching elements (all nodes, all links, etc).
* Each of these functions accepts parameters to restrict the search, such as only including links with certain names. These query parameters differ between contexts (i.e. between runoff models and routing models), but they are consistent between the functions in a given context. Confused?
* For example, in link routing you can look for links of certain names and you can do this with the different methods:
```python
v.model.link.routing.get_models(links='Default Link #3')
v.model.link.routing.set_models('RiverSystem.Flow.LaggedFlowRoutingWrapper',links='Default Link #3')
```
Whereas, with runoff models, you can restrict by catchment or by functional unit (`fus`):
```python
v.model.catchment.runoff.get_models(fus='Grazing')
v.model.catchment.runoff.set_models('MyFancyRunoffModel',fus='Grazing')
```
* You can find out what query parameters are available by looking at the help, one level up:
```python
help(v.model.link.routing)
```
* The call to `get_models()` returns a list of model names. Two observations about this:
1. The model name is the fully qualified class name as used internally in Source. This is a common pattern through the `v.model` namespace - it uses the terminology within Source. There are, however, help functions for finding what you need. For example:
```python
v.model.find_model_type('gr4')
v.model.find_parameters('RiverSystem.Flow.LaggedFlowRoutingWrapper')
```
2. Returning a list doesn't tell you which link has which model - so how are you going to determine which ones should be Storage Routing and which should stay as Straight Through? In general the `get_` functions return lists (although there is a `by_name` option being implemented) and the `set_` functions accept lists (unless you provide a single value in which case it is applied uniformly). It is up to you to interpret the lists returned by `get_*` and to provide `set_*` with a list in the right order. The way to get it right is to separately query for the _names_ of the relevant elements (nodes/links/catchments) and order accordingly. This will be demonstrated!
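The pairing idea can be sketched in plain Python. The link and model names below are illustrative placeholders; in practice the two equal-length, same-order lists come from `v.model.link.routing.names()` and `get_models()`:

```python
# Placeholder data standing in for the results of names() and get_models(),
# which return their results in the same link order.
link_names = ['Default Link #1', 'Default Link #2', 'Default Link #3']
models = ['RiverSystem.Flow.StraightThroughRouting',
          'RiverSystem.Flow.StraightThroughRouting',
          'RiverSystem.Flow.StorageRouting']

# Pair each link name with its routing model for easy lookup
model_by_link = dict(zip(link_names, models))
print(model_by_link['Default Link #3'])
```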
### Identifying which link should stay as StraightThroughRouting
We can ask for the names of each link in order to establish which ones should be Storage Routing and which should stay as Straight Through
```
link_names_order = v.model.link.routing.names()
link_names_order
```
OK - that gives us the names - but it doesn't help directly. We could look at the model in Source to
work out which one is connected to the Water User - but that's cheating!
More generally, we can ask Veneer for the network and perform a topological query
```
network = v.network()
```
Now that we've got the network, we want all the water users.
Now, the network information we've been returned is in GeoJSON format and is intended for use in visualisation. It doesn't explicitly say 'this is a water user' at any point, but it does tell us indirectly, via the icon each feature uses:
```
network['features']._unique_values('icon')
```
So, we can find all the water users in the network, by finding all the network features with `'/resources/WaterUserNodeModel'` as their icon!
```
water_users = network['features'].find_by_icon('/resources/WaterUserNodeModel')
water_users
```
Now, we can query the network for links upstream of each water user.
We'll loop over the `water_users` list (just one in the sample model)
```
links_upstream_of_water_users=[]
for water_user in water_users:
links_upstream_of_water_users += network.upstream_links(water_user)
links_upstream_of_water_users
```
Just one link (to be expected) in the sample model. It's the name we care about though:
```
names_of_water_user_links = [link['properties']['name'] for link in links_upstream_of_water_users]
names_of_water_user_links
```
To recap, we now have:
* `existing_models` - A list of routing models used on links
* `link_names_order` - The name of each link, in the same order as for `existing_models`
* `names_of_water_user_links` - The names of links immediately upstream of water users. These links need to stay as Straight Through Routing
We're ultimately going to call
```python
v.model.link.routing.set_models(new_models,fromList=True)
```
so we need to construct `new_models`, which will be a list of model names to assign to links, with the right mix and order of storage routing and straight through. We'll want `new_models` to be the same length as `existing_models` so there is one entry per link. (There are cases where you may use `set_models` or `set_param_values` with shorter lists. You'll get R-style 'recycling' of values, but it's most useful in catchments where you're iterating over catchments AND functional units.)
The entries in `new_models` need to be strings - those long, fully qualified class names from the Source world. We can find them using `v.model.find_model_type`
```
v.model.find_model_type('StorageRo')
v.model.find_model_type('StraightThrough')
```
We can construct our list using a list comprehension, this time with a bit of extra conditional logic thrown in
```
new_models = ['RiverSystem.Flow.StraightThroughRouting' if link_name in names_of_water_user_links
else 'RiverSystem.Flow.StorageRouting'
for link_name in link_names_order]
new_models
```
This is a more complex list comprehension than we've used before. It goes like this, reading from the end:
* Iterate over all the link names. This gives us the right number of elements - and it tells us which link we're dealing with
```python
for link_name in link_names_order]
```
* If the current `link_name` is present in the list of links upstream of water users, use straight through routing
```python
['RiverSystem.Flow.StraightThroughRouting' if link_name in names_of_water_user_links
```
* Otherwise use storage routing
```python
else 'RiverSystem.Flow.StorageRouting'
```
All that's left is to apply this to the model
```
v.model.link.routing.set_models(new_models,fromList=True)
```
**Notes:**
* The Source application draws links with different line styles based on their routing types - but it might not redraw until you prompt it - e.g. by resizing the window
* The `fromList` parameter tells the `set_models` function that you want the list to be applied one element at a time.
Now that you have Storage Routing used in most links, you can start to parameterise the links from the script.
To do so, you could use an input set, as per the previous session. To change parameters via input sets, you would first need to know the wording to use in the input set commands - and at this stage you need to find that wording in the Source user interface.
Alternatively, you can set the parameters directly using `v.model.link.routing.set_param_values`, which expects the variable name as used internally by Source. You can query for the parameter names for a particular model, using `v.model.find_parameters(model_type)` and, if that doesn't work `v.model.find_properties(model_type)`.
We'll start by using `find_parameters`:
```
v.model.find_parameters('RiverSystem.Flow.StorageRouting')
```
The function `v.model.find_parameters` accepts a model type (you can also give it a list of model types) and returns a list of parameters.
This list is determined by the internal code of Source - a parameter will only be returned if it has a `[Parameter]` tag in the C\# code.
From the list above, we see some parameters that we expect, but not all of the parameters for a Storage Routing reach. For example, the list doesn't seem to say how we'd switch from Generic to Piecewise routing mode. This is because the model property in question (`IsGeneric`) doesn't have a `[Parameter]` attribute.
We can find a list of all fields and properties of the model using `v.model.find_properties`. It's a lot more information, but it can be helpful:
```
v.model.find_properties('RiverSystem.Flow.StorageRouting')
```
Let's apply an initial parameter set to every Storage Routing link by setting:
* `RoutingConstant` to 86400, and
* `RoutingPower` to 1
We will call `set_param_values`
```
help(v.model.link.routing.set_param_values)
v.model.link.routing.set_param_values('RoutingConstant',86400.0)
v.model.link.routing.set_param_values('RoutingPower',1.0)
```
You can check in the Source user interface to see that the parameters have been applied
### Setting parameters as a function of other values
Often, you will want to calculate model parameters based on some other information, either within the model or from some external data source.
The `set_param_values` function can accept a list of values, where each item in the list is applied, in turn, to the corresponding model - in much the same way that we used the known link order to set the routing type.
The list of values can be computed in your Python script based on any available information. A common use case is to compute catchment or functional unit parameters based on spatial data.
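As a sketch of that pattern (with hypothetical link names and a made-up lookup table), you might derive one value per link from an external dictionary, falling back to a default where no data exists:

```python
# Hypothetical lookup of per-link routing constants, e.g. loaded from a CSV
routing_constants = {'Default Link #1': 43200.0, 'Default Link #2': 172800.0}
default_constant = 86400.0

# In practice link_names_order would come from v.model.link.routing.names()
link_names_order = ['Default Link #1', 'Default Link #2', 'Default Link #3']

# Build a value list in the same order as the links, ready to pass to
# set_param_values('RoutingConstant', values, fromList=True)
values = [routing_constants.get(name, default_constant) for name in link_names_order]
print(values)
```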
We will demonstrate the list functionality here with a contrived example!
We will set a different value of `RoutingPower` for each link, computed from 1.0 down toward 0 based on the number of storage routing links
```
number_of_links = len(new_models) - len(names_of_water_user_links)
power_vals = np.arange(1.0,0.0,-1.0/number_of_links)
power_vals
v.model.link.routing.set_param_values('RoutingPower',power_vals,fromList=True)
```
If you open the Feature Table for storage routing, you'll now see these values propagated.
The `fromList` option has another characteristic that can be useful - particularly for catchments models with multiple functional units: value recycling.
If you provide a list with fewer values than are required, the system will start again from the start of the list.
So, for example, the following code will assign the three values: `[0.5,0.75,1.0]`
```
v.model.link.routing.set_param_values('RoutingPower',[0.5,0.75,1.0],fromList=True)
```
Check the Feature Table to see the effect.
**Note:** You can run these scripts with the Feature Table open and the model will be updated - but the feature table won't reflect the new values until you Cancel the feature table and reopen it.
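The recycling behaviour itself can be sketched in plain Python (assuming it follows simple wrap-around semantics, as the example above suggests):

```python
from itertools import cycle, islice

def recycle(values, n):
    """Repeat `values` in order until n entries have been produced."""
    return list(islice(cycle(values), n))

# Seven links, three supplied values: the list wraps around
print(recycle([0.5, 0.75, 1.0], 7))
```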
## How it Works
As mentioned, everything under `v.model` works by sending an IronPython script to Source to be run within the Source software itself.
IronPython is a .NET implementation of Python and hence can access all the classes and objects that make up Source.
When you call a function within `v.model`, veneer-py is *generating* an IronPython script for Source.
To this point, we haven't seen what these IronPython scripts look like - they are hidden from view. We can see the scripts that get sent to Source by setting the option `veneer.general.PRINT_SCRIPTS=True`
```
veneer.general.PRINT_SCRIPTS=True
v.model.link.routing.get_models(links=['Default Link #3','Default Link #4'])
veneer.general.PRINT_SCRIPTS=False
```
*Writing these IronPython scripts from scratch requires an understanding of the internal data structures of Source. The functions under `v.model` are designed to shield you from these details.*
That said, if you have an idea of the data structures, you may wish to try writing IronPython scripts, OR, try working with some of the lower-level functionality offered in `v.model`.
Most of the `v.model` functions that we've used are ultimately based upon two low-level methods:
* `v.model.get` and
* `v.model.set`
Both `get` and `set` expect a query to perform on a Source scenario object. Structuring this query is where an understanding of Source data structures comes in.
For example, the following query will return the number of nodes in the network. (We'll use the PRINT_SCRIPTS option to show how the query translates to a script):
```
veneer.general.PRINT_SCRIPTS=True
num_nodes = v.model.get('scenario.Network.Nodes.Count()')
num_nodes
```
The following example returns the names of each node in the network. The `.*` notation tells veneer-py to generate a loop over every element in a collection
```
node_names = v.model.get('scenario.Network.Nodes.*Name')
node_names
```
You can see from the script output that veneer-py has generated a Python for loop to iterate over the nodes:
```python
for i_0 in scenario.Network.Nodes:
```
There are other characteristics in there, such as ignoring exceptions - this is a common default used in `v.model` to silently skip nodes/links/catchments/etc that don't have a particular property.
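Conceptually (this is a plain-Python sketch, not the actual generated IronPython), the `.*` expansion behaves like a loop that collects a property from each element and silently skips elements that lack it:

```python
class Node:
    """Stand-in for a Source node; some nodes may lack the property."""
    def __init__(self, name=None):
        if name is not None:
            self.Name = name

def collect_property(elements, attr):
    # Mimic the generated loop: gather attr from each element,
    # ignoring exceptions for elements that don't have it
    result = []
    for element in elements:
        try:
            result.append(getattr(element, attr))
        except AttributeError:
            pass
    return result

nodes = [Node('Inflow 1'), Node(), Node('Gauge 1')]
print(collect_property(nodes, 'Name'))
```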
The same query approach can work for `set`, which can set a particular property (on one or more objects) to a particular value (which can be the same value everywhere, or drawn from a list)
```
# Generate a new name for each node (based on num_nodes)
names = ['New Name %d'%i for i in range(num_nodes)]
names
v.model.set('scenario.Network.Nodes.*Name',names,fromList=True,literal=True)
```
If you look at the Source model now (you may need to trigger a redraw by resizing the window), all the nodes have been renamed.
(Let's reset the names - note how we saved `node_names` earlier on!)
```
v.model.set('scenario.Network.Nodes.*Name',node_names,fromList=True,literal=True)
```
**Note:** The `literal=True` option is currently necessary when setting text properties using `v.model.set`. It tells the IronPython generator to wrap the strings in quotes in the final script. Otherwise, IronPython would be looking for symbols (e.g. classes) with the same names.
The examples of `v.model.get` and `v.model.set` illustrate some of the low level functionality for manipulating the source model.
The earlier, high-level functions (e.g. `v.model.link.routing.set_param_values`) take care of computing the query string for you, including context-dependent code such as searching for links of a particular name, or nodes of a particular type. They then call the lower-level functions, which take care of generating the actual IronPython script.
The `v.model` namespace is gradually expanding with new capabilities and functions - but at their essence, most new functions provide a high-level wrapper around `v.model.get` and `v.model.set` for some new area of the Source data structures. So, for example, you could envisage a `v.model.resource_assessment` namespace providing high-level wrappers around resource assessment functionality.
### Exploring the system
Writing the high level wrappers (as with writing the query strings for `v.model.get/set`) requires an understanding of the internal data structures of Source. You can get this from the C# code for Source, or, to a degree, from a help function `v.model.sourceHelp`.
Let's say you want to discover how to change the description of the scenario (say, to automatically add a note about the changes made by your script).
Start by asking for help on `'scenario'` and explore from there
```
veneer.general.PRINT_SCRIPTS=False
v.model.sourceHelp('scenario')
```
This tells you everything that is available on a Source scenario. It's a lot, but `Description` looks promising:
```
existing_description = v.model.get('scenario.Description')
existing_description
```
OK. It looks like there is no description in the existing scenario. Let's set one
```
v.model.set('scenario.Description','Model modified by script',literal=True)
```
## Harder examples
Let's look at a simple model-building example.
We will test out different routing parameters, by setting up a scenario with several parallel networks. Each network will consist of an Inflow Node and a Gauge Node, joined by a Storage Routing link.
The inflows will all use the same time series of flows, so the only difference will be the routing parameters.
To proceed,
1. Start a new copy of Source (in the following code, I've assumed that you're leaving the existing copy open)
2. Create a new schematic model - but don't add any nodes or links
3. Open Tools|Web Server Monitoring
4. Once Veneer has started, make a note of what port number it is using - it will probably be 9877 if you've left the other copy of Source open.
5. Make sure you tick the 'Allow Scripts' option
Now, create a new veneer client (creatively called `v2` here)
```
v2 = veneer.Veneer(port=9877)
```
And check that the network has nothing in it at the moment
```
v2.network()
```
We can create nodes with `v.model.node.create`
```
help(v2.model.node.create)
```
There are also functions to create different node types:
```
help(v2.model.node.new_gauge)
```
First, we'll do a bit of a test run. Ultimately, we'll want to create a number of such networks - and the nodes will definitely need unique names then
```
loc = [10,10]
v2.model.node.new_inflow('The Inflow',schematic_location=loc,location=loc)
loc = [20,10]
v2.model.node.new_gauge('The Gauge',schematic_location=loc,location=loc)
```
**Note:** At this stage (and after some frustration) we can't set the location of the node on the schematic. We can set the 'geographic' location - which doesn't have to be true geographic coordinates, so that's what we'll do here.
Creating a link can be done with `v2.model.link.create`
```
help(v2.model.link.create)
v2.model.link.create('The Inflow','The Gauge','The Link')
```
Now, let's look at the information from `v2.network()` to see that it's all there. (We should also see the model in the geographic view.)
```
v2.network().as_dataframe()
```
Now, after all that, we'll delete everything we've created and then recreate it all in a loop to give us parallel networks
```
v2.model.node.remove('The Inflow')
v2.model.node.remove('The Gauge')
```
Now that we can create (and delete) nodes and links, let's create multiple parallel networks to test out our flow routing parameters. We'll create 20, because we can!
```
num_networks=20
for i in range(1,num_networks+1): # Loop from 1 to 20
veneer.log('Creating network %d'%i)
x = i
loc_inflow = [i,10]
loc_gauge = [i,0]
name_inflow = 'Inflow %d'%i
name_gauge = 'Gauge %d'%i
v2.model.node.new_inflow(name_inflow,location=loc_inflow,schematic_location=loc_inflow)
v2.model.node.new_gauge(name_gauge,location=loc_gauge,schematic_location=loc_gauge)
# Create the link
name_link = 'Link %d'%i
v2.model.link.create(name_inflow,name_gauge,name_link)
# Set the routing type to storage routing (we *could* do this at the end, outside the loop)
v2.model.link.routing.set_models('RiverSystem.Flow.StorageRouting',links=name_link)
```
We'll use one of the flow files from the earlier model to drive each of our inflow nodes. We need to know where that data is. Here, I'm assuming it's in the `ExampleProject` directory within the same directory as this notebook. We'll need the absolute path for Source, and the Python `os` package helps with this type of filesystem operation
```
import os
os.path.exists('ExampleProject/Fish_G_flow.csv')
absolute_path = os.path.abspath('ExampleProject/Fish_G_flow.csv')
absolute_path
```
We can use `v.model.node.assign_time_series` to attach a time series of inflows to the inflow node. We could have done this in the for loop, one node at a time, but, as with `set_param_values`, we can assign time series to multiple nodes at once.
One thing that we do need to know is the parameter that we're assigning the time series to (because, after all, this could be any type of node - veneer-py doesn't know at this stage). We can find the model type, then check `v.model.find_parameters` and, if that doesn't work, `v.model.find_inputs`:
```
v2.model.node.get_models(nodes='Inflow 1')
v2.model.find_parameters('RiverSystem.Nodes.Inflow.InjectedFlow')
v2.model.find_inputs('RiverSystem.Nodes.Inflow.InjectedFlow')
```
So `'Flow'` it is!
```
v2.model.node.assign_time_series('Flow',absolute_path,'Inflows')
```
Almost there.
Now, let's set a range of storage routing parameters (much like we did before)
```
power_vals = np.arange(1.0,0.0,-1.0/num_networks)
power_vals
```
And assign those to the links
```
v2.model.link.routing.set_param_values('RoutingConstant',86400.0)
v2.model.link.routing.set_param_values('RoutingPower',power_vals,fromList=True)
```
Now, configure recording
```
v2.configure_recording(disable=[{}],enable=[{'RecordingVariable':'Downstream Flow Volume'}])
```
And one last thing - work out the time period for the run from the inflow time series
```
inflow_ts = pd.read_csv(absolute_path,index_col=0)
start,end=inflow_ts.index[[0,-1]]
start,end
```
That looks a bit much. Let's run for a year
```
v2.run_model(start='01/01/1999',end='31/12/1999')
```
Now, we can retrieve some results. Because we used a naming convention for all the nodes, it's possible to grab relevant results using those conventions
```
upstream = v2.retrieve_multiple_time_series(criteria={'RecordingVariable':'Downstream Flow Volume','NetworkElement':'Inflow.*'})
downstream = v2.retrieve_multiple_time_series(criteria={'RecordingVariable':'Downstream Flow Volume','NetworkElement':'Gauge.*'})
downstream[['Gauge 1:Downstream Flow Volume','Gauge 20:Downstream Flow Volume']].plot(figsize=(10,10))
```
If you'd like to change and rerun this example, the following code block can be used to delete all the existing nodes. (Or, just start a new project in Source)
```
#nodes = v2.network()['features'].find_by_feature_type('node')._all_values('name')
#for n in nodes:
# v2.model.node.remove(n)
```
## Conclusion
This session has looked at structural modifications of Source using Veneer, veneer-py and the use of IronPython scripts that run within Source.
Writing IronPython scripts requires a knowledge of internal Source data structures, but there is a growing collection of helper functions, under the `v.model` namespace to assist.
```
import requests
import json
#res = requests.get("https://api.airtable.com/v0/appNcYtL8fFZa1STA/iris?api_key=keyshdNC8CZdj1xgo")
Base_ID = 'appNcYtL8fFZa1STA'
Table_name = 'iris'
# URL format: API_URL/v{version}/Base_ID/Table_Name
url = 'https://api.airtable.com/v0/{0}/{1}'.format(Base_ID, Table_name);
API_KEY = {'api_key': 'keyshdNC8CZdj1xgo'}
# Authorization format: Bearer YOUR_API_KEY
Aut_cxt_header = {'Authorization': 'Bearer keyshdNC8CZdj1xgo', 'Content-type': 'application/json'}
Aut_header = {'Authorization': 'Bearer keyshdNC8CZdj1xgo'}
payload = {
"fields": {
"花萼長度": "9.9",
"花萼寬度": "9.9",
"花瓣長度": "9.9",
"花瓣寬度": "9.9",
"屬種": "new type"
}
}
url
```
## Request formats for create, read, update and delete
###### GET - Retrieve
```
r = requests.get(url, params=API_KEY)
#query by record ID:
query_by_id_url = url +"/"+ "record_id"
r = requests.get(query_by_id_url, headers=(Aut_cxt_header))
```
###### POST - Create
~~#POST with form-encoded data~~
~~(not used) r = requests.post(url, data=payload, headers=(Aut_cxt_header))~~
```
# POST with a JSON body (serialized via json.dumps)
r = requests.post(url, data=json.dumps(payload), headers=(Aut_cxt_header))
```
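As an aside, `requests` can serialize the JSON body itself via the `json=` parameter, which also sets the `Content-Type` header. A prepared request shows this without sending anything over the network (the base ID, table name, and key below are placeholders):

```python
import json
import requests

req = requests.Request('POST', 'https://api.airtable.com/v0/BASE_ID/TABLE',
                       json={'fields': {'屬種': 'new type'}},
                       headers={'Authorization': 'Bearer YOUR_API_KEY'})
prepared = req.prepare()
# requests fills in the Content-Type header and the serialized body for us
print(prepared.headers['Content-Type'])
print(json.loads(prepared.body))
```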
###### PATCH - Update some (but not all) fields
```
query_by_id_url = url +"/"+ "record_id"
r = requests.patch(query_by_id_url, data=json.dumps(payload), headers=(Aut_cxt_header))
```
###### PUT - Update all fields
```
query_by_id_url = url +"/"+ "record_id"
r = requests.put(query_by_id_url, data=json.dumps(payload), headers=(Aut_cxt_header))
```
###### DELETE - Delete
```
query_by_id_url = url +"/"+ "delete target record id"
r = requests.delete(query_by_id_url, headers=(Aut_header))
```
###### Checking the request syntax
```
requests.post?
Signature: requests.post(url, data=None, json=None, **kwargs)
Docstring:
Sends a POST request.
:param url: URL for the new :class:`Request` object.
:param data: (optional) Dictionary (will be form-encoded), bytes, or file-like object to send in the body of the :class:`Request`.
:param json: (optional) json data to send in the body of the :class:`Request`.
:param \*\*kwargs: Optional arguments that ``request`` takes.
:return: :class:`Response <Response>` object
:rtype: requests.Response
File: c:\anaconda3\lib\site-packages\requests\api.py
Type: function
```
### Http Status Code: (API Error code)
https://airtable.com/appNcYtL8fFZa1STA/api/docs#curl/errors:servererrorcodes
###### With query parameters: maxRecords, pageSize, and api_key
https://api.airtable.com/v0/appNcYtL8fFZa1STA/iris?maxRecords=100&pageSize=2&api_key=keyshdNC8CZdj1xgo
**Pagination** (pagination settings; `pageSize` defaults to 100 records)
*The server returns one page of records at a time. Each page will contain pageSize records, which is 100 by default.*
**offset** (the ID of the first record on the next page)
*If there are more records, the response will contain an offset. To fetch the next page of records, include offset in the next request's parameters.*
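A sketch of following that cursor is below. The page-fetching function is injectable so the loop can be demonstrated without hitting the network; with `requests` you would pass something like `lambda u, p: requests.get(u, params=p).json()`:

```python
def fetch_all_records(url, params, get_page):
    """Collect records across pages by following Airtable's `offset` cursor.

    `get_page(url, params)` must return the parsed JSON dict of one page.
    """
    records = []
    params = dict(params)  # copy so we can add `offset` without side effects
    while True:
        page = get_page(url, params)
        records.extend(page.get('records', []))
        if 'offset' not in page:
            break  # no more pages
        params['offset'] = page['offset']
    return records

# Tiny fake pager to demonstrate the loop: two pages of records
pages = [{'records': [1, 2], 'offset': 'rec2'}, {'records': [3]}]
fake_get = lambda url, params: pages[1] if 'offset' in params else pages[0]
print(fetch_all_records('https://example/api', {'pageSize': 2}, fake_get))
```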
```
query_string = {'api_key':'keyshdNC8CZdj1xgo', 'maxRecords':'100', 'pageSize':'2',
                'filterByFormula':'{屬種} = "new type"'} # pageSize: number of records returned per page
r = requests.get(url, params=query_string)
r.status_code
r.text
```
###### Retrieve a single record
```
# retrieve a single record
query_by_id_url = url +"/"+ "rec06Uv00gLHpXCsK" #url+record_id
r = requests.get(query_by_id_url, headers=(Aut_cxt_header)) # Authorization also works when sent via the query string
r.status_code
query_by_id_url
r.text
```
### Create a record
```
r = requests.post(url, data=json.dumps(payload), headers=(Aut_cxt_header))
r.status_code
r.text # the newly created record is returned
```
### Delete a record
```
query_by_id_url = url +"/"+ "recXX06GaZcWsjdxk" #url+record_id
r = requests.delete(query_by_id_url, headers=(Aut_header))
r.status_code
query_by_id_url
```
### Update a record
```
query_by_id_url = url +"/"+ "rec7FyUfaCBL944p7"
payload = {
"fields": {
"花萼長度": "11",
"花萼寬度": "11",
"花瓣長度": "11",
"花瓣寬度": "11",
"屬種": "new type"
}
}
r = requests.put(query_by_id_url, data=json.dumps(payload), headers=(Aut_cxt_header))
r.status_code
r.text
```
# Mask R-CNN - Compare outputs from Heatmap layer and FCN layer
```
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
%matplotlib inline
%load_ext autoreload
%autoreload 2
import sys
sys.path.append('../')
import tensorflow as tf
import keras.backend as KB
import numpy as np
from mrcnn.datagen import data_generator, load_image_gt
from mrcnn.callbacks import get_layer_output_1,get_layer_output_2
from mrcnn.utils import mask_string, parse_image_meta, apply_box_deltas_tf
from mrcnn.visualize import display_gt_bboxes, display_roi_proposals, plot_gaussian, plot_gaussian2
from mrcnn.visualize import display_gt_bboxes, display_roi_proposals
import mrcnn.visualize as visualize
from mrcnn.prep_notebook import prep_oldshapes_dev
model, dataset_train, train_generator, config = prep_oldshapes_dev(init_with = 'last', FCN_layers = True)
model_info = [model, config, dataset_train, train_generator]
train_batch_x, train_batch_y = next(train_generator)
# train_batch_x, train_batch_y = next(train_generator)
imgmeta_idx = model.keras_model.input_names.index('input_image_meta')
img_meta = train_batch_x[imgmeta_idx]
for img_idx in range(config.BATCH_SIZE):
image_id = img_meta[img_idx,0]
image = dataset_train.load_image(image_id)
mask, class_ids = dataset_train.load_mask(image_id)
print('Image id: ',image_id)
print('Image meta', img_meta[img_idx])
print('Classes (1: circle, 2: square, 3: triangle ): ',class_ids)
visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names)
model.layer_info()
# model.keras_model.outputs[0].name
```
### Push data through the model using get_layer_output()
```
# model_output = get_layer_output_2(model.keras_model, train_batch_x, 1)
model_output = get_layer_output_1(model.keras_model, train_batch_x, [ 15,16,17,18,19,20,21,22,23], 1)
```
### Input Values
```
# 0 Tensor("input_image:0", shape=(?, 128, 128, 3), dtype=float32)
# 1 Tensor("input_image_meta:0", shape=(?, ?), dtype=float32)
# 2 Tensor("input_rpn_match:0", shape=(?, ?, 1), dtype=int32)
# 3 Tensor("input_rpn_bbox:0", shape=(?, ?, 4), dtype=float32)
# 4 Tensor("input_gt_class_ids:0", shape=(?, ?), dtype=int32)
# 5 Tensor("input_gt_boxes:0", shape=(?, ?, 4), dtype=float32)
# 6 Tensor("input_gt_masks:0", shape=(?, 56, 56, ?), dtype=bool)
input_image_meta = train_batch_x[1]
input_rpn_match = train_batch_x[2]
input_rpn_bbox = train_batch_x[3]
input_gt_class_ids = train_batch_x[4]
input_gt_bboxes = train_batch_x[5]
# gt_masks = train_batch_x[6]
print(' input_rpn_match ', input_rpn_match.shape)
print(' input_rpn_bbox ', input_rpn_bbox.shape)
print(' input_gt_class_ids ', input_gt_class_ids.shape)
print(' input_gt_bboxes ', input_gt_bboxes.shape)
print(len(model_output))
pred_heatmap_norm = model_output[0] # layer: 15 shape: (5, 128, 128, 4)
gt_heatmap_norm = model_output[1] # layer: 16 shape: (5, 128, 128, 4)
pred_heatmap_scores = model_output[2] # layer: 17 shape: (5, 4, 32, 10)
gt_heatmap_scores = model_output[3] # layer: 18 shape: (5, 4, 100, 10)
pred_tensor = model_output[4] # layer: 19 shape: (5, 4, 32, 6)
gt_tensor = model_output[5] # layer: 20 shape: (5, 4, 100, 6)
fcn_heatmap = model_output[6] # layer: 21 shape: (5, 16, 16, 4)
fcn_heatmap_norm = model_output[7] # layer: 22 shape: (5, 128, 128, 4)
fcn_scores = model_output[8] # layer: 23 shape: (5, 4, 32, 13)
# print(type(model_output[4]))
# print(type(output_rois))
for i in model_output:
print( i.shape)
# print(output_rois[0])
```
## Plot Predicted and Ground Truth Probability Heatmaps `pred_gaussian` and `gt_gaussian` (Tensorflow)
`pred_gaussian2` and `gt_gaussian2` from Tensorflow PCN layer
```
%matplotlib notebook
from mrcnn.visualize import plot_gaussian
# gt_heatmap = layers_out[27] # gt_gaussiam
# pred_heatmap= layers_out[24] # pred_gaussian
# gt_heatmap = layers_out[19] # gt_gaussiam
# pred_heatmap= layers_out[18] # pred_gaussian
print('gt_gaussian heatmap shape : ', gt_heatmap_norm.shape, ' pred_gaussian heatmap shape: ', pred_heatmap_norm.shape)
num_images = 1 # config.IMAGES_PER_GPU
num_classes = config.NUM_CLASSES
img = 0
image_id = input_image_meta[img,0]
print('Image id: ',image_id)
print('Classes (1: circle, 2: square, 3: triangle ): ')
image = dataset_train.load_image(image_id)
mask, class_ids = dataset_train.load_mask(image_id)
visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names)
for cls in range(1,num_classes):
ttl = 'GROUND TRUTH HEATMAP - image : {} class: {} '.format(img,cls)
print(' *** Zout ', gt_heatmap_norm[img,:,:,cls].shape, ttl)
plot_gaussian( gt_heatmap_norm[img,:,:,cls], title = ttl)
ttl = 'PREDICTED heatmap - image : {} class: {} '.format(img,cls)
print(' *** pred_heatmap ', pred_heatmap_norm[img,:,:,cls].shape, ttl)
plot_gaussian(pred_heatmap_norm[img,:,:,cls], title = ttl)
# print *_tensor / *_heatmap_scores
np.set_printoptions(linewidth=150, threshold=10000)
# print(' Pred tensor: ', pred_tensor.shape, 'gt_tensor:', gt_tensor.shape)
print(' Pred tensor: ', pred_heatmap_scores.shape, 'gt_tensor:', gt_heatmap_scores.shape)
# pt2_sum = tf.reduce_sum(tf.abs(in_tensor[:,:,:,:-1]), axis=-1)
# print(pred_tensor[2])
# print(gt_tensor[2])
# print(gt_heatmap_scores[2])
print(pred_heatmap_scores[2,0])
print(fcn_scores[2,0])
for i in [2]:
for cls in range(4):
pred_tst = pred_heatmap[img,:,:,cls]
gt_tst = gt_heatmap[img,:,:,cls]
print(pred_tst.shape, gt_tst.shape)
print('img/cls :', img,cls, 'pred sum : ',tf.reduce_sum(pred_tst).eval(), 'gt sum : ',tf.reduce_sum(gt_tst).eval())
```
### Plot heatmap produced by network `fcn_bilinear` and compare with `pred_gaussian`
```
from mrcnn.visualize import plot_gaussian
import matplotlib.pyplot as plt
%matplotlib notebook
img = 3
image_id = input_image_meta[img,0]
print('Image id: ',image_id)
print('Classes (1: circle, 2: square, 3: triangle ): ')
image = dataset_train.load_image(image_id)
mask, class_ids = dataset_train.load_mask(image_id)
visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names)
Zout = pred_heatmap_norm # pred_gaussian
Zout2 = fcn_heatmap # fcn_bilinear
print(Zout.shape, Zout2.shape)
num_images = config.IMAGES_PER_GPU
num_classes = config.NUM_CLASSES
width = 9
for cls in range(num_classes):
ttl = 'FR-CNN - image : {} class: {} '.format(img,cls)
print(' *** Zout ', Zout[img,:,:,cls].shape, ttl)
plot_gaussian( Zout[img,:,:,cls], title = ttl, width = width)
ttl = 'FCN_Bilinear - image : {} class: {} '.format(img,cls)
print(' *** Zout2 ', Zout2[img,:,:,cls].shape, ttl)
plot_gaussian(Zout2[img,:,:,cls], title = ttl, width = width)
%matplotlib notebook
width = 12
plot_gaussian2([pred_heatmap_norm, fcn_heatmap_norm], image_idx = 0, title = ttl, width = width)
```
### Compute mean/min/max of regular and L2-normalized Pred, GT, and FCN heatmap tensors
```
sess = KB.get_session()
with sess.as_default():
fcn_masks = KB.identity(fcn_heatmap)
pred_masks = KB.identity(pred_heatmap_norm)
gt_masks = KB.identity(gt_heatmap_norm)
shape = KB.int_shape(pred_masks)
print(shape)
pred_masks_r = tf.reshape(pred_masks, [shape[0], -1, shape[-1]])
fcn_masks_r = tf.reshape(fcn_masks, [shape[0], -1, shape[-1]])
gt_masks_r = tf.reshape(gt_masks, [shape[0], -1, shape[-1]])
print(gt_masks_r.shape, fcn_masks_r.shape)
pred_mean2 = KB.mean(pred_masks_r, axis = 1).eval()
pred_max2 = KB.max(pred_masks_r, axis=1).eval()
pred_min2 = KB.min(pred_masks_r, axis=1).eval()
gt_mean2 = KB.mean(gt_masks_r, axis = 1).eval()
gt_max2 = KB.max(gt_masks_r, axis=1).eval()
gt_min2 = KB.min(gt_masks_r, axis=1).eval()
fcn_mean2 = KB.mean(fcn_masks_r, axis = 1).eval()
fcn_max2 = KB.max(fcn_masks_r, axis=1).eval()
fcn_min2 = KB.min(fcn_masks_r, axis=1).eval()
##---------------------------------------------------------------------------------------------
## Compute L2 Normalization of Pred, GT, and FCN tensors
##---------------------------------------------------------------------------------------------
pred_l2 = KB.l2_normalize(pred_masks_r, axis = 1)
gt_l2 = KB.l2_normalize(gt_masks_r, axis = 1)
fcn_l2 = KB.l2_normalize(fcn_masks_r, axis = 1)
pred_l2_min = KB.min(pred_l2, axis = 1).eval()
pred_l2_max = KB.max(pred_l2, axis = 1).eval()
pred_l2_mean = KB.mean(pred_l2, axis = 1).eval()
gt_l2_min = KB.min(gt_l2, axis = 1).eval()
gt_l2_max = KB.max(gt_l2, axis = 1).eval()
gt_l2_mean = KB.mean(gt_l2, axis = 1).eval()
fcn_l2_min = KB.min(fcn_l2, axis = 1).eval()
fcn_l2_max = KB.max(fcn_l2, axis = 1).eval()
fcn_l2_mean = KB.mean(fcn_l2, axis = 1).eval()
print(' Shape of L2 normalized tensor: ',pred_l2.shape, gt_l2.shape, fcn_l2.shape)
print(' Shape of L2 min tensor : ',pred_l2_min.shape, gt_l2_min.shape, fcn_l2_min.shape)
print(' Shape of L2 max tensor : ',pred_l2_max.shape, gt_l2_max.shape, fcn_l2_max.shape)
print(' Shape of L2 mean tensor : ',pred_l2_mean.shape, gt_l2_mean.shape, fcn_l2_mean.shape)
```
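As a sanity check outside the TF session, the same reshape-and-normalize logic can be sketched in plain NumPy (the shapes below are placeholders matching the tensors above, not actual model output):

```python
import numpy as np

# Placeholder heatmap tensor: (images, height, width, classes).
masks = np.random.rand(5, 128, 128, 4)

# Flatten the spatial dimensions, as tf.reshape does above.
masks_r = masks.reshape(masks.shape[0], -1, masks.shape[-1])

# L2-normalize along the flattened spatial axis (what KB.l2_normalize(axis=1) computes).
l2 = masks_r / np.sqrt(np.sum(masks_r ** 2, axis=1, keepdims=True))

# Every (image, class) column now has unit L2 norm.
print(np.allclose(np.sum(l2 ** 2, axis=1), 1.0))  # True
```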
#### Print results of L2 normalization (Mean, Min, Max) vs. original values
```
for img in range(5):
for cls in range(4):
print('\n I/C:{}/{} '.format(img, cls))
print(' Mean: gt:{:.5e} fcn:{: 11.5e} pred: {:.6f}'\
.format(gt_mean2[img,cls], fcn_mean2[img,cls], pred_mean2[img,cls]))
print(' L2 Mean: gt:{:.5e} fcn:{: 11.5e} pred: {:.6f}'\
.format(gt_l2_mean[img,cls], fcn_l2_mean[img,cls], pred_l2_mean[img,cls]))
print(' MAX: gt:{:.5e} fcn:{: 11.5e} pred: {:.5e}' \
.format(gt_max2[img,cls], fcn_max2[img,cls], pred_max2[img,cls]))
print(' L2 MAX: gt:{:.5e} fcn:{: 11.5e} pred: {:.5e}' \
.format(gt_l2_max[img,cls], fcn_l2_max[img,cls], pred_l2_max[img,cls]))
print(' MIN: gt:{:.5e} fcn:{: 11.5e} pred: {:.5e}'\
.format(gt_min2[img,cls], fcn_min2[img,cls], pred_min2[img,cls]))
print(' L2 MIN: gt:{:.5e} fcn:{: 11.5e} pred: {:.5e}'\
.format(gt_l2_min[img,cls], fcn_l2_min[img,cls], pred_l2_min[img,cls]))
```
#### Print (Mean, Min, Max) values
```
print(gt_masks.shape, fcn_masks.shape)
for img in range(5):
for cls in range(4):
print('I/C: {}/{} min/max gt: [{:.5e} , {:.5e}] pred: [{:.5e} , {:.5e}] fcn:[{:.5e} , {:.5e}] '\
.format(img, cls, gt_min2[img,cls], gt_max2[img,cls], pred_min2[img,cls], pred_max2[img,cls] ,
fcn_min2[img,cls], fcn_max2[img,cls] ))
print('\n\n')
for img in range(5):
for cls in range(4):
print('I/C:{}/{} Mean: gt: {:.5e} fcn : {:9.5e} pred : {:.6f} \t MAX: gt:{:.5e} fcn:{:.5e} pred: {:.5e}'\
.format(img, cls, gt_mean2[img,cls], fcn_mean2[img,cls], pred_mean2[img,cls],
gt_max2[img,cls], fcn_max2[img,cls], pred_max2[img,cls]))
with sess.as_default():
print(tf.shape(pred_masks).eval())
shape = tf.shape(pred_masks).eval()
pred_masks_r = tf.reshape(pred_masks, [shape[0], -1, shape[-1]])
means = KB.mean(pred_masks_r, axis = 1)
maxs = KB.max(pred_masks_r, axis=1)
# norms, means2, var = KB.normalize_batch_in_training(pred_masks[, 1.0, 0.0,[0,3])
l2_norm = KB.l2_normalize (pred_masks_r,axis = 1)
print(pred_masks.shape,pred_masks_r.shape)
print(means.shape, maxs.shape)
# print(' Shape of BN tensor: ', norms.shape)
# print(' Shape of means2 tensor: ', means2.shape)
# print(' Shape of var tensor: ', var.shape)
print(' Shape of L2 normalized tensor: ',l2_norm.shape)
print()
np.set_printoptions(linewidth=130, threshold=20000, precision=6)
# print('norms')
# print(norms.eval())
print('means')
print(means.eval())
print('maxs')
print(maxs.eval())
# pt = layers_out[4] # pred_gaussian
# pt2 = layers_out[10] # pred_gaussian_2
np.set_printoptions(linewidth=130, threshold=20000)
gt = np.transpose(pred_masks.eval(), [0,3,1,2])
gt2 = np.transpose(p2.eval(), [0,3,1,2])
# gt = np.where(gt > 1e-6,gt,0)
# gt2 = np.where(gt2 > 1e-6,gt2,0)
print( ' pt shape ', gt.shape, ' pt2.shape ', gt2.shape)
for img in range(config.BATCH_SIZE):
# print(' from np ')
# print(pt[img])
# print(' from tensorflow')
# print(pt2[img])
for cls in range(4):
equal = np.equal(gt2[img,cls,:,:] , gt[img, cls,:,:])
print(equal.shape)
print( 'Image ',img,' Class ',cls, ' all equal: ',equal.all())
print(equal.shape)
if (~equal.all()):
print('Not Equal: ',~equal)
print( 'Image ',img,' Class ',cls, ' ALL EQUAL: ',equal.all())
# print('\n -- using numpy \n', gt[img, cls, ~equal])
# print('\n -- using tensorflow \n', gt2[img, cls, ~equal])
# if not equal display the different between the mismatching rows
for i in range(equal.shape[0]):
if ~equal[i]:
diff = np.abs(gt2[img, cls, i] - gt[img, cls, i])
big_error = np.any(diff > 3.0e-9, axis = -1)
print(' row = ', i, ' rows equal = ',equal[i], ' Big Error (larger than 3.0e-9): ' ,big_error)
if big_error:
print(' difference :', diff )
# print(' -- using numpy \n',gt[img,cls,i])
# print(' -- using tensorflow \n',gt2[img,cls,i])
```
### Display ground truth bboxes from Shapes database (using `load_image_gt` )
Here we are displaying the ground truth bounding boxes as provided by the dataset
```
from mrcnn.utils import display_gt_bboxes, dispaly_roi_bboxes
model_info = [model, config, dataset_train, train_generator]
display_gt_bboxes(model_info, input_image_meta, 0)
```
### Display Predicted Ground Truth Bounding Boxes `gt_tensor` and `gt_tensor2`
layers_out[22] `gt_tensor` is based on input_gt_class_ids and input_normlzd_gt_boxes
layers_out[28] `gt_tensor2` is based on input_gt_class_ids and input_normlzd_gt_boxes, generated using Tensorflow
Display the Ground Truth bounding boxes from the tensor we've constructed
```
from mrcnn.utils import stack_tensors, stack_tensors_3d
gt_bboxes_stacked = stack_tensors_3d(layers_out[23][img])
# print(gt_bboxes)
# visualize.display_instances(p_original_image, p_gt_bbox, p_gt_mask, p_gt_class_id,
# dataset_train.class_names, figsize=(8, 8))
# pp.pprint(gt_bboxes)
img = 0
image_id = img_meta[img,0]
print('Image id: ',image_id)
p_image, p_image_meta, p_gt_class_id, p_gt_bbox, p_gt_mask = \
load_image_gt(dataset_train, config, image_id, augment=False, use_mini_mask=True)
print(gt_bboxes_stacked)
visualize.draw_boxes(p_image, gt_bboxes_stacked[:,0:4])
```
### Display RoI proposals `pred_bboxes` generated for one class
Display bounding boxes from tensor of proposals produced by the network
Square: 1, Circle: 2, Triangle: 3
```
model_info = [model, config, dataset_train, train_generator]
display_roi_proposals(model_info, input_image_meta, pred_tensor, 0)
```
# [Hashformers](https://github.com/ruanchaves/hashformers)
Hashformers is a framework for hashtag segmentation with transformers. For more information, please check the [GitHub repository](https://github.com/ruanchaves/hashformers).
# Installation
The steps below will install the hashformers framework on Google Colab.
Make sure you are in GPU mode.
```
!nvidia-smi
```
Here we install `mxnet-cu110`, which is compatible with Google Colab.
If installing in another environment, replace it with the mxnet package compatible with your CUDA version.
```
%%capture
!pip install mxnet-cu110
!pip install hashformers
```
# Segmenting hashtags
Visit the [HuggingFace Model Hub](https://huggingface.co/models) and choose any GPT-2 and BERT model for the WordSegmenter class.
The GPT-2 model should be passed as `segmenter_model_name_or_path` and the BERT model as `reranker_model_name_or_path`.
Here we choose `distilgpt2` and `distilbert-base-uncased`.
```
%%capture
from hashformers import TransformerWordSegmenter as WordSegmenter
ws = WordSegmenter(
segmenter_model_name_or_path="distilgpt2",
reranker_model_name_or_path="distilbert-base-uncased"
)
```
Now we can simply segment lists of hashtags with the default settings and look at the segmentations.
```
hashtag_list = [
"#myoldphonesucks",
"#latinosinthedeepsouth",
"#weneedanationalpark"
]
segmentations = ws.segment(hashtag_list)
print(*segmentations, sep='\n')
```
Remember that any pair of BERT and GPT-2 models will work. This means you can use **hashformers** to segment hashtags in any language, not just English.
```
%%capture
from hashformers import TransformerWordSegmenter as WordSegmenter
portuguese_ws = WordSegmenter(
segmenter_model_name_or_path="pierreguillou/gpt2-small-portuguese",
reranker_model_name_or_path="neuralmind/bert-base-portuguese-cased"
)
hashtag_list = [
"#benficamemes",
"#mouraria",
"#CristianoRonaldo"
]
segmentations = portuguese_ws.segment(hashtag_list)
print(*segmentations, sep='\n')
```
# Advanced usage
## Speeding up
If you want to investigate the speed-accuracy trade-off, here are a few things that can be done to improve the speed of the segmentations:
* Turn off the reranker model by passing `use_reranker = False` to the `ws.segment` method.
* Adjust the `segmenter_gpu_batch_size` (default: `1` ) and the `reranker_gpu_batch_size` (default: `2000`) parameters in the `WordSegmenter` initialization.
* Decrease the beam search parameters `topk` (default: `20`) and `steps` (default: `13`) when calling the `ws.segment` method.
```
%%capture
from hashformers import TransformerWordSegmenter as WordSegmenter
ws = WordSegmenter(
segmenter_model_name_or_path="distilgpt2",
reranker_model_name_or_path="distilbert-base-uncased",
segmenter_gpu_batch_size=1,
reranker_gpu_batch_size=2000
)
%%timeit
hashtag_list = [
"#myoldphonesucks",
"#latinosinthedeepsouth",
"#weneedanationalpark"
]
segmentations = ws.segment(hashtag_list)
%%timeit
hashtag_list = [
"#myoldphonesucks",
"#latinosinthedeepsouth",
"#weneedanationalpark"
]
segmentations = ws.segment(
hashtag_list,
topk=5,
steps=5,
use_reranker=False
)
```
## Getting the ranks
If you pass `return_ranks=True` to the `ws.segment` method, you will receive a dictionary with the ranks generated by the segmenter and the reranker, the dataframe utilized by the ensemble, and the final segmentations. A segmentation ranks higher if its score value is **lower** than the other segmentation scores.
Rank outputs are useful if you want to combine the segmenter rank and the reranker rank in ways which are more sophisticated than what is done by the basic ensembler that comes by default with **hashformers**.
For instance, you may want to take two or more ranks (also called "runs"), convert them to the TREC format, and combine them through a rank fusion technique from the [trectools library](https://github.com/joaopalotti/trectools).
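To make the idea concrete, here is a minimal, naive score-sum fusion of two ranks in plain pandas. The frames below are hypothetical stand-ins for the segmenter and reranker ranks (the column names are illustrative, not the exact hashformers schema), and this is far simpler than the fusion methods trectools provides:

```python
import pandas as pd

# Hypothetical rank frames: one row per candidate segmentation, lower score = better.
segmenter_rank = pd.DataFrame({
    "characters":   ["myoldphonesucks", "myoldphonesucks"],
    "segmentation": ["my old phone sucks", "myold phone sucks"],
    "score":        [12.3, 15.9],
})
reranker_rank = pd.DataFrame({
    "characters":   ["myoldphonesucks", "myoldphonesucks"],
    "segmentation": ["my old phone sucks", "myold phone sucks"],
    "score":        [8.1, 9.4],
})

# Naive fusion: sum the two scores per candidate and keep the best per hashtag.
fused = segmenter_rank.merge(reranker_rank,
                             on=["characters", "segmentation"],
                             suffixes=("_seg", "_rr"))
fused["score"] = fused["score_seg"] + fused["score_rr"]
best = fused.sort_values("score").groupby("characters").head(1)
print(best["segmentation"].tolist())  # ['my old phone sucks']
```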
```
hashtag_list = [
"#myoldphonesucks",
"#latinosinthedeepsouth",
"#weneedanationalpark"
]
ranks = ws.segment(
hashtag_list,
use_reranker=True,
return_ranks=True
)
# Segmenter rank
ranks.segmenter_rank
# Reranker rank
ranks.reranker_rank
```
## Evaluation
The `evaluate_df` function can evaluate the accuracy, precision, and recall of our segmentations. It uses exactly the same evaluation method as previous authors in the field of hashtag segmentation (Çelebi et al., [BOUN Hashtag Segmentor](https://tabilab.cmpe.boun.edu.tr/projects/hashtag_segmentation/)).
We have to pass a dataframe with fields for the gold segmentations (a `gold_field`) and our candidate segmentations (a `segmentation_field`).
The relationship between gold and candidate segmentations does not have to be one-to-one. If we pass more than one candidate segmentation for a single hashtag, `evaluate_df` will measure the upper bound achievable on our ranks (e.g. Acc@10, Recall@10).
### Minimal example
```
# Let's measure the actual performance of the segmenter:
# we will evaluate only the top-1.
import pandas as pd
from hashformers.experiments.evaluation import evaluate_df
gold_segmentations = {
"myoldphonesucks" : "my old phone sucks",
"latinosinthedeepsouth": "latinos in the deep south",
"weneedanationalpark": "we need a national park"
}
gold_df = pd.DataFrame(gold_segmentations.items(),
columns=["characters", "gold"])
segmenter_top_1 = ranks.segmenter_rank.groupby('characters').head(1)
eval_df = pd.merge(gold_df, segmenter_top_1, on="characters")
eval_df
evaluate_df(
eval_df,
gold_field="gold",
segmentation_field="segmentation"
)
```
### Benchmarking
Here we evaluate a `distilgpt2` model on 1000 hashtags.
We collect our hashtags from 10 word segmentation datasets by taking the first 100 hashtags from each dataset.
```
%%capture
!pip install datasets
%%capture
from hashformers.experiments.evaluation import evaluate_df
import pandas as pd
from hashformers import TransformerWordSegmenter
from datasets import load_dataset
user = "ruanchaves"
dataset_names = [
"boun",
"stan_small",
"stan_large",
"dev_stanford",
"test_stanford",
"snap",
"hashset_distant",
"hashset_manual",
"hashset_distant_sampled",
"nru_hse"
]
dataset_names = [ f"{user}/{dataset}" for dataset in dataset_names ]
ws = TransformerWordSegmenter(
segmenter_model_name_or_path="distilgpt2",
reranker_model_name_or_path=None
)
def generate_experiments(datasets, splits, samples=100):
for dataset_name in datasets:
for split in splits:
try:
dataset = load_dataset(dataset_name, split=f"{split}[0:{samples}]")
yield {
"dataset": dataset,
"split": split,
"name": dataset_name
}
except Exception:
continue
benchmark = []
for experiment in generate_experiments(dataset_names, ["train", "validation", "test"], samples=100):
hashtags = experiment['dataset']['hashtag']
annotations = experiment['dataset']['segmentation']
segmentations = ws.segment(hashtags, use_reranker=False, return_ranks=False)
eval_df = [{
"gold": gold,
"hashtags": hashtag,
"segmentation": segmentation
} for gold, hashtag, segmentation in zip(annotations, hashtags, segmentations)]
eval_df = pd.DataFrame(eval_df)
eval_results = evaluate_df(
eval_df,
gold_field="gold",
segmentation_field="segmentation"
)
eval_results.update({
"name": experiment["name"],
"split": experiment["split"]
})
benchmark.append(eval_results)
benchmark_df = pd.DataFrame(benchmark)
benchmark_df["name"] = benchmark_df["name"].apply(lambda x: x[(len(user) + 1):])
benchmark_df = benchmark_df.set_index(["name", "split"])
benchmark_df = benchmark_df.round(3)
benchmark_df
benchmark_df.agg(['mean', 'std']).round(3)
```
In the previous tutorial, we introduced you to the basics of binary finite fields, but didn't really dive into the math or the implementation. In this tutorial, we're going to go deeper and actually walk through the mathematics of how binary fields actually work.
# What are binary finite fields?
Finite fields of order $2^m$ ($GF(2^m)$) are called binary fields or characteristic-two finite fields. They are of special interest because they are particularly efficient for implementation in hardware, or on a binary computer.
The elements of $GF(2^m)$ are binary polynomials, i.e. polynomials whose coefficients are either 0 or 1. There are $2^m$ such polynomials in the field and the degree of each polynomial is no more than $m-1$. Therefore, the elements can be represented as $m$-bit strings, with each bit in the bit string corresponding to the coefficient in the polynomial at the same position. For example, $GF(2^3)$ contains 8 elements $\{0,1,x,x+1,x^2,x^2+1,x^2+x,x^2+x+1\}$. $x+1$ is actually $0x^2+1x+1$, so it can be represented as the bit string 011. Similarly, $x^2+x = 1x^2+1x+0$, so it can be represented as 110.
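A quick plain-Python illustration of this bit-string encoding (the helper below is our own, not part of any library):

```python
# Encode elements of GF(2^3) as 3-bit integers: bit i holds the coefficient of x^i.
def poly_str(n, m=3):
    """Render an m-bit integer as the binary polynomial it represents."""
    terms = [("1" if i == 0 else "x" if i == 1 else f"x^{i}")
             for i in range(m - 1, -1, -1) if (n >> i) & 1]
    return " + ".join(terms) if terms else "0"

print(format(0b011, "03b"), "=", poly_str(0b011))  # 011 = x + 1
print(format(0b110, "03b"), "=", poly_str(0b110))  # 110 = x^2 + x
```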
# How can we perform integer operations in $GF(2^m)$? Are they the same as integer operations in binary representation?
Integer operations in $GF(2^m)$ are a little different from regular operations. In the following we provide details of addition, subtraction, multiplication, and division operations for binary numbers in $GF(2^m)$.
## Addition/Subtraction:
In modulo 2 arithmetic, $1+1 \equiv 0~~mod~~2$, $1+0 \equiv 1~~mod~~2$, and $0+0 \equiv 0~~mod~~2$, which coincide with bit-XOR, i.e. $1 \oplus 1 = 0$, $1 \oplus 0 = 1$ and $0 \oplus 0 = 0$. Therefore, for binary polynomials, addition is simply bit-by-bit XOR. Also, in modulo 2 arithmetic, $-1 \equiv 1~~mod~~2$, so subtraction of elements gives the same result as addition. This is the general form of the addition and subtraction operations:
$A = a_{m-1} x^{m-1}+a_{m-2} x^{m-2}+\ldots+a_1 x^1+a_0$ where $a_i \in \{0,1\}$ for $i = 0, \ldots, m-1$
$B = b_{m-1} x^{m-1}+b_{m-2} x^{m-2}+\ldots+b_1 x^1+b_0$ where $b_i \in \{0,1\}$ for $i = 0, \ldots, m-1$
$A+B = A-B = (a_{m-1} \oplus b_{m-1})x^{m-1}+(a_{m-2} \oplus b_{m-2})x^{m-2}+\ldots+(a_{1} \oplus b_{1})x^{1}+(a_{0} \oplus b_{0})$
#### Example:
$(x^2+x+1)+(x^3+x^2+1)= x^3+2x^2+x+2= x^3+x$ since $2 \equiv 0~~mod~~2$. It can also be computed as $0111 \oplus 1101 = 1010$
$(x^2+x+1)-(x^3+x^2+1)= x^3+x$
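Before turning to a library, the same example can be checked with plain integer XOR (using the bit patterns from the text):

```python
# Addition and subtraction in GF(2^m) are both plain bitwise XOR.
A = 0b0111  # x^2 + x + 1
B = 0b1101  # x^3 + x^2 + 1
print(format(A ^ B, "04b"))  # 1010, i.e. x^3 + x  (A - B gives the same result)
```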
```
import starks
from starks.modp import IntegersModP
from starks.polynomial import polynomials_over
from starks.finitefield import FiniteField
#GF(2^4)
p = 2
m = 4
Zp = IntegersModP(p)
polysOver = polynomials_over(Zp)
#reduction function p(x) = x^4+x+1
coefficients = [Zp(0)] * 5
coefficients[0] = Zp(1)
coefficients[1] = Zp(1)
coefficients[4] = Zp(1)
poly = polysOver(coefficients)
field = FiniteField(p, m, polynomialModulus=poly)
#A = x^2+x+1
A = field(polysOver([1,1,1]))
#B = x^3+x^2+1
B = field(polysOver([1,0,1,1]))
A+B
```
0 + 1 x^1 + 0 x^2 + 1 x^3 $\in$ F_{2^4}
```
A-B
```
0 + 1 x^1 + 0 x^2 + 1 x^3 $\in$ F_{2^4}
## Multiplication:
Multiplication of binary polynomials can be implemented as simple bit-shifts and XORs, but in general form we can define multiplication as follows:
$A = a_{m-1} x^{m-1}+a_{m-2} x^{m-2}+\ldots+a_1 x^1+a_0$ where $a_i \in \{0,1\}$ for $i = 0, \ldots, m-1$
$B = b_{m-1} x^{m-1}+b_{m-2} x^{m-2}+\ldots+b_1 x^1+b_0$ where $b_i \in \{0,1\}$ for $i = 0, \ldots, m-1$
$A \times B = (a_{m-1} \cdot b_{m-1})x^{2m-2}+(a_{m-1} \cdot b_{m-2}+a_{m-2} \cdot b_{m-1})x^{2m-3}+\ldots+(a_0 \cdot b_1+a_1 \cdot b_0 ) x^1+(a_0 \cdot b_0)$
#### Example:
$(x^2+x+1) \times (x^3+x^2+1) = x^5+2x^4+2x^3+2x^2+x+1= x^5+x+1$ after reduction modulo 2. It can also be computed as $0111 \times 1101 = 110100 \oplus 11010 \oplus 1101 = 100011$
#### Note:
In $GF(2^m)$, when the degree of the result is more than $m-1$, it needs to be reduced modulo an irreducible reduction polynomial $p(x)$ of degree $m$. This can also be implemented with bit-shifts and XORs. The general form of the multiplication operation is updated as follows:
$A \times B = (a_{m-1} \cdot b_{m-1})x^{2m-2}+(a_{m-1} \cdot b_{m-2}+a_{m-2} \cdot b_{m-1})x^{2m-3}+\ldots+(a_0 \cdot b_1+a_1 \cdot b_0 ) x^1+(a_0 \cdot b_0)~~mod~~p(x)$
Since the details of the reduction procedure are somewhat complex, and libraries exist to perform this computation, we do not focus on them here. If you are interested in the details, we encourage you to study section 2.3.5 of \cite{link4}. We provide an example below to show how reduction decreases the degree of the product.
#### Example:
Reduction polynomial: $p(x)= x^4+x+1$
$(x^2+x+1) \times (x^3+x^2+1) = x^5+x+1 \equiv x^2+1~~mod~~p(x)$
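As a sketch of the shift-and-XOR procedure (our own minimal implementation, not the one used by starks), multiplication with reduction modulo $p(x)=x^4+x+1$ can be written as:

```python
def gf2_mul(a, b, p=0b10011, m=4):
    """Multiply two GF(2^m) elements (shift-and-XOR), reducing modulo p(x)."""
    prod = 0
    while b:
        if b & 1:          # for each set bit of b, XOR in a shifted copy of a
            prod ^= a
        a <<= 1
        b >>= 1
    # Reduce: for each bit above degree m-1, XOR in a shifted copy of p(x).
    for deg in range(prod.bit_length() - 1, m - 1, -1):
        if prod & (1 << deg):
            prod ^= p << (deg - m)
    return prod

# (x^2+x+1) * (x^3+x^2+1) mod x^4+x+1  ->  x^2 + 1
print(format(gf2_mul(0b0111, 0b1101), "04b"))  # 0101
```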
```
A*B
```
1 + 0 x^1 + 1 x^2 $\in$ F_{2^4}
## Division:
The division operation is implemented via multiplication and the modular inverse. This means we can implement the division operation as follows:
$A = a_{m-1} x^{m-1}+a_{m-2} x^{m-2}+\ldots+a_1 x^1+a_0$ where $a_i \in \{0,1\}$ for $i = 0, \ldots, m-1$
$B = b_{m-1} x^{m-1}+b_{m-2} x^{m-2}+\ldots+b_1 x^1+b_0$ where $b_i \in \{0,1\}$ for $i = 0, \ldots, m-1$
$\frac{A}{B} = A \times B^{-1}~~mod~~p(x)$ where $p(x)$ is the reduction polynomial
#### How can we perform the modular inverse operation?
There is a well-known algorithm for computing the modular inverse: the extended Euclidean GCD algorithm. Since implementations of the extended Euclidean GCD algorithm are already available, in the following we provide the high-level idea of how it can help us find the modular inverse. If you are interested in learning more, we encourage you to study the following link:
https://engineering.purdue.edu/kak/compsec/NewLectures/Lecture5.pdf
Given an $a$ that is relatively prime to $n$, we must obviously have $gcd(a, n) = 1$. Such $a$ and $n$ must satisfy the following constraint for some $x$ and $y$:
$xa+yn = 1$
Let’s now consider this equation modulo $n$. Since $y$ is an integer, $yn~~mod~~n$ equals 0. Thus, it must be the case that, considered modulo $n$, $x$ equals $a^{-1}$, the multiplicative inverse of $a$ modulo $n$. We extend this solution to work in $GF(2^m)$.
#### Example:
Reduction polynomial: $p(x) = x^4+x+1$
$\frac{x^2+x+1}{x^3+x^2+1} = (x^2+x+1) \times (x^3+x^2+1)^{-1} = (x^2+x+1) \times (x^2) \equiv (x^3+x^2+x+1) ~~mod~~p(x)$
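The extended Euclidean idea above can be sketched over bit-packed polynomials (again our own illustration; the library handles this internally):

```python
def gf2_divmod(a, b):
    """Polynomial division of a by b over GF(2); operands are bit-packed ints."""
    q = 0
    while a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q ^= 1 << shift
        a ^= b << shift
    return q, a

def gf2_inverse(a, p=0b10011):
    """Multiplicative inverse of a mod p(x), via the extended Euclidean algorithm."""
    r0, r1 = p, a
    s0, s1 = 0, 1
    while r1 != 1:
        q, r = gf2_divmod(r0, r1)
        r0, r1 = r1, r
        # Update the Bezout coefficient: s0 - q*s1, with carry-less multiply q*s1.
        qs, shift = 0, 0
        while q:
            if q & 1:
                qs ^= s1 << shift
            q >>= 1
            shift += 1
        s0, s1 = s1, s0 ^ qs
    return s1

# Inverse of x^3+x^2+1 mod x^4+x+1 is x^2.
print(format(gf2_inverse(0b1101), "04b"))  # 0100
```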
```
A/B
```
1 + 1 x^1 + 1 x^2 + 1 x^3 $\in$ F_{2^4}
## Exercises
1. Define new polynomials `C` and `D`. Work out by hand what `C + D` would be. Verify your answer using code.
2. Work out by hand what `C * D` would be. Verify your answer using code.
3. (Extra Credit) Work out what `C / D` would be.
A quick look at GAMA bulge and disk colours in multi-band GALAPAGOS fits versus single-band GALAPAGOS and SIGMA fits.
Pretty plots at the bottom.
```
%matplotlib inline
from matplotlib import pyplot as plt
# better-looking plots
plt.rcParams['font.family'] = 'serif'
plt.rcParams['figure.figsize'] = (10.0*1.3, 8*1.3)
plt.rcParams['font.size'] = 18*1.3
import pandas
import numpy as np
#from galapagos_to_pandas import galapagos_to_pandas
## convert the GALAPAGOS data
#galapagos_to_pandas()
## convert the SIGMA data
#galapagos_to_pandas('/home/ppzsb1/projects/gama/qc/raw/StructureCat_SersicExp.fits',
# '/home/ppzsb1/quickdata/StructureCat_SersicExp.h5')
## read in GALAPAGOS data
## no attempt has been made to select only reliable bulges and discs
store = pandas.HDFStore('/home/ppzsb1/quickdata/GAMA_9_all_combined_gama_only_bd6.h5')
data = store['data'].set_index('CATAID')
print(len(data))
## read in SIGMA data - this is the raw sersic+exponential catalogue
## no attempt has been made here to select true two-component systems
store = pandas.HDFStore('/home/ppzsb1/quickdata/StructureCat_SersicExp.h5')
sigma = store['data'].set_index('CATAID')
print(len(sigma))
## get overlap between the catalogue objects
data = data.join(sigma, how='inner', rsuffix='_SIGMA')
len(data)
## restrict to bright objects
data = data[data['MAG_GALFIT'] < 18.0]
len(data)
## band information
allbands = list('ugrizYJHK')
#band_wl = pandas.Series([3543,4770,6231,7625,9134,10395,12483,16313,22010], index=allbands)
normband = 'K'
bands = list('ugrizYJH')
band_labels = ['${}$'.format(i) for i in bands]
band_wl = pandas.Series([3543,4770,6231,7625,9134,10395,12483,16313], index=bands)
#normband = 'Z'
#bands = list('ugriYJHK')
#band_wl = numpy.array([3543,4770,6231,7625,10395,12483,16313,22010])
## extract magnitudes and use consistent column labels
mags_b = data[['MAG_GALFIT_BAND_B_{}'.format(b.upper()) for b in allbands]]
mags_d = data[['MAG_GALFIT_BAND_D_{}'.format(b.upper()) for b in allbands]]
mags_b_single = data[['SINGLE_MAG_GALFIT_B_{}'.format(b.upper()) for b in allbands]]
mags_d_single = data[['SINGLE_MAG_GALFIT_D_{}'.format(b.upper()) for b in allbands]]
mags_b_sigma = data[['GALMAG_01_{}'.format(b) for b in allbands]]
mags_d_sigma = data[['GALMAG_02_{}'.format(b) for b in allbands]]
mags_b.columns = mags_d.columns = allbands
mags_b_single.columns = mags_d_single.columns = allbands
mags_b_sigma.columns = mags_d_sigma.columns = allbands
## normalise SEDs and select only objects for which all magnitudes are sensible
def get_normsed(mags, bands, normband):
normsed = mags[bands]
normsed = normsed.sub(mags[normband], axis='index')
good = ((normsed > -50) & (normsed < 50)).T.all()
good &= ((mags[normband] > -50) & (mags[normband] < 50))
return normsed, good
## get normalised SEDs
normsed_b, good_b = get_normsed(mags_b, bands, normband)
normsed_b_single, good_b_single = get_normsed(mags_b_single, bands, normband)
normsed_b_sigma, good_b_sigma = get_normsed(mags_b_sigma, bands, normband)
normsed_d, good_d = get_normsed(mags_d, bands, normband)
normsed_d_single, good_d_single = get_normsed(mags_d_single, bands, normband)
normsed_d_sigma, good_d_sigma = get_normsed(mags_d_sigma, bands, normband)
print(len(normsed_d))
## restrict sample to set of object that are good in all three catalogues
good_b &= good_b_single & good_b_sigma
good_d &= good_d_single & good_d_sigma
normsed_b_single = normsed_b_single[good_b]
normsed_d_single = normsed_d_single[good_d]
normsed_b_sigma = normsed_b_sigma[good_b]
normsed_d_sigma = normsed_d_sigma[good_d]
normsed_b = normsed_b[good_b]
normsed_d = normsed_d[good_d]
print(len(normsed_d))
## overlay all SEDS
def plot_labels(i, label):
if i == 1:
plt.title('bulges')
if i == 2:
plt.title('discs')
if i == 3:
plt.ylabel('mag offset from $K$-band')
if i % 2 == 0:
plt.ylabel(label)
fig = plt.figure(figsize=(12,8))
def plot(d, label):
if not hasattr(plot, "plotnum"):
plot.plotnum = 0
plot.plotnum += 1
ax = plt.subplot(3, 2, plot.plotnum)
d.T.plot(ax=ax, x=band_wl, ylim=(5,-2), legend=False, color='r', alpha=0.2)
ax.xaxis.set_ticks(band_wl)
ax.xaxis.set_ticklabels(bands)
plot_labels(plot.plotnum, label)
plt.axis(ymin=8, ymax=-5)
plot(normsed_b, 'GALA multi')
plot(normsed_d, 'GALA multi')
plot(normsed_b_single, 'GALA single')
plot(normsed_d_single, 'GALA single')
plot(normsed_b_sigma, 'SIGMA')
plot(normsed_d_sigma, 'SIGMA')
plt.subplots_adjust(wspace=0.25, hspace=0.25)
## produce boxplots
fig = plt.figure(figsize=(12,8))
def boxplot(d, label):
if not hasattr(boxplot, "plotnum"):
boxplot.plotnum = 0
boxplot.plotnum += 1
plt.subplot(3, 2, boxplot.plotnum)
d.boxplot(sym='b.')
plot_labels(boxplot.plotnum, label)
plt.axis(ymin=8, ymax=-5)
boxplot(normsed_b, 'GALA multi')
boxplot(normsed_d, 'GALA multi')
boxplot(normsed_b_single, 'GALA single')
boxplot(normsed_d_single, 'GALA single')
boxplot(normsed_b_sigma, 'SIGMA')
boxplot(normsed_d_sigma, 'SIGMA')
plt.subplots_adjust(wspace=0.25, hspace=0.25)
## functions to produce nice asymmetric violin plots
## clip tails of the distributions to produce neater violins
from scipy.stats import scoreatpercentile
def clip(x, p=1):
y = []
for xi in x:
p_lo = scoreatpercentile(xi, p)
p_hi = scoreatpercentile(xi, 100-p)
y.append(xi.clip(p_lo, p_hi))
return y
## fancy legend text, which mimics the appearance of the violin plots
import matplotlib.patheffects as PathEffects
def outlined_text(x, y, text, color='k', rotation=0):
## \u2009 is a hairspace
## DejaVu Serif is specified as the default serif fonts on my system don't have this character
plt.text(x, y, u'\u2009'.join(text), color='white', alpha=0.5,
fontname='DejaVu Serif', rotation=rotation,
path_effects=[PathEffects.withStroke(linewidth=2.5, foreground=color, alpha=1.0)])
## produce asymmetric violin plots
from statsmodels.graphics.boxplots import violinplot
def bdviolinplot(bm, bs, dm, ds, mtext='', stext=''):
wl = np.log10(band_wl)*10
vw = 0.5
vlw = 1.5
p = violinplot(clip(bm.T.values), labels=band_labels, positions=wl,
side='left', show_boxplot=False,
plot_opts={'violin_width':vw, 'violin_fc':'red',
'violin_ec':'darkred', 'violin_lw':vlw})
p = violinplot(clip(bs.T.values), ax=plt.gca(), labels=band_labels,
positions=wl, side='right', show_boxplot=False,
plot_opts={'violin_width':vw, 'violin_fc':'red',
'violin_ec':'darkred', 'violin_lw':vlw})
p = violinplot(clip(dm.T.values), ax=plt.gca(), labels=band_labels,
positions=wl, side='left', show_boxplot=False,
plot_opts={'violin_width':vw, 'violin_fc':'blue',
'violin_ec':'darkblue', 'violin_lw':vlw})
p = violinplot(clip(ds.T.values), ax=plt.gca(), labels=band_labels,
positions=wl, side='right', show_boxplot=False,
plot_opts={'violin_width':vw, 'violin_fc':'blue',
'violin_ec':'darkblue', 'violin_lw':vlw})
## overlay median trends
plt.plot(wl, bm.median(), color='r', lw=2)
plt.plot(wl, bs.median(), color='r', ls='--', lw=2)
plt.plot(wl, dm.median(), color='b', lw=2)
plt.plot(wl, ds.median(), color='b', ls='--', lw=2)
## tidy up
plt.axis(ymin=8, ymax=-5)
plt.ylabel('mag offset from $K$-band')
plt.text(38.5, 7.3, '{} galaxies'.format(len(bm)))
## legend
x, y = (41.0, 6.9)
outlined_text(x, y, 'discs', 'darkblue')
outlined_text(x, y+0.75, 'bulges', 'darkred')
x, y = (x-0.35, 2.2)
outlined_text(x, y, 'multi-band', '0.1', rotation=90)
outlined_text(x+0.4, y, 'single-band', '0.1', rotation=90)
outlined_text(x-0.3, y, mtext, '0.3', rotation=90)
outlined_text(x+0.7, y, stext, '0.3', rotation=90)
bdviolinplot(normsed_b, normsed_b_single, normsed_d, normsed_d_single,
'GALAPAGOS', 'GALAPAGOS')
```
The figure is an asymmetric violin plot, which compares the distribution of disc and bulge SEDs with one another and between multi- and single-band fitting approaches. For the multi-band fits, all the images were fit simultaneously, constrained to the same structural parameters, but with magnitude free to vary. For the single-band fits, each image was fit completely independently. All the fits were performed with GALAPAGOS and GALFITM, which allows a simple, fair comparison. However, as SIGMA contains logic to retry fits which do not meet physical expectations, it is likely to perform somewhat differently. The sample used is ~400 galaxies with r < 18 mag and 0.025 < redshift < 0.06 for which none of the fits crashed (more sophisticated cleaning could certainly be done). The SEDs are normalised to the K-band magnitude.
The disc data are shown in blue, while the bulge data are shown in red. The shape of each side of a violin represents the distribution of magnitude offset for that band. The left-side of each violin presents the multi-band fit results, while the right-sides present the single-band results. The medians of each distribution are also plotted in their corresponding colour, with solid lines for multi-band and dashed lines for single-band results.
The single-band results do not distinguish very much between the SEDs of bulge and disc components, as can be seen from the coincidence between the dashed lines and the fact that the right-sides of the red and blue violins mostly overlap.
In contrast, the multi-band results show a significant difference between the SEDs of bulges and discs, in terms of both the medians and the overall distributions. Note that there is no colour difference between the components in the initial parameters; the colour difference arises simply from the improved decomposition afforded by the multi-band approach.
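The split-violin layout described above can be reproduced with seaborn's `split=True` option. Below is a minimal, self-contained sketch on synthetic magnitude offsets; the band names, spreads, and sample sizes are purely illustrative and are not the GALAPAGOS data:

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import seaborn as sns

rng = np.random.default_rng(0)
# synthetic normalised SED offsets: five bands, two fitting modes,
# with single-band fits given a larger spread to mimic the figure
rows = []
for band in ["u", "g", "r", "i", "z"]:
    for mode, spread in [("multi-band", 0.2), ("single-band", 0.5)]:
        rows += [{"band": band, "mode": mode, "offset": v}
                 for v in rng.normal(0.0, spread, 200)]
df = pd.DataFrame(rows)

# split=True draws the two hue levels as the left/right halves of each violin
ax = sns.violinplot(data=df, x="band", y="offset", hue="mode",
                    split=True, inner="quartile")
ax.set_ylabel("magnitude offset")
plt.savefig("violins.png")
```

With two hue levels, each band gets one violin whose halves can be compared directly, which is exactly the asymmetric layout used in the figure.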
```
bdviolinplot(normsed_b, normsed_b_sigma, normsed_d, normsed_d_sigma,
'GALAPAGOS', 'SIGMA')
```
The figure is the same as above, but now compares the GALAPAGOS multi-band fit results to single-band fits using SIGMA. The SIGMA results show less scatter than the GALAPAGOS single-band fits, but there is still very little differentiation between the SEDs of bulges and discs.
```
%matplotlib inline
```
What is PyTorch?
================
A Python-based scientific computing package targeted at two sets of audiences:
- A replacement for NumPy that can use the power of GPUs
- A deep learning research platform that provides maximum flexibility and speed
Getting Started
---------------
Tensors
^^^^^^^
Tensors are similar to NumPy's ndarrays, with the addition that in PyTorch
Tensors can use a GPU to accelerate computing.
```
from __future__ import print_function
import torch
```
Construct an uninitialized 5x3 matrix:
```
x = torch.empty(5, 3)
print(x)
```
Construct a randomly initialized matrix:
```
x = torch.rand(5, 3)
print(x)
```
Construct a matrix filled with zeros, of dtype long:
```
x = torch.zeros(5, 3, dtype=torch.long)
print(x)
```
Construct a tensor directly from existing data:
```
x = torch.tensor([5.5, 3])
print(x)
```
Create a tensor based on an existing tensor. These methods reuse the properties of the input tensor, e.g. dtype, unless new values are provided.
```
x = x.new_ones(5, 3, dtype=torch.double)    # new_* methods create new tensors
print(x)
x = torch.randn_like(x, dtype=torch.float)  # override dtype!
print(x)                                    # result has the same size; only the values and dtype change
```
Get its size:
***Translator's note: `size()` returns the same information as NumPy's `shape` attribute; tensors also support a `shape` attribute, covered in detail later.***
```
print(x.size())
```
<div class="alert alert-info"><h4>Note</h4><p>``torch.Size`` is in fact a tuple, so it supports all tuple operations.</p></div>
Operations
^^^^^^^^^^
There are multiple syntaxes for operations.
We will look at the addition operation.
Addition: syntax 1
```
y = torch.rand(5, 3)
print(x + y)
```
Addition: syntax 2
```
print(torch.add(x, y))
```
Addition: providing an output tensor as an argument
```
result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)
```
Addition: in-place
```
# adds x to y
y.add_(x)
print(y)
```
<div class="alert alert-info"><h4>Note</h4><p>Any operation that mutates a tensor in-place is post-fixed with an ``_``.
For example: ``x.copy_(y)`` and ``x.t_()`` will change ``x``.</p></div>
You can use standard NumPy-like indexing operations on tensors
```
print(x[:, 1])
```
``torch.view`` can change the dimensions and size of a tensor
***Translator's note: torch.view is similar to NumPy's reshape.***
```
x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8)  # the size -1 is inferred from the other dimensions
print(x.size(), y.size(), z.size())
```
If you have a one-element tensor, use ``.item()`` to get the value as a Python number
```
x = torch.randn(1)
print(x)
print(x.item())
```
**Read later:**
100+ Tensor operations, including transposing, indexing, slicing,
mathematical operations, linear algebra, random numbers, etc.,
are described
`here <https://pytorch.org/docs/torch>`_.
NumPy Bridge
------------
Converting a Torch Tensor to a NumPy array and vice versa is a breeze.
The Torch Tensor and NumPy array share their underlying memory locations, so changing one will change the other.
Converting a Torch Tensor to a NumPy array
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
a = torch.ones(5)
print(a)
b = a.numpy()
print(b)
```
See how the values of the numpy array change.
```
a.add_(1)
print(a)
print(b)
```
Converting a NumPy Array to a Torch Tensor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Use from_numpy for automatic conversion
```
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a, 1, out=a)
print(a)
print(b)
```
All Tensors are on the CPU by default; the CharTensor type does not support
conversion to NumPy.
CUDA Tensors
------------
Tensors can be moved to any device using the ``.to`` method
```
# use is_available to check whether CUDA is available
# a ``torch.device`` object is used to move tensors to a given device
if torch.cuda.is_available():
    device = torch.device("cuda")          # a CUDA device object
    y = torch.ones_like(x, device=device)  # directly create a tensor on the GPU
    x = x.to(device)                       # or just use ``.to("cuda")`` to move a tensor to CUDA
    z = x + y
    print(z)
    print(z.to("cpu", torch.double))       # ``.to`` can also change the dtype together!
```
```
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
# draw plots inside the notebook
%matplotlib inline
# use the ggplot style so the grid and value ranges stand out
plt.style.use('ggplot')
# work around broken minus-sign glyphs in plots
mpl.rcParams['axes.unicode_minus'] = False
train = pd.read_csv('train.csv',parse_dates=['datetime'])
train.shape
train.info()
train.head(10)
train.temp.describe()
train.isnull().sum()
import missingno as msno
msno.matrix(train, figsize=(12,5))
train["year"] = train["datetime"].dt.year
train["month"] = train["datetime"].dt.month
train["day"] = train["datetime"].dt.day
train["hour"] = train["datetime"].dt.hour
train["minute"] = train["datetime"].dt.minute
train["second"] = train["datetime"].dt.second
train.shape
train.head()
figure, ((ax1,ax2,ax3), (ax4,ax5,ax6)) = plt.subplots(nrows=2, ncols=3)
figure.set_size_inches(18,8)
sns.barplot(data=train, x="year", y="count", ax=ax1)
sns.barplot(data=train, x="month", y="count", ax=ax2)
sns.barplot(data=train, x="day", y="count", ax=ax3)
sns.barplot(data=train, x="hour", y="count", ax=ax4)
sns.barplot(data=train, x="minute", y="count", ax=ax5)
sns.barplot(data=train, x="second", y="count", ax=ax6)
ax1.set(ylabel='Count', title="Rentals by year")
ax2.set(xlabel='month', title="Rentals by month")
ax3.set(xlabel='day', title="Rentals by day")
ax4.set(xlabel='hour', title="Rentals by hour")
print('Version: ', mpl.__version__)
print('Install location: ', mpl.__file__)
print('Config location: ', mpl.get_configdir())
print('Cache location: ', mpl.get_cachedir())
print('Config file location: ', mpl.matplotlib_fname())
plt.rcParams['font.family']
plt.rcParams['font.family'] = 'NanumGothic'
plt.rcParams['font.family']
import matplotlib.font_manager as fm
font_list = fm.findSystemFonts(fontpaths=None, fontext='ttf')
print(len(font_list))
font_list[:10]
[(f.name, f.fname) for f in fm.fontManager.ttflist if 'Nanum' in f.name]
plt.rcParams['font.serif']
fig, axes = plt.subplots(nrows=2,ncols=2)
fig.set_size_inches(12, 10)
sns.boxplot(data=train,y="count",orient="v",ax=axes[0][0])
sns.boxplot(data=train,y="count",x="season",orient="v",ax=axes[0][1])
sns.boxplot(data=train,y="count",x="hour",orient="v",ax=axes[1][0])
sns.boxplot(data=train,y="count",x="workingday",orient="v",ax=axes[1][1])
axes[0][0].set(ylabel='Count', title="Rental count")
axes[0][1].set(xlabel='Season', ylabel='Count', title="Rentals by season")
axes[1][0].set(xlabel='Hour Of The Day', ylabel='Count', title="Rentals by hour")
axes[1][1].set(xlabel='Working Day', ylabel='Count', title="Rentals by working day")
train["dayofweek"] = train["datetime"].dt.dayofweek
train.shape
train["dayofweek"].value_counts()
train.tail()
fig,(ax1,ax2,ax3,ax4,ax5)= plt.subplots(nrows=5)
fig.set_size_inches(18,25)
sns.pointplot(data=train, x="hour", y="count", ax=ax1)
sns.pointplot(data=train, x="hour", y="count", hue="workingday", ax=ax2)
sns.pointplot(data=train, x="hour", y="count", hue="dayofweek", ax=ax3)
sns.pointplot(data=train, x="hour", y="count", hue="weather", ax=ax4)
sns.pointplot(data=train, x="hour", y="count", hue="season", ax=ax5)
corrMatt = train[["temp", "atemp", "casual", "registered", "humidity", "windspeed", "count"]]
print(corrMatt[:5])
corrMatt = corrMatt.corr()
print(corrMatt)
mask = np.array(corrMatt)
print(mask)
mask[np.tril_indices_from(mask)] = False
print(mask)
fig, ax = plt.subplots()
fig.set_size_inches(20,10)
sns.heatmap(corrMatt, mask=mask,vmax=.8, square=True,annot=True)
fig,(ax1,ax2,ax3) = plt.subplots(ncols=3)
fig.set_size_inches(12, 5)
sns.regplot(x="temp", y="count", data=train,ax=ax1)
sns.regplot(x="windspeed", y="count", data=train,ax=ax2)
sns.regplot(x="humidity", y="count", data=train,ax=ax3)
def concatenate_year_month(datetime):
    return "{0}-{1}".format(datetime.year, datetime.month)
train["year_month"] = train["datetime"].apply(concatenate_year_month)
print(train.shape)
train[["datetime", "year_month"]].head()
train.info()
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2)
fig.set_size_inches(18, 4)
sns.barplot(data=train, x="year", y="count", ax=ax1)
sns.barplot(data=train, x="month", y="count", ax=ax2)
fig, ax3 = plt.subplots(nrows=1, ncols=1)
fig.set_size_inches(18, 4)
sns.barplot(data=train, x="year_month", y="count", ax=ax3)
# trainWithoutOutliers
trainWithoutOutliers = train[np.abs(train["count"] - train["count"].mean()) <= (3*train["count"].std())]
print(train.shape)
print(trainWithoutOutliers.shape)
# examine the distribution of the count values
figure, axes = plt.subplots(ncols=2, nrows=2)
figure.set_size_inches(12, 10)
sns.distplot(train["count"], ax=axes[0][0])
stats.probplot(train["count"], dist='norm', fit=True, plot=axes[0][1])
sns.distplot(np.log(trainWithoutOutliers["count"]), ax=axes[1][0])
stats.probplot(np.log1p(trainWithoutOutliers["count"]), dist='norm', fit=True, plot=axes[1][1])
```
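The `trainWithoutOutliers` cell above applies the 3-sigma rule: keep only rows whose `count` deviates from the mean by at most three standard deviations. A standalone sketch on synthetic data (the numbers here are made up):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
counts = pd.Series(rng.normal(100, 10, 1000))
counts.iloc[0] = 500  # plant an obvious outlier

# keep values whose deviation from the mean is at most 3 standard deviations
mask = np.abs(counts - counts.mean()) <= 3 * counts.std()
filtered = counts[mask]
print(len(counts), len(filtered))
```

The planted value of 500 is dropped while the bulk of the normally distributed values survive, which is the same behaviour the notebook relies on when shrinking `train` to `trainWithoutOutliers`.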
<i>Copyright (c) Microsoft Corporation. All rights reserved.</i>
<i>Licensed under the MIT License.</i>
# Spark Collaborative Filtering (ALS) Deep Dive
Spark MLlib provides a collaborative filtering algorithm that can be used for training a matrix factorization model, which predicts explicit or implicit ratings of users on items for recommendations.
This notebook presents a deep dive into the Spark collaborative filtering algorithm.
## 1 Matrix factorization algorithm
### 1.1 Matrix factorization for collaborative filtering problem
Matrix factorization is a common technique used in recommendation tasks. Basically, a matrix factorization algorithm tries to find latent factors that represent intrinsic user and item attributes in a lower dimension. That is,
$$\hat r_{u,i} = q_{i}^{T}p_{u}$$
where $\hat r_{u,i}$ is the predicted ratings for user $u$ and item $i$, and $q_{i}^{T}$ and $p_{u}$ are latent factors for item and user, respectively. The challenge to the matrix factorization problem is to find $q_{i}^{T}$ and $p_{u}$. This is achieved by methods such as matrix decomposition. A learning approach is therefore developed to converge the decomposition results close to the observed ratings as much as possible. Furthermore, to avoid overfitting issue, the learning process is regularized. For example, a basic form of such matrix factorization algorithm is represented as below.
$$\min\sum(r_{u,i} - q_{i}^{T}p_{u})^2 + \lambda(||q_{i}||^2 + ||p_{u}||^2)$$
where $\lambda$ is the regularization parameter.
When explicit ratings are not available, implicit ratings, which are usually derived from users' historical interactions with the items (e.g., clicks, views, purchases, etc.), can be used instead. To account for such implicit ratings, the original matrix factorization algorithm can be formulated as
$$\min\sum c_{u,i}(p_{u,i} - q_{i}^{T}p_{u})^2 + \lambda(||q_{i}||^2 + ||p_{u}||^2)$$
where $c_{u,i}=1+\alpha r_{u,i}$ and $p_{u,i}=1$ if $r_{u,i}>0$ and $p_{u,i}=0$ if $r_{u,i}=0$. $r_{u,i}$ is a numerical representation of users' preferences (e.g., number of clicks, etc.).
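The confidence weights $c_{u,i}$ and binarised preferences $p_{u,i}$ defined above can be computed directly from a raw feedback matrix. A small NumPy sketch with $\alpha = 1$ (the matrix values are illustrative):

```python
import numpy as np

alpha = 1.0
# raw implicit feedback (e.g., click counts) for 3 users x 4 items
r = np.array([[0, 2, 0, 5],
              [1, 0, 0, 0],
              [0, 0, 3, 1]], dtype=float)

p = (r > 0).astype(float)  # p_ui = 1 if r_ui > 0 else 0
c = 1.0 + alpha * r        # c_ui = 1 + alpha * r_ui
print(p)
print(c)
```

Unobserved pairs still get a (low) confidence of 1, so they contribute to the loss as weak negatives rather than being ignored.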
### 1.2 Alternating Least Square (ALS)
Owing to the $q_{i}^{T}p_{u}$ term, the loss function is non-convex. Gradient descent methods can be applied but incur expensive computations. The Alternating Least Squares (ALS) algorithm was therefore developed to overcome this issue.
The basic idea of ALS is to learn one of $q$ and $p$ at a time for optimization while keeping the other as constant. This makes the objective at each iteration convex and solvable. The alternating between $q$ and $p$ stops when there is convergence to the optimal. It is worth noting that this iterative computation can be parallelised and/or distributed, which makes the algorithm desirable for use cases where the dataset is large and thus the user-item rating matrix is super sparse (as is typical in recommendation scenarios). A comprehensive discussion of ALS and its distributed computation can be found [here](http://stanford.edu/~rezab/classes/cme323/S15/notes/lec14.pdf).
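To make the alternation concrete, here is a minimal dense NumPy sketch of ALS for explicit ratings: each half-step solves a regularised least-squares problem for one factor matrix while the other is held fixed. This illustrates the idea only; it is not Spark's distributed implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k, lam = 20, 15, 3, 0.1

# synthetic low-rank "ratings" matrix so ALS has something to recover
R = rng.random((n_users, k)) @ rng.random((k, n_items))

P = rng.random((n_users, k))  # user factors
Q = rng.random((n_items, k))  # item factors

def loss(R, P, Q, lam):
    return np.sum((R - P @ Q.T) ** 2) + lam * (np.sum(P**2) + np.sum(Q**2))

losses = [loss(R, P, Q, lam)]
for _ in range(10):
    # fix Q, solve (Q^T Q + lam*I) p_u = Q^T r_u for every user at once
    P = np.linalg.solve(Q.T @ Q + lam * np.eye(k), Q.T @ R.T).T
    # fix P, solve the symmetric problem for every item
    Q = np.linalg.solve(P.T @ P + lam * np.eye(k), P.T @ R).T
    losses.append(loss(R, P, Q, lam))
print(losses[0], losses[-1])
```

Each subproblem is convex with a closed-form solution, which is why the objective decreases at every alternation; the per-user and per-item solves are independent, which is what makes the algorithm easy to parallelise.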
## 2 Spark MLlib implementation
The matrix factorization algorithm is available as `ALS` module in [Spark `ml`](https://spark.apache.org/docs/latest/ml-collaborative-filtering.html) for DataFrame or [Spark `mllib`](https://spark.apache.org/docs/latest/mllib-collaborative-filtering.html) for RDD.
* The uniqueness of ALS implementation is that it distributes the matrix factorization model training by using "Alternating Least Square" method.
* In the training method, there are parameters that can be selected to control the model performance.
* Both explicit and implicit ratings are supported by Spark ALS model.
## 3 Spark ALS based MovieLens recommender
In the following code, the MovieLens-100K dataset is used to illustrate the ALS algorithm in Spark.
**Note**: This notebook requires a PySpark environment to run properly. Please follow the steps in [SETUP.md](https://github.com/Microsoft/Recommenders/blob/master/SETUP.md#dependencies-setup) to install the PySpark environment.
```
# set the environment path to find Recommenders
import sys
sys.path.append("../../")
import pandas as pd
from matplotlib import pyplot as plt
import numpy as np
import seaborn as sns
import sys
import pandas as pd
import pyspark
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS
import pyspark.sql.functions as F
from pyspark.sql.functions import col
from pyspark.ml.tuning import CrossValidator
from pyspark.sql.types import StructType, StructField
from pyspark.sql.types import FloatType, IntegerType, LongType
from reco_utils.dataset import movielens
from reco_utils.common.spark_utils import start_or_get_spark
from reco_utils.evaluation.spark_evaluation import SparkRankingEvaluation, SparkRatingEvaluation
from reco_utils.evaluation.parameter_sweep import generate_param_grid
from reco_utils.dataset.spark_splitters import spark_random_split
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
print("PySpark version: {}".format(pyspark.__version__))
```
Data column names
```
COL_USER = "UserId"
COL_ITEM = "MovieId"
COL_RATING = "Rating"
COL_PREDICTION = "prediction"
COL_TIMESTAMP = "Timestamp"
schema = StructType(
(
StructField(COL_USER, IntegerType()),
StructField(COL_ITEM, IntegerType()),
StructField(COL_RATING, FloatType()),
StructField(COL_TIMESTAMP, LongType()),
)
)
```
Model hyper parameters - these parameters are selected with reference to the benchmarking results [here](http://mymedialite.net/examples/datasets.html).
```
RANK = 10
MAX_ITER = 15
REG_PARAM = 0.05
```
Number of recommended items
```
K = 10
```
Initialize a Spark session.
```
spark = start_or_get_spark("ALS Deep Dive", memory="16g")
```
### 3.1 Load and prepare data
Data is read from csv into a Spark DataFrame.
```
dfs = movielens.load_spark_df(spark=spark, size="100k", schema=schema)
dfs.show(5)
```
Data is then randomly split into training and testing sets (a 75/25 ratio here).
```
dfs_train, dfs_test = spark_random_split(dfs, ratio=0.75, seed=42)
```
### 3.2 Train a movielens model
It is worth noting that the Spark ALS implementation allows dropping cold users and items (those present in the test data but not in training) via `coldStartStrategy="drop"`, so that evaluation of the prediction results remains sound.
```
als = ALS(
maxIter=MAX_ITER,
rank=RANK,
regParam=REG_PARAM,
userCol=COL_USER,
itemCol=COL_ITEM,
ratingCol=COL_RATING,
coldStartStrategy="drop"
)
model = als.fit(dfs_train)
```
### 3.3 Prediction with the model
The trained model can be used to predict ratings with a given test data.
```
dfs_pred = model.transform(dfs_test).drop(COL_RATING)
```
With the prediction results, the model performance can be evaluated.
```
evaluations = SparkRatingEvaluation(
dfs_test,
dfs_pred,
col_user=COL_USER,
col_item=COL_ITEM,
col_rating=COL_RATING,
col_prediction=COL_PREDICTION
)
print(
"RMSE score = {}".format(evaluations.rmse()),
"MAE score = {}".format(evaluations.mae()),
"R2 score = {}".format(evaluations.rsquared()),
"Explained variance score = {}".format(evaluations.exp_var()),
sep="\n"
)
```
Oftentimes ranking metrics are also of interest to data scientists. Note that ranking metrics usually apply to the scenario of recommending a list of items; in our case, the recommended items should be different from those that have already been rated by the users.
```
# Get the cross join of all user-item pairs and score them.
users = dfs_train.select('UserId').distinct()
items = dfs_train.select('MovieId').distinct()
user_item = users.crossJoin(items)
dfs_pred = model.transform(user_item)
# Remove seen items.
dfs_pred_exclude_train = dfs_pred.alias("pred").join(
dfs_train.alias("train"),
(dfs_pred['UserId'] == dfs_train['UserId']) & (dfs_pred['MovieId'] == dfs_train['MovieId']),
how='outer'
)
dfs_pred_final = dfs_pred_exclude_train.filter(dfs_pred_exclude_train["train.Rating"].isNull()) \
.select('pred.' + 'UserId', 'pred.' + 'MovieId', 'pred.' + "prediction")
dfs_pred_final.show()
evaluations = SparkRankingEvaluation(
dfs_test,
dfs_pred_final,
col_user=COL_USER,
col_item=COL_ITEM,
col_rating=COL_RATING,
col_prediction=COL_PREDICTION,
k=K
)
print(
"Precision@k = {}".format(evaluations.precision_at_k()),
"Recall@k = {}".format(evaluations.recall_at_k()),
"NDCG@k = {}".format(evaluations.ndcg_at_k()),
"Mean average precision = {}".format(evaluations.map_at_k()),
sep="\n"
)
```
### 3.4 Fine tune the model
Prediction performance of a Spark ALS model is often affected by the following parameters:
|Parameter|Description|Default value|Notes|
|-------------|-----------------|------------------|-----------------|
|`rank`|Number of latent factors|10|The larger the more intrinsic factors considered in the factorization modeling.|
|`regParam`|Regularization parameter|1.0|The value needs to be selected empirically to avoid overfitting.|
|`maxIter`|Maximum number of iterations|10|The more iterations, the better the model converges to the optimal point.|
It is always a good practice to start model building with default parameter values and then sweep the parameter in a range to find the optimal combination of parameters. The following parameter set is used for training ALS models for comparison study purposes.
```
param_dict = {
"rank": [10, 15, 20, 25],
"regParam": [0.001, 0.1, 1.0]
}
```
Generate a dictionary for each parameter combination which can then be fed into model training.
```
param_grid = generate_param_grid(param_dict)
```
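`generate_param_grid` comes from `reco_utils`; its behaviour (expanding a dict of parameter lists into one dict per combination) can be replicated with `itertools.product`. The sketch below is a guess at that behaviour, not the library's actual source:

```python
from itertools import product

def param_grid(param_dict):
    """Expand {'rank': [10, 15], 'regParam': [0.1]} into
    [{'rank': 10, 'regParam': 0.1}, {'rank': 15, 'regParam': 0.1}]."""
    keys = list(param_dict)
    return [dict(zip(keys, values)) for values in product(*param_dict.values())]

grid = param_grid({"rank": [10, 15, 20, 25], "regParam": [0.001, 0.1, 1.0]})
print(len(grid))  # 4 * 3 = 12 combinations
```

Each dict in the resulting list can be unpacked straight into the `ALS(**g)` call used in the sweep below.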
Train models with parameters specified in the parameter grid. Evaluate the model with, for example, the RMSE metric, and then record the metrics for visualization.
```
rmse_score = []
for g in param_grid:
    als = ALS(
        userCol=COL_USER,
        itemCol=COL_ITEM,
        ratingCol=COL_RATING,
        coldStartStrategy="drop",
        **g
    )
    model = als.fit(dfs_train)
    dfs_pred = model.transform(dfs_test).drop(COL_RATING)
    evaluations = SparkRatingEvaluation(
        dfs_test,
        dfs_pred,
        col_user=COL_USER,
        col_item=COL_ITEM,
        col_rating=COL_RATING,
        col_prediction=COL_PREDICTION
    )
    rmse_score.append(evaluations.rmse())
rmse_score = [float('%.4f' % x) for x in rmse_score]
rmse_score_array = np.reshape(rmse_score, (len(param_dict["rank"]), len(param_dict["regParam"])))
rmse_df = pd.DataFrame(data=rmse_score_array, index=pd.Index(param_dict["rank"], name="rank"),
columns=pd.Index(param_dict["regParam"], name="reg. parameter"))
fig, ax = plt.subplots()
sns.heatmap(rmse_df, cbar=False, annot=True, fmt=".4g")
display(fig)
```
The calculated RMSE scores can be visualized to comparatively study how model performance is affected by different parameters.
It can be seen from this visualization that the RMSE does not decrease monotonically with increasing `rank`, which may be due to overfitting; the lowest RMSE here is achieved with a regularization parameter of 0.1.
### 3.5 Top K recommendation
#### 3.5.1 Top k for all users (items)
```
dfs_rec = model.recommendForAllUsers(10)
dfs_rec.show(10)
```
#### 3.5.2 Top k for a selected set of users (items)
```
users = dfs_train.select(als.getUserCol()).distinct().limit(3)
dfs_rec_subset = model.recommendForUserSubset(users, 10)
dfs_rec_subset.show(10)
```
#### 3.5.3 Run-time considerations for top-k recommendations
It is worth noting that usually computing the top-k recommendations for all users is the bottleneck of the whole pipeline (model training and scoring) of an ALS based recommendation system. This is because
* Getting the top k from all user-item pairs requires a cross join which is usually very computationally expensive.
* Inner products of user-item pairs are calculated individually instead of leveraging matrix block multiplication features which are available in certain contemporary computing acceleration libraries (e.g., BLAS).
More details about possible optimizations of the top k recommendations in Spark can be found [here](https://engineeringblog.yelp.com/2018/05/scaling-collaborative-filtering-with-pyspark.html).
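The second bullet above, scoring with matrix products rather than per-pair dot products, can be illustrated in NumPy: one matrix multiplication scores every user-item pair, and `argpartition` extracts the top k per user without a full sort. This is an illustration of the idea, not Spark's code path:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 10
U = rng.random((1000, 16))  # user factor matrix
V = rng.random((500, 16))   # item factor matrix

# one (BLAS-backed) matrix multiplication scores all user-item pairs
scores = U @ V.T

# argpartition places the k largest scores per row first, in O(n) per row
top_k = np.argpartition(-scores, k, axis=1)[:, :k]
print(top_k.shape)  # (1000, 10)
```

Compared with computing each user-item inner product individually, the blocked multiplication lets the linear-algebra library exploit vectorisation and cache-friendly access patterns.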
```
# cleanup spark instance
spark.stop()
```
## References
1. Yehuda Koren, Robert Bell, and Chris Volinsky, "Matrix Factorization Techniques for Recommender Systems", IEEE Computer, Vol. 42, Issue 8, pp. 30-37, Aug. 2009.
2. Yifan Hu, Yehuda Koren, and Chris Volinsky, "Collaborative Filtering for Implicit Feedback Datasets", Proc. IEEE ICDM, Dec. 2008, Pisa, Italy.
3. Apache Spark. url: https://spark.apache.org/docs/latest/ml-collaborative-filtering.html
4. Seaborn. url: https://seaborn.pydata.org/
5. Scaling collaborative filtering with PySpark. url: https://engineeringblog.yelp.com/2018/05/scaling-collaborative-filtering-with-pyspark.html
6. Matrix Completion via Alternating Least Square (ALS). url: http://stanford.edu/~rezab/classes/cme323/S15/notes/lec14.pdf
```
%run "Retropy_framework.ipynb"
conf_cache_disk = True
conf_cache_memory = True
t = get("FDN").s
# Reference links on spline smoothing and interpolation:
# https://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html
# https://docs.scipy.org/doc/scipy/reference/interpolate.html
# https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.interpolate.UnivariateSpline.html
# https://stackoverflow.com/questions/7906126/spline-representation-with-scipy-interpolate-poor-interpolation-for-low-amplitu/8944934
# https://stackoverflow.com/questions/17913330/fitting-data-using-univariatespline-in-scipy-python
# https://stackoverflow.com/questions/8719754/scipy-interpolate-univariatespline-not-smoothing-regardless-of-parameters
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.interpolate.html
# https://www.analyticsvidhya.com/blog/2018/03/introduction-regression-splines-python-codes/
# http://www.nehalemlabs.net/prototype/blog/2014/04/12/how-to-fix-scipys-interpolating-spline-default-behavior/
# http://www.nehalemlabs.net/prototype/blog/2013/04/09/an-introduction-to-smoothing-time-series-in-python-part-ii-wiener-filter-and-smoothing-splines/
from scipy.interpolate import UnivariateSpline, LSQUnivariateSpline
# # http://www.nehalemlabs.net/prototype/blog/2014/04/12/how-to-fix-scipys-interpolating-spline-default-behavior/
# from scipy.signal import gaussian
# from scipy.ndimage import filters
# def moving_average(series, sigma=3):
# b = gaussian(39, sigma)
# average = filters.convolve1d(series, b/b.sum())
# var = filters.convolve1d(np.power(series-average,2), b/b.sum())
# return average, var
# def spline_util(t, extend=0, smooth=None, order=3, use_w=True):
# _, var = moving_average(t)
# plt.plot(var)
# w = 1/np.sqrt(var) if use_w else None
# spln = UnivariateSpline(t.index.values.astype('d'), t, s=smooth, k=order, w=w)
# si = pd.date_range(t.index[0], t.index[-1] + pd.DateOffset(days=extend))
# sv = spln(si.values.astype('d'))
# ser = pd.Series(sv, si)
# return ser
def spline_util(t, extend=0, smooth=None, order=3):
    if smooth == "orig":
        return t
    if smooth == "auto":
        m = t.shape[0]
        smooth = (m - math.sqrt(2*m)) * np.std(t)/200
    spln = UnivariateSpline(t.index.values.astype('d'), t, s=smooth, k=order)
    si = pd.date_range(t.index[0], t.index[-1] + pd.DateOffset(days=extend))
    sv = spln(si.values.astype('d'))
    ser = pd.Series(sv, si)
    ser.name = f"{t.name} {smooth}"
    return ser
tt = t.asfreq("M")
ser = spline_util(tt, smooth=None)
show(ser, tt, ta=False)
def spline(t, min_n=0, max_n=365, mx=True, order=3, extend=180, inter=True, smooth=None):
    res = pd.Series()
    res.name = t.name
    # res[t.index[0]] = t[0]
    mat = ma(t, max_n/4)
    cagrs = []
    start = 0
    while start < t.shape[0]:
        end = min(start+max_n, t.shape[0])
        all = [cagr(t[start+min_n:i]) for i in np.arange(start+min_n+2, end+1)]
        if len(all) == 0:
            break
        if mx:
            i_max = np.argmax(all) + 1
        else:
            i_max = np.argmin(all) + 1
        val = all[i_max-1]
        print(val, t.index[start+min_n+i_max])
        if t.shape[0] - (start+i_max) < 30:
            if mx and t[-1] < mat[-1]:
                break
            if not mx and t[-1] > mat[-1]:
                break
        if abs(val) > 100:
            break
        if abs(val) < 50:
            res[t.index[start+min_n+i_max]] = t[start+min_n+i_max]
            cagrs.append(val)
        if i_max > 0:
            start += min_n + i_max
        else:
            start = end
    cagrs = pd.Series(cagrs, res.index)

    def cleanup_curves():
        nonlocal res, cagrs
        pct = res.pct_change()*100
        toDropDates = []
        toDropIdx = []
        for i, (dt, v) in enumerate(res.iteritems()):
            if i < len(res)-1:
                days = abs((dt-res.index[i+1]).days)
                if days > 120:
                    continue
                if np.sign(cagrs[i]) != np.sign(cagrs[i+1]) or (pct[i] < 2):
                    print(f"dropping {dt}")
                    toDropDates.append(dt)
                    toDropIdx.append(i)
        res = res.drop(toDropDates)
        cagrs = cagrs.drop(toDropDates)
        for i in range(1, len(res)):
            # print(cagr(res[i:i+2]), res[i:i+2])
            cagrs[i] = cagr(res[i-1:i+1])
        print(cagrs)
        return len(toDropDates)

    while cleanup_curves():
        pass
    if not inter:
        return res
    if isinstance(smooth, list):
        return [spline_util(res, extend=extend, order=order, smooth=s) for s in smooth]
    return spline_util(res, extend=extend, order=order, smooth=smooth)
min_n = 0
max_n = 365
l = '2019/02'
inter = True
eo = 0
smooth = [0, None, 'auto', 'orig']
extend=365
top = spline(t[:l], min_n=min_n, max_n=max_n, extend=extend, order=3+eo, smooth=smooth, inter=True)
bot = spline(t[:l], min_n=min_n, max_n=max_n, extend=extend, order=3+eo, smooth=smooth, inter=True, mx=False)
show(t, top, bot, ta=False)
extend = 0
order = 3
m = bot.shape[0]
s = (m - math.sqrt(2*m))
s = (m - math.sqrt(2*m)) * np.std(bot)/100*0.5
print(m, s)
show(t,
t[:l], t[:l].index[-1],
# top,
bot,
# spline_util(top, extend=extend, order=3, smooth=0.1),
spline_util(bot, extend=extend, order=3, smooth=s),
# spline_util(bot, extend=extend, order=3, smooth=m),
# spline_util(bot, extend=extend, smooth=None),
# spline_util(t.asfreq("M"), extend=extend, smooth=None),
# spline_util(bot, extend=extend, order=3, smooth=0.1),
# spline_util(bot, extend=extend, order=3, smooth=0),
ta=False)
t = get(lc).s
index = pd.date_range(t.index[0], t.index[-1] + pd.DateOffset(days=10))
tt = t.asfreq("M").reindex(index).interpolate('spline', order=3)
show(tt, t)
def top(s, n=200, q=0.98, ret_thresh=False):
    thresh1 = s.rolling(n, center=True).quantile(q).fillna(method="bfill")
    thresh2 = s.rolling(n, center=False).quantile(q).fillna(method="bfill")
    thresh = np.minimum(thresh1, thresh2)
    tt = s[s>=thresh]
    # tt = tt.asfreq("M")
    tt = tt.resample("M").max()
    # plt.scatter(tt.index, tt)
    if ret_thresh:
        return thresh
    tt[s.index[-1]] = s[-1]
    return tt.reindex(s.index).interpolate('spline', order=3)

def bottom(s, n=200, q=0.02, ret_thresh=False):
    thresh = s.rolling(n, center=True).quantile(q)
    tt = s[s<=thresh]
    if ret_thresh:
        return tt
    # tt[s.index[-1]] = s[-1]
    return tt.reindex(s.index).interpolate('spline', order=3)
n = 800
t = get(lc).s
#t = pd.Series(np.log(np.arange(1, t.shape[0]+1)), t.index)
#show(t, top(t, n, ret_thresh=False), ta=False)
show(t, t.resample("3M").max(), ta=False)
# Author: Mathieu Blondel
# Jake Vanderplas
# License: BSD 3 clause
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import *
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
def f(x):
    """function to approximate by polynomial interpolation"""
    return x * np.sin(x)
# generate points used to plot
x_plot = t.index
y_plot = t
# generate points and keep a subset of them
#x = np.random.choice(x_plot, 10)
x = tt.index
y = y_plot[x]
# create matrix versions of these arrays
X = x[:, np.newaxis]
X_plot = x_plot[:, np.newaxis]
lw = 2
plt.plot(x_plot, y_plot, color='red', linewidth=lw,
label="ground truth")
plt.scatter(x, y, color='navy', s=30, marker='o', label="training points")
for count, degree in enumerate([3, 4, 5, 6]):
    model = make_pipeline(PolynomialFeatures(degree), Ridge(alpha=1))
    model.fit(X, y)
    y_plot = model.predict(X_plot)
    plt.plot(x_plot, y_plot, linewidth=lw,
             label="degree %d" % degree)
plt.legend(loc='lower left')
plt.show()
#p = np.polynomial.polynomial.Polynomial([1, 2, 3])
#p.linspace()
from numpy.polynomial.polynomial import polyval
polyval(np.arange(10), [1, 2, 3])
def lrret2(target, n_coef=5):
    target = unwrap(get(target))
    index_x = np.arange(target.shape[0])

    def apply(x):
        y = polyval(index_x, x)
        return pd.Series(y, target.index)

    def objective(x):
        pred = apply(x)
        diff = target - pred
        pos = diff[diff>0]
        neg = -diff[diff<0]
        obj = np.sum(pos**2) + np.sum(neg ** 2)*100
        obj = np.log(obj)
        return obj

    # prep data
    # target, sources = prep_as_df(target, sources, as_geom_value=fit_values, freq=freq)
    # sources_logret = sources.apply(lambda x: logret(x, dropna=False), axis=0)
    # n_sources = sources_logret.shape[1]

    # minimization args
    # cons = []
    # bounds = None
    # if pos_weights:
    #     # using bounds, instead of cons, works much better
    #     #cons.append({'type': 'python ', 'fun' : lambda x: np.min(x[1:])})
    #     if sum1:
    #         x_bound = (0, 1)
    #     else:
    #         x_bound = (0, None)
    #     bounds = [(None, None)] + ([x_bound] * n_sources)
    # if sum1:
    #     if sum_max1:
    #         cons.append({'type': 'ineq', 'fun' : lambda x: 1-np.sum(x[1:])}) # sum<=1 same as 1-sum>=0
    #     else:
    #         cons.append({'type': 'eq', 'fun' : lambda x: np.sum(x[1:])-1})
    # objective = value_objective if fit_values else returns_objective

    def run_optimize(rand_x0):
        # n = sources_logret.shape[1]
        # if rand_x0:
        #     x0 = np.random.rand(n+1)
        #     if sum1:
        #         x0 /= np.sum(x0)
        # else:
        #     x0 = np.full(n+1, 1/n)
        #     #x0 += np.random.randn(n+1)*(1/n)
        #     #x0 = np.maximum(x0, 0)
        # x0[0] = 0
        x0 = np.full(n_coef, 0)
        x0 = np.random.rand(n_coef)
        # minimize: to use constraints, we can choose from COBYLA / SLSQP / trust-constr
        # COBYLA: results are not stable, and vary greatly from run to run
        #         also doesn't support equality constraints (sum1)
        #options={'rhobeg': 0.1, 'maxiter': 10000, 'disp': True, 'catol': 0.0002}
        #res = minimize(objective, x0, constraints=cons, method="COBYLA", options=options)
        # SLSQP: provides stable results from run to run, and supports eq constraints (sum1)
        #        using a much smaller eps than default works better (more stable and better results)
        options = {'maxiter': 1000, 'ftol': 1e-06, 'iprint': 1, 'disp': False, 'eps': 1.4901161193847656e-08}
        res = minimize(objective, x0, method="SLSQP", options=options)
        #res = minimize(objective, x0, method="SLSQP")
        #res = minimize(objective, x0)
        print(res)
        return res

    def finalize(res):
        # results
        pred = apply(res.x)
        pred.name = target.name + " - fit"
        return pred

    # uniform x0 works best usually, but when it doesn't, random seems to work well
    res = run_optimize(rand_x0=False)
    pred = finalize(res)
    return pred
show(elgb, lrret2(elgb, n_coef=2), ta=False, log=False)
from scipy.interpolate import spline
t = get(elgb)
x = np.arange(t.shape[0])
spline(x, t, x)
res = _
show(t, pd.Series(res, t.index))
from scipy.interpolate import CubicSpline
tt = t[t.s<ma(t, 365)]
x = np.arange(tt.shape[0])
y = tt
cs = CubicSpline(x, y)
res = cs(x)
res = pd.Series(res, tt.index)
show(ma(res, 100), res, ta=False)
```
# Sparse Sinkhorn Transformer (PyTorch/GPU) (Ver 1.0)
***
### Credit for the PyTorch Reformer implementation goes out to @lucidrains of GitHub:
https://github.com/lucidrains/sinkhorn-transformer
***
This is a work in progress so please check back for updates.
***
Project Los Angeles
Tegridy Code 2021
# Setup Environment
```
#@title Install all dependencies
!git clone https://github.com/asigalov61/tegridy-tools
!pip install sinkhorn_transformer
!pip install local-attention
#@title Import all needed modules
%cd /content/tegridy-tools/tegridy-tools/
import TMIDI
%cd /content/
import os
if not os.path.exists('/content/Dataset'):
    os.makedirs('/content/Dataset')
from sinkhorn_transformer import SinkhornTransformerLM
from sinkhorn_transformer.autoregressive_wrapper import AutoregressiveWrapper
import random
import tqdm
import gzip
import numpy as np
import torch
import torch.optim as optim
from torch.nn import functional as F
from torch.utils.data import DataLoader, Dataset
# constants
NUM_BATCHES = int(1e5)
BATCH_SIZE = 28
GRADIENT_ACCUMULATE_EVERY = 2
LEARNING_RATE = 6e-4
VALIDATE_EVERY = 100
GENERATE_EVERY = 500
GENERATE_LENGTH = 1024
SEQ_LEN = 4096
```
# Prep MIDIs and TXT/INT datasets
```
#@title Download special Tegridy Piano MIDI dataset
#@markdown Works best stand-alone/as-is for the optimal results
%cd /content/Dataset/
!wget 'https://github.com/asigalov61/Tegridy-MIDI-Dataset/raw/master/Tegridy-Piano-CC-BY-NC-SA.zip'
!unzip -j '/content/Dataset/Tegridy-Piano-CC-BY-NC-SA.zip'
!rm '/content/Dataset/Tegridy-Piano-CC-BY-NC-SA.zip'
%cd /content/
#@title Process MIDIs to special MIDI dataset with Tegridy MIDI Processor
#@markdown NOTES:
#@markdown 1) Dataset MIDI file names are used as song names. Feel free to change it to anything you like.
#@markdown 2) Best results are achieved with the single-track, single-channel, single-instrument MIDI 0 files with plain English names (avoid special or sys/foreign chars)
#@markdown 3) MIDI Channel = -1 means all MIDI channels except drums. MIDI Channel = 16 means all channels will be processed. Otherwise, only the single indicated MIDI channel will be processed.
file_name_to_output_dataset_to = "/content/Music-Sinkhorn_TXT_Dataset" #@param {type:"string"}
desired_MIDI_channel_to_process = 16 #@param {type:"slider", min:-1, max:16, step:1}
encode_velocities = True #@param {type:"boolean"}
encode_MIDI_channels = False #@param {type:"boolean"}
add_transposed_dataset_by_this_many_pitches = 0 #@param {type:"slider", min:-12, max:12, step:1}
chordify_input_MIDIs = False #@param {type:"boolean"}
time_denominator = 10 #@param {type:"slider", min:1, max:20, step:1}
chars_encoding_offset = 33 #@param {type:"number"}
print('Starting up...')
###########
average_note_pitch = 0
min_note = 127
max_note = 0
files_count = 0
ev = 0
chords_list_f = []
melody_list_f = []
chords_list = []
chords_count = 0
melody_chords = []
melody_count = 0
TXT_String = 'DATASET=Music-Sinkhorn-Transformer-Dataset' + chr(10)
TXT = ''
melody = []
chords = []
###########
print('Loading MIDI files...')
print('This may take a while on a large dataset in particular.')
dataset_addr = "/content/Dataset/"
os.chdir(dataset_addr)
filez = os.listdir(dataset_addr)
print('Processing MIDI files. Please wait...')
for f in tqdm.auto.tqdm(filez):
    try:
        fn = os.path.basename(f)
        fn1 = fn.split('.')[0]
        #notes = song_notes_list[song_notes_list.index(fn1)+1]
        files_count += 1
        TXT, melody, chords = TMIDI.Optimus_MIDI_TXT_Processor(f, chordify_TXT=chordify_input_MIDIs, output_MIDI_channels=encode_MIDI_channels, char_offset=chars_encoding_offset, dataset_MIDI_events_time_denominator=time_denominator, output_velocity=encode_velocities, MIDI_channel=desired_MIDI_channel_to_process, MIDI_patch=range(0, 127))
        TXT_String += TXT
        melody_list_f += melody
        chords_list_f += chords
        if add_transposed_dataset_by_this_many_pitches != 0:
            TXT, melody, chords = TMIDI.Optimus_MIDI_TXT_Processor(f, chordify_TXT=chordify_input_MIDIs, output_MIDI_channels=encode_MIDI_channels, char_offset=chars_encoding_offset, dataset_MIDI_events_time_denominator=time_denominator, output_velocity=encode_velocities, MIDI_channel=desired_MIDI_channel_to_process, transpose_by=add_transposed_dataset_by_this_many_pitches, MIDI_patch=range(0, 127))
            TXT_String += TXT
            melody_list_f += melody
            chords_list_f += chords
        #TXT_String += 'INTRO=' + notes + '\n'
    except:
        print('Bad MIDI:', f)
        continue
print('Task complete :)')
print('==================================================')
print('Number of processed dataset MIDI files:', files_count)
print('Number of MIDI chords recorded:', len(chords_list_f))
print('First chord event:', chords_list_f[0], 'Last chord event:', chords_list_f[-1])
print('Number of recorded melody events:', len(melody_list_f))
print('First melody event:', melody_list_f[0], 'Last Melody event:', melody_list_f[-1])
print('Total number of MIDI events recorded:', len(chords_list_f) + len(melody_list_f))
# Writing dataset to TXT file
with open(file_name_to_output_dataset_to + '.txt', 'wb') as f:
    f.write(TXT_String.encode('utf-8', 'replace'))
# Dataset
MusicDataset = [chords_list_f, melody_list_f]
# Writing dataset to pickle file
TMIDI.Tegridy_Pickle_File_Writer(MusicDataset, file_name_to_output_dataset_to)
#@title Process the TXT MIDI dataset to TXT INT dataset
full_path_to_TXT_dataset = "/content/Music-Sinkhorn_TXT_Dataset.txt" #@param {type:"string"}
print('Processing...')
with open(full_path_to_TXT_dataset) as file:
    z = file.read()
Y = list(z)
string = '\n'.join([str(ord(item)) for item in Y if ord(item) < 256])
with open('/content/Music-Sinkhorn_INT_Dataset.txt', 'w') as file:
    file.write(string)
print('Done!')
#@title Load INT dataset into memory and setup the dataset
full_path_to_INT_dataset = "/content/Music-Sinkhorn_INT_Dataset.txt" #@param {type:"string"}
dataset_split_ratio = 0.9 #@param {type:"slider", min:0.1, max:0.9, step:0.1}
print('Processing...')
with open(full_path_to_INT_dataset) as file:
    X = file.read()
H = []
for x in X.split('\n'):
    H.append(int(x))
trX, vaX = np.split(H, [int(len(H) * dataset_split_ratio)])
data_train, data_val = torch.from_numpy(trX), torch.from_numpy(vaX)
print('Done!')
```
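The `np.split` call above cuts the token list at a single index, giving the train/validation pieces. A quick sketch of the 90/10 split behaviour with toy data (the real `H` comes from the INT dataset file):

```python
import numpy as np

# Toy stand-in for the INT token list
H = list(range(100))
dataset_split_ratio = 0.9

# np.split with one index returns [0, idx) and [idx, end)
trX, vaX = np.split(H, [int(len(H) * dataset_split_ratio)])
print(len(trX), len(vaX))  # 90 10
```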
# Setup and train the model
```
#@title Instantiate Model
# helpers
def cycle(loader):
    while True:
        for data in loader:
            yield data

def decode_token(token):
    return str(chr(max(32, token)))

def decode_tokens(tokens):
    return ''.join(list(map(decode_token, tokens)))
# instantiate model
model = SinkhornTransformerLM(
    num_tokens = 256,
    emb_dim = 128,
    dim = 512,
    depth = 8,
    max_seq_len = SEQ_LEN,
    heads = 8,
    bucket_size = 128,
    ff_chunks = 2,
    causal = True,
    reversible = True,
    attn_dropout = 0.1,
    n_local_attn_heads = 4
)
model = AutoregressiveWrapper(model)
model.cuda()
class TextSamplerDataset(Dataset):
    def __init__(self, data, seq_len):
        super().__init__()
        self.data = data
        self.seq_len = seq_len

    def __getitem__(self, index):
        rand_start = torch.randint(0, self.data.size(0) - self.seq_len - 1, (1,))
        full_seq = self.data[rand_start: rand_start + self.seq_len + 1].long()
        return full_seq.cuda()

    def __len__(self):
        return self.data.size(0) // self.seq_len
train_dataset = TextSamplerDataset(data_train, SEQ_LEN)
val_dataset = TextSamplerDataset(data_val, SEQ_LEN)
train_loader = cycle(DataLoader(train_dataset, batch_size = BATCH_SIZE))
val_loader = cycle(DataLoader(val_dataset, batch_size = BATCH_SIZE))
# optimizer
optim = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
#@title Train the Model
# training
for i in tqdm.tqdm(range(NUM_BATCHES), mininterval=10., desc='training'):
    model.train()

    for __ in range(GRADIENT_ACCUMULATE_EVERY):
        loss = model(next(train_loader), return_loss = True)
        loss.backward()

    print(f'training loss: {loss.item()}')
    torch.nn.utils.clip_grad_norm_(model.parameters(), 0.5)
    optim.step()
    optim.zero_grad()

    if i % VALIDATE_EVERY == 0:
        model.eval()
        with torch.no_grad():
            loss = model(next(val_loader), return_loss = True)
            print(f'validation loss: {loss.item()}')

    if i % GENERATE_EVERY == 0:
        model.eval()
        inp = random.choice(val_dataset)[:-1]
        prime = decode_tokens(inp)
        print('%s \n\n %s' % (prime, '*' * 100))
        sample = model.generate(inp, GENERATE_LENGTH)
        output_str = decode_tokens(sample)
        print(output_str)
```
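The token decoding used during generation simply inverts the per-character `ord` encoding from the dataset prep step, clamping non-printable tokens to a space. A round-trip sketch using the same helpers as in the training cell:

```python
# Same helpers as in the training cell; tokens below 32 map to a space
def decode_token(token):
    return str(chr(max(32, token)))

def decode_tokens(tokens):
    return ''.join(map(decode_token, tokens))

# The TXT -> INT step stores ord(char) per character, so decoding round-trips
encoded = [ord(c) for c in "note"]
print(decode_tokens(encoded))  # note
```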
# Save and Load/Reload the model
```
#@title Save the model
torch.save(model.state_dict(), '/content/Music-Sinkhorn-Model.pth')
checkpoint = {'state_dict': model.state_dict(),'optimizer' :optim.state_dict()}
torch.save(checkpoint, '/content/Music-Sinkhorn-Model_sd_opt.pth')
#@title Load/Reload the model
checkpoint = torch.load('/content/Music-Sinkhorn-Model_sd_opt.pth')
model.load_state_dict(checkpoint['state_dict'])
optim.load_state_dict(checkpoint['optimizer'])
model.eval()
```
# Generate from the model
```
#@title Generate Music
model_temperature = 0.6 #@param {type:"slider", min:0.1, max:2, step:0.1}
number_of_tokens_to_generate = 1032 #@param {type:"slider", min:8, max:8192, step:128}
model.eval()
inp = random.choice(val_dataset)[:-1]
prime = decode_tokens(inp)
print('%s \n\n %s' % (prime, '*' * 100))
sample = model.generate(inp, number_of_tokens_to_generate, temperature=model_temperature)
output_str = decode_tokens(sample)
print(output_str)
#@title Convert generated output to MIDI
time_denominator = 10 #@param {type:"slider", min:1, max:20, step:1}
encoding_has_velocities = True #@param {type:"boolean"}
simulate_velocity = False #@param {type:"boolean"}
char_encoding_offset = 33 #@param {type:"number"}
SONG = TMIDI.Tegridy_Optimus_TXT_to_Notes_Converter('SONG=SONG ' + output_str, line_by_line_dataset = False, has_MIDI_channels=False, has_velocities=encoding_has_velocities, dataset_MIDI_events_time_denominator=time_denominator, char_encoding_offset=char_encoding_offset, simulate_velocity=simulate_velocity)
stats = TMIDI.Tegridy_SONG_to_MIDI_Converter(SONG[0], output_file_name='/content/Music-Sinkhorn_MIDI', output_signature='Music Sinkhorn Transformer')
print(stats)
```
# Congrats! You did it :)
| github_jupyter |
```
import torch
from torch import nn, optim
from neurodiffeq import diff
from neurodiffeq.networks import FCNN
from neurodiffeq.temporal import generator_2dspatial_rectangle, generator_2dspatial_segment, generator_temporal
from neurodiffeq.temporal import FirstOrderInitialCondition, BoundaryCondition
from neurodiffeq.temporal import SingleNetworkApproximator2DSpatialSystem, Approximator
from neurodiffeq.temporal import MonitorMinimal
from neurodiffeq.temporal import _solve_2dspatial
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.tri as tri
def u_T_approximation(x):
    A = 25.  # if A is made larger, the network refuses to converge
    return (1-torch.exp(-A*x))*(1-torch.exp(A*(x-1)))
x = torch.linspace(0, 1, 100)
y = u_T_approximation(x)
plt.plot(x, y);
torch.set_default_dtype(torch.float64)
class LidDrivenCavityApproximator(Approximator):
    def __init__(self):
        self.u_nn = FCNN(n_input_units=2, n_hidden_units=64, actv=nn.Tanh)
        self.v_nn = FCNN(n_input_units=2, n_hidden_units=64, actv=nn.Tanh)
        self.p_nn = FCNN(n_input_units=2, n_hidden_units=64, actv=nn.Tanh)

    def __call__(self, xx, yy):
        xx = torch.unsqueeze(xx, dim=1)
        yy = torch.unsqueeze(yy, dim=1)
        xy = torch.cat((xx, yy), dim=1)
        uu = torch.squeeze(self.u_nn(xy))
        vv = torch.squeeze(self.v_nn(xy))
        pp = torch.squeeze(self.p_nn(xy))
        xx = torch.squeeze(xx, dim=1)
        yy = torch.squeeze(yy, dim=1)
        uu = xx*(1-xx)*yy*(1-yy)*uu + yy*u_T_approximation(xx)
        vv = xx*(1-xx)*yy*(1-yy)*vv
        pp = (1-torch.exp(-xx))*(1-torch.exp(-yy))*pp
        return uu, vv, pp

    def parameters(self):
        return list(self.u_nn.parameters()) + list(self.v_nn.parameters()) + list(self.p_nn.parameters())

    def calculate_loss(self, xx, yy):
        uu = self.__call__(xx, yy)
        equation_mse = sum(
            torch.mean(eq**2)
            for eq in self.pde(*uu, xx, yy)
        )
        return equation_mse

    def calculate_metrics(self, xx, yy, metrics):
        uu = self.__call__(xx, yy)
        return {
            metric_name: metric_func(*uu, xx, yy)
            for metric_name, metric_func in metrics.items()
        }

    @staticmethod
    def pde(u, v, p, x, y):
        RE = 100.0
        momentum_x = u*diff(u, x) + v*diff(u, y) + diff(p, x) - 1/RE * (diff(u, x, order=2) + diff(u, y, order=2))
        momentum_y = u*diff(v, x) + v*diff(v, y) + diff(p, y) - 1/RE * (diff(v, x, order=2) + diff(v, y, order=2))
        continuity = diff(u, x) + diff(v, y)
        return momentum_x, momentum_y, continuity
# training set and validation set
train_gen_spatial = generator_2dspatial_rectangle(
    size=(32, 32), x_min=0.0, x_max=1.0, y_min=0.0, y_max=1.0
)
valid_gen_spatial = generator_2dspatial_rectangle(
    size=(10, 10), x_min=0.0, x_max=1.0, y_min=0.0, y_max=1.0, random=False
)
fcnn_approximator = LidDrivenCavityApproximator()
adam = optim.Adam(fcnn_approximator.parameters(), lr=0.001)
%matplotlib notebook
lid_driven_cavity_solution, _ = _solve_2dspatial(
    train_generator_spatial=train_gen_spatial,
    valid_generator_spatial=valid_gen_spatial,
    approximator=fcnn_approximator,
    optimizer=adam,
    batch_size=256,
    max_epochs=10000,
    shuffle=True,
    metrics={},
    monitor=MonitorMinimal(check_every=10)
)
%matplotlib inline
xx, yy = torch.meshgrid(torch.linspace(0,1,20), torch.linspace(0,1,20))
xx = xx.flatten()
yy = yy.flatten()
uu, vv, pp = lid_driven_cavity_solution(xx, yy)
xx = xx.detach().numpy()
yy = yy.detach().numpy()
uu = uu.detach().numpy()
vv = vv.detach().numpy()
pp = pp.detach().numpy()
fig = plt.figure(figsize=(10, 10))
ax = plt.gca()
triang = tri.Triangulation(xx, yy)
contour = ax.tricontourf(triang, pp, cmap='coolwarm')
fig.colorbar(contour, format='%.0e', ax=ax)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_aspect('equal', adjustable='box')
xx = xx.reshape(20, 20)
yy = yy.reshape(20, 20)
uu = uu.reshape(20, 20)
vv = vv.reshape(20, 20)
ax.quiver(xx, yy, uu, vv, scale=4)
plt.show()
```
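For reference, the three residuals returned by `pde` above correspond to the steady incompressible Navier–Stokes equations (momentum in $x$ and $y$, plus continuity) at $Re = 100$:

```latex
\begin{aligned}
u\,\partial_x u + v\,\partial_y u + \partial_x p
  - \tfrac{1}{Re}\left(\partial_{xx} u + \partial_{yy} u\right) &= 0,\\
u\,\partial_x v + v\,\partial_y v + \partial_y p
  - \tfrac{1}{Re}\left(\partial_{xx} v + \partial_{yy} v\right) &= 0,\\
\partial_x u + \partial_y v &= 0.
\end{aligned}
```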
| github_jupyter |
# Creating your own dataset from Google Images
*by: Francisco Ingham and Jeremy Howard. Inspired by [Adrian Rosebrock](https://www.pyimagesearch.com/2017/12/04/how-to-create-a-deep-learning-dataset-using-google-images/)*
In this tutorial we will see how to easily create an image dataset through Google Images. **Note**: You will have to repeat these steps for any new category you want to Google (e.g once for dogs and once for cats).
```
from fastai.vision.all import *
from nbdev.showdoc import *
```
## Get a list of URLs
### Search and scroll
Go to [Google Images](http://images.google.com) and search for the images you are interested in. The more specific you are in your Google Search, the better the results and the less manual pruning you will have to do.
Scroll down until you've seen all the images you want to download, or until you see a button that says 'Show more results'. All the images you scrolled past are now available to download. To get more, click on the button, and continue scrolling. The maximum number of images Google Images shows is 700.
It is a good idea to put things you want to exclude into the search query, for instance if you are searching for the Eurasian wolf, "canis lupus lupus", it might be a good idea to exclude other variants:
"canis lupus lupus" -dog -arctos -familiaris -baileyi -occidentalis
You can also limit your results to show only photos by clicking on Tools and selecting Photos from the Type dropdown.
### Download into file
Now you must run some JavaScript code in your browser which will save the URLs of all the images you want for your dataset.
Press <kbd>Ctrl</kbd><kbd>Shift</kbd><kbd>J</kbd> in Windows/Linux and <kbd>Cmd</kbd><kbd>Opt</kbd><kbd>J</kbd> in Mac, and a small window, the JavaScript 'Console', will appear. That is where you will paste the JavaScript commands.
You will need to get the urls of each of the images. Before running the following commands, you may want to disable ad blocking extensions (uBlock, AdBlockPlus etc.) in Chrome. Otherwise the window.open() command doesn't work. Then you can run the following commands:
```javascript
urls = Array.from(document.querySelectorAll('.rg_di .rg_meta')).map(el=>JSON.parse(el.textContent).ou);
window.open('data:text/csv;charset=utf-8,' + escape(urls.join('\n')));
```
### Create directory and upload urls file into your server
Choose an appropriate name for your labeled images. You can run these steps multiple times to create different labels.
```
path = Config().data/'bears'
path.mkdir(parents=True, exist_ok=True)
path.ls()
```
Finally, upload your urls file. You just need to press 'Upload' in your working directory and select your file, then click 'Upload' for each of the displayed files.
## Download images
Now you will need to download your images from their respective urls.
fast.ai has a function that allows you to do just that. You just have to specify the urls filename as well as the destination folder, and this function will download and save all images that can be opened. Images that cannot be opened will not be saved.
Let's download our images! Notice you can choose a maximum number of images to be downloaded. In this case we will not download all the urls.
```
classes = ['teddy','grizzly','black']
for c in classes:
    print(c)
    file = f'urls_{c}.csv'
    download_images(path/c, path/file, max_pics=200)
# If you have problems downloading, try with `max_workers=0` to see exceptions:
#download_images(path/file, dest, max_pics=20, max_workers=0)
```
Then we can remove any images that can't be opened:
```
for c in classes:
    print(c)
    verify_images(path/c, delete=True, max_size=500)
```
## View data
```
np.random.seed(42)
dls = ImageDataLoaders.from_folder(path, train=".", valid_pct=0.2, item_tfms=RandomResizedCrop(460, min_scale=0.75),
                                   bs=64, batch_tfms=[*aug_transforms(size=224, max_warp=0), Normalize.from_stats(*imagenet_stats)])
# If you already cleaned your data, run this cell instead of the one before
# np.random.seed(42)
# dls = ImageDataLoaders.from_csv(path, folder=".", valid_pct=0.2, csv_labels='cleaned.csv',
# item_tfms=RandomResizedCrop(460, min_scale=0.75), bs=64,
# batch_tfms=[*aug_transforms(size=224, max_warp=0), Normalize.from_stats(*imagenet_stats)])
```
Good! Let's take a look at some of our pictures then.
```
dls.vocab
dls.show_batch(rows=3, figsize=(7,8))
dls.vocab, dls.c, len(dls.train_ds), len(dls.valid_ds)
```
## Train model
```
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fit_one_cycle(4)
learn.save('stage-1')
learn.unfreeze()
```
If the plot is not showing try to give a start and end learning rate:
`learn.lr_find(start_lr=1e-5, end_lr=1e-1)`
```
learn.lr_find()
learn.load('stage-1')
learn.fit_one_cycle(2, lr_max=slice(3e-5,3e-4))
learn.save('stage-2')
```
## Interpretation
```
learn.load('stage-2');
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
```
## Putting your model in production
First thing first, let's export the content of our `Learner` object for production:
```
learn.export()
```
This will create a file named 'export.pkl' in the directory where we were working that contains everything we need to deploy our model (the model, the weights but also some metadata like the classes or the transforms/normalization used).
You probably want to use CPU for inference, except at massive scale (and you almost certainly don't need to train in real-time). If you don't have a GPU that happens automatically. You can test your model on CPU like so:
```
defaults.device = torch.device('cpu')
img = Image.open(path/'black'/'00000021.jpg')
img
```
We create our `Learner` in production environment like this, just make sure that `path` contains the file 'export.pkl' from before.
```
learn = load_learner(path/'export.pkl')
pred_class,pred_idx,outputs = learn.predict(path/'black'/'00000021.jpg')
pred_class
```
So you might create a route something like this ([thanks](https://github.com/simonw/cougar-or-not) to Simon Willison for the structure of this code):
```python
@app.route("/classify-url", methods=["GET"])
async def classify_url(request):
    bytes = await get_bytes(request.query_params["url"])
    img = PILImage.create(bytes)
    _,_,probs = learner.predict(img)
    return JSONResponse({
        "predictions": sorted(
            zip(learner.dls.vocab, map(float, probs)),
            key=lambda p: p[1],
            reverse=True
        )
    })
```
(This example is for the [Starlette](https://www.starlette.io/) web app toolkit.)
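The shape of the JSON response can be checked in isolation; `vocab` and `probs` below are illustrative stand-ins for the learner's real vocabulary and output probabilities:

```python
# Illustrative stand-ins: vocab and probs normally come from the trained learner
vocab = ['black', 'grizzly', 'teddy']
probs = [0.1, 0.7, 0.2]

# Same sorting as in the route: highest-probability class first
predictions = sorted(
    zip(vocab, map(float, probs)),
    key=lambda p: p[1],
    reverse=True,
)
print(predictions)  # [('grizzly', 0.7), ('teddy', 0.2), ('black', 0.1)]
```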
## Things that can go wrong
- Most of the time things will train fine with the defaults
- There's not much you really need to tune (despite what you've heard!)
- Most likely are
- Learning rate
- Number of epochs
### Learning rate (LR) too high
```
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fit_one_cycle(1, lr_max=0.5)
```
### Learning rate (LR) too low
```
learn = cnn_learner(dls, resnet34, metrics=error_rate)
```
Previously we had this result:
```
Total time: 00:57
epoch train_loss valid_loss error_rate
1 1.030236 0.179226 0.028369 (00:14)
2 0.561508 0.055464 0.014184 (00:13)
3 0.396103 0.053801 0.014184 (00:13)
4 0.316883 0.050197 0.021277 (00:15)
```
```
learn.fit_one_cycle(5, lr_max=1e-5)
learn.recorder.plot_loss()
```
As well as taking a really long time, it's getting too many looks at each image, so may overfit.
### Too few epochs
```
learn = cnn_learner(dls, resnet34, metrics=error_rate, pretrained=False)
learn.fit_one_cycle(1)
```
### Too many epochs
```
from fastai.basics import *
from fastai.callback.all import *
from fastai.vision.all import *
from nbdev.showdoc import *
path = Config().data/'bears'
np.random.seed(42)
dls = ImageDataLoaders.from_folder(path, train=".", valid_pct=0.8, item_tfms=RandomResizedCrop(460, min_scale=0.75),
                                   bs=32, batch_tfms=[AffineCoordTfm(size=224), Normalize.from_stats(*imagenet_stats)])
learn = cnn_learner(dls, resnet50, metrics=error_rate, config=cnn_config(ps=0))
learn.unfreeze()
learn.fit_one_cycle(40, slice(1e-6,1e-4), wd=0)
```
| github_jupyter |
# Matplotlib Bars
## Creating Bars
With Pyplot, you can use the `bar()` function to draw bar graphs:
```
# Draw 4 bars:
import matplotlib.pyplot as plt
import numpy as np
x = np.array(["A", "B", "C", "D"])
y = np.array([4, 9, 1, 11])
plt.bar(x,y)
plt.show()
```
The `bar()` function takes arguments that describe the layout of the bars.
The categories and their values are represented by the first and second arguments, as arrays.
```
import matplotlib.pyplot as plt
x = ["APPLES", "BANANAS"]
y = [350, 250]
plt.bar(x, y)
plt.show()
```
## Horizontal Bars
If you want the bars to be displayed horizontally instead of vertically, use the `barh()` function:
```
# Drawing 5 horizontal bars
import matplotlib.pyplot as plt
import numpy as np
x = np.array(["A", "B", "C", "D","F"])
y = np.array([4, 9, 1, 11,15])
plt.barh(x,y)
plt.show()
```
## Bar Color
The `bar()` and `barh()` functions take the keyword argument `color` to set the color of the bars:
```
import matplotlib.pyplot as plt
import numpy as np
x = np.array(["A", "B", "C", "D","F"])
y = np.array([4, 9, 1, 11,15])
plt.barh(x,y, color='g')
plt.show()
```
## Bar Width
The `bar()` function takes the keyword argument `width` to set the width of the bars:
```
# Draw 5 very thin bars
import matplotlib.pyplot as plt
import numpy as np
x = np.array(["A", "B", "C", "D","F"])
y = np.array([4, 9, 1, 11,15])
plt.bar(x, y, width=0.1)
plt.show()
```
The default width value is `0.8`.
<b>Note:</b> For horizontal bars, use `height` instead of `width`.
## Bar Height
The `barh()` function takes the keyword argument `height` to set the height of the bars:
```
import matplotlib.pyplot as plt
import numpy as np
x = np.array(["A", "B", "C", "D","F"])
y = np.array([4, 9, 1, 11,15])
plt.barh(x, y, height=0.1)
plt.show()
```
# Matplotlib Histograms
## Histogram
A histogram is a graph showing frequency distributions.
It is a graph showing the number of observations within each given interval.
Example: Say you ask for the height of 230 people, you might end up with a histogram like this:

In Matplotlib, we use the `hist()` function to create histograms.
The `hist()` function will use an array of numbers to create a histogram, the array is sent into the function as an argument.
For simplicity we use NumPy to randomly generate an array with 230 values, where the values will concentrate around 160, and the standard deviation is 9.
```
# A Normal Data Distribution by NumPy
import numpy as np
values_generated = np.random.normal(160,9,230)
print(values_generated)
```
<b>Note</b>: The `hist()` function will read the array and produce a histogram:
```
# A simple histogram chart
import matplotlib.pyplot as plt
plt.hist(values_generated)
plt.show()
```
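By default, `hist()` chooses the number of intervals itself; the `bins` argument sets them explicitly. A short sketch on the same kind of generated data:

```python
import matplotlib.pyplot as plt
import numpy as np

values = np.random.normal(160, 9, 230)

# bins=20 splits the data range into 20 equal-width intervals;
# every observation falls in exactly one bin, so the counts sum to 230
counts, edges, _ = plt.hist(values, bins=20, edgecolor='black')
print(int(counts.sum()))  # 230
plt.show()
```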
| github_jupyter |
Query NASA/ADS from Python
https://github.com/adsabs/adsabs-dev-api/blob/master/README.md
```
from astroquery.ned import Ned
from astroquery.nasa_ads import ADS
ADS.TOKEN = open('ADS_DEV_KEY','r').read()
token = open('ADS_DEV_KEY','r').read()
import requests
import urllib
import json
from pnlf.constants import tab10
result_table = Ned.get_table("NGC628", table='positions')
result_table
def check_type(func):
    def inner(x, y):
        # print the argument types before calling the wrapped function
        print(type(x), type(y))
        return func(x, y)
    return inner

@check_type
def add(x, y):
    return x + y
add(1,"2")
query = 'id:2019ApJ...887...80K'
query = urllib.parse.quote(query)
start=0
cache_rows=200
sort='pubdate+desc'
r = requests.get('https://api.adsabs.harvard.edu/v1/search/query?'
                 f'q={query}&start={start}&rows={cache_rows}'
                 f'&sort={sort}&fl=title,author,year,bibcode,pub',
                 headers={'Authorization': f'Bearer {token}'})
resp = r.json()
def get_bibtex(bibcodes):
    '''retrieve the bibtex entry from ads
    '''
    if not isinstance(bibcodes, list):
        bibcodes = [bibcodes]
    bibcode = {"bibcode": bibcodes}
    r = requests.post("https://api.adsabs.harvard.edu/v1/export/bibtex",
                      headers={"Authorization": "Bearer " + token, "Content-type": "application/json"},
                      data=json.dumps(bibcode))
    # in case of an error
    if not r.ok:
        if r.status_code == 401:
            raise ValueError('Unauthorized access to ADS. Check that the ADS token is valid.')
        try:
            reason = r.json()['error']
        except:
            reason = r.text
        raise ValueError(f'HTTP request failed ({r.status_code}): {reason}')
    return r.json()['export']
bib = get_bibtex(['2019ApJ...887...80K'])
r = requests.get("https://api.adsabs.harvard.edu/v1/search/query?q='references(id:2019ApJ...887...80K)'",
                 headers={'Authorization': 'Bearer ' + token})
# the requests package returns an object; to get just the JSON API response, you have to specify this
#print(r.json())
r.ok
```
https://github.com/andycasey/ads
```
import ads
ads.config.token = open('ADS_DEV_KEY','r').read()
bibcode = '2019ApJ...887...80K'
ads.SearchQuery?
list(ads.SearchQuery(bibcode=bibcode))
articles = [list(ads.SearchQuery(bibcode=bibcode))[0] for bibcode in bibcodes]
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0,10)
y1 = (1-np.exp(3*(-4.47-x)))
y2 = np.exp(0.307*x)
y = y1*y2
plt.plot(x,y1)
plt.plot(x,y2)
plt.plot(x,y)
plt.yscale('log')
from astropy.io import fits
from pathlib import Path
import logging
from astropy.wcs import WCS
from reproject import reproject_interp, reproject_exact
z = 0.0028906664
def combine_fits(folder, output_projection):
    '''combine the different linemaps into one fits file
    '''
    if not folder.is_dir():
        raise IOError('folder does not exist')

    data = []
    data_header = []
    err = []
    err_header = []

    # so astropy doesn't warn us that the wcs contains unused sip information
    logger = logging.getLogger('astropy')
    logger.setLevel(logging.WARNING)

    for flux_file in [x for x in (folder / 'MAPS').iterdir() if x.name.endswith('flux.fits')]:
        err_file = flux_file.with_name(flux_file.stem + '-err.fits')
        with fits.open(flux_file) as hdul:
            linemap, _ = reproject_exact(hdul, output_projection)
            data.append(linemap)
            data_header.append(hdul[0].header)
        with fits.open(err_file) as hdul:
            linemap, _ = reproject_exact(hdul, output_projection)
            err.append(linemap)
            err_header.append(hdul[0].header)

    object_name = str(folder).split('_')[0]
    print(str(len(data)) + ' linemaps found for ' + object_name)

    keywords = ['PROGRAM','DATE','OBSERVAT','TELESCOP','INSTRUME','MJD-OBS','DATE-OBS']
    primary_header = fits.Header()
    for card in data_header[0].cards:
        if card[0] in keywords:
            primary_header.append(card)
    l = float(data_header[0]['FILETYPE'].split(' ')[-1])/(1+z)
    # get this from somewhere else
    primary_header.insert('PROGRAM ',('OBJECT',object_name,'Object Name'))
    primary_hdu = fits.PrimaryHDU(header=primary_header)
    hdul = fits.HDUList([primary_hdu])
    print('primary extension created')

    for d,dh,e,eh in zip(data,data_header,err,err_header):
        # get the original wavelength of the line
        l = float(dh['FILETYPE'].split(' ')[-1])/(1+z)
        header = WCS(output_projection).to_header()
        header['BITPIX'] = (-32,'array data type')
        header.insert(0,('FILETYPE','Map flux {:.0f}'.format(l)))
        header.append()
        hdu = fits.ImageHDU(data=d,header=header,name='OII{:.0f}'.format(l))
        hdul.append(hdu)
        header['FILETYPE'] = 'Map flux error {:.0f}'.format(l)
        hdu = fits.ImageHDU(data=e,header=header,name='OII{:.0f}_err'.format(l))
        hdul.append(hdu)
        #single = fits.PrimaryHDU(d)
        #single.writeto('[OII]{:.0f}.fits'.format(l))
    print('all extensions created')

    filename = '{}_[OII]_maps.fits'.format(object_name)
    hdul.writeto(filename,overwrite=True)
    print('saved to {}'.format(filename))
    return hdul
folder = Path('d:/Documents/university/PhD/sitelle/NGC2835_SN1.1.0.ORCS')
data_raw = Path('d:\downloads\MUSEDAP')
muse_header = fits.getheader(data_raw/'MUSEDAP'/'NGC2835_MAPS.fits',ext=1)
#combine_fits(Path('NGC2835_SN1.1.0.ORCS'),muse_header)
hdul = combine_fits(folder,muse_header)
```
## Split multi-extension fits file
```
def split_fits(filename, extension=''):
    '''split a fits file with multiple extensions into separate files
    '''
    with fits.open(filename) as hdul:
        OIII = hdul[extension]
        OIII.writeto('OIII5007.fits', overwrite=True)
```
## Voronoi diagram
```
from scipy.spatial import Voronoi, voronoi_plot_2d
import numpy as np
import matplotlib.pyplot as plt
points = np.random.uniform(0,10,(10,2))
vor = Voronoi(points)
fig = voronoi_plot_2d(vor)
plt.show()
```
## masks to contours
```
from skimage import measure
from skimage.draw import polygon
from collections import Counter
import numpy as np
from astropy import wcs
from astropy.io import fits
import matplotlib.pyplot as plt
from astropy.io import fits
from pathlib import Path
data_raw = Path('g:\Archive')
mask_file = data_raw/'MUSE'/'DR1'/'AUXILIARY'/'Nebulae catalogue'/'spatial_masks'/'NGC2835_HIIreg_mask.fits'
with fits.open(mask_file) as hdul:
    mask = hdul[0].data
    mask_header = hdul[0].header
basedir = Path('d:\Documents') / 'university' / 'PhD' / 'sitelle'
with fits.open(basedir/'NGC2835_deepframe.fits') as hdul:
    target_data = hdul[0].data
    target_header = hdul[0].header
plt.imshow(mask)
plt.savefig('test.pdf')
props = measure.regionprops(mask.astype(int))
def reverse_columns(array):
    """Reverse the order of the columns.

    The old implementation was:

        temp = coordinates[:,0]
        temp2 = coordinates[:,1]
        return np.column_stack([temp2, temp])

    The new one is faster because it does not create two new arrays,
    and it also works with shapes other than (n,2).

    Parameters
    ----------
    array : ndarray
    """
    return array.T[::-1].T
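# Quick sanity check of reverse_columns (illustrative array, not from the data):
# transposing, reversing the rows, and transposing back reverses the columns
example = np.array([[1, 2], [3, 4]])
assert (reverse_columns(example) == np.array([[2, 1], [4, 3]])).all()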
from pymuse.masks_to_contours import get_contours, \
convert_pixel2world,convert_world2pixel,\
create_masks_from_wcs_contours
###
# At 0.5, contours will line up perfectly with the mask boundaries, so in the
# current wcs projection use 0.5. At 0, boundaries are inflated (dilated)
# slightly by half a pixel, which will not plot nicely for touching masks
# (the contours will overlap a little), but might help when masking new wcs
# projections that have bigger pixels. I recommend just using 0.5.
contour_dilation = 0.5
contours_y_x, contour_id = get_contours(labeled_image=mask,
                                        contour_dilation=contour_dilation,
                                        get_contour_id=True,
                                        touching_masks=False)  # if masks do not touch, set this to False
plt.figure(1)
plt.imshow(mask, origin='lower')
for cont in contours_y_x:
    plt.plot(cont[:,1]+1, cont[:,0]+1, 'k-', lw=0.2)  # to make lines look thinner, set lw=0.8 in plt.plot
plt.savefig('test.pdf')
plt.show()
mask
print(f'{len(touching)} touching regions found')
fig,(ax1,ax2) =plt.subplots(1,2)
ax1.imshow(mask)
im = ax2.imshow(touching_regions)
plt.savefig('test.pdf',dpi=800)
plt.show()
from astropy.wcs import WCS
from skimage.measure import regionprops, find_contours
class regions:
def __init__(self,data,header=None):
'''
Parameters
----------
data : ndarray
array with labeld regions
header :
'''
self.data = data
self.header = header
self.wcs = WCS(header)
self.regions = {reg.label: reg for reg in regionprops(self.data.astype(int))}
with np.errstate(invalid='ignore'):
self.regions_id = set(np.unique(mask[mask>=0]).flatten())
def find_touching(self,bkg=0):
'''find all regions that touch another region'''
touching = set()
# for each row up to the second last one we subtract the row below
difference = np.zeros_like(self.data)
difference[:-1,...] = self.data[:-1,...] - self.data[1:,...]
difference[self.data==bkg] = 0
difference[difference==self.data+bkg] = 0
touching |= set(self.data[(difference!=0) & ~np.isnan(difference)])
# now going the other way around
difference = np.zeros_like(self.data)
difference[1:,...] = self.data[1:,...] - self.data[:-1,...]
difference[self.data==-bkg] = 0
difference[difference==self.data+bkg] = 0
touching |= set(self.data[(difference!=0) & ~np.isnan(difference)])
# left to right
difference = np.zeros_like(self.data)
difference[...,1:] = self.data[...,1:] - self.data[...,:-1]
difference[self.data==-bkg] = 0
difference[difference==self.data+bkg] = 0
touching |= set(self.data[(difference!=0) & ~np.isnan(difference)])
# right to left
difference = np.zeros_like(self.data)
difference[...,:-1] = self.data[...,:-1] - self.data[...,1:]
difference[self.data==-bkg] = 0
difference[difference==self.data+bkg] = 0
touching |= set(self.data[(difference!=0) & ~np.isnan(difference)])
return touching
def select_regions(self,regions_id):
'''create an image that contains only the regions in region_id'''
if not isinstance(regions_id,list): regions_id = list(regions_id)
data = np.zeros_like(self.data)
for i in regions_id:
data[self.data==i] = i
data[np.isnan(self.data)] = np.nan
return data
def construct_separated_regions(self):
'''split the labeled regions into batches such that no batch contains two touching regions'''
# the regions that touch another region will be handled later
remaining = self.find_touching()
batches = []
batches.append(self.select_regions(self.regions_id-remaining))
# place each remaining touching region in its own batch so that
# touching regions never end up in the same image
while len(remaining) > 0:
batches.append(self.select_regions({remaining.pop()}))
return batches
def plot_regions(self,regions_id,filename=None):
data = self.select_regions(regions_id)
fig = plt.figure()
ax = fig.add_subplot(111,projection=self.wcs)
ax.imshow(data)
if filename:
plt.savefig(filename,dpi=800)
plt.show()
region = regions(mask+1,mask_header)
mask.shape
l = int(np.ceil(max([v.major_axis_length for k,v in region.regions.items()])))
x_n = int(np.ceil(mask.shape[0]/l))
y_n = int(np.ceil(mask.shape[1]/l))
masked_list = []
for n in range(x_n):
for m in range(y_n):
masked_region = np.zeros_like(mask)
masked_region[n*l:(n+1)*l,m*l:(m+1)*l] = 1
masked_list.append(masked_region)
touching = region.find_touching()
region.plot_regions(touching)
len(region.regions_id-touching)
level=0.5
contours = []
for region_id in region.regions_id:
array = np.zeros_like(mask)
array[mask==region_id] = 1
contour = find_contours(array, level)
contours += contour
touching.update([1,2,3,3,4])
plt.figure(1)
plt.imshow(mask, origin='lower')
for cont in contours:
plt.plot(cont[:,1]+1, cont[:,0]+1, 'k-', lw=0.2) # increase lw (e.g. lw=0.8) for thicker lines
plt.savefig('test.pdf')
plt.show()
# boundaries saved as WCS so they can be loaded into whatever wcs projection you want
contours_WCS = []
for j in range(len(contours_y_x)):
contour_x_y = reverse_columns(contours_y_x[j])
contours_WCS.append(convert_pixel2world(contour_x_y, galaxy_header))
contours_x_y_new = []
for j in range(len(contours_WCS)):
contours_x_y_new.append(convert_world2pixel(contours_WCS[j],
different_galaxy_wcs_header))
plt.figure(2)
plt.clf()
plt.imshow(different_galaxy_wcs, origin='lower', cmap=plt.cm.coolwarm)
for cont in contours_x_y_new:
plt.plot(cont[:,0], cont[:,1], 'k-')
masks_new_wcs = create_masks_from_wcs_contours(contours_WCS=contours_WCS,
contourIDs=contour_id,
header=different_galaxy_wcs_header,
image=different_galaxy_wcs,
binary_mask_out=False)
# just rerunning this to show that I only want binary mask out
masks_new_wcs_binary = create_masks_from_wcs_contours(contours_WCS=contours_WCS,
contourIDs=contour_id,
header=different_galaxy_wcs_header,
image=different_galaxy_wcs,
binary_mask_out=True)
plt.figure(3)
plt.imshow(masks_new_wcs_binary * different_galaxy_wcs, origin='lower', cmap=plt.cm.coolwarm)
```
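The shift-and-difference trick used in `find_touching` above can be checked on a tiny hand-made labeled array (a pure-NumPy sketch with made-up labels; only the downward comparison is shown):

```python
import numpy as np

# Two labeled regions (1 and 2) that share an edge, on a background of 0.
data = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 2, 2, 0],
    [0, 0, 0, 0],
])

touching = set()
# Compare each pixel with its neighbour below: where two different
# non-background labels meet, the difference is non-zero.
diff = np.zeros_like(data)
diff[:-1, :] = data[:-1, :] - data[1:, :]
diff[data == 0] = 0     # ignore background pixels
diff[diff == data] = 0  # ignore label-vs-background boundaries
touching |= set(data[diff != 0].tolist())

print(touching)  # {1}: label 1 touches label 2 from above
```

This single shift only picks up the upper member of each touching pair, which is why the class repeats the comparison in all four directions before returning.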
## File dialog
```
from tkinter import filedialog
def file_save():
f = filedialog.asksaveasfile(mode='w', defaultextension=".out")
if f is None: # asksaveasfile returns `None` if the dialog is closed with "cancel".
return
f.write(template)
f.close() # `()` was missing.
```
## Sankey
```
# -*- coding: utf-8 -*-
"""
Produces simple Sankey Diagrams with matplotlib.
@author: Anneya Golob & marcomanz & pierre-sassoulas & jorwoods
.-.
.--.( ).--.
<-. .-.-.(.-> )_ .--.
`-`( )-' `) )
(o o ) `)`-'
( ) ,)
( () ) )
`---"\ , , ,/`
`--' `--' `--'
| | | |
| | | |
' | ' |
https://github.com/anazalea/pySankey
"""
from collections import defaultdict
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
tab10 = ['#e15759','#4e79a7','#f28e2b','#76b7b2','#59a14e','#edc949','#b07aa2','#ff9da7','#9c755f','#bab0ac']
class PySankeyException(Exception):
pass
class NullsInFrame(PySankeyException):
pass
class LabelMismatch(PySankeyException):
pass
def check_data_matches_labels(labels, data, side):
if len(labels) > 0:
if isinstance(data, list):
data = set(data)
if isinstance(data, pd.Series):
data = set(data.unique().tolist())
if isinstance(labels, list):
labels = set(labels)
if labels != data:
msg = "\n"
if len(labels) <= 20:
msg = "Labels: " + ",".join(labels) + "\n"
if len(data) < 20:
msg += "Data: " + ",".join(data)
raise LabelMismatch('{0} labels and data do not match.{1}'.format(side, msg))
def sankey(left, right, leftWeight=None, rightWeight=None, colorDict=None,
leftLabels=None, rightLabels=None, aspect=4, rightColor=False,
fontsize=14, filename=None, closePlot=False):
'''
Make Sankey Diagram showing flow from left-->right
Inputs:
left = NumPy array of object labels on the left of the diagram
right = NumPy array of corresponding labels on the right of the diagram
len(right) == len(left)
leftWeight = NumPy array of weights for each strip starting from the
left of the diagram, if not specified 1 is assigned
rightWeight = NumPy array of weights for each strip starting from the
right of the diagram, if not specified the corresponding leftWeight
is assigned
colorDict = Dictionary of colors to use for each label
{'label':'color'}
leftLabels = order of the left labels in the diagram
rightLabels = order of the right labels in the diagram
aspect = vertical extent of the diagram in units of horizontal extent
rightColor = If True, each strip in the diagram will be colored
according to its right label
Output:
None
'''
if leftWeight is None:
leftWeight = []
if rightWeight is None:
rightWeight = []
if leftLabels is None:
leftLabels = []
if rightLabels is None:
rightLabels = []
# Check weights
if len(leftWeight) == 0:
leftWeight = np.ones(len(left))
if len(rightWeight) == 0:
rightWeight = leftWeight
plt.figure()
# Create Dataframe
if isinstance(left, pd.Series):
left = left.reset_index(drop=True)
if isinstance(right, pd.Series):
right = right.reset_index(drop=True)
dataFrame = pd.DataFrame({'left': left, 'right': right, 'leftWeight': leftWeight,
'rightWeight': rightWeight}, index=range(len(left)))
if len(dataFrame[(dataFrame.left.isnull()) | (dataFrame.right.isnull())]):
raise NullsInFrame('Sankey graph does not support null values.')
# Identify all labels that appear 'left' or 'right'
allLabels = pd.Series(np.r_[dataFrame.left.unique(), dataFrame.right.unique()]).unique()
# Identify left labels
if len(leftLabels) == 0:
leftLabels = pd.Series(dataFrame.left.unique()).unique()
else:
check_data_matches_labels(leftLabels, dataFrame['left'], 'left')
# Identify right labels
if len(rightLabels) == 0:
rightLabels = pd.Series(dataFrame.right.unique()).unique()
else:
check_data_matches_labels(rightLabels, dataFrame['right'], 'right')
# If no colorDict given, make one
if colorDict is None:
colorDict = {}
for i, label in enumerate(allLabels):
colorDict[label] = tab10[i]
else:
missing = [label for label in allLabels if label not in colorDict.keys()]
if missing:
msg = "The colorDict parameter is missing values for the following labels : "
msg += '{}'.format(', '.join(missing))
raise ValueError(msg)
# Determine widths of individual strips
ns_l = defaultdict()
ns_r = defaultdict()
for leftLabel in leftLabels:
leftDict = {}
rightDict = {}
for rightLabel in rightLabels:
leftDict[rightLabel] = dataFrame[(dataFrame.left == leftLabel) & (dataFrame.right == rightLabel)].leftWeight.sum()
rightDict[rightLabel] = dataFrame[(dataFrame.left == leftLabel) & (dataFrame.right == rightLabel)].rightWeight.sum()
ns_l[leftLabel] = leftDict
ns_r[leftLabel] = rightDict
# Determine positions of left label patches and total widths
leftWidths = defaultdict()
for i, leftLabel in enumerate(leftLabels):
myD = {}
myD['left'] = dataFrame[dataFrame.left == leftLabel].leftWeight.sum()
if i == 0:
myD['bottom'] = 0
myD['top'] = myD['left']
else:
myD['bottom'] = leftWidths[leftLabels[i - 1]]['top'] + 0.02 * dataFrame.leftWeight.sum()
myD['top'] = myD['bottom'] + myD['left']
topEdge = myD['top']
leftWidths[leftLabel] = myD
# Determine positions of right label patches and total widths
rightWidths = defaultdict()
for i, rightLabel in enumerate(rightLabels):
myD = {}
myD['right'] = dataFrame[dataFrame.right == rightLabel].rightWeight.sum()
if i == 0:
myD['bottom'] = 0
myD['top'] = myD['right']
else:
myD['bottom'] = rightWidths[rightLabels[i - 1]]['top'] + 0.02 * dataFrame.rightWeight.sum()
myD['top'] = myD['bottom'] + myD['right']
topEdge = myD['top']
rightWidths[rightLabel] = myD
# Total vertical extent of diagram
xMax = topEdge / aspect
# Draw vertical bars on left and right of each label's section & print label
for leftLabel in leftLabels:
plt.fill_between(
[-0.02 * xMax, 0],
2 * [leftWidths[leftLabel]['bottom']],
2 * [leftWidths[leftLabel]['bottom'] + leftWidths[leftLabel]['left']],
color=colorDict[leftLabel],
alpha=0.99
)
plt.text(
-0.05 * xMax,
leftWidths[leftLabel]['bottom'] + 0.5 * leftWidths[leftLabel]['left'],
leftLabel,
{'ha': 'right', 'va': 'center'},
fontsize=fontsize
)
for rightLabel in rightLabels:
plt.fill_between(
[xMax, 1.02 * xMax], 2 * [rightWidths[rightLabel]['bottom']],
2 * [rightWidths[rightLabel]['bottom'] + rightWidths[rightLabel]['right']],
color=colorDict[rightLabel],
alpha=0.99
)
plt.text(
1.05 * xMax,
rightWidths[rightLabel]['bottom'] + 0.5 * rightWidths[rightLabel]['right'],
rightLabel,
{'ha': 'left', 'va': 'center'},
fontsize=fontsize
)
# Plot strips
for leftLabel in leftLabels:
for rightLabel in rightLabels:
labelColor = leftLabel
if rightColor:
labelColor = rightLabel
if len(dataFrame[(dataFrame.left == leftLabel) & (dataFrame.right == rightLabel)]) > 0:
# Create array of y values for each strip, half at left value,
# half at right, convolve
ys_d = np.array(50 * [leftWidths[leftLabel]['bottom']] + 50 * [rightWidths[rightLabel]['bottom']])
ys_d = np.convolve(ys_d, 0.05 * np.ones(20), mode='valid')
ys_d = np.convolve(ys_d, 0.05 * np.ones(20), mode='valid')
ys_u = np.array(50 * [leftWidths[leftLabel]['bottom'] + ns_l[leftLabel][rightLabel]] + 50 * [rightWidths[rightLabel]['bottom'] + ns_r[leftLabel][rightLabel]])
ys_u = np.convolve(ys_u, 0.05 * np.ones(20), mode='valid')
ys_u = np.convolve(ys_u, 0.05 * np.ones(20), mode='valid')
# Update bottom edges at each label so next strip starts at the right place
leftWidths[leftLabel]['bottom'] += ns_l[leftLabel][rightLabel]
rightWidths[rightLabel]['bottom'] += ns_r[leftLabel][rightLabel]
plt.fill_between(
np.linspace(0, xMax, len(ys_d)), ys_d, ys_u, alpha=0.65,
color=colorDict[labelColor]
)
plt.gca().axis('off')
plt.gcf().set_size_inches(8, 6)
plt.show()
if filename is not None:
plt.savefig(filename, bbox_inches='tight', dpi=600)
if closePlot:
plt.close()
a = ['Fahrrad','Fahrrad','Auto','Auto','ÖPNV','Auto','Fahrrad']
b = ['Auto','ÖPNV','Auto','Auto','ÖPNV','Auto','Fahrrad']
colorDict = {
'Fahrrad':'#f71b1b',
'Auto':'#1b7ef7',
'ÖPNV':'#f3f71b',
'lime':'#12e23f',
'orange':'#f78c1b'
}
sankey(a,b, aspect=20, colorDict=colorDict,fontsize=12)
```
| github_jupyter |
# NLP Feature Engineering
## Feature Creation
```
# Read in the text data
import pandas as pd
data = pd.read_csv("./data/SMSSpamCollection.tsv", sep='\t')
data.columns = ['label', 'body_text']
```
### Create feature for text message length
```
data['body_len'] = data['body_text'].apply(lambda x: len(x) - x.count(" "))
data.head()
```
### Create feature for % of text that is punctuation
```
import string
# Create a function to count punctuation
def count_punct(text):
count = sum([1 for char in text if char in string.punctuation])
return round(count/(len(text) - text.count(" ")), 3)*100
# Create a column for the % of punctuation in each body text
data['punct%'] = data['body_text'].apply(lambda x: count_punct(x))
data.head()
```
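As a standalone sanity check of the punctuation-percentage logic (the same formula as above, applied to a made-up message):

```python
import string

# Count punctuation characters and divide by the non-space length.
def count_punct(text):
    count = sum(1 for char in text if char in string.punctuation)
    return round(count / (len(text) - text.count(" ")), 3) * 100

# "Hi, there!" has 2 punctuation characters out of 9 non-space characters,
# so the result is roughly 22.2%.
print(count_punct("Hi, there!"))
```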
## Evaluate Created Features
```
# Import the dependencies
from matplotlib import pyplot
import numpy as np
%matplotlib inline
# Create a plot that demonstrates the length of the message for 'ham' and 'spam'
bins = np.linspace(0, 200, 40)
pyplot.hist(data[data['label']=='spam']['body_len'], bins, alpha=0.5, density=True, label='spam')
pyplot.hist(data[data['label']=='ham']['body_len'], bins, alpha=0.5, density=True, label='ham')
pyplot.legend(loc='upper left')
pyplot.show()
# Create a plot that demonstrates the punctuation % for 'ham' and 'spam'
bins = np.linspace(0, 50, 40)
pyplot.hist(data[data['label']=='spam']['punct%'], bins, alpha=0.5, density=True, label='spam')
pyplot.hist(data[data['label']=='ham']['punct%'], bins, alpha=0.5, density=True, label='ham')
pyplot.legend(loc='upper right')
pyplot.show()
```
## Transformation
### Plot the two new features
```
bins = np.linspace(0, 200, 40)
pyplot.hist(data['body_len'], bins)
pyplot.title("Body Length Distribution")
pyplot.show()
bins = np.linspace(0, 50, 40)
pyplot.hist(data['punct%'], bins)
pyplot.title("Punctuation % Distribution")
pyplot.show()
```
### Transform the punctuation % feature
### Box-Cox Power Transformation
**Base Form**: $$ y^x $$
| X | Base Form | Transformation |
|------|--------------------------|--------------------------|
| -2 | $$ y ^ {-2} $$ | $$ \frac{1}{y^2} $$ |
| -1 | $$ y ^ {-1} $$ | $$ \frac{1}{y} $$ |
| -0.5 | $$ y ^ {\frac{-1}{2}} $$ | $$ \frac{1}{\sqrt{y}} $$ |
| 0 | $$ y^{0} $$ | $$ log(y) $$ |
| 0.5 | $$ y ^ {\frac{1}{2}} $$ | $$ \sqrt{y} $$ |
| 1 | $$ y^{1} $$ | $$ y $$ |
| 2 | $$ y^{2} $$ | $$ y^2 $$ |
**Process**
1. Determine what range of exponents to test
2. Apply each transformation to each value of your chosen feature
3. Use some criteria to determine which of the transformations yield the best distribution
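A minimal sketch of the three-step process, using plain NumPy and made-up feature values; the skew criterion here is just one possible choice for step 3:

```python
import numpy as np

# Made-up, strictly positive feature values (Box-Cox requires y > 0).
y = np.array([1.0, 2.0, 4.0, 8.0, 50.0, 200.0])

def box_cox(y, x):
    """Base form y**x, with the x == 0 case defined as log(y)."""
    return np.log(y) if x == 0 else y ** x

# Step 1: choose a range of exponents; step 2: apply each transformation;
# step 3: score each result (here: skew of the transformed values,
# where a value closer to zero means a more symmetric distribution).
for x in [-2, -1, -0.5, 0, 0.5, 1, 2]:
    t = box_cox(y, x)
    skew = np.mean(((t - t.mean()) / t.std()) ** 3)
    print(f"x={x:>4}: skew={skew:+.3f}")
```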
# Stochastic Gradient Descent
- The gradient descent method from the previous section is shown in the figure below
[](https://imgchr.com/i/8mATJK)
- Each iteration we compute the gradient over all samples; this is called **batch gradient descent**
- But when the sample size is very large, this is quite time-consuming; the remedy is **stochastic gradient descent**
[](https://imgchr.com/i/8mALsH)
- We randomly pick an index $i$, use this $i$ to obtain a gradient vector, and then search and iterate in that direction
[](https://imgchr.com/i/8mAHzD)
- In stochastic gradient descent we cannot guarantee that the search direction is one in which the loss function decreases
- much less that it is the direction of steepest descent
- We want $\eta$ to become smaller as the number of iterations grows, which gives $\eta$ the form shown on the right
- where $a$ and $b$ are two hyperparameters
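The decaying learning rate $\eta = \frac{a}{t + b}$ described above can be sketched directly; the names `t0` and `t1` stand in for the hyperparameters $a$ and $b$:

```python
# The schedule eta(t) = a / (t + b): large steps early on,
# progressively smaller steps as the iteration count t grows.
t0, t1 = 5, 50

def learning_rate(cur_iter):
    return t0 / (cur_iter + t1)

print(learning_rate(0))    # 0.1 on the first iteration
print(learning_rate(950))  # 0.005 after many iterations
```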
### 1. Batch gradient descent
```
import numpy as np
import matplotlib.pyplot as plt
m = 100000
x = np.random.normal(size = m)
X = x.reshape(-1, 1)
y = 4. * x + 3. + np.random.normal(0, 3, size=m)
def J(theta, X_b, y):
try:
return np.sum((y - X_b.dot(theta))**2) / len(y)
except:
return float('inf')
def dJ(theta, X_b, y):
return X_b.T.dot(X_b.dot(theta) - y) * 2. / len(y)
def gradient_descent(X_b, y, initial_theta, eta, n_iters = 1e4, epsilon=1e-8):
theta = initial_theta
i_iter = 0
while i_iter < n_iters:
gradient = dJ(theta, X_b, y)
last_theta = theta
theta = theta - eta * gradient
if np.abs(J(theta, X_b, y) - J(last_theta, X_b, y)) < epsilon:
break
i_iter += 1
return theta
X_b = np.hstack([np.ones([len(X), 1]), X])
initial_theta = np.zeros(X_b.shape[1])
eta = 0.01
theta = gradient_descent(X_b, y, initial_theta, eta)
theta
```
### 2. Stochastic gradient descent
```
# pass in one specific row of X_b
def dJ(theta, X_b_i, y_i):
return X_b_i.T.dot(X_b_i.dot(theta) - y_i) * 2.
def sgd(X_b, y, initial_theta, n_iters):
t0 = 5
t1 = 50
def learning_rate(cur_iter):
return t0 / (cur_iter + t1)
theta = initial_theta
for cur_iter in range(n_iters):
# randomly pick an index i
rand_i = np.random.randint(len(X_b))
gradient = dJ(theta, X_b[rand_i], y[rand_i])
# update theta
theta = theta - learning_rate(cur_iter) * gradient
return theta
X_b = np.hstack([np.ones([len(X), 1]), X])
initial_theta = np.zeros(X_b.shape[1])
theta = sgd(X_b, y, initial_theta, n_iters=len(X_b)//3)
# note that we only used a third of the samples and still got a very good result
theta
# the theta values are almost identical to those from batch gradient descent
```
### 3. Using our own SGD
```
from LR.LinearRegression import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit_sgd(X, y, n_iters=2)
lin_reg.coef_
lin_reg.intercept_
```
#### Using real data
```
from sklearn import datasets
boston = datasets.load_boston()
X = boston.data
y = boston.target
X = X[y < 50.0]
y = y[y < 50.0]
from LR.model_selection import train_test_split
# split the dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, seed=333)
# standardize the features
from sklearn.preprocessing import StandardScaler
standardScaler = StandardScaler()
standardScaler.fit(X_train)
X_train_standard = standardScaler.transform(X_train)
X_test_standard = standardScaler.transform(X_test)
lin_reg2 = LinearRegression()
%time lin_reg2.fit_sgd(X_train_standard, y_train)
lin_reg2.score(X_test_standard, y_test)
```
### 4. SGD in scikit-learn
```
from sklearn.linear_model import SGDRegressor
sgd_reg = SGDRegressor()
%time sgd_reg.fit(X_train_standard, y_train)
sgd_reg.score(X_test_standard, y_test)
SGDRegressor?
sgd_reg = SGDRegressor(n_iter=100)  # in newer scikit-learn versions this parameter is max_iter
%time sgd_reg.fit(X_train_standard, y_train)
sgd_reg.score(X_test_standard, y_test)
```
```
!pip install pandas
!pip install gym
!pip install matplotlib
!pip install tensorflow
import numpy as np
import pandas as pd
import inspect
import random
import gym
import sys
import tensorflow as tf
import tensorflow.keras.layers as kl
import tensorflow.keras.losses as kls
import tensorflow.keras.optimizers as ko
import matplotlib.pyplot as plt
from tensorflow.keras import backend as K
from collections import deque
import os
!{sys.executable} -m pip install 'kaggle-environments>=0.1.6'
from kaggle_environments import evaluate, make, utils
env = make("connectx", debug=True)
env.render()
env.agents
env.configuration
env.specification
```
## Create an agent
```
def my_agent(observation, configuration):
from random import choice
return choice([c for c in range(configuration.columns) if observation.board[c] == 0])
trainer = env.train([None, "random"])
observation = trainer.reset()
print("Observation contains:\t", observation)
print("Configuration contains:\t", env.configuration)
my_action = my_agent(observation, env.configuration)
print("My Action", my_action)
observation, reward, done, info = trainer.step(my_action)
env.render(mode="ipython", width=100, height=90, header=False, controls=False)
print("Observation after:\t", observation)
```
## Train your agent
```
trainer = env.train([None, "random"])
observation = trainer.reset()
while not env.done:
my_action = my_agent(observation, env.configuration)
print("My Action", my_action)
observation, reward, done, info = trainer.step(my_action)
print(reward)
env.render(mode="ipython", width=100, height=90, header=False, controls=False)
env.render()
class ConnectX(gym.Env):
def __init__(self):
self.env = make("connectx", debug=True)
self.pair = [None,"negamax"]
self.config = self.env.configuration
self.trainer = self.env.train(self.pair)
config = self.env.configuration
self.action_space = gym.spaces.Discrete(config.columns)
self.observation_space = gym.spaces.Discrete(config.columns * config.rows)
def step(self,action):
return self.trainer.step(action)
def reset(self):
return self.trainer.reset()
def render(self, **kwargs):
return self.env.render(**kwargs)
class ProbabilityDistribution(tf.keras.Model):
def call(self, logits, **kwargs):
return tf.squeeze(tf.random.categorical(logits, 1), axis=-1)
class Model(tf.keras.Model):
def __init__(self, env, num_actions):
super(Model, self).__init__('mlp_policy')
self.env = env
self.num_actions = num_actions
self.hidden1 = kl.Dense(128, activation='relu')
self.hidden2 = kl.Dense(128, activation='relu')
self.value = kl.Dense(1, name='value')
self.logits = kl.Dense(num_actions, name='policy_logits')
self.dist = ProbabilityDistribution()
self.action_ = None
self.value_ = None
self.space = None
self.empty = []
def call(self, inputs, **kwargs):
x = tf.convert_to_tensor(inputs)
hidden_logs = self.hidden1(x)
hidden_vals = self.hidden2(x)
return self.logits(hidden_logs), self.value(hidden_vals)
def action_value(self, obs):
logits, values = self.predict_on_batch(obs)
action = self.dist.predict_on_batch(logits)
return np.squeeze(action, axis = -1), np.squeeze(values, axis=-1)
def preprocess(self, state):
result = state.board[:]
result.append(state.mark)
return result
env = ConnectX()
model = Model(env, num_actions=env.action_space.n)
obs = env.reset()
obs = np.array(model.preprocess(obs))
action, value = model.action_value(obs[None, :])
print("Action: " +str(action)+", Value: " + str(value))
K.clear_session()
class Agent_Advanced:
def __init__(self, model, lr=7e-3, gamma=0.8, value_c=0.5, entropy_c=1e-4):
self.value_c = value_c
self.entropy_c = entropy_c
self.gamma = gamma
self.model = model
self.model.compile(
optimizer=tf.keras.optimizers.RMSprop(learning_rate=lr),
loss=[self._logits_loss, self._value_loss]
)
def train(self, env, batch_sz=64, updates=500):
ep_rewards = [0.0]
next_obs = env.reset()
next_obs = np.array(model.preprocess(next_obs))
actions = np.empty((batch_sz,), dtype=np.int32)
rewards, dones, values = np.zeros((3, batch_sz,))
observations = np.empty((batch_sz,len(next_obs.copy())) + env.observation_space.shape)
for update in range(updates):
for step in range(batch_sz):
observations[step] = next_obs.copy()
actions[step], values[step] = self.model.action_value(next_obs[None, :])
next_obs, rewards[step], dones[step], _ = env.step(int(actions[step]))
if rewards[step] >= 0.5: # Won
rewards[step] = 20
elif rewards[step] == 0.0: # Lost
rewards[step] = -20
else: # Draw
rewards[step] = 0.05
ep_rewards[-1] += rewards[step]
next_obs = np.array(model.preprocess(next_obs))
if dones[step]:
ep_rewards.append(0.0)
next_obs = env.reset()
next_obs = np.array(model.preprocess(next_obs))
print("Episode: %03d, Reward: %03d" % (len(ep_rewards) - 1, ep_rewards[-2]))
_, next_value = self.model.action_value(next_obs[None, :])
returns, advs = self._returns_advantages(rewards, dones, values, next_value)
acts_and_advs = np.concatenate([actions[:, None], advs[:, None]], axis=-1)
losses = self.model.fit(observations, [acts_and_advs, returns])
print("[%d/%d] Losses: %s" % (update + 1, updates, losses.history['loss']))
return ep_rewards
def _returns_advantages(self, rewards, done, values, next_value):
returns = np.append(np.zeros_like(rewards), next_value, axis=-1)
for t in reversed(range(rewards.shape[0])):
returns[t] = rewards[t] + self.gamma * returns[t+1] * (1 - done[t])
returns = returns[:-1]
advantages = returns - values
return returns, advantages
def _value_loss(self, return_, value):
return self.value_c * kls.mean_squared_error(return_, value)
def _logits_loss(self, actions_and_advantages, logits):
actions, advantages = tf.split(actions_and_advantages, 2, axis=-1)
weighted_sparse_ce = kls.SparseCategoricalCrossentropy(from_logits=True)
policy_loss = weighted_sparse_ce(actions, logits, sample_weight=advantages)
probs = tf.nn.softmax(logits)
entropy_loss = kls.categorical_crossentropy(probs, probs)
return policy_loss - self.entropy_c * entropy_loss
env = ConnectX()
model = Model(env, num_actions=env.action_space.n)
model.run_eagerly = True
print("Eager Execution: ", tf.executing_eagerly())
print("Eager Keras Model:", model.run_eagerly)
agent = Agent_Advanced(model)
rewards_history = agent.train(env)
print("Finished training, testing....")
plt.figure(figsize=[20,10])
plt.plot(rewards_history)
plt.xlabel('Episode')
plt.ylabel('Avg rewards ')
plt.show()
```
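The bootstrapped return computation inside `_returns_advantages` can be verified on a tiny made-up episode:

```python
import numpy as np

# returns[t] = rewards[t] + gamma * returns[t+1] * (1 - done[t]),
# computed backwards from a bootstrap value after the last step.
gamma = 0.8
rewards = np.array([1.0, 0.0, 2.0])
dones = np.array([0.0, 0.0, 1.0])
next_value = 5.0  # bootstrap value; ignored here because the episode ends (done=1)

returns = np.append(np.zeros_like(rewards), next_value)
for t in reversed(range(rewards.shape[0])):
    returns[t] = rewards[t] + gamma * returns[t + 1] * (1 - dones[t])
returns = returns[:-1]

print(returns)  # [2.28, 1.6, 2.0]: the terminal step truncates the bootstrap
```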
```
import os
import json
import pathlib
import random
import numpy as np
import matplotlib.pyplot as plt
import imageio
from skimage import transform
from IPython import display
try:
os.mkdir('data')
except FileExistsError:
pass
import tensorflow as tf
# Makes it so any changes in pymedphys is automatically
# propagated into the notebook without needing a kernel reset.
from IPython.lib.deepreload import reload
%load_ext autoreload
%autoreload 2
import pymedphys
from pymedphys._experimental.autosegmentation import indexing, filtering, pipeline, mask
(
_,
_,
_,
ct_uid_to_structure_uid,
structure_uid_to_ct_uids,
_,
structure_names_by_ct_uid,
structure_names_by_structure_set_uid,
_,
_,
) = pipeline.get_dataset_metadata()
structure_names = set()
for _, uid_structure_names in structure_names_by_structure_set_uid.items():
for structure_name in uid_structure_names:
structure_names.add(structure_name)
structure_names
mask_expansion = 5
# Create masks for the following structures, in the following order
structures_to_learn = [
'patient', 'brain', 'eye_left', 'eye_right']
# Use the following to filter the slices used for training, validation,
# and testing
filters = {
"study_set_must_have_all_of": structures_to_learn,
"slice_at_least_one_of": ['brain', 'eye_left', 'eye_right'],
"slice_must_have": ['patient'],
"slice_cannot_have": []
}
structure_uids, ct_uids = pipeline.get_filtered_uids(filters)
structure_uids
np.random.shuffle(ct_uids)
dataset = pipeline.create_dataset(ct_uids, structures_to_learn, expansion=mask_expansion)
def diagnostic_plotting(x_grid, y_grid, input_array, output_array):
plt.figure(figsize=(15,10))
x_grid = x_grid.numpy()
y_grid = y_grid.numpy()
input_array = input_array.numpy()[:,:,0]
output_array = output_array.numpy()
for i in range(output_array.shape[-1]):
contours = mask.get_contours_from_mask(
x_grid, y_grid, output_array[:,:,i])
for contour in contours:
plt.plot(*contour.T)
plt.axis('equal')
ax = plt.gca()
xlim = ax.get_xlim()
ylim = ax.get_ylim()
# windowed = np.copy(input_array)
# vmin = 900
# vmax = 1200
# windowed[windowed<vmin] = vmin
# windowed[windowed>vmax] = vmax
plt.pcolormesh(x_grid, y_grid, input_array, shading="nearest")
plt.colorbar()
ax.set_xlim(xlim)
ax.set_ylim(ylim)
max_input = 4095
min_input = 0
input_scale = (max_input + 1) / 256
dimension_downscale = int(8)
for ct_uid, x_grid, y_grid, input_array, output_array in dataset:
ct_uid = ct_uid.numpy().decode()
orbits = tf.math.reduce_max(output_array[:,:,-2:], axis=-1)
output_three_channel = tf.concat([
orbits[:,:,None], # orbits
output_array[:,:,1:2], # brain
output_array[:,:,0:1] # patient
], axis=-1)
# display.display(display.Markdown(f"## {ct_uid}"))
# diagnostic_plotting(x_grid, y_grid, input_array, output_three_channel)
# plt.show()
scaled_input_array = tf.convert_to_tensor(transform.downscale_local_mean(input_array, (dimension_downscale, dimension_downscale, 1)))
scaled_output_array = tf.convert_to_tensor(transform.downscale_local_mean(
output_three_channel, (dimension_downscale, dimension_downscale, 1)))
new_x_grid = tf.convert_to_tensor(x_grid[dimension_downscale//2::dimension_downscale])
new_y_grid = tf.convert_to_tensor(y_grid[dimension_downscale//2::dimension_downscale])
# diagnostic_plotting(new_x_grid, new_y_grid, scaled_input_array, scaled_output_array)
# plt.show()
new_input_array = scaled_input_array.numpy()[:, :, 0]
new_input_array[new_input_array > max_input] = max_input
input_scaled_to_uint8 = (new_input_array / input_scale).astype(np.uint8)
masks_scaled_to_uint8 = ((scaled_output_array.numpy() + 1)/2 * 255).astype(np.uint8)
# plt.imshow(input_scaled_to_uint8)
# plt.show()
# plt.imshow(masks_scaled_to_uint8)
# plt.show()
ct_uid_split = ct_uid.split('.')
patient_component = ".".join(ct_uid_split[0:-2])
slice_component = ".".join(ct_uid_split[-2:])
try:
os.mkdir(f'data/{patient_component}')
except FileExistsError:
pass
imageio.imwrite(f'data/{patient_component}/{slice_component}_image.png', input_scaled_to_uint8)
imageio.imwrite(f'data/{patient_component}/{slice_component}_mask.png', masks_scaled_to_uint8)
```
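For reference, the block averaging that `transform.downscale_local_mean` performs above can be sketched in plain NumPy (downscale factor of 2, on a made-up array):

```python
import numpy as np

# Each output pixel is the mean of a (factor x factor) block of input pixels.
a = np.arange(16, dtype=float).reshape(4, 4)
factor = 2
down = a.reshape(4 // factor, factor, 4 // factor, factor).mean(axis=(1, 3))

print(down.shape)  # (2, 2)
print(down)        # [[ 2.5  4.5] [10.5 12.5]]
```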
# Siamese Convolutional Neural Network
```
from model import siamese_CNN
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import pickle
import numpy as np
from pandas import DataFrame
import tensorflow as tf
import keras.backend as K
# model imports
from keras.models import Sequential, Model, Input
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.layers import Dropout, BatchNormalization
from keras.layers import Lambda, concatenate
from keras.initializers import RandomNormal
from tensorflow.keras.regularizers import l2
from keras.optimizers import Adam, RMSprop
from keras.callbacks import EarlyStopping
# plotting
from tensorflow.keras.utils import plot_model
import pydotplus as pydot
import matplotlib.pyplot as plt
%matplotlib inline
```
## Setting up datasets
```
def load_pickle(file):
with open(file, 'rb') as f:
return pickle.load(f)
def load_dataset(i):
print("\nLoading dataset...", end="")
data = load_pickle(PATHS[i][0]) # training data
pairs = load_pickle(PATHS[i][1]) # pairs of data
pairs = [pairs[0], pairs[1]]
targets = load_pickle(PATHS[i][2]) # targets of the data
print("dataset {0} loaded successfully!\n".format(i))
return data, pairs, targets
def data_shapes():
print("\nNumber of classes : ", data.shape[0])
print("Original signatures : ", len(data[0][0]))
print("Forged signatures : ", len(data[0][1]))
print("Image shape : ", data[0][0][0].shape)
print("Total number of pairs : ", pairs[0].shape[0])
print("Number of pairs for each class : ", pairs[0].shape[0]//data.shape[0])
print("Targets shape : ", targets.shape)
print()
def plot_13(id1, id2, id3):
fig, ax = plt.subplots(1, 3, sharex=True, sharey=True, figsize=(8,8))
ax[0].imshow(pairs[0][id1])
ax[1].imshow(pairs[1][id2])
ax[2].imshow(pairs[1][id3])
# subplot titles
ax[0].set_title('Anchor image of class {0}'.format(id1//42))
ax[1].set_title('Target: {0}'.format(targets[id2]))
ax[2].set_title('Target: {0}'.format(targets[id3]))
fig.tight_layout()
```
## Setting up models
```
def contrastive_loss(y_true, y_pred):
"""Contrastive loss.
if y = true and d = pred,
d(y,d) = mean(y * d^2 + (1-y) * (max(margin-d, 0))^2)
Args:
y_true : true values.
y_pred : predicted values.
Returns:
contrastive loss
"""
margin = 1
return K.mean(y_true * K.square(y_pred) + (1 - y_true) * K.square(K.maximum(margin - y_pred, 0)))
def model_setup(verbose=False):
rms = RMSprop(lr=1e-4, rho=0.9, epsilon=1e-08)
model = siamese_CNN((224, 224, 1))
model.compile(optimizer=rms, loss=contrastive_loss)
if verbose:
model.summary()
tf.keras.utils.plot_model(
model,
show_shapes=True,
show_layer_names=True,
to_file="resources\\model_plot.png"
)
return model
```
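A quick NumPy cross-check of the contrastive loss (margin 1, made-up predictions) helps confirm the formula in the docstring:

```python
import numpy as np

# Same formula as contrastive_loss, in NumPy: similar pairs (y=1) pay d^2,
# dissimilar pairs (y=0) pay max(margin - d, 0)^2.
def contrastive_loss_np(y_true, y_pred, margin=1.0):
    return np.mean(y_true * y_pred**2 + (1 - y_true) * np.maximum(margin - y_pred, 0)**2)

y_true = np.array([1.0, 0.0])
y_pred = np.array([0.2, 0.9])
# similar pair: 0.2**2 = 0.04; dissimilar pair: (1 - 0.9)**2 = 0.01; mean = 0.025
print(contrastive_loss_np(y_true, y_pred))
```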
## Training
```
def model_training(model, weights_name):
print("\nStarting training!\n")
# hyperparameters
EPOCHS = 100 # number of epochs
BS = 32 # batch size
# callbacks
callbacks = [EarlyStopping(monitor='val_loss', patience=3, verbose=1,)]
history = model.fit(
pairs, targets,
batch_size=BS,
epochs=EPOCHS,
verbose=1,
callbacks=callbacks,
validation_split=0.25,
)
ALL_HISTORY.append(history)
print("\nSaving weight for model...", end="")
model.save_weights('weights\\{0}.h5'.format(weights_name))
print("saved successfully!")
```
## Evaluation
```
def compute_accuracy_roc(predictions, labels):
"""Compute ROC accuracy and threshold.
Also collects the FAR-FRR and P-R curve data for the input.
Args:
predictions -- np.array : array of predictions.
labels -- np.array : true labels (0 or 1).
Returns:
max_acc -- float : maximum accuracy of the model.
best_thresh -- float : best threshold for the model.
plot_metrics -- list : thresholds with FAR, FRR, precision and recall values.
"""
dmax = np.max(predictions)
dmin = np.min(predictions)
nsame = np.sum(labels == 1) #similar
ndiff = np.sum(labels == 0) #different
step = 0.01
max_acc = 0
best_thresh = -1
frr_plot = []
far_plot = []
pr_plot = []
re_plot = []
ds = []
for d in np.arange(dmin, dmax+step, step):
idx1 = predictions.ravel() <= d # guessed genuine
idx2 = predictions.ravel() > d # guessed forged
tp = float(np.sum(labels[idx1] == 1))
tn = float(np.sum(labels[idx2] == 0))
fp = float(np.sum(labels[idx1] == 0))
fn = float(np.sum(labels[idx2] == 1))
tpr = float(np.sum(labels[idx1] == 1)) / nsame
tnr = float(np.sum(labels[idx2] == 0)) / ndiff
acc = 0.5 * (tpr + tnr)
pr = tp / (tp + fp)
re = tp / (tp + fn)
if (acc > max_acc):
max_acc, best_thresh = acc, d
far = fp / (fp + tn)
frr = fn / (fn + tp)
frr_plot.append(frr)
pr_plot.append(pr)
re_plot.append(re)
far_plot.append(far)
ds.append(d)
plot_metrics = [ds, far_plot, frr_plot, pr_plot, re_plot]
return max_acc, best_thresh, plot_metrics
def model_evaluation(model):
print("\nEvaluating model...", end="")
pred = model.predict(pairs)
acc, thresh, plot_metrics = compute_accuracy_roc(pred, targets)
print("evaluation finished!\n")
ACCURACIES.append(acc)
THRESHOLDS.append(thresh)
PLOTS.append(plot_metrics)
```
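As a quick sanity check, the threshold search used in `compute_accuracy_roc` can be exercised on synthetic distances (toy data, not taken from the notebook's datasets):

```
import numpy as np

# synthetic distances: genuine pairs (label 1) tend to have small distances
rng = np.random.default_rng(0)
predictions = np.concatenate([rng.normal(0.3, 0.1, 100),   # genuine
                              rng.normal(0.9, 0.1, 100)])  # forged
labels = np.concatenate([np.ones(100), np.zeros(100)])

best_acc, best_thresh = 0.0, None
for d in np.arange(predictions.min(), predictions.max(), 0.01):
    tpr = np.mean(predictions[labels == 1] <= d)   # genuine accepted
    tnr = np.mean(predictions[labels == 0] > d)    # forged rejected
    acc = 0.5 * (tpr + tnr)
    if acc > best_acc:
        best_acc, best_thresh = acc, d
```

With two well-separated distance clusters, the search should settle on a threshold between the cluster centers and near-perfect balanced accuracy.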
## Visualizing models
```
def visualize_history():
losses = ['loss', 'val_loss']
accs = ['accuracy', 'val_accuracy']
fig, ax = plt.subplots(3, 2, sharex=True, sharey=True, figsize=(8,8))
for i in range(3):
for x, y in zip(losses, accs):
ax[i,0].plot(ALL_HISTORY[i].history[x])
ax[i,0].set_title('Losses')
ax[i,1].plot(ALL_HISTORY[i].history[y])
ax[i,1].set_title('Accuracies')
ax[i,0].legend(losses)
ax[i,1].legend(accs)
plt.grid(True)
plt.tight_layout()
def evaluation_plots(metrics):
ds = metrics[0]
far_plot = metrics[1]
frr_plot = metrics[2]
pr_plot = metrics[3]
re_plot = metrics[4]
fig = plt.figure(figsize=(15,6))
# error rate
ax = fig.add_subplot(121)
ax.plot(ds, far_plot, color='red')
ax.plot(ds, frr_plot, color='blue')
ax.set_title('Error rate')
ax.legend(['FAR', 'FRR'])
ax.set(xlabel = 'Thresholds', ylabel='Error rate')
# precision-recall curve
ax1 = fig.add_subplot(122)
ax1.plot(ds, pr_plot, color='green')
ax1.plot(ds, re_plot, color='magenta')
ax1.set_title('P-R curve')
ax1.legend(['Precision', 'Recall'])
    ax1.set(xlabel='Thresholds', ylabel='Precision / Recall')
plt.show()
```
## Everything put together
```
# paths to datasets
PATHS = [
[
'data\\pickle-files\\cedar_pairs1_train.pickle',
'data\\pickle-files\\cedar_pairs1_pairs.pickle',
'data\\pickle-files\\cedar_pairs1_targets.pickle'
],
[
        'data\\pickle-files\\bengali_pairs1_train.pickle',
        'data\\pickle-files\\bengali_pairs1_pairs.pickle',
'data\\pickle-files\\bengali_pairs1_targets.pickle'
],
[
'data\\pickle-files\\hindi_pairs1_train.pickle',
'data\\pickle-files\\hindi_pairs1_pairs.pickle',
'data\\pickle-files\\hindi_pairs1_targets.pickle'
]
]
# for kaggle
# PATHS = [
# [
# '../usr/lib/preprocess/cedar_pairs1_train.pickle',
# '../usr/lib/preprocess/cedar_pairs1_pairs.pickle',
# '../usr/lib/preprocess/cedar_pairs1_targets.pickle'
# ],
# [
# '../usr/lib/preprocess/bengali_pairs1_train.pickle',
# '../usr/lib/preprocess/bengali_pairs1_pairs.pickle',
# '../usr/lib/preprocess/bengali_pairs1_targets.pickle'
# ],
# [
# '../usr/lib/preprocess/hindi_pairs1_train.pickle',
# '../usr/lib/preprocess/hindi_pairs1_pairs.pickle',
# '../usr/lib/preprocess/hindi_pairs1_targets.pickle'
# ]
# ]
# evaluation
ALL_HISTORY = []
ACCURACIES = []
THRESHOLDS = []
PLOTS = []
for i in range(3):
data, pairs, targets = load_dataset(i)
data_shapes()
for bs in range(0, 3*42, 42):
plot_13(0+bs, 20+bs, 41+bs)
print()
if i == 0:
siamese_net = model_setup(True)
model_training(siamese_net, 'siamese_cedar')
elif i == 1:
siamese_net = model_setup()
model_training(siamese_net, 'siamese_bengali')
elif i == 2:
siamese_net = model_setup()
model_training(siamese_net, 'siamese_hindi')
model_evaluation(siamese_net)
del data
del pairs
del targets
visualize_history()
df = DataFrame.from_dict({'Accuracies': ACCURACIES,
'Thresholds': THRESHOLDS})
df.index = ['Cedar', 'BhSig260 Bengali', 'BhSig260 Hindi']
df
for met in PLOTS:
evaluation_plots(met)
```
```
import numpy as np
import matplotlib.pyplot as plt
from skhep.dataset.numpydataset import *
import uproot
from skhep.dataset.selection import Selection
import ROOT
from Utilities.utilities import destruct_objects
from Utilities.RooFit import RooDataset, RemoveEmptyBins
from PyLHCb.Root.RooFitUtils import ResidualPlot
import probfit
import iminuit
# Proxies for RooFit classes
RooFit = ROOT.RooFit
RooRealVar = ROOT.RooRealVar
RooArgList = ROOT.RooArgList
RooArgSet = ROOT.RooArgSet
RooDataSet = ROOT.RooDataSet
RooAddPdf = ROOT.RooAddPdf
RooProdPdf = ROOT.RooProdPdf
RooExtendPdf = ROOT.RooExtendPdf
RooConst = ROOT.RooFit.RooConst
RooExponential= ROOT.RooExponential
RooStats = ROOT.RooStats
RooGaussian = ROOT.RooGaussian
RooWorkspace = ROOT.RooWorkspace
RooWorkspace.rfimport = getattr(RooWorkspace,'import')
#background only
np.random.seed(10)
tau = -2.0
beta = -1/tau
data = np.random.exponential(beta, 1000)
data = data[(data > 0.1) & (data < 3)]
plt.hist(data, bins=100, histtype='step');
exp_n = probfit.Normalized(probfit.exponential, (0.1,3.))
UL_exp = probfit.UnbinnedLH(exp_n, data)
initial_params = {"lambda": 2.0, "limit_lambda": (0.0, 5.0) , "error_lambda" : 0.05,}
minuit_exp = iminuit.Minuit(UL_exp, **initial_params, pedantic=True)
minuit_exp.migrad();
np.random.seed(0)
tau = -2.0
beta = -1/tau
data = np.random.exponential(beta, 300)
peak = np.random.normal(1.2, 0.1, 10)
data = np.concatenate((data,peak))
data = data[(data > 0.1) & (data < 3)]
plt.hist(data, bins=100, histtype='step');
ws = RooWorkspace("ws")
x = RooRealVar("x","x",0.1,3.0)
ws.rfimport(x)
roodataset = RooDataset( "data", x)
roodataset.fill( data )
roodataset.Print('v')
roodataset.to_wspace( ws )
### signal
mean = RooConst(1.2)
sigma = RooConst(0.1)
gauss = RooGaussian("signal","signal", x, mean, sigma)
nsig = RooRealVar("nsig", "nsig", 0, -10, len((data)))
gauss_norm = RooExtendPdf("gauss_norm", "gauss_norm", gauss, nsig)
### background
tau = RooRealVar("tau", "tau", -2.0, -5, -0.1)
exp = RooExponential("bkg","bkg", x, tau)
nbkg = RooRealVar("nbkg", "nbkg", len(data), 0, len((data))*1.1)
exp_norm = RooExtendPdf("exp_norm", "exp_norm", exp, nbkg)
constraint_tau = RooGaussian("constraint_tau", "constraint_tau", tau, RooConst(-minuit_exp.values["lambda"]), RooConst(minuit_exp.errors["lambda"]))
### total
totpdf = RooAddPdf("totpdf","totpdf",RooArgList(gauss_norm,exp_norm))
#totpdf_c = RooAddPdf("totpdf","totpdf",RooArgList(gauss_norm,exp_norm))
totpdf_c = RooProdPdf("totpdf_c", "totpdf_c", RooArgList(totpdf, constraint_tau))
#ws.rfimport(totpdf)
ws.rfimport(totpdf_c)
ws.Print('V')
dataset = ws.data("data")
fitResult = totpdf_c.fitTo(dataset, ROOT.RooFit.Extended(), ROOT.RooFit.Minos(ROOT.kFALSE), ROOT.RooFit.Save(ROOT.kTRUE), ROOT.RooFit.Constrain(RooArgSet(tau)))
c = ROOT.TCanvas()
frame = x.frame(50)
dataset.plotOn( frame, ROOT.RooFit.Name('data_print'))
RemoveEmptyBins( frame, 'data_print')
totpdf_c.plotOn( frame, ROOT.RooFit.Name('model_print'))
frame.Draw()
Plot = ResidualPlot('title1', frame)
Plot.addResidual( 'data_print', 'model_print', 0.1, 3.0)
Plot.plot()
Plot.canvas.GetListOfPrimitives().At(0).cd()
c.Draw()
ws.defineSet('POI', "nsig")
ws.defineSet('OBS', 'x')
ws.defineSet('NUI', 'nbkg,tau')
conf = RooStats.ModelConfig('model', ws)
conf.SetPdf(ws.pdf('totpdf_c'))
conf.SetParametersOfInterest(ws.set('POI'))
conf.SetObservables(ws.set('OBS'))
conf.SetNuisanceParameters(ws.set('NUI'))
POI = ws.set('POI')
poi = POI.first()
#S+B model
model_sb = conf
model_sb.SetName("MODEL_SB")
#poi.setVal(35)
model_sb.SetSnapshot(RooArgSet(poi))
#BKG only
model_b = conf.Clone()
model_b.SetName("MODEL_B")
oldval = poi.getVal()
poi.setVal(0)
model_b.SetSnapshot( RooArgSet(poi) )
#poi.setVal(oldval)
ws.rfimport(model_sb)
ws.rfimport(model_b)
data = ws.data("data")
model_sb = ws.obj('MODEL_SB')
model_b = ws.obj('MODEL_B')
data.Print()
model_b.Print()
model_sb.Print()
#Execution
calc = ROOT.RooStats.FrequentistCalculator(data, model_b, model_sb)
calc.SetToys(2000,1000)
#calc = ROOT.RooStats.AsymptoticCalculator(data, model_b, model_sb)
#calc = ROOT.RooStats.AsymptoticCalculator(data, model_sb, model_b)
#calc.SetOneSided(True)
#calc.SetQTilde(False)
#calc.SetPrintLevel(0)
#calc.SetOneSidedDiscovery(True)
res = calc.GetHypoTest()
#res = ROOT.RooStats.HypoTestInverter(calc)
res.Print()
test = RooStats.HypoTestInverter(calc)
test.SetConfidenceLevel(0.95)
test.UseCLs(True)
toysmc = test.GetHypoTestCalculator().GetTestStatSampler()
#RooStats.ProfileLikelihoodTestStat.SetAlwaysReuseNLL(True)
profil = RooStats.ProfileLikelihoodTestStat(model_sb.GetPdf())
profil.SetOneSided(True)
toysmc.SetTestStatistic(profil)
test.SetFixedScan(10, 0.1, 25)
r = test.GetInterval()
r.Print()
plot = RooStats.HypoTestInverterPlot("alla","blabal", r)
c = ROOT.TCanvas("Scan")
plot.Draw("CLb 2CL")
c.Draw()
print("\n \n")
print("Obs upper limit {0}".format(r.UpperLimit()))
print("Exp upper limit {0}".format(r.GetExpectedUpperLimit(0)))
print("Exp upper limit 1sigma {0}".format(r.GetExpectedUpperLimit(1)))
print("Exp upper limit 2sigma {0}".format(r.GetExpectedUpperLimit(2)))
print("Exp upper limit -1sigma {0}".format(r.GetExpectedUpperLimit(-1)))
print("Exp upper limit -2sigma {0}".format(r.GetExpectedUpperLimit(-2)))
```
# Example 10 A: Inverted Pendulum with Wall
```
import numpy as np
import scipy.linalg as spa
import pypolycontain as pp
import pydrake.solvers.mathematicalprogram as MP
import pydrake.solvers.gurobi as Gurobi_drake
# use Gurobi solver
global gurobi_solver, license
gurobi_solver=Gurobi_drake.GurobiSolver()
license = gurobi_solver.AcquireLicense()
import pypolycontain as pp
import pypolycontain.pwa_control as pwa
import matplotlib.pyplot as plt
```
## Dynamics and matrices
The system is constrained to $|\theta| \le 0.12$, $|\dot{\theta}| \le 1$, $|u| \le 4$, and the wall is situated at $\theta=0.1$. The problem is to identify a set of states $\mathcal{X} \subseteq \mathbb{R}^2$ and an associated control law $\mu: [-0.12,0.12] \times [-1,1] \rightarrow [-4,4]$ such that all states in $\mathcal{X}$ are steered to the origin in finite time while respecting the constraints. It is desired that $\mathcal{X}$ is as large as possible. The dynamical system is described as a hybrid system with two modes, "contact-free" and "contact". The piecewise affine dynamics are given as:
\begin{equation*}
A_1=
\left(
\begin{array}{cc}
1 & 0.01 \\
0.1 & 1
\end{array}
\right),
A_2=
\left(
\begin{array}{cc}
1 & 0.01 \\
-9.9 & 1
\end{array}
\right),
\end{equation*}
\begin{equation*}
B_1=B_2=
\left(
\begin{array}{c}
0 \\ 0.01
\end{array}
\right),
c_1=
\left(
\begin{array}{c}
0 \\ 0
\end{array}
\right) ,
c_2=
\left(
\begin{array}{c}
0 \\ 1
\end{array}
\right),
\end{equation*}
where modes 1 and 2 correspond to the contact-free ($\theta \le 0.1$) and contact ($\theta > 0.1$) dynamics, respectively.
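A minimal sketch of these dynamics as a one-step simulation (plain NumPy, independent of the `pypolycontain` machinery used below):

```
import numpy as np

A1 = np.array([[1.0, 0.01], [0.1, 1.0]])    # contact-free mode
A2 = np.array([[1.0, 0.01], [-9.9, 1.0]])   # contact mode
B = np.array([[0.0], [0.01]])
c1 = np.array([[0.0], [0.0]])
c2 = np.array([[0.0], [1.0]])

def step(x, u):
    """One discrete step of the piecewise affine pendulum dynamics."""
    if x[0, 0] <= 0.1:              # mode 1: contact-free
        return A1 @ x + B * u + c1
    return A2 @ x + B * u + c2      # mode 2: in contact with the wall

x_next = step(np.array([[0.05], [0.0]]), 0.0)
```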
```
A=np.array([[1,0.01],[0.1,1]])
B=np.array([0,0.01]).reshape(2,1)
c=np.array([0,0]).reshape(2,1)
C=pp.unitbox(N=3).H_polytope
C.h=np.array([0.1,1,4,0.1,1,4]).reshape(6,1)
S1=pwa.affine_system(A,B,c,name='free',XU=C)
# X=pp.zonotope(G=np.array([[0.1,0],[0,1]]))
# U=pp.zonotope(G=np.ones((1,1))*4)
# W=pp.zonotope(G=np.array([[0.1,0],[0,1]]))
# Omega=rci_old(A, B, X, U , W, q=5,eta=0.001)
import pickle
(H,h)=pickle.load(open('example_inverted_pendulum_H.pkl','rb'))
Omega=pp.H_polytope(H, h)
A=np.array([[1,0.01],[-9.9,1]])
B=np.array([0,0.01]).reshape(2,1)
c=np.array([0,1]).reshape(2,1)
C=pp.unitbox(N=3).H_polytope
C.h=np.array([0.12,1,4,-0.1,1,4]).reshape(6,1)
S2=pwa.affine_system(A,B,c,name='contact',XU=C)
myS=pwa.pwa_system()
myS.add_mode(S1)
myS.add_mode(S2)
```
## A Polytopic Trajectory
```
T=50
goal=0.0001*pp.unitbox(2).H_polytope
x0=np.array([0,0.75]).reshape(2,1)
F,FH,_,_,_=pwa.extend(myS,x0,T,[goal],H_rep=False,color='blue')
fig,ax=plt.subplots()
pp.visualize(F,fig=fig,ax=ax,a=0.01,alpha=0.9)
ax.set_xlabel(r'$\theta$', fontsize=30)
ax.set_ylabel(r'$\dot{\theta}$', fontsize=30)
ax.set_title('A Polytopic Trajectory (Blue)', fontsize=30)
ax.axvline(x=0.1, linewidth=1, linestyle=':', color='black')
```
## My first branch: connect polytopic trajectories
```
T=18
x0=np.array([0.075,0]).reshape(2,1)
F2,_,_,_,_=pwa.extend(myS,x0,T,F,H_rep=False,color='red')
fig,ax=plt.subplots()
pp.visualize(F+F2,fig=fig,ax=ax,a=0.01,alpha=0.9)
ax.set_xlabel(r'$\theta$', fontsize=30)
ax.set_ylabel(r'$\dot{\theta}$', fontsize=30)
ax.set_title('A Branch Added (red)', fontsize=30)
ax.axvline(x=0.1, linewidth=1, linestyle=':', color='black')
```
## Building A Tree
```
def sampler():
L=np.array([0.12,1])
return np.random.uniform(-L,L).reshape(2,1)
T=10
list_of_H_polytopes=[Omega]
list_of_nodes=[Omega]
stop_sampling=False
sample=lambda :sampler()
branch=0
trajectory={}
i=0
while branch<30 and i<500:
i+=1
print("i:",i, "branch:", branch)
while not stop_sampling:
x0=sample()
flag=pwa.in_the_tree(x0,list_of_H_polytopes)
stop_sampling=not flag
try:
print("sample:",x0.T)
x,u,mu=pwa.point_trajectory(myS,x0,T=60,goal=Omega,Q=np.eye(2)*1)
Y,YY,xx,mumu,G=pwa.extend(myS,x0,T,list_of_nodes)
trajectory[branch]=(x,u,mu,xx,mumu,G)
# Y,YY=extend(x0,T,[Omega])
list_of_nodes.extend(Y)
list_of_H_polytopes.extend(YY)
branch+=1
except:
print('failed to extend')
stop_sampling=False
```
## Visualization
```
fig,ax=plt.subplots()
pp.visualize([Omega]+list_of_nodes,fig=fig,ax=ax,a=0.01,alpha=0.9)
ax.set_xlabel(r'$\theta$', fontsize=30)
ax.set_ylabel(r'$\dot{\theta}$', fontsize=30)
ax.set_title('%d Branches %d AH-polytopes'%(branch,len(list_of_nodes)), fontsize=30)
ax.axvline(x=0.1, linewidth=1, linestyle=':', color='black')
```
### Studying Coverage
We generate random points and check how many are feasible and how many are covered by the tree.
```
Trials=200
covered=0
false_positive=0
feasible=0
feasible_but_not_covered_by_N_10=0
for N in range(Trials):
x0=sample()
print(N)
try:
_,_,_=pwa.point_trajectory(myS,x0,T=50,goal=Omega,Q=np.eye(2)*100)
feasible+=1
covered+=pwa.in_the_tree(x0,list_of_H_polytopes)
try:
_,_,_=pwa.point_trajectory(myS,x0,T=10,goal=Omega,Q=np.eye(2)*100)
except:
feasible_but_not_covered_by_N_10+=1
except:
false_positive+=pwa.in_the_tree(x0,list_of_H_polytopes)
print("feasible: %d covered: %d"%(feasible,covered))
print("covered by N=10: %d"%(feasible - feasible_but_not_covered_by_N_10))
print("infeasible: %d false positive because of H-rep over-approximation: %d"%(Trials-feasible,false_positive))
```
# ETHZ: 227-0966-00L
# Quantitative Big Imaging
# May 2, 2018
## Statistics and Reproducibility
```
%load_ext autoreload
%autoreload 2
import seaborn as sns
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (8, 8)
plt.rcParams["figure.dpi"] = 150
plt.rcParams["font.size"] = 14
plt.rcParams['font.family'] = ['sans-serif']
plt.rcParams['font.sans-serif'] = ['DejaVu Sans']
plt.style.use('ggplot')
sns.set_style("whitegrid", {'axes.grid': False})
```
# Literature / Useful References
### Books
- Jean Claude, Morphometry with R
- [Online](http://link.springer.com/book/10.1007%2F978-0-387-77789-4) through ETHZ
- __Chapter 3__
- [Buy it](http://www.amazon.com/Morphometrics-R-Use-Julien-Claude/dp/038777789X)
- John C. Russ, "The Image Processing Handbook", (Boca Raton, CRC Press)
- Available [online](http://dx.doi.org/10.1201/9780203881095) within domain ethz.ch (or proxy.ethz.ch / public VPN)
- [Hypothesis Testing Chapter](http://www.sagepub.com/upm-data/40007_Chapter8.pdf)
- Grammar of Graphics: Leland and Wilkinson - http://www.springer.com/gp/book/9780387245447
### Videos / Podcasts
- Google/Stanford Statistics Intro
- https://www.youtube.com/watch?v=YFC2KUmEebc
- MCB 140 P-value lecture at UC Berkeley (Audio)
- https://itunes.apple.com/us/itunes-u/mcb-140-fall-2007-general/id461120088?mt=10
- Correlation and Causation (Video)
- https://www.youtube.com/watch?v=YFC2KUmEebc
- Last Week Tonight: Scientific Studies (https://www.youtube.com/watch?v=0Rnq1NpHdmw)
### Papers / Sites
- [Matlab Unit Testing Documentation](http://www.mathworks.ch/ch/help/matlab/matlab-unit-test-framework.html)
- [Databases Introduction](http://swcarpentry.github.io/sql-novice-survey/)
- [Visualizing Genomic Data](http://circos.ca/documentation/course/visualizing-genomic-data.pdf) (General Visualization Techniques)
- [NIMRod Parameter Studies](http://www.messagelab.monash.edu.au/nimrod)
- M.E. Wolak, D.J. Fairbairn, Y.R. Paulsen (2012) Guidelines for Estimating Repeatability. Methods in Ecology and Evolution 3(1):129-137.
- David J.C. MacKay, Bayesian Interpolation (1991) [http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.27.9072]
### Model Evaluation
- [Julia Evans - Recalling with Precision](https://www.youtube.com/watch?v=ryZL4XNUmwo)
- [Stripe's Next Top Model](https://github.com/stripe/topmodel)
### Iris Dataset
- The Iris dataset was used in Fisher's classic 1936 paper, The Use of Multiple Measurements in Taxonomic Problems: http://rcs.chemometrics.ru/Tutorials/classification/Fisher.pdf
# Previously on QBI ...
- Image Enhancement
- Highlighting the contrast of interest in images
- Minimizing Noise
- Understanding image histograms
- Automatic Methods
- Component Labeling
- Single Shape Analysis
- Complicated Shapes
- Distribution Analysis
- Dynamic Experiments
# Quantitative "Big" Imaging
The course has covered imaging enough and there have been a few quantitative metrics, but "big" has not really entered.
What does __big__ mean?
- Not just / even large
- it means being ready for _big data_
- volume, velocity, variety (3 V's)
- scalable, fast, easy to customize
So what is "big" imaging
#### doing analyses in a disciplined manner
- fixed steps
- easy to regenerate results
- no _magic_
#### having everything automated
- 100 samples is as easy as 1 sample
#### being able to adapt and reuse analyses
- one really well working script and modify parameters
- different types of cells
- different regions
# Objectives
1. Scientific Studies all try to get to a single number
- Make sure this number is describing the structure well (what we have covered before)
- Making sure the number is meaningful (__today!__)
1. How do we compare the number from different samples and groups?
- Within a sample or same type of samples
- Between samples
1. How do we compare different processing steps like filter choice, minimum volume, resolution, etc?
1. How do we evaluate our parameter selection?
1. How can we ensure our techniques do what they are supposed to do?
1. How can we visualize so much data? Are there rules?
# Outline
- Motivation (Why and How?)
- Scientific Goals
- Reproducibility
- Predicting and Validating
- Statistical metrics and results
- Parameterization
- Parameter sweep
- Sensitivity analysis
- Unit Testing
- Visualization
# What do we start with?
Going back to our original cell image
1. We have been able to get rid of the noise in the image and find all the cells (lecture 2-4)
1. We have analyzed the shape of the cells using the shape tensor (lecture 5)
1. We even separated cells joined together using Watershed (lecture 6)
1. We have created even more metrics characterizing the distribution (lecture 7)
We have at least a few samples (or different regions), large number of metrics and an almost as large number of parameters to _tune_
### How do we do something meaningful with it?
# Correlation and Causation
One of the most repeated criticisms of scientific work is that correlation and causation are confused.
1. Correlation
- means a statistical relationship
- very easy to show (single calculation)
2. Causation
- implies there is a mechanism between A and B
- very difficult to show (impossible to prove)
# Controlled and Observational
There are two broad classes of data and scientific studies.
### Observational
- Exploring large datasets looking for trends
- Population is random
- Not always hypothesis driven
- Rarely leads to causation
We examined 100 people and the ones with blue eyes were on average 10cm taller
In 100 cake samples, we found a 0.9 correlation between cooking time and bubble size
### Controlled
- Most scientific studies fall into this category
- Specifics of the groups are controlled
- Can lead to causation
We examined 50 mice with gene XYZ off and 50 with gene XYZ on, and the foot size increased by 10%
We increased the temperature and the number of pores in the metal increased by 10%
# Simple Model: Magic / Weighted Coin
Most of the experiments in science are specific, noisy, and often very complicated, so they are not usually good teaching examples. Instead, we use a simpler model:
- Magic / Biased Coin
- You buy a _magic_ coin at a shop
- How many times do you need to flip it to _prove_ it is not fair?
- If I flip it 10 times and another person flips it 10 times, is that the same as 20 flips?
- If I flip it 10 times and then multiply the results by 10 is that the same as 100 flips?
- If I buy 10 coins and want to know which ones are fair what do I do?
# Simple Model: Magic / Weighted Coin
1. Each coin represents a stochastic variable $\mathcal{X}$ and each flip represents an observation $\mathcal{X}_i$.
1. The act of performing a coin flip $\mathcal{F}$ is an observation $\mathcal{X}_i = \mathcal{F}(\mathcal{X})$
We normally assume
1. A _fair_ coin has an expected value of $E(\mathcal{X})=0.5 \rightarrow$ 50% Heads, 50% Tails
1. An _unbiased_ flip(er) means
- each flip is independent of the others
$$ P(\mathcal{F}_1(\mathcal{X})*\mathcal{F}_2(\mathcal{X}))= P(\mathcal{F}_1(\mathcal{X}))*P(\mathcal{F}_2(\mathcal{X}))$$
- the expected value of the flip is the same as that of the coin
$$ E(\prod_{i=0}^\infty \mathcal{F}_i(\mathcal{X})) = E(\mathcal{X}) $$
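These assumptions can be checked numerically: a fair coin with an unbiased flipper should converge to the coin's expected value as the number of flips grows (a simple NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(42)
flips = rng.integers(0, 2, size=10_000)  # 10,000 independent fair flips
estimate = flips.mean()                  # should approach E(X) = 0.5
```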
# Simple Model to Reality
### Coin Flip
1. Each flip gives us a small piece of information about the flipper and the coin
1. More flips provides more information
- Random / Stochastic variations in coin and flipper cancel out
- Systematic variations accumulate
### Real Experiment
1. Each measurement tells us about our sample, our instrument, and our analysis
2. More measurements provide more information
- Random / Stochastic variations in sample, instrument, and analysis cancel out
- _Normally_ the analysis has very little to no stochastic variation
- Systematic variations accumulate
# Iris: A more complicated model
Coin flips are very simple and probably difficult to match to another experiment. A very popular dataset for learning about statistics beyond coin flips is the Iris dataset, which covers a number of measurements from different plants and the corresponding species.
```{r, results='asis'}
iris %>% sample_n(5) %>% kable
```
```{r}
iris %>%
mutate(plant.id=1:nrow(iris)) %>%
melt(id.vars=c("Species","plant.id"))->flat_iris
flat_iris %>%
merge(flat_iris,by=c("Species","plant.id")) %>%
ggplot(aes(value.x,value.y,color=Species)) +
geom_jitter()+
facet_grid(variable.x~variable.y,scales="free")+
theme_bw(10)
```
# Reproducibility
A very broad topic with plenty of sub-areas and deeper meanings. We mean two things by reproducibility
### Analysis
The process of going from images to numbers is detailed in a clear manner that _anyone_, _anywhere_ could follow and get the exact (within some tolerance) same numbers from your samples
- No platform dependence
- No proprietary or "in house" algorithms
- No manual _clicking_, _tweaking_, or _copying_
- One script to go from image to result
### Measurement
Everything for analysis + taking a measurement several times (noise and exact alignment vary each time) does not change the statistics _significantly_
- No sensitivity to mounting or rotation
- No sensitivity to noise
- No dependence on exact illumination
# Reproducible Analysis
The basis for reproducible scripts and analysis are scripts and macros. Since we will need to perform the same analysis many times to understand how reproducible it is.
```bash
IMAGEFILE=$1
THRESHOLD=130
matlab -r "inImage=$IMAGEFILE; threshImage=inImage>$THRESHOLD; analysisScript;"
```
- __or__
```java -jar ij.jar -macro TestMacro.ijm blobs.tif```
- __or__
```Rscript -e "library(plyr);..."```
# Comparing Groups: Intraclass Correlation Coefficient
The intraclass correlation coefficient basically looks at how similar objects within a group are compared to objects in other groups
```{r}
ggplot(iris,aes(x=Species,y=Sepal.Width))+
geom_boxplot()+
geom_jitter()+
labs(x="Species",y="Sepal Width",title="Low Group Similarity")+
theme_bw(20)
```
```{r}
ggplot(iris,aes(x=Species,y=Petal.Length))+
geom_boxplot()+
geom_jitter()+
labs(x="Species",y="Petal Length",title="High Group Similarity")+
theme_bw(20)
```
# Intraclass Correlation Coefficient Definition
$$ ICC = \frac{S_A^2}{S_A^2+S_W^2} $$
where
- $S_A^2$ is the variance among groups or classes
- Estimate with the standard deviations of the mean values for each group
- $S_W^2$ is the variance within groups or classes.
- Estimate with the average of standard deviations for each group
- 1 means 100% of the variance is between classes
- 0 means 0% of the variance is between classes
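This definition can be sketched in Python using the simple estimates described above (synthetic data; the R code below uses the `ICC` package's `ICCbare` instead):

```python
import numpy as np

def icc_estimate(groups):
    """Rough ICC estimate: between-group variance over total variance.

    groups -- list of 1-D arrays, one per class (hypothetical data).
    """
    means = np.array([g.mean() for g in groups])
    s_a2 = means.var(ddof=1)                         # variance among group means
    s_w2 = np.mean([g.var(ddof=1) for g in groups])  # mean within-group variance
    return s_a2 / (s_a2 + s_w2)

rng = np.random.default_rng(0)
high = [rng.normal(mu, 0.1, 50) for mu in (1.0, 3.0, 5.0)]  # well-separated groups
low = [rng.normal(0.0, 1.0, 50) for _ in range(3)]          # overlapping groups
icc_high = icc_estimate(high)  # close to 1
icc_low = icc_estimate(low)    # close to 0
```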
# Intraclass Correlation Coefficient: Values
```{r}
library("ICC")
icVal<-ICCbare(Species,Sepal.Width,data=iris)
ggplot(iris,aes(x=Species,y=Sepal.Width))+
geom_boxplot()+
geom_jitter()+
labs(x="Species",y="Sepal Width",title=sprintf("Low Group Similarity\n ICC:%2.2f",icVal))+
theme_bw(20)
```
```{r}
icVal<-ICCbare(Species,Petal.Length,data=iris)
ggplot(iris,aes(x=Species,y=Petal.Length))+
geom_boxplot()+
geom_jitter()+
  labs(x="Species",y="Petal Length",title=sprintf("High Group Similarity\n ICC:%2.2f",icVal))+
theme_bw(20)
```
# Intraclass Correlation Coefficient: Values for Coin-Flips
We have one biased coin and try to figure out how many flips we need for the ICC to tell the difference to the normal coin
```{r}
name.list<-c("Coin A","Coin B")
test.data<-plyr::ldply(c(1:length(name.list)),function(class.id) {
data.frame(name=name.list[class.id],values=runif(100,max=1+2*(class.id-1))>0.5)
})
icVal<-ICCbare(name,values,data=test.data)
ggplot(test.data,aes(x=name,y=values))+
geom_jitter()+
labs(x="Groups",y="Value",title=sprintf("100 flips\n ICC:%2.2f",icVal))+
theme_bw(20)
```
With many thousands of flips we eventually see a very strong difference, but unless the coin is very strongly biased, ICC is a poor indicator of the differences
```{r}
name.list<-c("Coin A","Coin B")
test.data<-plyr::ldply(c(1:length(name.list)),function(class.id) {
data.frame(name=name.list[class.id],values=runif(20000,max=1+2*(class.id-1))>0.5)
})
icVal<-ICCbare(name,values,data=test.data)
ggplot(test.data,aes(x=name,y=values))+
geom_jitter()+
labs(x="Groups",y="Value",title=sprintf("20,000 flips\n ICC:%2.2f",icVal))+
theme_bw(20)
```
# Comparing Groups: Tests
Once the reproducibility has been measured, it is possible to compare groups. The idea is to make a test to assess the likelihood that two groups are the same given the data
1. List assumptions
1. Establish a null hypothesis
- Usually both groups are the same
1. Calculate the probability of the observations given the truth of the null hypothesis
- Requires knowledge of probability distribution of the data
- Modeling can be exceptionally complicated
### Loaded Coin
We have 1 coin from a magic shop
- our assumptions are
- we flip and observe flips of coins accurately and independently
- the coin is invariant and always has the same expected value
- our null hypothesis is the coin is unbiased $E(\mathcal{X})=0.5$
- we can calculate the likelihood of a given observation given the number of flips (p-value)
```{r, results='asis'}
n.flips<-c(1,5,10)
cf.table<-data.frame(No.Flips=n.flips,PAH=paste(round(1000*0.5^n.flips)/10,"%"))
names(cf.table)<-c("Number of Flips","Probability of All Heads Given Null Hypothesis (p-value)")
kable(cf.table)
```
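The table entries follow directly from $P(\text{all heads}) = 0.5^n$ for a fair coin:

```python
# p-value for observing all heads in n flips of a fair coin
pvals = {n: 0.5 ** n for n in (1, 5, 10)}
```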
How good is good enough?
# Comparing Groups: Student's T Distribution
We do not usually know our distribution very well, _nor_ do we have enough samples to build a full probability model, so we rely on the Student's t-distribution.
### [Student T Distribution](http://en.wikipedia.org/wiki/Student's_t-distribution)
We assume the distribution of our stochastic variable is normal (Gaussian) and the t-distribution provides an estimate for the mean of the underlying distribution based on few observations.
- We estimate the likelihood of our observed values assuming they are coming from random observations of a normal process
### Student T-Test
Incorporates this distribution and provides an easy method for assessing the likelihood that the two given set of observations are coming from the same underlying process (null hypothesis)
- Assume unbiased observations
- Assume normal distribution
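The underlying statistic can be sketched in a few lines of NumPy (Welch's unequal-variance form; in practice `scipy.stats.ttest_ind` computes both the statistic and the p-value):

```python
import numpy as np

def welch_t(a, b):
    """Welch's t-statistic for two independent samples (unequal variances)."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

rng = np.random.default_rng(1)
t_same = welch_t(rng.normal(0, 1, 30), rng.normal(0, 1, 30))  # null hypothesis true
t_diff = welch_t(rng.normal(0, 1, 30), rng.normal(1, 1, 30))  # means differ by 1 sd
```

When the null hypothesis holds, the statistic stays small; a real one-standard-deviation shift produces a large (here negative) value.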
# Multiple Testing Bias
Back to the magic coin: let's assume we are trying to publish a paper, and we heard that a p-value of < 0.05 (5%) was good enough. That means if we get 5 heads we are good!
```{r, results='asis'}
n.flips<-c(1,4,5)
cf.table<-data.frame(No.Flips=n.flips,PAH=paste(round(1000*0.5^n.flips)/10,"%"))
names(cf.table)<-c("Number of Flips","Probability of All Heads Given Null Hypothesis (p-value)")
kable(cf.table)
```
```{r, results='asis'}
n.friends<-c(1,10,20,40,80)
cfr.table<-data.frame(No.Friends=n.friends,PAH=paste(round((1000*(1-(1-0.5^5)^n.friends)))/10,"%"))
names(cfr.table)<-c("Number of Friends Flipping","Probability Someone Flips 5 heads")
kable(cfr.table)
```
Clearly this is not the case, otherwise we could keep flipping coins or ask all of our friends to flip until we got 5 heads and publish
The p-value is only meaningful when the experiment matches what we did.
- We didn't say the chance of getting 5 heads ever was < 5%
- We said if we have exactly 5 observations and all of them are heads the likelihood that a fair coin produced that result is <5%
There are many [methods](http://en.wikipedia.org/wiki/Multiple_comparisons_problem) to correct for this; most just involve scaling $p$ by the number of tests performed. The likelihood of a sequence of 5 heads in a row somewhere in 10 flips is about 5x higher than in exactly 5 flips.
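The family-wise blow-up from the friends table, and the simple Bonferroni-style scaling that counteracts it, can be reproduced in a few lines:

```python
p_single = 0.5 ** 5                         # 3.125% chance of 5 heads for one person
p_any_80 = 1 - (1 - p_single) ** 80         # chance at least one of 80 friends succeeds

# scaling the threshold by the number of tests keeps the family-wise rate near alpha
alpha, m = 0.05, 80
fw_corrected = 1 - (1 - alpha / m) ** m     # ~ alpha
```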
# Multiple Testing Bias: Experiments
This is very bad news for us. We have the ability to quantify all sorts of interesting metrics
- cell distance to other cells
- cell oblateness
- cell distribution oblateness
So let's throw them all into a magical statistics algorithm and push the __publish__ button.
With our p-value threshold of 0.05 and a study with 10 samples in each group, how does increasing the number of variables affect our result?
```{r simcode}
make.random.data<-function(n.groups=2,n.samples=10,n.vars=1,rand.fun=runif,group.off.fun=function(grp.id) 0) {
ldply(1:n.groups,function(c.group) {
data.frame(group=c.group,
do.call(cbind,llply(1:n.vars,function(c.var) group.off.fun(c.group)+rand.fun(n.samples)))
)
})
}
# only works for two groups
all.t.test<-function(in.data) {
group1<-subset(in.data,group==1)[,-1,drop=F]
group2<-subset(in.data,group==2)[,-1,drop=F]
ldply(1:ncol(group1),function(var.id) {
tres<-t.test(group1[,var.id],group2[,var.id])
data.frame(var.col=var.id,
p.val=tres$p.value,
method=tres$method,
var.count=ncol(group1),
sample.count=nrow(in.data))
}
)
}
# run the entire analysis several times to get an average
test.random.data<-function(n.test=10,...) {
ldply(1:n.test,function(c.test) cbind(test.num=c.test,all.t.test(make.random.data(...))))
}
```
```{r rand.sim}
var.range<-round(seq(1,60,length.out=15))
test.cnt<-80
sim.data<-ldply(var.range,function(n.vars) test.random.data(test.cnt,n.vars=n.vars))
sig.likelihood<-ddply(sim.data,.(var.count),function(c.tests) {
data.frame(sig.vars=nrow(subset(c.tests,p.val<=0.05))/length(unique(c.tests$test.num)))
})
```
```{r, fig.height=5}
ggplot(sig.likelihood,
aes(x=var.count,y=sig.vars))+
geom_point()+geom_line()+
labs(x="Number of Variables in Study",y="Number of Significant \n (P<0.05) Findings")+
theme_bw(20)
```
# Multiple Testing Bias: Correction
Using the simple correction factor (number of tests performed), we can make the significant findings constant again
```{r, fig.height=5}
sig.likelihood.corr<-ddply(sim.data,.(var.count),function(c.tests) {
data.frame(sig.vars=nrow(subset(c.tests,p.val<=0.05/var.count))/length(unique(c.tests$test.num)))
})
ggplot(sig.likelihood.corr,
aes(x=var.count,y=sig.vars))+
geom_point()+geom_line(aes(color="Corrected"))+
geom_point(data=sig.likelihood)+
geom_line(data=sig.likelihood,aes(color="Non-Corrected"))+
geom_hline(yintercept=0.05,color="green",alpha=0.4,size=2)+
scale_y_sqrt()+
labs(x="Number of Variables in Study",y="Number of Significant \n (P<0.05) Findings")+
theme_bw(20)
```
So no harm done there, we just add this correction factor, right?
Well, what if we have exactly one variable with a shift of 1.0 standard deviations between the two groups?
```{r rand.sim.diff}
var.range<-round(seq(10,60,length.out=10))
test.cnt<-100
one.diff.sample<-function(grp.id) ifelse(grp.id==2,.10,0)
sim.data.diff<-ldply(var.range,function(n.samples)
test.random.data(test.cnt,n.samples=n.samples,
rand.fun=function(n.cnt) rnorm(n.cnt,mean=1,sd=0.1),
group.off.fun=one.diff.sample))
```
```{r, fig.height=5}
ggplot(sim.data.diff,aes(x=sample.count,y=p.val))+
geom_point()+
geom_smooth(aes(color=" 1 Variable"))+
geom_hline(yintercept=0.05,color="green",alpha=0.4,size=2)+
labs(x="Number of Samples in Study",y="P-Value for a 10% Difference")+
theme_bw(20)
```
# Multiple Testing Bias: Sample Size
```{r rand.sim.mcsample}
var.range<-c(1,5,10,20,100) # variable count
sim.data.psig<-ldply(var.range,function(c.vcnt) {
cbind(var.count=c.vcnt,ddply(sim.data.diff,.(sample.count),function(c.sample)
data.frame(prob.sig=nrow(subset(c.sample,p.val<=0.05/c.vcnt))/nrow(c.sample))
))
})
```
```{r, fig.height=9,fig.width=12}
ggplot(sim.data.psig,aes(x=sample.count,y=100*prob.sig))+
geom_line(aes(color=as.factor(var.count)),size=2)+
ylim(0,100)+
labs(x="Number of Samples in Study",y="Probability of Finding\n Significant Variable (%)",color="Variables")+
theme_bw(20)
```
# Predicting and Validating

- Borrowed from http://peekaboo-vision.blogspot.ch/2013/01/machine-learning-cheat-sheet-for-scikit.html
### Main Categories
- Classification
- Regression
- Clustering
- Dimensionality Reduction
# Overview
Basically all of these are ultimately functions which map inputs to outputs.
The input could be
- an image
- a point
- a feature vector
- or a multidimensional tensor
The output is
- a value (regression)
- a classification (classification)
- a group (clustering)
- a vector / matrix / tensor with _fewer_ degrees of freedom / less noise than the original data (dimensionality reduction)
### Overfitting
The most serious problem with machine learning and similar approaches is overfitting your model to your data. As models become increasingly complex (random forests, neural networks, deep learning, ...), it becomes more and more difficult to apply common sense, or even to understand exactly what a model is doing and why a given answer is produced.
```python
magic_classifier = {}
# training
magic_classifier['Dog'] = 'Animal'
magic_classifier['Bob'] = 'Person'
magic_classifier['Fish'] = 'Animal'
```
Now use this classifier: on the training data it works really well
```python
magic_classifier['Dog'] == 'Animal' # true, 1/1 so far!
magic_classifier['Bob'] == 'Person' # true, 2/2 still perfect!
magic_classifier['Fish'] == 'Animal' # true, 3/3, wow!
```
On new data it doesn't work at all, it doesn't even execute.
```python
magic_classifier['Octopus'] == 'Animal' # exception?! but it was working so well
magic_classifier['Dan'] == 'Person' # exception?!
```
The above example appeared to be a perfect classifier for mapping names to animals or people, but it just memorized the inputs and reproduced them at the output. It didn't actually learn anything; it just copied.
# Validation
Validation is relevant for each of the categories, but applied in a slightly different way depending on the group. The idea is to divide the dataset into groups called training and validation, or ideally training, validation, and testing. The analysis is then
- developed on __training__
- iteratively validated on __validation__
- ultimately tested on __testing__
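A minimal sketch of such a random split in Python, mirroring the 60/30/10 division used in this section (the function name is illustrative):

```python
import random

def split_dataset(rows, p_train=0.6, p_valid=0.3, seed=42):
    """Randomly assign each row to Training / Validation / Testing."""
    rng = random.Random(seed)
    groups = {"Training": [], "Validation": [], "Testing": []}
    for row in rows:
        r = rng.random()  # uniform on [0, 1)
        if r < p_train:
            groups["Training"].append(row)
        elif r < p_train + p_valid:
            groups["Validation"].append(row)
        else:
            groups["Testing"].append(row)
    return groups

groups = split_dataset(range(1000))
print({name: len(members) for name, members in groups.items()})
```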
# Concrete Example: Classifying Flowers
Here we return to the iris data set and try to automatically classify flowers
```{r, results = 'asis'}
iris %>% sample_n(5) %>% kable(digits=2)
```
# Dividing the data
We first decide on a split, in this case 60%, 30%, 10% for training, validation, and testing and randomly divide up the data.
```{r, echo=T, results='asis'}
div.iris<-iris %>%
mutate(
# generate a random number uniformly between 0 and 1
rand_value = runif(nrow(iris)),
# divide the data based on how high this number is into different groups
data_div = ifelse(rand_value<0.6,"Training",
ifelse(rand_value<0.9,"Validation",
"Testing")
)
) %>% select(-rand_value) # we don't need this anymore
div.iris %>% sample_n(4) %>% kable(digits=2)
```
Here are two relevant variables plotted against each other
```{r}
ggplot(div.iris,aes(Sepal.Length,Sepal.Width))+
geom_point(aes(shape=data_div,color=Species),size=2)+
labs(shape="Type")+
facet_grid(~data_div)+
coord_equal()+
theme_bw(10)
```
# Using a simple decision tree
Making a decision tree can be done by providing the output (```Species```) as a function of the inputs, in this case the sepal measurements (```Species~Sepal.Length+Sepal.Width```). From this formula, ```rpart``` learns a series of splitting rules.
```{r, echo=T}
library(rpart)
library(rpart.plot)
training.data <- div.iris %>% subset(data_div == "Training")
dec.tree<-rpart(Species~Sepal.Length+Sepal.Width,data=training.data)
```
A tree can be visualized graphically as a trunk (the top most node) dividing progressively into smaller subnodes
```{r}
prp(dec.tree)
```
or as a list of rules to apply
```{r, results='markdown'}
print(dec.tree)
```
Overlaying with the prediction data looks good
```{r}
match_range<-function(ivec,n_length=10) seq(from=min(ivec),to=max(ivec),length=n_length)
pred.map<-expand.grid(Sepal.Length = match_range(training.data$Sepal.Length),
Sepal.Width = match_range(training.data$Sepal.Width))
pred.map$pred_class<-predict(dec.tree,pred.map,type="class")
training.data %>%
mutate(pred_class=predict(dec.tree,training.data,type="class"),
class_result=ifelse(as.character(pred_class)==as.character(Species),"Correct","Incorrect")
)->training.data
ggplot(pred.map,aes(Sepal.Length,Sepal.Width))+
geom_tile(aes(fill=pred_class),alpha=0.5)+
geom_point(data=training.data,aes(color=Species,size=class_result))+
labs(fill="Predicted",size = "Incorrectly\nLabeled")+
theme_bw(20)
```
It struggles more with the validation data, since it has never seen it before and it is not quite the same as the training data
```{r}
valid.data<-div.iris %>% subset(data_div == "Validation")
valid.data %>%
mutate(pred_class=predict(dec.tree,valid.data,type="class"),
class_result=ifelse(as.character(pred_class)==as.character(Species),"Correct","Incorrect")
)->valid.data
ggplot(pred.map,aes(Sepal.Length,Sepal.Width))+
geom_tile(aes(fill=pred_class),alpha=0.5)+
geom_point(data=valid.data,aes(color=Species,size=class_result))+
labs(fill="Predicted",size = "Incorrectly\nLabeled")+
theme_bw(20)
```
The test data (__normally we would not look at it at all right now and wait until the very end__) looks even worse and an even smaller fraction is correctly matched.
```{r}
valid.data<-div.iris %>% subset(data_div == "Testing")
valid.data %>%
mutate(pred_class=predict(dec.tree,valid.data,type="class"),
class_result=ifelse(as.character(pred_class)==as.character(Species),"Correct","Incorrect")
)->valid.data
ggplot(pred.map,aes(Sepal.Length,Sepal.Width))+
geom_tile(aes(fill=pred_class),alpha=0.5)+
geom_point(data=valid.data,aes(color=Species,size=class_result))+
labs(fill="Predicted",size = "Incorrectly\nLabeled")+
theme_bw(20)
```
# Tricky Concrete Example: Classification
Taking a list of points (feature vectors) where each has an $x1$ and a $y1$ coordinate and a classification (_Happy_ or _Sad_), we can show the data as a table
```{r, results = 'asis'}
spiral.pts <- expand.grid(x = -50:50, y = -50:50) %>%
subset((x==0) | (y==0)) %>%
mutate(
r = sqrt(x^2+y^2),
th = r/60*2*pi,
x1 = cos(th)*x-sin(th)*y,
y1 = sin(th)*x+cos(th)*y,
class = ifelse(x==0,"Happy","Sad")
) %>%
select(x1,y1,class)
kable(spiral.pts %>% sample_n(5),digits=2)
```
Or graphically
```{r}
ggplot(spiral.pts,aes(x1,y1,color=class))+
geom_point()+
theme_bw(20)
```
You can play around with neural networks and this data set at the [TensorFlow Playground](https://playground.tensorflow.org)
# Dividing the data
We first decide on a split, in this case 60%, 30%, 10% for training, validation, and testing and randomly divide up the data.
```{r, echo=T, results='asis'}
div.spiral.pts<-spiral.pts %>%
mutate(
# generate a random number uniformly between 0 and 1
rand_value = runif(nrow(spiral.pts)),
# divide the data based on how high this number is into different groups
data_div = ifelse(rand_value<0.6,"Training",
ifelse(rand_value<0.9,"Validation",
"Testing")
)
) %>% select(-rand_value) # we don't need this anymore
div.spiral.pts %>% sample_n(4) %>% kable(digits=2)
```
```{r}
ggplot(div.spiral.pts,aes(x1,y1))+
geom_point(aes(shape=data_div,color=class),size=2)+
labs(shape="Type")+
facet_wrap(~data_div)+
coord_equal()+
theme_bw(20)
```
# Using a simple decision tree
Making a decision tree can be done by providing the output (```class```) as a function of the inputs, in this case just a combination of _x1_ and _y1_ (```class~x1+y1```). From this formula, ```rpart``` learns a series of splitting rules.
```{r, echo=T}
library(rpart)
library(rpart.plot)
training.data <- div.spiral.pts %>% subset(data_div == "Training")
dec.tree<-rpart(class~x1+y1,data=training.data)
```
A tree can be visualized graphically as a trunk (the top most node) dividing progressively into smaller subnodes
```{r}
prp(dec.tree)
```
or as a list of rules to apply
```{r, results='markdown'}
print(dec.tree)
```
Overlaying with the prediction data looks good
```{r}
pred.map<-expand.grid(x1 = -50:50, y1 = -50:50)
pred.map$pred_class<-ifelse(predict(dec.tree,pred.map)[,1]>0.5,"Happy","Sad")
training.data$pred_class<-ifelse(predict(dec.tree,training.data)[,1]>0.5,"Happy","Sad")
ggplot(pred.map,aes(x1,y1))+
geom_tile(aes(fill=pred_class),alpha=0.5)+
geom_point(data=training.data,aes(color=class,size=(pred_class!=class)))+
labs(fill="Predicted",size = "Incorrectly\nLabeled")+
theme_bw(20)
```
It struggles more with the validation data, since it has never seen it before and it is not quite the same as the training data
```{r}
valid.data<-div.spiral.pts %>% subset(data_div == "Validation")
valid.data$pred_class<-ifelse(predict(dec.tree,valid.data)[,1]>0.5,"Happy","Sad")
ggplot(pred.map,aes(x1,y1))+
geom_tile(aes(fill=pred_class),alpha=0.5)+
geom_point(data=valid.data,aes(color=class,size=(pred_class!=class)))+
labs(fill="Predicted",size = "Incorrectly\nLabeled")+
theme_bw(20)
```
The test data (__normally we would not look at it at all right now and wait until the very end__) looks even worse and an even smaller fraction is correctly matched.
```{r}
valid.data<-div.spiral.pts %>% subset(data_div == "Testing")
valid.data$pred_class<-ifelse(predict(dec.tree,valid.data)[,1]>0.5,"Happy","Sad")
ggplot(pred.map,aes(x1,y1))+
geom_tile(aes(fill=pred_class),alpha=0.5)+
geom_point(data=valid.data,aes(color=class,size=(pred_class!=class)))+
labs(fill="Predicted",size = "Incorrectly\nLabeled")+
theme_bw(20)
```
We can choose to make more complicated trees by changing the function to something more detailed like
$$ class = x1+y1+x1^2+y1^2+\sin(x1/5)+\sin(y1/5) $$
```{r, echo=T}
dec.tree<-rpart(class~x1+y1+I(x1^2)+I(y1^2)+sin(x1/5)+sin(y1/5),data=training.data)
prp(dec.tree)
```
```{r}
pred.map$pred_class<-ifelse(predict(dec.tree,pred.map)[,1]>0.5,"Happy","Sad")
training.data$pred_class<-ifelse(predict(dec.tree,training.data)[,1]>0.5,"Happy","Sad")
ggplot(pred.map,aes(x1,y1))+
geom_tile(aes(fill=pred_class),alpha=0.5)+
geom_point(data=training.data,aes(color=class,size=(pred_class!=class)))+
labs(fill="Predicted",size = "Incorrectly\nLabeled")+
theme_bw(20)
```
# Parameters
```{r, show_chain_block}
library(igraph)
make.im.proc.chain<-function(root.node="Raw\nImages",filters=c(),filter.parms=c(),
segmentation=c(),segmentation.parms=c(),
analysis=c(),analysis.parms=c()) {
node.names<-c("Raw\nImages",
filter.parms,filters,
segmentation.parms,segmentation,
analysis.parms,analysis
)
c.mat<-matrix(0,length(node.names),length(node.names))
colnames(c.mat)<-node.names
rownames(c.mat)<-node.names
for(cFilt in filters) {
c.mat["Raw\nImages",cFilt]<-1
for(cParm in filter.parms) c.mat[cParm,cFilt]<-1
for(cSeg in segmentation) {
c.mat[cFilt,cSeg]<-1
for(cParm in segmentation.parms) c.mat[cParm,cSeg]<-1
for(cAnal in analysis) {
c.mat[cSeg,cAnal]<-1
for(cParm in analysis.parms) c.mat[cParm,cAnal]<-1
}
}
}
g<-graph.adjacency(c.mat,mode="directed")
V(g)$degree <- degree(g)
V(g)$label <- V(g)$name
V(g)$color <- "lightblue"
V(g)["Raw\nImages"]$color<-"lightgreen"
for(cAnal in analysis) V(g)[cAnal]$color<-"pink"
V(g)$size<-30
for(cParam in c(filter.parms,segmentation.parms,analysis.parms)) {
V(g)[cParam]$color<-"grey"
V(g)[cParam]$size<-25
}
E(g)$width<-2
g
}
```
How does a standard image processing chain look?
```{r , fig.height=9}
g<-make.im.proc.chain(filters=c("Gaussian\nFilter"),
filter.parms=c("3x3\nNeighbors","0.5 Sigma"),
segmentation=c("Threshold"),
segmentation.parms=c("100"),
analysis=c("Shape\nAnalysis","Thickness\nAnalysis")
)
plot(g) # alternatives: layout.circle, layout.fruchterman.reingold, layout.kamada.kawai
```
- Green are the images we start with (measurements)
- Blue are processing steps
- Gray are user input parameters
- Pink are the outputs
# The Full Chain
```{r , fig.height=8,fig.width=18}
library(igraph)
g<-make.im.proc.chain(filters=c("Gaussian\nFilter","Median\nFilter","Diffusion\nFilter","No\nFilter",
"Laplacian\nFilter"),
segmentation=c("Threshold","Hysteresis\nThreshold","Automated"),
analysis=c("Shape\nAnalysis","Thickness\nAnalysis","Distribution\nAnalysis",
"Skeleton\nAnalysis","2 Point\nCorr","Curvature")
)
plot(g,layout=layout.reingold.tilford) # alternatives: layout.circle, layout.fruchterman.reingold, layout.kamada.kawai
```
# The Full Chain (with Parameters)
```{r , fig.height=9,fig.width=9}
g<-make.im.proc.chain(filters=c("Gaussian\nFilter","Median\nFilter","Diffusion\nFilter"),
filter.parms=c("3x3\nNeighbors","5x5\nNeighbors","7x7\nNeighbors",
"0.5 Sigma","1.0 Sigma","1.2 Sigma"),
segmentation=c("Threshold","Hysteresis\nThreshold","Automated"),
segmentation.parms=paste(seq(90,110,length.out=3)),
analysis=c("Shape\nAnalysis","Thickness\nAnalysis","Distribution\nAnalysis","Skeleton\nAnalysis","2 Point\nCorr")
)
plot(g,layout=layout.lgl(g,maxiter=10000,root=1)) # alternatives: layout.circle, layout.fruchterman.reingold, layout.kamada.kawai
```
- A __mess__, over 1080 combinations for just one sample (not even exploring a very large range of threshold values)
- To calculate this for even one sample can take days (weeks, years)
- 512 x 512 x 512 foam sample $\rightarrow$ 12 weeks of processing time
- 1024 x 1024 x 1024 femur bone $\rightarrow$ 1.9 years
- Not all samples are the same
- Once the analysis is run we have a ton of data
- femur bone $\rightarrow$ 60 million shapes analyzed
- What do we even want?
- How do we judge the different results?
# Qualitative vs Quantitative
Given the complexity of the tree, we need to do some pruning
### Qualitative Assessment
- Evaluating metrics using visual feedback
- Compare with expectations from other independent techniques or approaches
- Are there artifacts which are included in the output?
- Do the shapes look correct?
- Are they distributed as expected?
- Is their orientation meaningful?

# Quantitative Metrics
With a quantitative approach, we can calculate the specific shape or distribution metrics on the sample with each parameter and establish the relationship between parameter and metric.
### Parameter Sweep
The way we do this is usually a parameter sweep which means taking one (or more) parameters and varying them between the reasonable bounds (judged qualitatively).
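Stripped of the imaging details, a parameter sweep is just a loop over parameter values that records a metric for each one. A toy Python sketch (the "image" and the "volume" metric here are made up):

```python
def sweep(image, thresholds, metric):
    """Apply the metric to the above-threshold pixels for each parameter value."""
    return {t: metric([v for v in image if v > t]) for t in thresholds}

image = [12, 45, 90, 130, 180, 210, 60, 75]  # a made-up "image"
volumes = sweep(image, [50, 100, 150], len)  # "volume" = pixel count above threshold
print(volumes)  # {50: 6, 100: 3, 150: 2}
```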
```{r, load-metrics}
source('../common/shapeAnalysisProcess.R')
source('../common/commonReportFunctions.R')
# read and correct the coordinate system
thresh.fun<-function(x) {
pth<-rev(strsplit(x,"/")[[1]])[2]
t<-strsplit(pth,"_")[[1]][3]
as.numeric(substring(t,2,nchar(t)))
}
readfcn<-function(x) cbind(compare.foam.corrected(x,
checkProj=F
#force.scale=0.011 # force voxel size to be 11um
),
thresh=thresh.fun(x) # how to parse the sample names
)
# Where are the csv files located
rootDir<-"../common/data/mcastudy"
clpor.files<-Sys.glob(paste(rootDir,"/a*/lacun_0.csv",sep="/")) # list all of the files
# Read in all of the files
all.lacun<-ldply(clpor.files,readfcn,.parallel=T)
```
```{r , fig.height=5}
ggplot(all.lacun,aes(y=VOLUME*1e9,x=thresh))+
geom_jitter(alpha=0.1)+geom_smooth()+
theme_bw(24)+labs(y="Volume (um3)",x="Threshold Value",color="Threshold")+ylim(0,1000)
```
# Is it always the same?
```{r , fig.height=5}
ggplot(subset(all.lacun,thresh %% 1000==0),aes(y=VOLUME*1e9,x=as.factor(thresh)))+
geom_violin()+
theme_bw(24)+labs(y="Volume (um3)",x="Threshold Value",color="Threshold")+ylim(0,1000)
```
```{r , fig.height=5}
ggplot(all.lacun,aes(y=PCA1_Z,x=thresh))+
geom_jitter(alpha=0.1)+geom_smooth()+
theme_bw(24)+labs(y="Orientation",x="Threshold Value",color="Threshold")
```
# Sensitivity
Sensitivity is defined in control system theory as the change in the value of an output against the change in the input.
$$ S = \frac{|\Delta \textrm{Metric}|}{|\Delta \textrm{Parameter}|} $$
Such a strict definition is not particularly useful for image processing, since a threshold has units of intensity while a metric might be volume, which has units of $m^3$, so the sensitivity would be a volume per intensity.
### Practical Sensitivity
A more common approach is to estimate the variation in this parameter between images or within a single image (automatic threshold methods can be useful for this) and define the sensitivity based on this variation. It is also common to normalize it with the mean value so the result is a percentage.
$$ S = \frac{max(\textrm{Metric})-min(\textrm{Metric})}{avg(\textrm{Metric})} $$
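This practical definition is straightforward to compute; a small Python sketch with made-up volume measurements:

```python
def sensitivity(metric_values):
    """Practical sensitivity: range of the metric normalized by its mean, in %."""
    mean = sum(metric_values) / len(metric_values)
    return 100 * (max(metric_values) - min(metric_values)) / mean

volumes = [480.0, 520.0, 500.0]  # made-up mean volumes across a threshold sweep
print(sensitivity(volumes))      # 8.0
```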
# Sensitivity: Real Measurements
In this graph, the sensitivity is the magnitude of the slope: the steeper the slope, the more the metric changes given a small change in the parameter.
```{r , fig.height=5}
poresum<-function(all.data) ddply(all.data,.(thresh),function(c.sample) {
data.frame(Count=nrow(c.sample),
Volume=mean(c.sample$VOLUME*1e9),
Stretch=mean(c.sample$AISO),
Oblateness=mean(c.sample$OBLATENESS),
#Lacuna_Density_mm=1/mean(c.sample$DENSITY_CNT),
Length=mean(c.sample$PROJ_PCA1*1000),
Width=mean(c.sample$PROJ_PCA2*1000),
Height=mean(c.sample$PROJ_PCA3*1000),
Orientation=mean(abs(c.sample$PCA1_Z)))
})
comb.summary<-cbind(poresum(all.lacun),Phase="Lacuna")
splot<-ggplot(comb.summary,aes(x=thresh))
splot+geom_line(aes(y=Count))+geom_point(aes(y=Count))+scale_y_log10()+
theme_bw(24)+labs(y="Object Count",x="Threshold",color="Phase")
```
Comparing Different Variables we see that the best (lowest) value for the count sensitivity is the highest for the volume and anisotropy.
```{r , fig.height=5}
calc.sens<-function(in.df) {
data.frame(sens.cnt=100*with(in.df,(max(Count)-min(Count))/mean(Count)),
sens.vol=100*with(in.df,(max(Volume)-min(Volume))/mean(Volume)),
sens.stretch=100*with(in.df,(max(Stretch)-min(Stretch))/mean(Stretch))
)
}
sens.summary<-ddply.cutcols(comb.summary,.(cut_interval(thresh,5)),calc.sens)
ggplot(sens.summary,aes(x=thresh))+
geom_line(aes(y=sens.cnt,color="Count"))+
geom_line(aes(y=sens.vol,color="Volume"))+
geom_line(aes(y=sens.stretch,color="Anisotropy"))+
labs(x="Threshold",y="Sensitivity (%)",color="Metric")+
theme_bw(20)
```
### Which metric is more important?
# Unit Testing
In computer programming, unit testing is a method by which individual units of source code (sets of one or more program modules together with associated control data, usage procedures, and operating procedures) are tested to determine whether they are fit for use.
- Intuitively, one can view a unit as the smallest testable part of an application
- Unit testing is possible with every language
- Most (Java, C++, Matlab, R, Python) have built in support for automated testing and reporting
The first requirement for unit testing to work well is to have your tools divided up into small independent parts (functions)
- Each part can then be tested independently (unit testing)
- If the tests are well done, units can be changed and tested independently
- Makes upgrading or expanding tools _easy_
- The entire path can be tested (integration testing)
- Catches mistakes in integration or _glue_
Ideally with realistic but simulated test data
- The utility of the testing is only as good as the tests you make
## Example
- Given the following function
```function vxCnt=countVoxs(inImage)```
- We can write the following tests
- testEmpty2d
```assert countVoxs(zeros(3,3)) == 0```
- testEmpty3d
```assert countVoxs(zeros(3,3,3)) == 0```
- testDiag3d
```assert countVoxs(eye(3)) == 3```
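The same function and tests can be written in Python with NumPy (a sketch; the Matlab-style names above become snake_case):

```python
import numpy as np

def count_voxs(in_image):
    """Count the nonzero voxels in an image of any dimensionality."""
    return int(np.count_nonzero(in_image))

# the same three tests, expressed as plain assertions
assert count_voxs(np.zeros((3, 3))) == 0      # testEmpty2d
assert count_voxs(np.zeros((3, 3, 3))) == 0   # testEmpty3d
assert count_voxs(np.eye(3)) == 3             # testDiag3d
print("all tests passed")
```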
# Unit Testing: Examples
- Given the following function
```function shapeTable=shapeAnalysis(inImage)```
We should decompose the task into sub-components
- ```function clImage=componentLabel(inImage)```
- ```function objInfo=analyzeObject(inObject)```
- ```function vxCnt=countVoxs(inObject)```
- ```function covMat=calculateCOV(inObject)```
- ```function shapeT=calcShapeT(covMat)```
- ```function angle=calcOrientation(shapeT)```
- ```function aniso=calcAnisotropy(shapeT)```
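Two of these sub-components sketched in Python with NumPy (illustrative implementations, not the ones used in the course):

```python
import numpy as np

def calculate_cov(in_object):
    """Covariance matrix (shape tensor) of the voxel coordinates of a binary object."""
    coords = np.argwhere(in_object)
    return np.cov(coords.T)

def calc_anisotropy(shape_t):
    """(lambda_max - lambda_min) / lambda_max of the shape tensor."""
    evals = np.linalg.eigvalsh(shape_t)
    return (evals[-1] - evals[0]) / evals[-1]

rod = np.zeros((9, 9)); rod[4, 1:8] = 1  # elongated object
blob = np.ones((5, 5))                   # symmetric object
print(calc_anisotropy(calculate_cov(rod)))   # ~1.0
print(calc_anisotropy(calculate_cov(blob)))  # ~0.0
```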
# Unit Testing in ImageJ
[On this page](https://github.com/imagej/ij1-tests/blob/master/src/test/java/ij/VirtualStackTest.java)
<iframe src="https://github.com/imagej/ij1-tests/blob/master/src/test/java/ij/VirtualStackTest.java" width='100%' height='800'></iframe>
# Unit Testing in KNIME
[Read more](https://tech.knime.org/community/developers) and [Here](https://www.knime.org/files/kos-11/KNIME_Testing.pdf)
Java-based unit testing (JUnit) can be used before any of the plugins are compiled; additionally, entire workflows can be made to test the objects using special testing nodes such as
- difference node (checks if two values are different)

- disturber node (insert random / missing values to determine fault tolerance)
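The idea behind a difference node is simple enough to sketch in plain Python (a hypothetical stand-in for the KNIME node):

```python
def difference_node(expected, actual, tol=1e-6):
    """Return the positions where two result columns disagree beyond a tolerance."""
    return [i for i, (e, a) in enumerate(zip(expected, actual)) if abs(e - a) > tol]

print(difference_node([1.0, 2.0, 3.0], [1.0, 2.5, 3.0]))  # [1]
```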
# Unit Testing in Python
## PyTest
Packages like PyTest are well suited for larger projects where you make a set of specific tests to run each time the project is updated.
### Scikit Image
https://github.com/scikit-image/scikit-image/tree/master/skimage
- Test Watershed https://github.com/scikit-image/scikit-image/blob/16d3fd07e7d882d7f6b74e8dc4028ff946ac7e63/skimage/morphology/tests/test_watershed.py#L79
- Test Connected Components https://github.com/scikit-image/scikit-image/blob/16d3fd07e7d882d7f6b74e8dc4028ff946ac7e63/skimage/morphology/tests/test_ccomp.py#L13
```python
class TestWatershed(unittest.TestCase):
eight = np.ones((3, 3), bool)
def test_watershed01(self):
"watershed 1"
data = np.array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 1, 1, 1, 1, 1, 0],
[0, 1, 0, 0, 0, 1, 0],
[0, 1, 0, 0, 0, 1, 0],
[0, 1, 0, 0, 0, 1, 0],
[0, 1, 1, 1, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]], np.uint8)
markers = np.array([[ -1, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 1, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0]],
np.int8)
out = watershed(data, markers, self.eight)
expected = np.array([[-1, -1, -1, -1, -1, -1, -1],
[-1, -1, -1, -1, -1, -1, -1],
[-1, -1, -1, -1, -1, -1, -1],
[-1, 1, 1, 1, 1, 1, -1],
[-1, 1, 1, 1, 1, 1, -1],
[-1, 1, 1, 1, 1, 1, -1],
[-1, 1, 1, 1, 1, 1, -1],
[-1, 1, 1, 1, 1, 1, -1],
[-1, -1, -1, -1, -1, -1, -1],
[-1, -1, -1, -1, -1, -1, -1]])
error = diff(expected, out)
assert error < eps
```
## DocTests
Keep the tests in the code itself: https://github.com/scikit-image/scikit-image/blob/16d3fd07e7d882d7f6b74e8dc4028ff946ac7e63/skimage/filters/thresholding.py#L886
```python
def apply_hysteresis_threshold(image, low, high):
"""Apply hysteresis thresholding to `image`.
This algorithm finds regions where `image` is greater than `high`
OR `image` is greater than `low` *and* that region is connected to
a region greater than `high`.
Parameters
----------
image : array, shape (M,[ N, ..., P])
Grayscale input image.
low : float, or array of same shape as `image`
Lower threshold.
high : float, or array of same shape as `image`
Higher threshold.
Returns
-------
thresholded : array of bool, same shape as `image`
Array in which `True` indicates the locations where `image`
was above the hysteresis threshold.
Examples
--------
>>> image = np.array([1, 2, 3, 2, 1, 2, 1, 3, 2])
>>> apply_hysteresis_threshold(image, 1.5, 2.5).astype(int)
array([0, 1, 1, 1, 0, 0, 0, 1, 1])
References
----------
.. [1] J. Canny. A computational approach to edge detection.
IEEE Transactions on Pattern Analysis and Machine Intelligence.
1986; vol. 8, pp.679-698.
DOI: 10.1109/TPAMI.1986.4767851
"""
low = np.clip(low, a_min=None, a_max=high) # ensure low always below high
mask_low = image > low
mask_high = image > high
```
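The docstring example above can be reproduced with a self-contained 1-D sketch (plain NumPy, scanning runs directly instead of the N-dimensional connected-component labeling the library uses):

```python
import numpy as np

def apply_hysteresis_threshold(image, low, high):
    """1-D sketch: keep above-low runs only if they contain an above-high pixel."""
    mask_low = image > low
    mask_high = image > high
    out = np.zeros_like(mask_low)
    i, n = 0, len(image)
    while i < n:
        if mask_low[i]:
            j = i
            while j < n and mask_low[j]:
                j += 1
            if mask_high[i:j].any():  # the connected run touches a strong pixel
                out[i:j] = True
            i = j
        else:
            i += 1
    return out

image = np.array([1, 2, 3, 2, 1, 2, 1, 3, 2])
print(apply_hysteresis_threshold(image, 1.5, 2.5).astype(int))
# [0 1 1 1 0 0 0 1 1], matching the doctest above
```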
# Unit Testing Jupyter
Working primarily in notebooks makes regular testing more difficult but not impossible. If we employ a few simple tricks, we can use doctesting seamlessly inside of Jupyter. We can write what in Python is called a decorator to set up this code.
```
import doctest
import copy
import functools
def autotest(func):
globs = copy.copy(globals())
globs.update({func.__name__: func})
doctest.run_docstring_examples(
func, globs, verbose=True, name=func.__name__)
return func
@autotest
def add_5(x):
"""
Function adds 5
>>> add_5(5)
10
"""
return x+5
from skimage.measure import label
import numpy as np
@autotest
def simple_label(x):
"""
Label an image
>>> test_img = np.eye(3)
>>> test_img
array([[1., 0., 0.],
[0., 1., 0.],
[0., 0., 1.]])
>>> simple_label(test_img)
array([[1, 0, 0],
[0, 1, 0],
[0, 0, 1]], dtype=int64)
>>> test_img[1,1] = 0
>>> simple_label(test_img)
array([[1, 0, 0],
[0, 0, 0],
[0, 0, 2]], dtype=int64)
"""
return label(x)
```
# Unit Testing Matlab
https://www.mathworks.com/help/matlab/matlab-unit-test-framework.html
# Test Driven Programming
Test-driven programming is a style or approach to programming where the tests are written before the functional code; they act as very concrete specifications. It is easy to estimate how much time is left, since you can automatically see how many of the tests have passed, and you and your collaborators are clear on the utility of the system.
1. shapeAnalysis must give an anisotropy of 0 when we input a sphere
1. shapeAnalysis must give the center of volume within 0.5 pixels
1. shapeAnalysis must run on a 1000x1000 image in 30 seconds
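In the test-driven style, a spec like number 2 becomes an executable test written before (or alongside) the implementation. A Python sketch (the ball image and function here are illustrative):

```python
import numpy as np

def center_of_volume(img):
    """Mean coordinate of the nonzero voxels."""
    return np.argwhere(img).mean(axis=0)

# spec 2, written as a test first: center of volume within 0.5 pixels
ball = np.zeros((11, 11))
yy, xx = np.ogrid[:11, :11]
ball[(yy - 5) ** 2 + (xx - 5) ** 2 <= 16] = 1  # disk centered at (5, 5)
assert np.all(np.abs(center_of_volume(ball) - 5) <= 0.5)
print("center-of-volume spec passed")
```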
# Continuous Integration
Continuous integration is the process of running tests automatically every time changes are made. This is possible to set up inside of many IDEs and is offered as a commercial service by companies like CircleCI and Travis. We use them for the QBI course to make sure all of the code in the slides is correct. Projects like scikit-image use them to ensure that changes do not break existing code without requiring manual checks.
# Visualization
One of the biggest problems with _big_ sciences is trying to visualize a lot of heterogeneous data.
- Tables are difficult to interpret
- 3D Visualizations are very difficult to compare visually
- Contradictory necessity of simple single value results and all of the data to look for trends and find problems
# Bad Graphs
There are too many graphs which say
- "my data is very complicated"
- "I know how to use __ toolbox in Matlab/R/Mathematica"


- Most programs by default make poor plots
- Good visualizations take time


# Key Ideas
1. What is my message?
1. Does the graphic communicate it clearly?
1. Is a graphic representation really necessary?
1. Does every line / color serve a purpose?
- Pretend ink is very expensive
### Simple Rules
1. Never use 3D graphics when it can be avoided (unless you want to be deliberately misleading); our visual system is not well suited for comparing the heights of different bars in 3D

1. Pie charts can also be hard to interpret
1. Background color should almost always be white (not light gray)
1. Use color palettes adapted to human visual sensitivity
# What is my message
- Plots to "show the results" or "get a feeling" are usually not good
```{r, fig.height=7}
xd<-runif(80)
test.data<-data.frame(x=xd,y=xd+runif(80),z=runif(80))
plot(test.data)
```
- Focus on a single, simple message
- X is a little bit correlated with Y
```{r, fig.height=7}
ggplot(test.data,aes(x,y))+
geom_point()+geom_smooth(method="lm")+
coord_equal()+
labs(title="X is weakly correlated with Y")+
theme_bw(20)
```
# Does my graphic communicate it clearly?
- Too much data makes it very difficult to derive a clear message
```{r, fig.height=7}
xd<-runif(5000)
test.data<-data.frame(x=xd,y=(xd-0.5)*runif(5000))
ggplot(test.data,aes(x,y))+
geom_point()+
coord_equal()+
theme_bw(20)
```
- Filter and reduce information until it is extremely simple
```{r, fig.height=4}
ggplot(test.data,aes(x,y))+
stat_binhex(bins=20)+
geom_smooth(method="lm",aes(color="Fit"))+
coord_equal()+
theme_bw(20)+guides(color=F)
```
```{r, fig.height=4}
ggplot(test.data,aes(x,y))+
geom_density2d(aes(color="Contour"))+
geom_smooth(method="lm",aes(color="Linear Fit"))+
coord_equal()+
labs(color="Type")+
theme_bw(20)
```
# Grammar of Graphics
- What is a grammar?
- Set of rules for constructing and validating a sentence
- Specifies the relationship and order between the words constituting the sentence
- How does this apply to graphics?
- If we develop a consistent way of expressing graphics (sentences) in terms of elements (words) we can compose and decompose graphics easily
- The most important modern work in graphical grammars is "The Grammar of Graphics" by Wilkinson, Anand, and Grossman (2005). This work built on earlier work by Bertin (1983) and proposed a grammar that can be used to describe and construct a wide range of statistical graphics.
- This can be applied in R using the ggplot2 library (<small>H. Wickham. ggplot2: elegant graphics for data analysis. Springer New York, 2009.</small>)
# Grammar Explained
Normally we think of plots in terms of some sort of data which is fed into a plot command that produces a picture
- In Excel you select a range and plot-type and click "Make"
- In Matlab you run ```plot(xdata,ydata,color/shape)```
1. These produce entire graphics (sentences), or at least phrases, in one go and thus abstract away the idea of a grammar.
1. If you spoke by finding entire sentences in a book it would be very ineffective; it is much better to build up word by word
### Grammar
Separate the graph into its component parts
1. Data Mapping
- $var1 \rightarrow x$, $var2 \rightarrow y$

1. Points
1. Axes / Coordinate System
1. Labels / Annotation
Construct graphics by focusing on each portion independently.
# Wrapping up
- I am not a statistician
- This is not a statistics course
- If you have questions or concerns, both ETHZ and Uni Zurich offer __free__ consultation with real statisticians
- They are rarely bearers of good news
- Simulations (even simple ones) are very helpful (see [StatisticalSignificanceHunter](knime://LOCAL/Exercise%209%20StatsRepro/StatisticalSignificanceHunter))
- Try and understand the tests you are performing
## TODO: Convert to Python
## Setup Connection to Kafka
```
from pyspark.sql.functions import get_json_object, json_tuple
streamingInputDF = \
spark.readStream \
.format("kafka") \
.option("kafka.bootstrap.servers", "<server:ip>") \
.option("subscribe", "topic1") \
.option("startingOffsets", "latest") \
.option("minPartitions", "10") \
.option("failOnDataLoss", "true") \
.load()
```
### streamingInputDF.printSchema
```
root
|-- key: binary (nullable = true)
|-- value: binary (nullable = true)
|-- topic: string (nullable = true)
|-- partition: integer (nullable = true)
|-- offset: long (nullable = true)
|-- timestamp: timestamp (nullable = true)
|-- timestampType: integer (nullable = true)
```
### Sample Message
```
{
"city": "",
"country": "United States",
"countryCode": "US",
"isp": "",
"lat": 0.00, "lon": 0.00,
"query": "",
"region": "CA",
"regionName": "California",
"status": "success",
"hittime": "2017-02-08T17:37:55-05:00",
"zip": "38917"
}
```
## GroupBy, Count
```
import org.apache.spark.sql.functions._
var streamingSelectDF =
streamingInputDF
.select(get_json_object(($"value").cast("string"), "$.zip").alias("zip"))
.groupBy($"zip")
.count()
```
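For a batch of already-collected messages, the extract-and-count logic above can be mimicked with the Python standard library (a static sketch, not a streaming equivalent):

```python
import json
from collections import Counter

# a few messages in the sample format shown earlier (values are made up)
messages = [
    '{"zip": "38917", "regionName": "California"}',
    '{"zip": "38907", "regionName": "California"}',
    '{"zip": "38917", "regionName": "California"}',
]
# get_json_object($.zip) + groupBy + count, in miniature
zip_counts = Counter(json.loads(m)["zip"] for m in messages)
print(zip_counts["38917"])  # 2
```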
## Window
```
import org.apache.spark.sql.functions._
var streamingSelectDF =
streamingInputDF
.select(get_json_object(($"value").cast("string"), "$.zip").alias("zip"), get_json_object(($"value").cast("string"), "$.hittime").alias("hittime"))
.groupBy($"zip", window($"hittime".cast("timestamp"), "10 minute", "5 minute", "2 minute"))
.count()
```
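The sliding-window assignment can be sketched in plain Python: with a 10-minute window, 5-minute slide, and 2-minute start offset, each event falls into size/slide = 2 overlapping windows (timestamps here are plain integer minutes; this mirrors, but is not, Spark's implementation):

```python
def window_starts(ts_min, size=10, slide=5, offset=2):
    """Start times (in minutes) of all sliding windows containing ts_min."""
    # earliest window start that could still contain ts_min
    first = ((ts_min - offset - size) // slide + 1) * slide + offset
    return [s for s in range(first, ts_min + 1, slide) if s <= ts_min < s + size]

print(window_starts(13))  # [7, 12]: the event lands in two overlapping windows
```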
### Memory Output
```
import org.apache.spark.sql.streaming.ProcessingTime
val query =
streamingSelectDF
.writeStream
.format("memory")
.queryName("isphits")
.outputMode("complete")
.trigger(ProcessingTime("25 seconds"))
.start()
```
### Console Output
```
import org.apache.spark.sql.streaming.ProcessingTime
val query =
streamingSelectDF
.writeStream
.format("console")
.outputMode("complete")
.trigger(ProcessingTime("25 seconds"))
.start()
```
### File Output with Partitions
```
import org.apache.spark.sql.functions._
var streamingSelectDF =
streamingInputDF
.select(get_json_object(($"value").cast("string"), "$.zip").alias("zip"), get_json_object(($"value").cast("string"), "$.hittime").alias("hittime"), date_format(get_json_object(($"value").cast("string"), "$.hittime"), "dd.MM.yyyy").alias("day"))
.groupBy($"zip")
.count()
.as[(String, String)]
```
### Create Table
```
%sql CREATE EXTERNAL TABLE test_par
(hittime string)
PARTITIONED BY (zip string, day string)
STORED AS PARQUET
LOCATION '/mnt/sample/test-data'
```
### Add Partition
```
%sql ALTER TABLE test_par ADD IF NOT EXISTS
PARTITION (zip='38907', day='08.02.2017') LOCATION '/mnt/sample/test-data/zip=38907/day=08.02.2017'
```
### Select
```
%sql select * from test_par
```
### JDBC Sink
```
import java.sql._
import org.apache.spark.sql.ForeachWriter
import org.apache.spark.sql.streaming.ProcessingTime
class JDBCSink(url:String, user:String, pwd:String) extends ForeachWriter[(String, String)] {
val driver = "com.mysql.jdbc.Driver"
var connection:Connection = _
var statement:Statement = _
def open(partitionId: Long,version: Long): Boolean = {
Class.forName(driver)
connection = DriverManager.getConnection(url, user, pwd)
statement = connection.createStatement
true
}
def process(value: (String, String)): Unit = {
statement.executeUpdate("INSERT INTO zip_test " +
"VALUES (" + value._1 + "," + value._2 + ")")
}
def close(errorOrNull: Throwable): Unit = {
connection.close
}
}
val url="jdbc:mysql://<mysqlserver>:3306/test"
val user ="user"
val pwd = "pwd"
val writer = new JDBCSink(url,user, pwd)
val query =
streamingSelectDF
.writeStream
.foreach(writer)
.outputMode("update")
.trigger(ProcessingTime("25 seconds"))
.start()
```
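The `JDBCSink` above follows Spark's `ForeachWriter` contract: `open` is called once per partition/epoch (returning `false` skips it), `process` once per row, and `close` at the end. A minimal Python sketch of that lifecycle, using a hypothetical in-memory sink instead of a database:

```python
class InMemorySink:
    """Mimics the ForeachWriter open/process/close lifecycle."""

    def __init__(self):
        self.rows = []
        self.opened = False

    def open(self, partition_id, version):
        self.opened = True
        return True  # returning False tells Spark to skip this partition

    def process(self, row):
        assert self.opened, "process() must only run between open() and close()"
        self.rows.append(row)

    def close(self, error=None):
        self.opened = False

sink = InMemorySink()
if sink.open(partition_id=0, version=0):
    for row in [("38907", "12"), ("38917", "3")]:  # (zip, count) pairs
        sink.process(row)
sink.close()
print(sink.rows)
```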
## Kafka Sink
```
import java.util.Properties
import kafkashaded.org.apache.kafka.clients.producer._
import org.apache.spark.sql.ForeachWriter
class KafkaSink(topic:String, servers:String) extends ForeachWriter[(String, String)] {
val kafkaProperties = new Properties()
kafkaProperties.put("bootstrap.servers", servers)
kafkaProperties.put("key.serializer", "kafkashaded.org.apache.kafka.common.serialization.StringSerializer")
kafkaProperties.put("value.serializer", "kafkashaded.org.apache.kafka.common.serialization.StringSerializer")
val results = new scala.collection.mutable.HashMap[String, String]
var producer: KafkaProducer[String, String] = _
def open(partitionId: Long,version: Long): Boolean = {
producer = new KafkaProducer(kafkaProperties)
true
}
def process(value: (String, String)): Unit = {
producer.send(new ProducerRecord(topic, value._1 + ":" + value._2))
}
def close(errorOrNull: Throwable): Unit = {
producer.close()
}
}
val topic = "<topic2>"
val brokers = "<server:ip>"
val writer = new KafkaSink(topic, brokers)
val query =
streamingSelectDF
.writeStream
.foreach(writer)
.outputMode("update")
.trigger(ProcessingTime("25 seconds"))
.start()
```
## WORD2VEC
```
import collections
import math
import os
import random
import zipfile
import numpy as np
from six.moves import urllib
from six.moves import xrange # pylint: disable=redefined-builtin
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
import tensorflow as tf
%matplotlib inline
print ("Packages loaded")
```
## DOWNLOAD CORPUS
```
folder_dir = "data"
file_name = "text8.zip"
file_path = os.path.join(folder_dir, file_name)
url = 'http://mattmahoney.net/dc/'
if not os.path.exists(file_path):
print ("No file found. Start downloading")
downfilename, _ = urllib.request.urlretrieve(
url + file_name, file_path)
print ("'%s' downloaded" % (downfilename))
else:
print ("File already exists")
```
## CHECK
```
statinfo = os.stat(file_path)
expected_bytes = 31344016
if statinfo.st_size == expected_bytes:
print ("I guess we have correct file at '%s'" % (file_path))
else:
print ("Something's wrong with the file at '%s'" % (file_path))
```
## UNZIP
```
def read_data(filename):
with zipfile.ZipFile(filename) as f:
        data = tf.compat.as_str(f.read(f.namelist()[0])).split()
return data
words = read_data(file_path)
print ("Type of 'words' is %s / Length is %d "
% (type(words), len(words)))
print ("'words' look like \n %s" %(words[0:100]))
```
## MAKE DICTIONARY
```
vocabulary_size = 50000
count = [['UNK', -1]]
count.extend(collections.Counter(words)
.most_common(vocabulary_size - 1)) # -1 is for UNK
print ("Type of 'count' is %s / Length is %d " % (type(count), len(count)))
print ("'count' looks like \n %s" % (count[0:10]))
```
### DICTIONARY
```
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
print ("Type of 'dictionary' is %s / Length is %d "
% (type(dictionary), len(dictionary)))
```
### REVERSE DICTIONARY
```
reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
print ("Type of 'reverse_dictionary' is %s / Length is %d "
% (type(reverse_dictionary), len(reverse_dictionary)))
```
## MAKE DATA
```
data = list()
unk_count = 0
for word in words:
if word in dictionary:
index = dictionary[word]
else:
index = 0 # dictionary['UNK']
unk_count += 1
data.append(index)
count[0][1] = unk_count
```
### 'DICTIONARY' CONVERTS WORD TO INDEX
### 'REVERSE_DICTIONARY' CONVERTS INDEX TO WORD
```
print ("Most common words (+UNK) are: %s" % (count[:5]))
```
### DATA (INDICES)
```
print ("Sample data: %s" % (data[:10]))
```
### CONVERT TO CHAR
```
print ("Sample data corresponds to\n__________________")
for i in range(10):
print ("%d->%s" % (data[i], reverse_dictionary[data[i]]))
```
## BATCH-GENERATING WITH SKIP-GRAM MODEL
```
data_index = 0
def generate_batch(batch_size, num_skips, skip_window):
global data_index
assert batch_size % num_skips == 0
assert num_skips <= 2 * skip_window
batch = np.ndarray(shape=(batch_size), dtype=np.int32)
labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
span = 2 * skip_window + 1 # [ skip_window target skip_window ]
buffer = collections.deque(maxlen=span)
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(batch_size // num_skips): # '//' makes the result an integer, e.g., 7//3 = 2
target = skip_window
targets_to_avoid = [ skip_window ]
for j in range(num_skips):
while target in targets_to_avoid:
target = random.randint(0, span - 1)
targets_to_avoid.append(target)
batch[i * num_skips + j] = buffer[skip_window]
labels[i * num_skips + j, 0] = buffer[target]
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
```
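For intuition, the same skip-gram pairing can be shown on a toy sentence: each center word is paired with its neighbors inside the window. This deterministic sketch skips the random target sampling used above:

```python
def skipgram_pairs(tokens, skip_window=1):
    """All (center, context) pairs within +/- skip_window of each position."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - skip_window), min(len(tokens), i + skip_window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

print(skipgram_pairs(["the", "quick", "brown", "fox"]))
```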
### EXAMPLES FOR GENERATING BATCH AND LABELS
```
data_index = 0
batch, labels = generate_batch(batch_size=8, num_skips=2, skip_window=1)
print ("Type of 'batch' is %s / Length is %d "
% (type(batch), len(batch)))
print ("Type of 'labels' is %s / Length is %d "
% (type(labels), len(labels)))
print ("'batch' looks like \n %s" % (batch))
print ("'labels' looks like \n %s" % (labels))
for i in range(8):
print ("%d -> %d"
% (batch[i], labels[i, 0])),
print ("\t%s -> %s"
% (reverse_dictionary[batch[i]]
, reverse_dictionary[labels[i, 0]]))
```
## BUILD A SKIP-GRAM MODEL
```
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input
print ("Parameters ready")
# Random validation set to sample nearest neighbors.
valid_size = 32 # Random set of words to evaluate similarity
valid_window = 200 # Only pick validation samples in the top 200
valid_examples = np.random.choice(valid_window, valid_size, replace=False)
print (valid_examples)
```
## DEFINE NETWORK
```
# Construct the word2vec model
train_inputs = tf.placeholder(tf.int32, shape=[batch_size])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# Look up embeddings for inputs. (vocabulary_size = 50,000)
with tf.variable_scope("EMBEDDING"):
with tf.device('/cpu:0'):
embeddings = tf.Variable(
tf.random_uniform([vocabulary_size, embedding_size]
, -1.0, 1.0))
embed = tf.nn.embedding_lookup(embeddings, train_inputs)
# Construct the variables for the NCE loss
with tf.variable_scope("NCE_WEIGHT"):
nce_weights = tf.Variable(
tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / math.sqrt(embedding_size)))
nce_biases = tf.Variable(tf.zeros([vocabulary_size]))
print ("Network ready")
```
## DEFINE FUNCTION
```
with tf.device('/cpu:0'):
# Loss function
num_sampled = 64 # Number of negative examples to sample.
"""
loss = tf.reduce_mean(
tf.nn.nce_loss(nce_weights, nce_biases, embed
, train_labels, num_sampled, vocabulary_size))
"""
loss = tf.reduce_mean(
tf.nn.nce_loss(weights=nce_weights,
biases=nce_biases,
labels=train_labels,
inputs=embed,
num_sampled=num_sampled,
num_classes=vocabulary_size))
# Optimizer
optm = tf.train.GradientDescentOptimizer(1.0).minimize(loss)
# Similarity measure (important)
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings
, valid_dataset)
siml = tf.matmul(valid_embeddings, normalized_embeddings
, transpose_b=True)
print ("Functions Ready")
```
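The `siml` op works because once each embedding row is L2-normalized, a plain matrix product is exactly the matrix of cosine similarities. A small NumPy sketch of the same computation:

```python
import numpy as np

emb = np.array([[3.0, 4.0],    # toy 2-D "embeddings"
                [6.0, 8.0],    # parallel to the first row
                [-4.0, 3.0]])  # orthogonal to the first row

norm = np.sqrt(np.sum(np.square(emb), axis=1, keepdims=True))
normalized = emb / norm
siml = normalized @ normalized.T  # cosine similarity between every pair of rows

print(np.round(siml, 3))
```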
## TRAIN
```
# Train!
sess = tf.Session()
sess.run(tf.global_variables_initializer())
average_loss = 0
num_steps = 100001
for iter in xrange(num_steps):
batch_inputs, batch_labels = generate_batch(batch_size, num_skips, skip_window)
feed_dict = {train_inputs : batch_inputs, train_labels : batch_labels}
_, loss_val = sess.run([optm, loss], feed_dict=feed_dict)
average_loss += loss_val
if iter % 2000 == 0:
average_loss /= 2000
print ("Average loss at step %d is %.3f" % (iter, average_loss))
if iter % 10000 == 0:
siml_val = sess.run(siml)
for i in xrange(valid_size): # Among valid set
valid_word = reverse_dictionary[valid_examples[i]]
top_k = 6 # number of nearest neighbors
nearest = (-siml_val[i, :]).argsort()[1:top_k+1]
log_str = "Nearest to '%s':" % valid_word
for k in xrange(top_k):
close_word = reverse_dictionary[nearest[k]]
log_str = "%s '%s'," % (log_str, close_word)
print(log_str)
# Final embedding
final_embeddings = sess.run(normalized_embeddings)
```
## VISUALIZE
```
def plot_with_labels(low_dim_embs, labels, filename='tsne.png'):
assert low_dim_embs.shape[0] >= len(labels), "More labels than embeddings"
plt.figure(figsize=(18, 18)) #in inches
for i, label in enumerate(labels):
x, y = low_dim_embs[i,:]
plt.scatter(x, y)
plt.annotate(label,
xy=(x, y),
xytext=(5, 2),
textcoords='offset points',
ha='right',
va='bottom')
plt.show()
# Plot
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
plot_only = 500
low_dim_embs = tsne.fit_transform(final_embeddings[:plot_only,:])
labels = [reverse_dictionary[i] for i in xrange(plot_only)]
plot_with_labels(low_dim_embs, labels)
```
# End-to-end demo of the ``stadv`` package
We use a small CNN pre-trained on MNIST and try to fool the network using *Spatially Transformed Adversarial Examples* (stAdv).
### Import the relevant libraries
```
%matplotlib inline
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import sys
import os
import numpy as np
import tensorflow as tf
import stadv
# dependencies specific to this demo notebook
import matplotlib.pyplot as plt
import idx2numpy
```
### Load MNIST data
The test data for the MNIST dataset should be downloaded from http://yann.lecun.com/exdb/mnist/,
decompressed, and put in a directory ``mnist_data_dir``.
This can be done in command line with:
```
wget http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz && gunzip -f t10k-images-idx3-ubyte.gz
wget http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz && gunzip -f t10k-labels-idx1-ubyte.gz
```
```
mnist_data_dir = '.'
mnist_images = idx2numpy.convert_from_file(os.path.join(mnist_data_dir, 't10k-images-idx3-ubyte'))
mnist_labels = idx2numpy.convert_from_file(os.path.join(mnist_data_dir, 't10k-labels-idx1-ubyte'))
mnist_images = np.expand_dims(mnist_images, -1)
print("Shape of images:", mnist_images.shape)
print("Range of values: from {} to {}".format(np.min(mnist_images), np.max(mnist_images)))
print("Shape of labels:", mnist_labels.shape)
print("Range of values: from {} to {}".format(np.min(mnist_labels), np.max(mnist_labels)))
```
### Definition of the graph
The CNN we consider uses the `layers` module of TensorFlow. It was heavily inspired by this tutorial: https://www.tensorflow.org/tutorials/layers
```
# definition of the inputs to the network
images = tf.placeholder(tf.float32, shape=[None, 28, 28, 1], name='images')
flows = tf.placeholder(tf.float32, [None, 2, 28, 28], name='flows')
targets = tf.placeholder(tf.int64, shape=[None], name='targets')
tau = tf.placeholder_with_default(
tf.constant(0., dtype=tf.float32),
shape=[], name='tau'
)
# flow-based spatial transformation layer
perturbed_images = stadv.layers.flow_st(images, flows, 'NHWC')
# definition of the CNN in itself
conv1 = tf.layers.conv2d(
inputs=perturbed_images,
filters=32,
kernel_size=[5, 5],
padding="same",
activation=tf.nn.relu
)
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
conv2 = tf.layers.conv2d(
inputs=pool1,
filters=64,
kernel_size=[5, 5],
padding="same",
activation=tf.nn.relu
)
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
pool2_flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
logits = tf.layers.dense(inputs=pool2_flat, units=10)
# definition of the losses pertinent to our study
L_adv = stadv.losses.adv_loss(logits, targets)
L_flow = stadv.losses.flow_loss(flows, padding_mode='CONSTANT')
L_final = L_adv + tau * L_flow
grad_op = tf.gradients(L_final, flows, name='loss_gradient')[0]
```
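`stadv.layers.flow_st` warps the image by sampling every output pixel from a flow-displaced location, blending the four neighboring pixels with bilinear interpolation. A minimal NumPy sketch of that idea (the sign and axis conventions here are assumptions, not the package's exact implementation):

```python
import numpy as np

def flow_warp(img, flow):
    """img: (H, W) array; flow: (2, H, W) of (dy, dx) sampling offsets."""
    H, W = img.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    sy = np.clip(ys + flow[0], 0, H - 1)  # where each output pixel samples from
    sx = np.clip(xs + flow[1], 0, W - 1)
    y0 = np.floor(sy).astype(int); y1 = np.minimum(y0 + 1, H - 1)
    x0 = np.floor(sx).astype(int); x1 = np.minimum(x0 + 1, W - 1)
    wy, wx = sy - y0, sx - x0
    # bilinear blend of the four neighbours around the sampling point
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1] +
            wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

img = np.arange(16, dtype=float).reshape(4, 4)
null_flow = np.zeros((2, 4, 4))
print(np.allclose(flow_warp(img, null_flow), img))  # zero flow is the identity
```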
### Import the learned weights
The network has been trained independently and its learned weights are shipped with the demo. The final error on the test set is 1.3%.
```
init = tf.global_variables_initializer()
saver = tf.train.Saver()
sess = tf.Session()
sess.run(init)
saver.restore(sess, os.path.join('saved_models', 'simple_mnist'))
```
### Test the model on a single image
The test image is randomly picked from the test set of MNIST. Its target label is also selected randomly.
```
i_random_image = np.random.randint(0, len(mnist_images))
test_image = mnist_images[i_random_image]
test_label = mnist_labels[i_random_image]
random_target = np.random.choice([num for num in range(10) if num != test_label])
print("Considering image #", i_random_image, "from the test set of MNIST")
print("Ground truth label:", test_label)
print("Randomly selected target label:", random_target)
# reshape so as to have a first dimension (batch size) of 1
test_image = np.expand_dims(test_image, 0)
test_label = np.expand_dims(test_label, 0)
random_target = np.expand_dims(random_target, 0)
# with no flow the flow_st is the identity
null_flows = np.zeros((1, 2, 28, 28))
pred_label = np.argmax(sess.run(
[logits],
feed_dict={images: test_image, flows: null_flows}
))
print("Predicted label (no perturbation):", pred_label)
```
### Where the magic takes place
Optimization of the flow so as to minimize the loss using an L-BFGS-B optimizer.
```
results = stadv.optimization.lbfgs(
L_final,
flows,
# random initial guess for the flow
flows_x0=np.random.random_sample((1, 2, 28, 28)),
feed_dict={images: test_image, targets: random_target, tau: 0.05},
grad_op=grad_op,
sess=sess
)
print("Final loss:", results['loss'])
print("Optimization info:", results['info'])
test_logits_perturbed, test_image_perturbed = sess.run(
[logits, perturbed_images],
feed_dict={images: test_image, flows: results['flows']}
)
pred_label_perturbed = np.argmax(test_logits_perturbed)
print("Predicted label after perturbation:", pred_label_perturbed)
```
### Show the results
```
image_before = test_image[0, :, :, 0]
image_after = test_image_perturbed[0, :, :, 0]
difference = image_after - image_before
max_diff = abs(difference).max()
plt.rcParams['figure.figsize'] = [10, 10]
f, (ax1, ax2, ax3) = plt.subplots(1, 3)
ax1.imshow(image_before)
ax1.set_title("True: {} - Pred: {} - Target: {}".format(test_label[0], pred_label, random_target[0]))
ax1.axis('off')
ax2.imshow(image_after)
ax2.set_title("Pred: {} - Loss: {}".format(pred_label_perturbed, round(results['loss'], 2)))
ax2.axis('off')
ax3.imshow(difference)
ax3.set_title("Max Difference: {}".format(round(max_diff, 2)))
ax3.axis('off')
plt.show()
```
# Setup
```
from math import floor, ceil
from multiprocessing import Pool, cpu_count
from pathlib import Path
from python_speech_features import logfbank
from python_speech_features import mfcc
from scipy.io import wavfile
from time import time
import glob
import hashlib
import numpy as np
import os
import pickle
import random
import re
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
USE_CUDA = torch.cuda.is_available()
MAX_NUM_WAVS_PER_CLASS = 2**27 - 1 # ~134M
SAMPLE_RATE = 16000
MFCC_SIZE = 5000
BATCH_SIZE = 10000
ALL_LABELS = ['yes', 'no', 'up', 'down', 'left', 'right',
'on', 'off', 'stop', 'go', 'unknown']
LABEL_MAPPING = {name:i for i, name in enumerate(ALL_LABELS)}
```
# Data Import
```
def which_set(filename, validation_percentage, testing_percentage):
"""Determines which data partition the file should belong to.
We want to keep files in the same training, validation, or testing sets even
if new ones are added over time. This makes it less likely that testing
samples will accidentally be reused in training when long runs are restarted
for example. To keep this stability, a hash of the filename is taken and used
to determine which set it should belong to. This determination only depends on
the name and the set proportions, so it won't change as other files are added.
It's also useful to associate particular files as related (for example words
spoken by the same person), so anything after '_nohash_' in a filename is
ignored for set determination. This ensures that 'bobby_nohash_0.wav' and
'bobby_nohash_1.wav' are always in the same set, for example.
Args:
filename: File path of the data sample.
validation_percentage: How much of the data set to use for validation.
testing_percentage: How much of the data set to use for testing.
Returns:
String, one of 'training', 'validation', or 'testing'.
"""
base_name = os.path.basename(filename)
# ignore anything after '_nohash_' in the file name
hash_name = re.sub(r'_nohash_.*$', '', base_name)
# hash(filename) -> value to split into training/testing/validation
hash_name_hashed = hashlib.sha1(str.encode(hash_name)).hexdigest()
percentage_hash = ((int(hash_name_hashed, 16) %
(MAX_NUM_WAVS_PER_CLASS + 1)) *
(100.0 / MAX_NUM_WAVS_PER_CLASS))
if percentage_hash < validation_percentage:
result = 'validation'
elif percentage_hash < (testing_percentage + validation_percentage):
result = 'testing'
else:
result = 'training'
return result
def convert_label_to_id(label):
"""Convert label to its ID for prediction."""
if label not in ALL_LABELS:
label = 'unknown'
return LABEL_MAPPING[label]
def pad_sound_to_one_second(data):
if data.shape[0] != SAMPLE_RATE:
padding_needed = SAMPLE_RATE - data.shape[0]
front_padding = padding_needed // 2
end_padding = padding_needed - front_padding
data = np.concatenate(([0]*front_padding, data, [0]*end_padding))
return data
path_prefix = "Data/train/audio/"
# Split data into 80% training, 10% validation, 10% testing
validation_percentage = 10
testing_percentage = 10
datasets = {'validation': [], 'testing': [], 'training': []}
for filename in glob.glob(path_prefix + "*/*.wav"):
_, _, _, label, sound_filename = filename.split("/")
if label == "_background_noise_":
continue
dataset_name = which_set(sound_filename,
validation_percentage,
testing_percentage)
# List[(label, label_id, sound_filename), ...]
datasets[dataset_name].append((label,
convert_label_to_id(label),
sound_filename))
for name, labelled_sounds in datasets.items():
print("{:>10} count: {}".format(name, len(labelled_sounds)))
# Shuffle training data to improve performance
# random.shuffle(datasets['training'])
print("\n{} training labels: {}".format(len(ALL_LABELS), ALL_LABELS))
def import_dataset_to_torch(name, progress=False):
label_ids = []
samples = []
count = 0
training_count = len(datasets[name])
for label, label_id, filename in datasets[name]:
full_path = path_prefix + label + "/" + filename
sample_rate, data = wavfile.read(full_path)
data = pad_sound_to_one_second(data).astype(np.int16)
assert sample_rate == SAMPLE_RATE
assert data.shape[0] == SAMPLE_RATE
label_ids.append(torch.LongTensor([label_id]))
samples.append(torch.from_numpy(data))
        if progress and len(label_ids) % max(1, training_count // 10) == 0:  # report roughly every 10%
            print("Import Progress for {}: {:.1f}%".format(name, 100*len(label_ids)/training_count))
samples = torch.stack(samples)
samples = samples.type(torch.float)
label_ids = torch.cat(label_ids)
return samples, label_ids
train_samples, train_label_ids = import_dataset_to_torch('training', progress=True)
validation_samples, validation_label_ids = import_dataset_to_torch('validation', progress=True)
test_samples, test_label_ids = import_dataset_to_torch('testing', progress=True)
```
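The stability property `which_set` relies on can be checked directly: the assigned split depends only on the base filename with the `_nohash_` suffix stripped, so all clips from one speaker land in the same partition. A standalone sketch using the same hashing scheme as the function above:

```python
import hashlib
import os
import re

MAX_NUM_WAVS_PER_CLASS = 2 ** 27 - 1  # ~134M, same modulus as above

def split_of(filename, val_pct=10, test_pct=10):
    base = re.sub(r"_nohash_.*$", "", os.path.basename(filename))
    digest = hashlib.sha1(base.encode()).hexdigest()
    pct = (int(digest, 16) % (MAX_NUM_WAVS_PER_CLASS + 1)) * (100.0 / MAX_NUM_WAVS_PER_CLASS)
    if pct < val_pct:
        return "validation"
    if pct < val_pct + test_pct:
        return "testing"
    return "training"

# Clips from the same speaker always share a split, whatever their index
print(split_of("yes/bobby_nohash_0.wav"), split_of("yes/bobby_nohash_1.wav"))
```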
# Feature Engineering
```
def mel_frequency_cepstral_coefficients(file_samples):
"""Converts wav samples into MFCC coefficients.
:return: numpy array of shape (num_frames, num_cep)"""
mfcc_feat = mfcc(file_samples, SAMPLE_RATE, winlen=0.01, numcep=50, nfilt=50)
fbank_feat = logfbank(file_samples, SAMPLE_RATE, winlen=0.01, nfilt=50).flatten()
return fbank_feat
def parallel_mfcc(samples):
# torch.Tensor doesn't seem to be thread-safe, so we have to pass to numpy and back
with Pool(processes=cpu_count()) as pool:
train_mfcc = pool.map(mel_frequency_cepstral_coefficients, (row.numpy() for row in samples))
return [torch.from_numpy(row).type(torch.float) for row in train_mfcc]
try:
# If we've already computed these, retrieve them from disk
    train_mfcc, train_label_ids = pickle.load(open("Data/train_mfcc.p", "rb"))
    validation_mfcc, validation_label_ids = pickle.load(open("Data/validation_mfcc.p", "rb"))
    test_mfcc, test_label_ids = pickle.load(open("Data/testing_mfcc.p", "rb"))
print("Retrieved MFCCs from disk")
if 'train_samples' in vars() or 'train_samples' in globals():
del train_samples
except FileNotFoundError:
print("MFCCs not found on disk, computing...")
# Otherwise, compute MFCCs and store them on disk
start = time()
train_mfcc = torch.stack(parallel_mfcc(train_samples))
print("Training data MFCC took {:.1f} s".format(time() - start))
start = time()
validation_mfcc = torch.stack(parallel_mfcc(validation_samples))
test_mfcc = torch.stack(parallel_mfcc(test_samples))
print("Validation and testing MFCC took {:.1f} s".format(time()-start))
pickle.dump((train_mfcc, train_label_ids),
open("Data/train_mfcc.p", "wb"))
pickle.dump((validation_mfcc, validation_label_ids),
open("Data/validation_mfcc.p", "wb"))
pickle.dump((test_mfcc, test_label_ids),
open("Data/testing_mfcc.p", "wb"))
```
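The `MFCC_SIZE = 5000` input dimension used below follows from the filterbank settings above: one second of 16 kHz audio framed with 0.01 s windows (and, assuming the library's default 0.01 s step) gives about 100 frames of 50 log-filterbank energies each. A quick check of that arithmetic:

```python
SAMPLE_RATE = 16000
winlen, winstep, nfilt = 0.01, 0.01, 50  # winstep is python_speech_features' default

# roughly one frame per 0.01 s step in a 1 s clip (the library pads the last frame)
num_frames = int(round(1.0 / winstep))
features = num_frames * nfilt
print(features)  # 5000, matching MFCC_SIZE
```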
# Neural Network Architecture
```
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# fully connected layers:
# 5000->50->50->11
self.fc1 = nn.Linear(MFCC_SIZE, 50)
self.fc2 = nn.Linear(50, 50)
self.fc3 = nn.Linear(50, len(ALL_LABELS))
self.dropout = nn.Dropout(p=0.1)
# torch.nn.init.xavier_uniform_(self.fc1.weight)
# torch.nn.init.xavier_uniform_(self.fc2.weight)
# torch.nn.init.xavier_uniform_(self.fc3.weight)
def forward(self, x):
x = self.dropout(F.leaky_relu(self.fc1(x)))
x = self.dropout(F.leaky_relu(self.fc2(x)))
        x = self.fc3(x)  # raw logits: no activation or dropout on the output layer
return x
def num_flat_features(self, x):
size = x.size()[1:] # all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
def accuracy(name, print_results=False):
# Check classification accuracy on the validation/testing set
correct = 0
total = 0
samples, label_ids = None, None
if name == 'training':
samples = train_mfcc
label_ids = train_label_ids
elif name == 'validation':
samples = validation_mfcc
label_ids = validation_label_ids
elif name == 'testing':
samples = test_mfcc
label_ids = test_label_ids
else:
assert False, "{} not supported in accuracy".format(name)
with torch.no_grad():
outputs = net(samples)
_, predicted = torch.max(outputs.data, 1)
total += label_ids.size()[0]
correct += (predicted == label_ids).sum().item()
percentage_correct = 100*correct/total
if print_results:
        # Random guessing here is 1/11 ~ 9.1%
print("Accuracy of the network on {} {} sound clips: {:.1f}%"
"".format(len(samples), name, percentage_correct))
return percentage_correct
# net = Net()
net = torch.load("Data/network_state")
net.eval()
print(net)
# Correct for class sizes in loss function
train_class_weights = [1/(train_label_ids == label).sum().item() for label in range(len(ALL_LABELS))]
train_class_weights = torch.Tensor(train_class_weights)
if USE_CUDA:
device = torch.device("cuda:0" if USE_CUDA else "cpu")
net.to(device)
train_class_weights = train_class_weights.cuda()
train_class_weights.to(device)
# Classification Cross-Entropy and RMSprop
criterion = nn.CrossEntropyLoss(weight=train_class_weights)
optimizer = optim.RMSprop(net.parameters(), lr=1e-8)
last_validation_accuracy = 0
start = time()
for epoch in range(500): # loop over the dataset multiple times
current_validation_accuracy = 0
running_loss = 0.0
for batch_number, i in enumerate(range(0, len(train_mfcc), BATCH_SIZE)):
# get the inputs
inputs, labels = train_mfcc[i:i+BATCH_SIZE], \
train_label_ids[i:i+BATCH_SIZE]
if USE_CUDA:
inputs, labels = inputs.to(device), labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
# if batch_number % 5000 == 4999: # print every 5K mini-batches
# print('[%d, %5d] loss: %.3f' %
# (epoch + 1, batch_number + 1, running_loss / 5000))
# running_loss = 0.0
if epoch % 100 == 0:
if USE_CUDA:
net.to("cpu")
with torch.no_grad():
current_training_accuracy = accuracy('training')
current_validation_accuracy = accuracy('validation')
print("Epoch {:>4}: loss: {:>10.5f}, "
"Training accuracy: {:>5.1f}%, "
"Validation accuracy: {:>5.1f}%, "
"".format(epoch + 1, running_loss, current_training_accuracy, current_validation_accuracy))
# if current_validation_accuracy < 60 or current_validation_accuracy > 0.9*last_validation_accuracy:
# last_validation_accuracy = current_validation_accuracy
# else:
# break
if USE_CUDA:
net.to(device)
# Shuffle training data
# idx = torch.randperm(train_mfcc.size()[0])
# train_mfcc = train_mfcc[idx]
# train_label_ids = train_label_ids[idx]
print('Finished Training in {:.2f} s'.format(time()-start))
if USE_CUDA:
net.to("cpu")
# Check final validation and test accuracies
_ = accuracy('training', print_results=True)
_ = accuracy('validation', print_results=True)
_ = accuracy('testing', print_results=True)
def print_confusion_matrix(inputs, ground_truth_labels):
"""Prints normalized confusion matrix with ground_truth columns and prediction rows."""
    confusion_matrix = np.zeros((len(ALL_LABELS), len(ALL_LABELS)), dtype=int)
outputs = net(inputs)
_, predicted_labels = torch.max(outputs, 1)
for ground_truth, prediction in zip(ground_truth_labels, predicted_labels):
confusion_matrix[prediction, ground_truth] += 1
np.set_printoptions(precision=2, suppress=True)
    confusion_matrix = confusion_matrix.astype(float)
for i in range(len(ALL_LABELS)):
confusion_matrix[:, i] = confusion_matrix[:, i] / (ground_truth_labels == i).sum().item()
for i in range(len(ALL_LABELS)):
print(confusion_matrix[i, :])
print("Validation set confusion matrix")
print_confusion_matrix(validation_mfcc, validation_label_ids)
print("\n\nTest set confusion matrix")
print_confusion_matrix(test_mfcc, test_label_ids)
# torch.save(net, "Data/network_state")
```
```
import numpy as np
import itertools
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from six import StringIO
import pydotplus
import matplotlib.image as mpimg
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
from IPython.display import Image
!pip install wget
!wget https://raw.githubusercontent.com/diogocortiz/Crash-Course-IA/master/ArvoreDecis%C3%A3o/dataset_einstein.csv
df = pd.read_csv('dataset_einstein.csv', delimiter=';')
print(df.head(5))
count_row = df.shape[0]  # number of rows
count_col = df.shape[1]  # number of columns
print(count_row)
print(count_col)
# Drop NaN rows
df = df.dropna()
print(df.head(5))
print('Number of fields (columns): ', df.shape[1])
print('Total records:', df.shape[0])
print('Total negative records: ', df[df['SARS-Cov-2 exam result'] =='negative'].shape[0])
print('Total positive records: ', df[df['SARS-Cov-2 exam result'] =='positive'].shape[0],"\n\n")
Y = df['SARS-Cov-2 exam result'].values
print(Y)
# X == FEATURES
# Take the training fields (Hemoglobin, Leukocytes, Basophils, Proteina C reativa mg/dL)
X = df[['Hemoglobin', 'Leukocytes', 'Basophils','Proteina C reativa mg/dL']].values
print("\n\n")
# Show X
print(X)
X_treino, X_teste, Y_treino, Y_teste = train_test_split(X, Y, test_size=0.2, random_state=3)
algortimo_arvore = DecisionTreeClassifier(criterion='entropy', max_depth=5)
modelo = algortimo_arvore.fit(X_treino, Y_treino)
print (modelo.feature_importances_)
nome_features = ['Hemoglobin', 'Leukocytes', 'Basophils','Proteina C reativa mg/dL']
nome_classes = modelo.classes_
# Build the tree image
dot_data = StringIO()
#dot_data = tree.export_graphviz(my_tree_one, out_file=None, feature_names=featureNames)
export_graphviz(modelo, out_file=dot_data, filled=True, feature_names=nome_features, class_names=nome_classes, rounded=True, special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
graph.write_png("arvore.png")
Image('arvore.png')
importances = modelo.feature_importances_
indices = np.argsort(importances)[::-1]
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. feature %d (%f)" % (f + 1, indices[f], importances[indices[f]]))
f, ax = plt.subplots(figsize=(11, 9))
plt.title("Feature ranking", fontsize = 20)
plt.bar(range(X.shape[1]), importances[indices],
color="b",
align="center")
plt.xticks(range(X.shape[1]), indices)
plt.xlim([-1, X.shape[1]])
plt.ylabel("importance", fontsize = 18)
plt.xlabel("index of the feature", fontsize = 18)
plt.show()
# Feature indices:
# 0 - 'Hemoglobin'
# 1 - 'Leukocytes'
# 2 - 'Basophils'
# 3 - 'Proteina C reativa mg/dL'
# Apply the model to the test set and store the result in Y_predicoes
Y_predicoes = modelo.predict(X_teste)
# MODEL EVALUATION
# Compare the true values in Y_teste with the predictions
print("TREE ACCURACY: ", accuracy_score(Y_teste, Y_predicoes))
print (classification_report(Y_teste, Y_predicoes))
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Matriz de Confusão Normalizada")
else:
        print('Confusion matrix without normalization')
print(cm)
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')
matrix_confusao = confusion_matrix(Y_teste, Y_predicoes)
plt.figure()
plot_confusion_matrix(matrix_confusao, classes=nome_classes,
title='Confusion matrix')
```
```
# default_exp utils
```
# utils
> Provides different util functions
```
#export
import json
from copy import deepcopy
import numpy as np
from PIL import Image
from icevision.core.mask import EncodedRLEs, MaskArray
from pycocotools import mask as mask_utils
```
## Test data setup
```
import icedata
from icevision.data.data_splitter import SingleSplitSplitter
test_data_path_instance_segmentation = icedata.pennfudan.load_data()
test_instance_segmentation_parser = icedata.pennfudan.parser(data_dir=test_data_path_instance_segmentation)
test_instance_segmentation_records = test_instance_segmentation_parser.parse(SingleSplitSplitter())[0]
test_instance_segmentation_class_map = test_instance_segmentation_records[0].detection.class_map
```
## Instance segmentation
```
#export
def erles_to_string(erles):
erles_copy = deepcopy(erles)
erles_copy["counts"] = erles_copy["counts"].decode("utf-8")
return json.dumps(erles_copy)
#hide
test_erles = test_instance_segmentation_records[0].as_dict()["detection"]["masks"][0].to_erles(None, None).erles
test_string_erles = erles_to_string(test_erles[0])
assert test_string_erles == '{"size": [536, 559], "counts": "ecc22g`00O2O0O1O100O1O00100O001O10O01O1O0010OO2N2M2O2M2O2M2O2M2O2N1N3N1N3N1N3N0O01O01O00000O1000000O2O001`NbNfC^1Z<dNcC^1\\\\<eNaC[1_<gN_CZ1`<iN\\\\CX1e<iNXCY1h<iNUCX1o9cNhG6VNX1l9mNjGLXNX1n9oNhGHXNZ1Q:ROcGDZN\\\\1R:SObGBZN[1U:UO_G@ZN]1W:TO_G^OYN^1Y:UO]G]OXN`1[:UO[G[OXNa1^:UOYG[OWNa1`:VOXGXOVNc1c:VOVGWOUNd1g:UOSGWOTNf1j:SOQGWOTNf1m:SOoF[1S9dNhF`1Z9`NVFo1k9QNhEZ2Z:iMVEb2l:d11N2N2O1N3M3N2M3M3N2M2O200YKbDS4R<01O1O10O4L3N3L3M4M2M4ZE\\\\Ko8d4PG^Ko8b4PG^KR9`4nF`KS9_4lFbKU9]4kFcKX9Z4hFeK]9W4cFiKb9V4ZFjKj9V4QFkKT:T4gEmK]:h1jD6d0SNh:a1hD<:UNX;T1bDh01UNf;i0^DR1GVNU<>WD]1_OVNc<3SDU2W<`MmC]2Y=N3M2N3M2N3M2N3M2N3M3M3M3M3M3M2N3L4M3M3M3M6J5K6J6J^SV4"}'
#export
def erles_to_counts_to_utf8(erles):
erles_copy = deepcopy(erles)
for entry in erles_copy:
entry["counts"] = entry["counts"].decode("utf-8")
return erles_copy
#hide
test_erles_with_utf_8_counts = erles_to_counts_to_utf8(test_erles)
for erles in test_erles_with_utf_8_counts:
assert isinstance(erles["counts"], str)
#export
def string_to_erles(erles_string):
erles = json.loads(erles_string)
erles["counts"] = erles["counts"].encode()
return erles
#hide
erles_string = json.dumps(erles_to_counts_to_utf8(test_erles)[0])
test_erles_from_string = string_to_erles(erles_string)
assert isinstance(test_erles_from_string["counts"], bytes)
#export
def correct_mask(mask_array, pad_x, pad_y, width, height):
# correct mask
corrected_mask_array = mask_array.transpose(2, 0, 1)
if round(pad_x/2) > 0:
corrected_mask_array=corrected_mask_array[:,:,round(pad_x/2):round(-pad_x/2)]
if round(pad_y/2) > 0:
corrected_mask_array=corrected_mask_array[:,round(pad_y/2):round(-pad_y/2),:]
corrected_mask_array = np.array(Image.fromarray(corrected_mask_array[0,:,:]).resize([width, height], Image.NEAREST))
corrected_mask_array = np.expand_dims(corrected_mask_array, 0)
# convert mask array to mask and get erles (only one erles exist!)
corrected_mask = MaskArray(corrected_mask_array)
return corrected_mask
#hide
test_mask = np.zeros([1, 30, 30])
test_mask_corrected = correct_mask(test_mask, 10, 0, 30, 20)
assert test_mask_corrected.data.shape == (1, 20, 30)
#export
def decorrect_mask(mask_array, pad_x, pad_y, width, height):
corrected_mask_array = mask_array.transpose(2, 0, 1)
# resize
corrected_mask_array = np.array(Image.fromarray(corrected_mask_array[0,:,:]).resize([width, height], Image.NEAREST))
corrected_mask_array = np.expand_dims(corrected_mask_array, 0)
# pad
corrected_mask_array = np.pad(corrected_mask_array, [[0,0], [pad_y, pad_y], [pad_x, pad_x],])
corrected_mask = MaskArray(corrected_mask_array)
return corrected_mask
test_mask = np.ones([1, 10,10])
test_mask_decorrected = decorrect_mask(test_mask, 1, 2, 5, 5)
assert test_mask_decorrected.data.shape == (1, 9, 7)
```
```
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# import evaluation metrics and helpers
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, accuracy_score
from sklearn.metrics import precision_score, recall_score
from sklearn.metrics import f1_score, matthews_corrcoef
from sklearn.metrics import confusion_matrix
from sklearn.preprocessing import RobustScaler
from google.colab import drive
drive.mount('/content/drive/')
p = np.linspace(0.01, 1.0, 50)
# print (p)
logp = -np.log(p)
ones = np.ones(len(p))
def fterm(gamma):
return (ones - p)**(gamma)
fl2 = fterm(2) * logp
fl0 = fterm(0) * logp
fl5 = fterm(5) * logp
fl0d5 = fterm(0.5) * logp
fig=plt.figure(figsize=(9, 7))
plt.plot(p, logp, 'b-', label='CE Loss')
plt.plot(p, fl0d5, linestyle='-.', color='magenta', label=r'Focal Loss: $\gamma=0.5$')
plt.plot(p, fl2, 'r.', label=r'Focal Loss: $\gamma=2$')
plt.plot(p, fl5, linestyle='--', color='orange', label=r'Focal Loss: $\gamma=5$')
# plt.text(0.1, 4, )
plt.legend(fontsize=12)
plt.xlabel('Probability of Ground Truth Class', fontsize=12)
plt.ylabel('Loss', fontsize=12)
# print (fterm(2))
# plt.savefig('/content/drive/My Drive/Colab Notebooks/Focal_Loss.png', dpi=200)
credit_df = pd.read_csv('/content/drive/My Drive/Colab Notebooks/creditcard.csv', sep=',')
credit_df.head(3)
print ('dataframe shape: ', credit_df.shape)
# check the label distribution
sns.countplot(x='Class', data=credit_df, palette='hls')
plt.xlabel('Label', fontsize=12)
plt.ylabel('Counts', fontsize=12)
plt.show()
```
Here we see extremely imbalanced data in action: fraudulent cases are a negligible fraction of the total, which is exactly why they are so difficult to identify.
```
print ('Cases without frauds: ', len(credit_df[credit_df['Class']==0]))
print ('Cases with frauds: ', len(credit_df[credit_df['Class']==1]))
print ('check for null values in each column: ', credit_df.isna().sum())
```
No null values in the data frame!
Looking at the values in the respective columns, _Amount_ and _Time_ are not scaled while the other columns are. To include _Amount_ and _Time_ as features, we therefore need to scale them too.
Here we can use StandardScaler/RobustScaler to scale these features.
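As a quick intuition for that choice: `RobustScaler` centers each column on its median and scales by the interquartile range (the 25th–75th percentile range by default), so a handful of extreme transaction amounts cannot dominate the scale. The equivalent transform in plain numpy (an illustrative sketch, not the notebook's pipeline):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 1000.0])  # one extreme outlier, like a huge transaction
median = np.median(x)
q1, q3 = np.percentile(x, [25, 75])
x_robust = (x - median) / (q3 - q1)  # what RobustScaler does per column by default
print(x_robust)  # the ordinary values stay in a small range despite the outlier
```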
```
## Scale Amount and Time
robust_scaler = RobustScaler()
credit_df[['Amount', 'Time']] = robust_scaler.fit_transform(credit_df[['Amount', 'Time']])
credit_df.head(3)
X_labels = credit_df.drop(['Class'], axis=1)
y_labels = credit_df['Class']
print ('x shape and type: ', X_labels.shape, type(X_labels))
print ('y shape: ', y_labels.shape)
X_labels = X_labels.to_numpy(dtype=np.float64)
y_labels = y_labels.to_numpy(dtype=np.float64)
print ('x shape and type: ', X_labels.shape, type(X_labels))
print ('check few yvals: ', y_labels[0:3])
y_lab_cat = tf.keras.utils.to_categorical(y_labels, num_classes=2, dtype='float32')
print ('categorical y_lab: ', y_lab_cat[0:3])
x_train, x_test, y_train, y_test = train_test_split(X_labels, y_lab_cat, test_size=0.3,
stratify=y_lab_cat, shuffle=True)
# stratify ensure same proportion of class labels
print('train data shape and type: ', x_train.shape, type(x_train))
print ('test data shape: ', x_test.shape)
print ('check few y vals: ', y_train[0:3])
from tensorflow.keras.layers import Dense, Input, Activation, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import activations
from tensorflow import keras
from keras import backend as K
def simple_model():
input_data = Input(shape=(x_train.shape[1], ))
x = Dense(64)(input_data)
x = Activation(activations.relu)(x)
x = Dense(32)(x)
x = Activation(activations.relu)(x)
x = Dense(2)(x)
x = Activation(activations.softmax)(x)
model = Model(inputs=input_data, outputs=x, name='Simple_Model')
return model
def simple_model1():
input_data = Input(shape=(x_train.shape[1], ))
x = Dense(64)(input_data)
x = Activation(activations.relu)(x)
x = Dense(32)(x)
x = Activation(activations.relu)(x)
x = Dense(2)(x)
x = Activation(activations.softmax)(x)
model = Model(inputs=input_data, outputs=x, name='Simple_Model')
return model
simple_model = simple_model()
simple_model.summary()
# simple_model.compile(optimizer=Adam(learning_rate=1e-2), loss='sparse_categorical_crossentropy',
# metrics=['acc'])
simple_model.compile(optimizer=Adam(learning_rate=5e-3), loss='categorical_crossentropy',
metrics=['acc'])
simple_model.fit(x_train, y_train, validation_split=0.2, epochs=5, shuffle=True, batch_size=256)
def conf_matrix(predictions, class_types, y_test_argmax=False):
''' Plots conf. matrix and classification report '''
if y_test_argmax==True:
cm=confusion_matrix(np.argmax(y_test, axis=1), np.argmax(np.round(predictions), axis=1))
cr=classification_report(np.argmax(y_test, axis=1),
np.argmax(np.round(predictions), axis=1),
target_names=[class_types[i] for i in range(len(class_types))])
print(cr)
else:
cm=confusion_matrix(y_test, np.argmax(np.round(predictions), axis=1))
print("Classification Report:\n")
cr=classification_report(y_test,
np.argmax(np.round(predictions), axis=1),
target_names=[class_types[i] for i in range(len(class_types))])
print(cr)
plt.figure(figsize=(6,6))
sns_hmp = sns.heatmap(cm, annot=True, xticklabels = [class_types[i] for i in range(len(class_types))],
yticklabels = [class_types[i] for i in range(len(class_types))], fmt="d", cmap="YlGnBu")
plt.xlabel('Pred Class')
plt.ylabel('True Class')
fig = sns_hmp.get_figure()
# fig.savefig('/content/gdrive/My Drive/Colab Notebooks/resnet/heatmap.png', dpi=250)
y_pred = simple_model.predict(x_test, batch_size=64)
# y_pred_argmax = np.argmax(y_pred, axis=1)
conf_matrix(y_pred, ['Real', 'Fraud'], y_test_argmax=True)
print ('check first 3 y pred: ', y_pred[0:3])
print ('check first 3 y_test: ', y_test[0:3])
print ('check argmax of pred:', np.argmax(np.round(y_pred[0:3]), axis=1))
```
### Focal Loss:
We will see that cross-entropy loss is a special case of Focal loss.
Definition :
$\text{FL}(p_t) = - \alpha _t \, \left(1-p_t\right)^{\gamma}\, \text{log}(p_t)$
$$\begin{equation}p_t = \begin{cases}p\, ; & \text{if}\, \, y = 1 \\ 1-p\, ; & \text{otherwise} \end{cases} \end{equation}$$
When $\gamma = 0$ and $\alpha _t = 1$, FL reduces to the cross-entropy loss.
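The definition above is easy to verify numerically. A minimal numpy implementation of the binary focal loss, treating $\alpha_t$ as a constant for simplicity (an illustrative sketch; the notebook itself uses TensorFlow implementations below):

```python
import numpy as np

def focal_loss(p, y, alpha=1.0, gamma=2.0):
    """Binary focal loss FL(p_t) = -alpha * (1 - p_t)**gamma * log(p_t)."""
    p_t = np.where(y == 1, p, 1 - p)  # probability assigned to the true class
    return -alpha * (1 - p_t) ** gamma * np.log(p_t)

p = np.array([0.90, 0.95, 0.20])  # predicted probability of class 1
y = np.array([1, 1, 1])           # all true labels are 1
# With gamma = 0 and alpha = 1, focal loss reduces to plain cross-entropy:
print(np.allclose(focal_loss(p, y, gamma=0.0), -np.log(p)))  # True
```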
```
## how does it affect ?
y_example_true = [1.0]
y_example_pred = [0.90]
y_example_pred0 = [0.95]
y_example_pred1 = [0.20]
binary_ce = tf.keras.losses.BinaryCrossentropy()
loss_close = binary_ce(y_example_true, y_example_pred).numpy()
loss_veryclose = binary_ce(y_example_true, y_example_pred0).numpy()
loss_far = binary_ce(y_example_true, y_example_pred1).numpy()
print ('CE loss when pred is close to true: ', loss_close)
print ('CE loss when pred is very close to true: ', loss_veryclose)
print ('CE loss when pred is far from true: ', loss_far)
focal_factor_close = (1.0-0.90)**2 ## take gamma = 2, as in the paper
focal_factor_veryclose = (1.0-0.95)**2
focal_factor_far = (1.0-0.20)**2
print ('\n')
print ('focal loss when pred is close to true: ', loss_close*focal_factor_close)
print ('focal loss when pred is very close to true: ', loss_veryclose*focal_factor_veryclose)
print ('focal loss when pred is far from true: ', loss_far*focal_factor_far)
```
The loss is penalized far more heavily when the prediction is far from the truth, i.e. training focuses on the wrongly classified samples.
```
import tensorflow_addons as tfa
fl_tensorflow = tfa.losses.SigmoidFocalCrossEntropy(alpha=1.0, gamma=2.0)
fl_close = fl_tensorflow(y_example_true, y_example_pred).numpy()
fl_far = fl_tensorflow(y_example_true, y_example_pred1).numpy()
print ('check values: ', fl_close, fl_far)
```
Custom loss function
```
def focal_loss_custom(alpha, gamma):
def binary_focal_loss(y_true, y_pred):
fl = tfa.losses.SigmoidFocalCrossEntropy(alpha=alpha, gamma=gamma)
# pred_argmax = np.argmax(np.round(y_pred), axis=1)
# pred_argmax_tensor = tf.convert_to_tensor(pred_argmax, dtype=tf.float32)
# y_true_tensor = tf.convert_to_tensor(y_true, dtype=tf.float32)
y_true_K = K.ones_like(y_true)
focal_loss = fl(y_true, y_pred)
# focal_loss = tf.reduce_mean(focal_loss)
return focal_loss
return binary_focal_loss
simple_model1 = simple_model1()
simple_model1.compile(optimizer=Adam(learning_rate=5e-3), loss=focal_loss_custom(alpha=0.2, gamma=2.0), metrics=['acc'])
simple_model1.fit(x_train, y_train, validation_split=0.2, epochs=5, shuffle=True, batch_size=256)
y_pred = simple_model1.predict(x_test, batch_size=32)
confusion_matrix(np.argmax(y_test, axis=1), np.argmax(np.round(y_pred), axis=1))
conf_matrix(y_pred, ['Real', 'Fraud'], y_test_argmax=True)
print ('check first 3 ypred: ', y_pred[0:4])
print ('check first 3 ytest: ', y_test[0:4])
print (np.argmax(np.round(y_pred[0:4]), axis=1))
print (tf.cast(y_test[0:4], dtype=tf.float32))
print (K.ones_like(y_test[0:4]))
```
<hr style="height:2px;">
# Demo: Neural network training for joint denoising and surface projection of *Drosophila melanogaster* wing
This notebook demonstrates training a CARE model for a 3D → 2D denoising+projection task, assuming that training data was already generated via [1_datagen.ipynb](1_datagen.ipynb) and has been saved to disk to the file ``data/my_training_data.npz``.
Note that training a neural network for actual use should be done on more (representative) data and with more training time.
More documentation is available at http://csbdeep.bioimagecomputing.com/doc/.
```
from __future__ import print_function, unicode_literals, absolute_import, division
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
from tifffile import imread
from csbdeep.utils import axes_dict, plot_some, plot_history
from csbdeep.utils.tf import limit_gpu_memory
from csbdeep.io import load_training_data
from csbdeep.models import Config, ProjectionCARE
```
The TensorFlow backend uses all available GPU memory by default, hence it can be useful to limit it:
```
# limit_gpu_memory(fraction=1/2)
```
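With TensorFlow 2, a similar effect can be achieved through the standard memory-growth setting (a sketch; only relevant if your installation uses TF2 rather than the TF1-era `limit_gpu_memory` helper):

```python
import tensorflow as tf

# allocate GPU memory on demand instead of grabbing it all at start-up;
# must run before any GPU has been initialized
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```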
<hr style="height:2px;">
# Training data
Load training data generated via [1_datagen.ipynb](1_datagen.ipynb), use 10% as validation data.
```
(X,Y), (X_val,Y_val), axes = load_training_data('data/my_training_data.npz', validation_split=0.1, verbose=True)
c = axes_dict(axes)['C']
n_channel_in, n_channel_out = X.shape[c], Y.shape[c]
plt.figure(figsize=(12,5))
plot_some(X_val[:5],Y_val[:5])
plt.suptitle('5 example validation patches (top row: source, bottom row: target)');
```
<hr style="height:2px;">
# CARE model
Before we construct the actual CARE model, we have to define its configuration via a `Config` object, which includes
* parameters of the underlying neural network,
* the learning rate,
* the number of parameter updates per epoch,
* the loss function, and
* whether the model is probabilistic or not.
The defaults should be sensible in many cases, so a change should only be necessary if the training process fails.
---
<span style="color:red;font-weight:bold;">Important</span>: Note that for this notebook we use a very small number of update steps per epoch for immediate feedback, whereas this number should be increased considerably (e.g. `train_steps_per_epoch=400`) to obtain a well-trained model.
```
config = Config(axes, n_channel_in, n_channel_out, unet_n_depth=3, train_batch_size=8, train_steps_per_epoch=20)
print(config)
vars(config)
```
We now create a CARE model with the chosen configuration:
```
model = ProjectionCARE(config, 'my_model', basedir='models')
```
Note that there are additional parameters for the projection part of the CARE model. If you need to change them, you can do so by specifying them with the prefix `proj_` when creating the `Config` above. For example, use `proj_n_filt = 16` to change the parameter `n_filt` of the `ProjectionParameters` shown below.
```
model.proj_params
```
<hr style="height:2px;">
# Training
Training the model will likely take some time. We recommend to monitor the progress with [TensorBoard](https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard) (example below), which allows you to inspect the losses during training.
Furthermore, you can look at the predictions for some of the validation images, which can be helpful to recognize problems early on.
You can start TensorBoard from the current working directory with `tensorboard --logdir=.`
Then connect to [http://localhost:6006/](http://localhost:6006/) with your browser.

```
history = model.train(X,Y, validation_data=(X_val,Y_val))
```
Plot final training history (available in TensorBoard during training):
```
print(sorted(list(history.history.keys())))
plt.figure(figsize=(16,5))
plot_history(history,['loss','val_loss'],['mse','val_mse','mae','val_mae']);
```
<hr style="height:2px;">
# Evaluation
Example results for validation images.
```
plt.figure(figsize=(12,7))
_P = model.keras_model.predict(X_val[:5])
if config.probabilistic:
_P = _P[...,:(_P.shape[-1]//2)]
plot_some(X_val[:5],Y_val[:5],_P,pmax=99.5)
plt.suptitle('5 example validation patches\n'
'top row: input (source), '
'middle row: target (ground truth), '
'bottom row: predicted from source');
```
<hr style="height:2px;">
# Export model to be used with CSBDeep **Fiji** plugins and **KNIME** workflows
See https://github.com/CSBDeep/CSBDeep_website/wiki/Your-Model-in-Fiji for details.
```
model.export_TF()
```
# ReGraph tutorial (NetworkX backend)
## Part 1: Rewriting simple graph with attributes
This notebook consists of simple examples of usage of the ReGraph library.
```
from regraph import NXGraph, Rule
from regraph import plot_graph, plot_instance, plot_rule
%matplotlib inline
```
### 1. Creating and modifying a graph object
ReGraph implements a wrapper around NetworkX directed graph objects (`nx.DiGraph`) through the `NXGraph` class.
```
# Create an empty graph object
graph = NXGraph()
# Add a list of nodes, optionally with attributes
graph.add_nodes_from(
[
'Alice',
('Bob', {'age': 15, 'gender': 'male'}),
('Jane', {'age': 40, 'gender': 'female'}),
('Eric', {'age': 55, 'gender': 'male'})
])
# Add a list of edges, optionally with attributes
graph.add_edges_from([
("Alice", "Bob"),
("Jane", "Bob", {"type": "parent", "since": 1993}),
("Eric", "Jane", {"type": "friend", "since": 1985}),
("Eric", "Alice", {"type": "parent", "since": 1992}),
])
# Print a list of nodes and edges with data attached to them
print("List of nodes: ")
for n, attrs in graph.nodes(data=True):
print("\t", n, attrs)
print("List of edges: ")
for s, t, attrs in graph.edges(data=True):
print("\t{}->{}".format(s, t), attrs)
# Add individual nodes and edges
graph.add_node('Sandra', {'age': 45, 'gender': 'female'})
graph.add_edge("Sandra", "Eric", {"type": "spouse", "since": 1990})
graph.add_edge("Eric", "Sandra", {"type": "spouse", "since": 1990})
graph.add_edge("Sandra", "Alice", {"type": "parent", "since": 1992})
# Add node and edge attributes
graph.add_node_attrs("Alice", {"age": 18, "gender": "female"})
graph.add_edge_attrs("Alice", "Bob", {"type": "friend", "since": 2004})
# Get attributes of nodes and edges
print("New Alice attibutes: ", graph.get_node("Alice"))
print("New Alice->Bob attributes: ", graph.get_edge("Alice", "Bob"))
```
Note that the attributes of the nodes/edges are converted to `regraph.attribute_sets.FiniteSet` objects. See the tutorial on advanced attribute values (`Tutorial_advanced_attributes.ipynb`) for more details on the underlying data structures.
```
for k, v in graph.get_node("Alice").items():
print(k, ": ", v, ", type: ", type(v))
```
ReGraph provides some utils for plotting NetworkX-based graphs
```
positioning = plot_graph(graph)
```
Graph objects can be dumped to dictionaries following the JSON format (note how the attribute values are encoded).
```
graph.to_json()
```
### 2. Finding graph patterns
```
# Initialize a pattern graph
pattern = NXGraph()
pattern.add_nodes_from(["x", "y", "z"])
pattern.add_edges_from([
("x", "y"),
("z", "y")
])
# Find matchings of the pattern in the graph
instances = graph.find_matching(pattern)
print(instances)
```
We can equip pattern nodes and edges with attributes, then ReGraph will look for all subgraphs matching to the structure of the pattern _and_ whose elements contain respective attributes.
```
pattern.add_edge_attrs("x", "y", {"type": "parent"})
pattern.add_edge_attrs("z", "y", {"type": "parent"})
instances = graph.find_matching(pattern)
print(instances)
```
We can plot matchings inside the graph using `plot_instance`.
```
print("Instances:")
for instance in instances:
print(instance)
plot_instance(graph, pattern, instance, parent_pos=positioning) #filename=("instance_example_%d.png" % i))
```
### 3. Rewriting graph objects
ReGraph implements the rewriting technique called _Sesqui-pushout rewriting_, which transforms graphs by applying _rules_ through their instances (matchings). It can express the following graph transformations:
- node cloning,
- node/edge removal,
- node/edge attributes removal,
- node merging,
- node/edge addition,
- node/edge attribute addition.
A rewriting rule is a span $LHS \leftarrow P \rightarrow RHS$, where $LHS$ is a graph that represents the left-hand side of the rule -- a pattern that is matched inside the input graph -- and $P$ is a graph that represents the interface of the rule: together with the homomorphism $LHS \leftarrow P$, it specifies the nodes and edges that are preserved in the course of applying the rule. $RHS$, together with the homomorphism $P \rightarrow RHS$, specifies the nodes and edges that are added. In addition, if two nodes $n^P_1, n^P_2$ of $P$ map to the same node $n^{LHS}$ in $LHS$, then $n^{LHS}$ is cloned during rewriting. Symmetrically, if two nodes $n^P_1$ and $n^P_2$ of $P$ map to the same node $n^{RHS}$ in $RHS$, then $n^P_1$ and $n^P_2$ are merged.
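As a plain-Python intuition for the cloning part (purely illustrative, independent of ReGraph's actual data structures): cloning a node duplicates it together with all of its incident edges:

```python
def clone_node(nodes, edges, node, clone_id):
    """Toy sketch: clone `node` as `clone_id`, copying every incident edge."""
    new_nodes = set(nodes) | {clone_id}
    new_edges = set(edges)
    for s, t in edges:
        if s == node:
            new_edges.add((clone_id, t))  # copy outgoing edges
        if t == node:
            new_edges.add((s, clone_id))  # copy incoming edges
    return new_nodes, new_edges

nodes, edges = clone_node({"x", "y"}, {("x", "y")}, "x", "x1")
print(sorted(edges))  # [('x', 'y'), ('x1', 'y')]
```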
To rewrite the graph, we first create a rewriting rule (see `Tutorial_rules.ipynb` for more examples of rules and the means ReGraph provides for creating them). A data structure for rewriting rules is implemented in the class `regraph.rules.Rule`. Here, we will use the created pattern to initialize a rule. ReGraph implements the util `plot_rule` for rule visualization.
```
rule = Rule.from_transform(pattern)
rule.inject_add_edge("y", "x", {"type": "child_of"})
rule.inject_add_edge("y", "z", {"type": "child_of"})
plot_rule(rule)
```
Graph rewriting can be performed with the `rewrite` method of `NXGraph`. It takes as an input a rule and an instance of this rule. Rewriting is performed in-place, the provided graph object is modified and a dictionary corresponding to the $RHS$ matching in the rewritten graph ($RHS \rightarrowtail G'$) is returned.
```
# Back-up the graph
graph_backup = NXGraph.copy(graph)
# Rewrite using the first instances
rhs_graph = graph.rewrite(rule, instances[0])
# Plot old instances in the backed-up graph
plot_instance(graph_backup, rule.lhs, instances[0], parent_pos=positioning)
# Plot RHS instance in the transformed graph
new_pos = plot_instance(graph, rule.rhs, rhs_graph, parent_pos=positioning)
```
Let us consider another example of a rewriting rule
```
# Create a pattern
pattern = NXGraph()
pattern.add_nodes_from(["x", "y"])
pattern.add_edge("x", "y", {"type": "parent"})
# Initialize a rule that clones `x`; note that the variable `rhs_clone_id`
# corresponds to the ID of the newly produced clone in the RHS of the rule
rule = Rule.from_transform(pattern)
_, rhs_clone_id = rule.inject_clone_node("x")
rule.inject_add_edge("x", rhs_clone_id, {"type": "spouse"})
rule.inject_add_edge(rhs_clone_id, "x", {"type": "spouse"})
plot_rule(rule)
# Find matching in the graph
instances = graph.find_matching(rule.lhs)
print(instances)
# Let us fix an instance
instance = {'x': 'Jane', 'y': 'Bob'}
new_pos = plot_instance(graph, rule.lhs, instance, parent_pos=new_pos)
rhs_graph = graph.rewrite(rule, instance)
new_pos = plot_instance(graph, rule.rhs, rhs_graph, parent_pos=new_pos)
```
###### Reference:
https://finthon.com/learn-cnn-two-tfrecord-read-data/
https://finthon.com/learn-cnn-three-resnet-prediction/
# 匯入圖片資料並輸出成tfrecord檔案
```
import os
from PIL import Image
import tensorflow as tf
'''
Path setup
# Place the directories of images to classify under the working directory,
# one folder per class, named with an integer label
'''
# image directory; both label groups live under it
cwd = r"./OM/"
# path where the tfrecord files are saved
file_path = r"./"
# number of images stored per tfrecord file
bestnum = 10000
# index of the current image
num = 0
# index of the current TFRecord file
recordfilenum = 0
# collect the labels into classes
classes = []
for i in os.listdir(cwd):
    classes.append(i)
# tfrecords file name
ftrecordfilename = ("traindata_63.tfrecords-%.3d" % recordfilenum)
writer = tf.python_io.TFRecordWriter(os.path.join(file_path, ftrecordfilename))
'''
tfrecord creation loop
'''
for index, name in enumerate(classes):
class_path = os.path.join(cwd, name)
for img_name in os.listdir(class_path):
num = num + 1
if num > bestnum:
num = 1
recordfilenum += 1
ftrecordfilename = ("traindata_63.tfrecords-%.3d" % recordfilenum)
writer = tf.python_io.TFRecordWriter(os.path.join(file_path, ftrecordfilename))
img_path = os.path.join(class_path, img_name) # path of each image
img = Image.open(img_path, 'r')
img_raw = img.tobytes() # convert the image to raw bytes
example = tf.train.Example(
features=tf.train.Features(feature={
'label': tf.train.Feature(int64_list=tf.train.Int64List(value=[index])),
'img_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[img_raw])),
}))
writer.write(example.SerializeToString()) # serialize to a string
writer.close()
```
# 自tfrecord輸入資料集
```
'''
tfrecord input function
'''
import tensorflow as tf
def read_and_decode_tfrecord(filename):
filename_deque = tf.train.string_input_producer(filename)
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_deque)
features = tf.parse_single_example(serialized_example, features={
'label': tf.FixedLenFeature([], tf.int64),
'img_raw': tf.FixedLenFeature([], tf.string)})
label = tf.cast(features['label'], tf.int32)
img = tf.decode_raw(features['img_raw'], tf.uint8)
img = tf.reshape(img, [640, 480, 3])
img = tf.cast(img, tf.float32) / 255.0
return img, label
```
# 訓練CNN模型
```
import tensorflow as tf
import tensorflow.contrib.slim.nets as nets
'''
Define the model save path; setting batch_size a bit smaller gives better training results.
Put the tfrecord files from the current directory into a list:
# tf.train.shuffle_batch: randomly shuffles the order of the data in the queue
# num_threads: the number of reader threads
# capacity: the queue capacity, set here to 10000
# min_after_dequeue: the minimum amount of data kept in the queue, which also
controls the degree of shuffling. Setting it to 9900 means that once 100 items
have been dequeued (leaving 9900), 100 new items are loaded and the queue is
reshuffled.
To read the data into the queue in order instead, switch to the tf.train.batch
function and remove the min_after_dequeue argument.
All of these parameters should be tuned to your own machine.
'''
save_dir = r"./train_image_63.model" # model save path
batch_size_ = 2
lr = tf.Variable(0.0001, dtype=tf.float32) # learning rate
x = tf.placeholder(tf.float32, [None, 640, 480, 3]) # images are 640*480*3
y_ = tf.placeholder(tf.float32, [None])
'''
train_list = ['traindata_63.tfrecords-000', 'traindata_63.tfrecords-001', 'traindata_63.tfrecords-002',
'traindata_63.tfrecords-003', 'traindata_63.tfrecords-004', 'traindata_63.tfrecords-005',
'traindata_63.tfrecords-006', 'traindata_63.tfrecords-007', 'traindata_63.tfrecords-008',
'traindata_63.tfrecords-009', 'traindata_63.tfrecords-010', 'traindata_63.tfrecords-011',
'traindata_63.tfrecords-012', 'traindata_63.tfrecords-013', 'traindata_63.tfrecords-014',
'traindata_63.tfrecords-015', 'traindata_63.tfrecords-016', 'traindata_63.tfrecords-017',
'traindata_63.tfrecords-018', 'traindata_63.tfrecords-019', 'traindata_63.tfrecords-020',
'traindata_63.tfrecords-021'] # all of the generated tfrecord files, each holding at most 10000 images
'''
train_list = ['traindata_63.tfrecords-000']
# randomly shuffle the order
img, label = read_and_decode_tfrecord(train_list)
img_batch, label_batch = tf.train.shuffle_batch([img, label], num_threads=2, batch_size=batch_size_, capacity=10000,
min_after_dequeue=9900)
'''
Next, one-hot encode the label values by calling tf.one_hot directly.
Since there are 2 classes here, depth is set to 2:
'''
# one-hot encode the label values
one_hot_labels = tf.one_hot(indices=tf.cast(y_, tf.int32), depth=2)
pred, end_points = nets.resnet_v2.resnet_v2_50(x, num_classes=2, is_training=True)
pred = tf.reshape(pred, shape=[-1, 2])
'''
# nets.resnet_v2.resnet_v2_50: calls the ResNet-50 network directly
# num_classes equals the total number of classes
# is_training: whether to retrain the parameters of the network's fixed layers;
True retrains all parameters, False trains only the last few layers.
With the network in place, we define the loss function and the optimizer:
sigmoid cross-entropy for the loss and Adam as the optimizer.
'''
# define the loss function and the optimizer
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=pred, labels=one_hot_labels))
optimizer = tf.train.AdamOptimizer(learning_rate=lr).minimize(loss)
# accuracy
a = tf.argmax(pred, 1)
b = tf.argmax(one_hot_labels, 1)
correct_pred = tf.equal(a, b)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# create a coordinator to manage the threads
coord = tf.train.Coordinator()
# start the QueueRunners; the filename queue is now being filled
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
i = 0
while True:
i += 1
b_image, b_label = sess.run([img_batch, label_batch])
_, loss_, y_t, y_p, a_, b_ = sess.run([optimizer, loss, one_hot_labels, pred, a, b], feed_dict={x: b_image,
y_: b_label})
print('step: {}, train_loss: {}'.format(i, loss_))
if i % 20 == 0:
_loss, acc_train = sess.run([loss, accuracy], feed_dict={x: b_image, y_: b_label})
print('--------------------------------------------------------')
print('step: {} train_acc: {} loss: {}'.format(i, acc_train, _loss))
print('--------------------------------------------------------')
if i == 200:
saver.save(sess, save_dir, global_step=i)
#elif i == 300000:
# saver.save(sess, save_dir, global_step=i)
#elif i == 400000:
# saver.save(sess, save_dir, global_step=i)
break
coord.request_stop()
# this call returns only after all other threads have shut down
coord.join(threads)
```
# 使用模型進行預測
```
import tensorflow as tf
import tensorflow.contrib.slim.nets as nets
from PIL import Image
import os
test_dir = r'./test' # original test folder holding the images to predict
model_dir = r'./train_image_63.model-300000' # model checkpoint path
test_txt_dir = r'./test.txt' # original test.txt file
result_dir = r'./result.txt' # output file for the results
x = tf.placeholder(tf.float32, [None, 640, 480, 3])
'''
classes = ['1', '10', '100', '11', '12', '13', '14', '15', '16', '17', '18', '19', '2', '20', '21', '22', '23', '24',
'25', '26', '27', '28', '29', '3', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '4', '40',
'41', '42', '43', '44', '45', '46', '47', '48', '49', '5', '50', '51', '52', '53', '54', '55', '56', '57',
'58', '59', '6', '60', '61', '62', '63', '64', '65', '66', '67', '68', '69', '7', '70', '71', '72', '73',
'74', '75', '76', '77', '78', '79', '8', '80', '81', '82', '83', '84', '85', '86', '87', '88', '89', '9',
'90', '91', '92', '93', '94', '95', '96', '97', '98', '99'] # label order
'''
classes = ['0', '1'] # label order
pred, end_points = nets.resnet_v2.resnet_v2_50(x, num_classes=2, is_training=True)
pred = tf.reshape(pred, shape=[-1, 2])
a = tf.argmax(pred, 1)
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver.restore(sess, model_dir)
with open(test_txt_dir, 'r') as f:
data = f.readlines()
for i in data:
test_name = i.split()[0]
for pic in os.listdir(test_dir):
if pic == test_name:
img_path = os.path.join(test_dir, pic)
img = Image.open(img_path)
img = img.resize((640, 480))
img = tf.reshape(img, [1, 640, 480, 3])
img1 = tf.reshape(img, [1, 640, 480, 3])
img = tf.cast(img, tf.float32) / 255.0
b_image, b_image_raw = sess.run([img, img1])
t_label = sess.run(a, feed_dict={x: b_image})
index_ = t_label[0]
predict = classes[index_]
with open(result_dir, 'a') as f1:
print(test_name, predict, file=f1)
break
```
# KIC 9651065
```
%run setup.py
t, y = np.loadtxt('../lc/9651065_lc.txt', usecols=(0,1)).T
ms = Maelstrom(t, y, max_peaks=5, fmin=5, fmax=48)
ms.first_look()
period_guess = 300
a_guess = 200
time, flux = ms.time, ms.flux
freq = ms.freq
weights = ms.get_weights(norm=False)
pg = ms.period_search()
periods = np.linspace(100, 300, 300)
results = pg.fit(periods)
ys = np.array([[r[0] for r in row] for row in results])
sm = np.sum(ys, axis=0)
period_ind = np.argmax(sm)
plt.plot(periods[:-2], sm[:-2]);
from maelstrom.utils import unique_colors
hh = unique_colors(len(ms.freq), cmap='Blues')
plt.figure(figsize=mnras_size(240.))
ys = np.array([[np.exp(r[1]["logasini"]) for r in row] for row in results])
for i, c in zip(ys, hh):
plt.plot(periods, i, alpha=1, linewidth=0.8, c=c);
plt.xlabel('Period (d)')
plt.ylabel(r'a$\sin{i}$ (s)')
plt.ylim(0, None)
plt.xlim(100,300)
plt.axhline(184., c='r', linestyle='dashed', linewidth=0.7)
plt.axvline(272., c='r', linestyle='dashed', linewidth=0.7)
plt.savefig(overleaf_path + '9651065_period_search.pdf', dpi=300, bbox_inches='tight', pad_inches=0)
```
# Maelstrom model
```
ms.setup_orbit_model(period=period_guess)
# opt = ms.optimize()
pb1 = ms.pin_orbit_model()
opt = pb1.optimize()
opt
# with pb1:
# trace = pm.load_trace('traces/9651065_FINAL_VERSION2/')
with pb1:
trace = pm.sample(
tune=1000,
draws=2000,
start=opt,
chains=2,
step=xo.get_dense_nuts_step(target_accept=0.9),
)
pm.save_trace(trace, 'trace/NEW/9651065')
with pb1:
trace = pm.load_trace('trace/NEW/9651065')
pm.summary(trace)
from tqdm import tqdm
taus = []
with pb1:
for samp in tqdm(xo.utils.get_samples_from_trace(trace, size=1000), total=1000):
taus.append(xo.eval_in_model(pb1.orbit.get_time_delay(time), samp) * 86400)
med_td = np.median(taus, axis=0)
sd_td = np.std(taus, axis=0)
mean = np.mean(taus)
mean
np.random.seed(23)
fig, ax = plt.subplots(figsize=mnras_size(240), constrained_layout=True)
ax.set_rasterized(True)
#ax.set_rasterization_zorder(1)
with pb1:
for samp in xo.utils.get_samples_from_trace(trace, size=10):
taumod = xo.eval_in_model(pb1.orbit.get_time_delay(time), samp) * 86400
#ttime = (ms.time_mid + time - samp['tref']) % samp['period'] / samp['period']
ttime = (time) % samp['PB1_period'] / samp['PB1_period']
#ttime = ((ms.time_mid + time) + (samp['phi'] * samp['period'] / (2*np.pi))) % samp['period'] / samp['period']
sort = np.argsort(ttime)
ax.plot(ttime[sort], (taumod - np.mean(taumod))[sort], color=blue, linewidth=0.4, alpha=1,
# rasterized=True,
zorder=1)
ax.set_xlabel('Orbital phase')
ax.set_ylabel('Time delay (s)', c=blue)
times = time# + xo.eval_in_model(phi * period / (2*np.pi), samp)
fold = times % np.median(trace['PB1_period']) / np.median(trace['PB1_period'])
sort = np.argsort(fold)
plt.fill_between(fold[sort], (med_td - sd_td - 108.12878089572754)[:,0][sort], (med_td+sd_td - 108.12878089572754)[:,0][sort], alpha=0.2, color=blue)
ax.set_xlim(0, 1)
plt.savefig(overleaf_path + '9651065.pdf', dpi=300, bbox_inches='tight', pad_inches=0)
#plt.savefig('rast.pdf', dpi=300, bbox_inches='tight')
from maelstrom.utils import mass_function
import astropy.units as u
rounding = 3
samples = pm.trace_to_dataframe(trace, varnames=['PB1_period', 'PB1_asini'])
mfs = mass_function(samples['PB1_period'].values * u.day, samples['PB1_asini'].values*u.s)
#mfs = np.array(mfs)
upper, med, lower = np.percentile(mfs.value, [84.13, 50, 15.86])
print('mass_func', ': ', np.round(med,rounding), ' + ', np.round(upper - med,rounding), ' - ', np.round(med - lower,rounding))
varnames = ["period", "asini", "eccen", "omega", "phi"]
for var in varnames:
percentiles = np.percentile(trace['PB1_' + var], q=[15.87, 50, 84.13])
print(f'{var}: {percentiles[1]:.2f} + {percentiles[2] - percentiles[1]:.2f} - {percentiles[1] - percentiles[0]:.2f}')
```
# Subdividing model
```
td_time, td_td, td_err = np.loadtxt('../data/kic9651065_uncertainties-plus-time-delay_Q99_llc.txt', delimiter=',', usecols=(0,1,2)).T
td_time += 2400000
td_time -= 2454833
plt.scatter(td_time, td_td)
import theano.tensor as tt
from maelstrom.orbit import Orbit
with pm.Model() as subdivide_model:
logP = pm.Normal("logP", mu=np.log(272), sd=1.0, testval=np.log(272))
period = pm.Deterministic("period", pm.math.exp(logP))
# The time of conjunction
phi = xo.distributions.Angle("phi", testval=0.5691498)
logs_lc = pm.Normal('logs_lc', mu=np.log(np.std(flux)), sd=10, testval=0.)
logasini = pm.Normal('logasini', mu=np.log(184), sd=10, testval=np.log(184))
asini = pm.Normal("asini", mu=184, sd=10, testval=184)
drift = pm.Normal('drift', mu=0., sd=0.1, testval=0)
# Periastron sampled from uniform angle
omega = xo.distributions.Angle("omega", testval=-0.94)
# Eccentricity
eccen = pm.Uniform("eccen", lower=0, upper=0.9, testval=0.45)
# The baseline flux
mean = pm.Normal("mean", mu=0.0, sd=10.0, testval=0.003)
# Here, we generate an Orbit instance and pass in our priors.
orbit = Orbit(period=period,
lighttime=asini,
omega=omega,
eccen=eccen,
phi=phi,
freq=0)
# psi is defined to be negative but the light curve model takes 2*pi*f * (time - tau), so
# we must flip tau here to phase it on the same values
td = -1*tt.squeeze(orbit.get_time_delay(td_time) * 86400) # Convert to s
td += drift * td_time
taumodel = pm.Deterministic('taumodel', td - tt.mean(td))
pm.Normal('obs', mu=taumodel, sd=tt.exp(logs_lc), observed=td_td)
plt.plot(td_time, xo.eval_in_model(taumodel))
plt.plot(td_time, td_td)
with subdivide_model:
opt = xo.optimize()
opt
with subdivide_model:
trace = pm.sample(draws=2000, tune=2000, chains=2, start=opt)
pm.summary(trace)
varnames = ['period', 'phi', 'eccen', 'asini', 'omega', 'drift']
rounding = 2
for varname in varnames:
upper, med, lower = np.percentile(trace[varname], [84.13, 50, 15.86])
print(varname, ': ', np.round(med,rounding), ' + ', np.round(upper - med,rounding), ' - ', np.round(med - lower,rounding))
from maelstrom.utils import mass_function
import astropy.units as u
rounding = 3
samples = pm.trace_to_dataframe(trace, varnames=['period', 'asini'])
mfs = mass_function(samples['period'].values * u.day, samples['asini'].values*u.s)
#mfs = np.array(mfs)
upper, med, lower = np.percentile(mfs.value, [84.13, 50, 15.86])
print('mass_func', ': ', np.round(med,rounding), ' + ', np.round(upper - med,rounding), ' - ', np.round(med - lower,rounding))
plt.scatter(td_time % 272 / 272, np.median(trace['taumodel'], axis=0))
```
Zipline Beginner Tutorial
=========================
Basics
------
Zipline is an open-source algorithmic trading simulator written in Python.
The source can be found at: https://github.com/quantopian/zipline
Some benefits include:
* Realistic: slippage, transaction costs, order delays.
* Stream-based: Process each event individually, avoids look-ahead bias.
* Batteries included: Common transforms (moving average) as well as common risk calculations (Sharpe).
* Developed and continuously updated by [Quantopian](https://www.quantopian.com) which provides an easy-to-use web-interface to Zipline, 10 years of minute-resolution historical US stock data, and live-trading capabilities. This tutorial is directed at users wishing to use Zipline without using Quantopian. If you instead want to get started on Quantopian, see [here](https://www.quantopian.com/faq#get-started).
This tutorial assumes that you have zipline correctly installed, see the [installation instructions](https://github.com/quantopian/zipline#installation) if you haven't set up zipline yet.
Every `zipline` algorithm consists of two functions you have to define:
* `initialize(context)`
* `handle_data(context, data)`
Before the start of the algorithm, `zipline` calls the `initialize()` function and passes in a `context` variable. `context` is a persistent namespace for you to store variables you need to access from one algorithm iteration to the next.
After the algorithm has been initialized, `zipline` calls the `handle_data()` function once for each event. At every call, it passes the same `context` variable and an event-frame called `data` containing the current trading bar with open, high, low, and close (OHLC) prices as well as volume for each stock in your universe. For more information on these functions, see the [relevant part of the Quantopian docs](https://www.quantopian.com/help#api-toplevel).
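The event-driven flow can be sketched in plain Python (a toy driver standing in for zipline's own event loop; `SimpleNamespace` plays the role of `context` here):

```python
from types import SimpleNamespace

def initialize(context):
    # Runs once before the simulation starts; stash any state on `context`.
    context.bars_seen = 0

def handle_data(context, data):
    # Runs once per event; in zipline, `data` carries the current OHLCV bar.
    context.bars_seen += 1

# Toy event loop -- zipline performs this iteration for you.
context = SimpleNamespace()
initialize(context)
for bar in [{"AAPL": 100.0}, {"AAPL": 101.5}, {"AAPL": 99.8}]:
    handle_data(context, bar)
print(context.bars_seen)
```

The state stored on `context` in `initialize()` survives across every `handle_data()` call, which is exactly what makes it useful for accumulating indicators over time.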
My first algorithm
----------------------
Let's take a look at a very simple algorithm from the `examples` directory, `buyapple.py`:
```
# assuming you're running this notebook in zipline/docs/notebooks
import os
if os.name == 'nt':
# windows doesn't have the cat command, but uses 'type' similarly
! type "..\..\zipline\examples\buyapple.py"
else:
! cat ../../zipline/examples/buyapple.py
```
As you can see, we first have to import some functions we would like to use. All functions commonly used in your algorithm can be found in `zipline.api`. Here we are using `order()` which takes two arguments -- a security object, and a number specifying how many stocks you would like to order (if negative, `order()` will sell/short stocks). In this case we want to order 10 shares of Apple at each iteration. For more documentation on `order()`, see the [Quantopian docs](https://www.quantopian.com/help#api-order).
Finally, the `record()` function allows you to save the value of a variable at each iteration. You provide it with a name for the variable together with the variable itself: `varname=var`. After the algorithm finished running you will have access to each variable value you tracked with `record()` under the name you provided (we will see this further below). You also see how we can access the current price data of the AAPL stock in the `data` event frame (for more information see [here](https://www.quantopian.com/help#api-event-properties)).
## Ingesting data for your algorithm
Before we can run the algorithm, we'll need some historical data for our algorithm to ingest, which we can get through a data bundle. A data bundle is a collection of pricing data, adjustment data, and an asset database. Bundles allow us to preload all of the data we will need to run backtests and store the data for future runs. Quantopian provides a default bundle called `quandl` which uses the [Quandl WIKI Dataset](https://www.quandl.com/data/WIKI-Wiki-EOD-Stock-Prices). You'll need a [Quandl API Key](https://docs.quandl.com/docs#section-authentication), and then you can ingest that data by running:
```
! QUANDL_API_KEY=<yourkey> zipline ingest -b quandl
```
For more information on data bundles, such as building custom data bundles, you can look at the [zipline docs](http://www.zipline.io/bundles.html).
## Running the algorithm
To now test this algorithm on financial data, `zipline` provides two interfaces. A command-line interface and an `IPython Notebook` interface.
### Command line interface
After you installed zipline you should be able to execute the following from your command line (e.g. `cmd.exe` on Windows, or the Terminal app on OSX):
```
!zipline run --help
```
Note that you have to omit the preceding `!` when you call `run_algo.py` from your own command line; it is only required by the IPython Notebook in which this tutorial was written.
As you can see there are a couple of flags that specify where to find your algorithm (`-f`) as well as the time-range (`--start` and `--end`). Finally, you'll want to save the performance metrics of your algorithm so that you can analyze how it performed. This is done via the `--output` flag and will cause it to write the performance `DataFrame` in the pickle Python file format.
Thus, to execute our algorithm from above and save the results to `buyapple_out.pickle` we would call `run_algo.py` as follows:
```
!zipline run -f ../../zipline/examples/buyapple.py --start 2016-1-1 --end 2018-1-1 -o buyapple_out.pickle
```
`run_algo.py` first outputs the algorithm contents. It then uses historical price and volume data of Apple from the `quantopian-quandl` bundle in the desired time range, calls the `initialize()` function, and then streams the historical stock price day-by-day through `handle_data()`. After each call to `handle_data()` we instruct `zipline` to order 10 stocks of AAPL. After the call of the `order()` function, `zipline` enters the ordered stock and amount in the order book. After the `handle_data()` function has finished, `zipline` looks for any open orders and tries to fill them. If the trading volume is high enough for this stock, the order is executed after adding the commission and applying the slippage model which models the influence of your order on the stock price, so your algorithm will be charged more than just the stock price * 10. (Note, that you can also change the commission and slippage model that `zipline` uses, see the [Quantopian docs](https://www.quantopian.com/help#ide-slippage) for more information).
Note that there is also an `analyze()` function printed. `run_algo.py` will look for a file ending in `_analyze.py` with the same base name as the algorithm (so `buyapple_analyze.py`), or for an `analyze()` function directly in the script. If an `analyze()` function is found, it will be called *after* the simulation has finished and passed the performance `DataFrame`. (The reason for allowing an `analyze()` function to be specified in a separate file is that this way `buyapple.py` remains a valid Quantopian algorithm that you can copy & paste to the platform.)
Let's take a quick look at the performance `DataFrame`. For this, we use `pandas` from inside the IPython Notebook and print the first few rows. Note that `zipline` makes heavy use of `pandas`, especially for data input and output, so it's worth spending some time learning it.
```
import pandas as pd
perf = pd.read_pickle('buyapple_out.pickle') # read in perf DataFrame
perf.head()
```
As you can see, there is a row for each trading day, starting on the first business day of 2016. In the columns you can find various information about the state of your algorithm. The very first column `AAPL` was placed there by the `record()` function mentioned earlier and allows us to plot the price of apple. For example, we could easily examine now how our portfolio value changed over time compared to the AAPL stock price.
```
%pylab inline
figsize(12, 12)
import matplotlib.pyplot as plt
ax1 = plt.subplot(211)
perf.portfolio_value.plot(ax=ax1)
ax1.set_ylabel('Portfolio Value')
ax2 = plt.subplot(212, sharex=ax1)
perf.AAPL.plot(ax=ax2)
ax2.set_ylabel('AAPL Stock Price')
```
As you can see, our algorithm performance as assessed by the `portfolio_value` closely matches that of the AAPL stock price. This is not surprising as our algorithm only bought AAPL every chance it got.
### IPython Notebook
The [IPython Notebook](http://ipython.org/notebook.html) is a very powerful browser-based interface to a Python interpreter (this tutorial was written in it). As it is already the de-facto interface for most quantitative researchers, `zipline` provides an easy way to run your algorithm inside the Notebook without requiring you to use the CLI.
To use it you have to write your algorithm in a cell and let `zipline` know that it is supposed to run this algorithm. This is done via the `%%zipline` IPython magic command that is available after you run `%load_ext zipline` in a separate cell. This magic takes the same arguments as the command line interface described above.
```
%load_ext zipline
%%zipline --start 2016-1-1 --end 2018-1-1 -o perf_ipython.pickle
from zipline.api import symbol, order, record
def initialize(context):
context.asset = symbol('AAPL')
def handle_data(context, data):
order(context.asset, 10)
record(AAPL=data.current(context.asset, 'price'))
```
Note that we did not have to specify an input file as above since the magic will use the contents of the cell and look for your algorithm functions there.
```
pd.read_pickle('perf_ipython.pickle').head()
```
## Access to previous prices using `data.history()`
### Working example: Dual Moving Average Cross-Over
The Dual Moving Average (DMA) is a classic momentum strategy. It's probably not used by any serious trader anymore but is still very instructive. The basic idea is that we compute two rolling or moving averages (mavg) -- one with a longer window that is supposed to capture long-term trends and one shorter window that is supposed to capture short-term trends. Once the short-mavg crosses the long-mavg from below we assume that the stock price has upwards momentum and long the stock. If the short-mavg crosses from above we exit the positions as we assume the stock to go down further.
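The crossover rule can be sketched in plain Python (a toy illustration, not zipline code; the window sizes here are arbitrary, whereas the notebook's strategy uses 100- and 300-day windows):

```python
def crossover_signals(prices, short_n=3, long_n=5):
    """Return a 'long'/'flat' target after each bar once both windows are full."""
    signals = []
    for i in range(long_n, len(prices) + 1):
        short_mavg = sum(prices[i - short_n:i]) / short_n  # short-term trend
        long_mavg = sum(prices[i - long_n:i]) / long_n     # long-term trend
        # Short mavg above long mavg -> upward momentum -> hold the stock.
        signals.append("long" if short_mavg > long_mavg else "flat")
    return signals

prices = [10, 10, 10, 10, 10, 11, 12, 13, 9, 8, 7]
print(crossover_signals(prices))
```

The rising stretch of prices flips the signal to `"long"`, and the drop at the end flips it back to `"flat"`.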
As we need to have access to previous prices to implement this strategy we need a new concept: History
`data.history()` is a convenience function that keeps a rolling window of data for you. The first argument is the asset or iterable of assets you're using, the second argument is the field you're looking for (i.e. price, open, volume), the third argument is the number of bars, and the fourth argument is the frequency (either `'1d'` or `'1m'`, but note that you need minute-level data to use `'1m'`).
For a more detailed description of `data.history()`'s features, see the [Quantopian docs](https://www.quantopian.com/help#ide-history). Let's look at the strategy which should make this clear:
```
%pylab inline
figsize(12, 12)
%%zipline --start 2014-1-1 --end 2018-1-1 -o perf_dma.pickle
from zipline.api import order_target, record, symbol
import numpy as np
import matplotlib.pyplot as plt
def initialize(context):
context.i = 0
context.asset = symbol('AAPL')
def handle_data(context, data):
# Skip first 300 days to get full windows
context.i += 1
if context.i < 300:
return
# Compute averages
# data.history() has to be called with the same params
# from above and returns a pandas dataframe.
short_mavg = data.history(context.asset, 'price', bar_count=100, frequency="1d").mean()
long_mavg = data.history(context.asset, 'price', bar_count=300, frequency="1d").mean()
# Trading logic
if short_mavg > long_mavg:
# order_target orders as many shares as needed to
# achieve the desired number of shares.
order_target(context.asset, 100)
elif short_mavg < long_mavg:
order_target(context.asset, 0)
# Save values for later inspection
record(AAPL=data.current(context.asset, 'price'),
short_mavg=short_mavg,
long_mavg=long_mavg)
def analyze(context, perf):
ax1 = plt.subplot(211)
perf.portfolio_value.plot(ax=ax1)
ax1.set_ylabel('portfolio value in $')
ax1.set_xlabel('time in years')
ax2 = plt.subplot(212, sharex=ax1)
perf['AAPL'].plot(ax=ax2)
perf[['short_mavg', 'long_mavg']].plot(ax=ax2)
perf_trans = perf.loc[[t != [] for t in perf.transactions]]
buys = perf_trans.loc[[t[0]['amount'] > 0 for t in perf_trans.transactions]]
sells = perf_trans.loc[[t[0]['amount'] < 0 for t in perf_trans.transactions]]
ax2.plot(buys.index, perf.short_mavg.loc[buys.index], '^', markersize=10, color='m')
ax2.plot(sells.index, perf.short_mavg.loc[sells.index], 'v', markersize=10, color='k')
ax2.set_ylabel('price in $')
ax2.set_xlabel('time in years')
plt.legend(loc=0)
plt.show()
```
Here we are explicitly defining an `analyze()` function that gets automatically called once the backtest is done (this is not possible on Quantopian currently).
Although it might not be directly apparent, the power of `history` (pun intended) cannot be underestimated, as most algorithms make use of prior market developments in one form or another. You could easily devise a strategy that trains a classifier with [`scikit-learn`](http://scikit-learn.org/stable/) which tries to predict future market movements based on past prices (note that most of the `scikit-learn` functions require `numpy.ndarray`s rather than `pandas.DataFrame`s, so you can simply pass the underlying `ndarray` of a `DataFrame` via `.values`).
We also used the `order_target()` function above. This and other functions like it can make order management and portfolio rebalancing much easier. See the [Quantopian documentation on order functions](https://www.quantopian.com/help#api-order-methods) for more details.
# Conclusions
We hope that this tutorial gave you a little insight into the architecture, API, and features of `zipline`. For next steps, check out some of the [examples](https://github.com/quantopian/zipline/tree/master/zipline/examples).
Feel free to ask questions on [our mailing list](https://groups.google.com/forum/#!forum/zipline), report problems on our [GitHub issue tracker](https://github.com/quantopian/zipline/issues?state=open), [get involved](https://github.com/quantopian/zipline/wiki/Contribution-Requests), and [checkout Quantopian](https://quantopian.com).
# Derived Fields and Profiles
One of the most powerful features in yt is the ability to create derived fields that act and look exactly like fields that exist on disk. This means that they will be generated on demand and can be used anywhere a field that exists on disk would be used. Additionally, you can create them by just writing python functions.
```
%matplotlib inline
import yt
import numpy as np
from yt import derived_field
from matplotlib import pylab
```
## Derived Fields
This is an example of the simplest possible way to create a derived field. All derived fields are defined by a function and some metadata; that metadata can include units, LaTeX-friendly names, conversion factors, and so on. Fields can be defined as shown in the next cell: we create a function which accepts two arguments and then provide the units for that field. In this case, our field is `dinosaurs` and our units are `K*cm/s`. The function itself can access any fields that are in the simulation, and it does so by requesting data from the object called `data`.
```
@derived_field(name="dinosaurs", units="K * cm/s", sampling_type="cell")
def _dinos(field, data):
return data["temperature"] * data["velocity_magnitude"]
```
One important thing to note is that derived fields must be defined *before* any datasets are loaded. Let's load up our data and take a look at some quantities.
```
ds = yt.load_sample("IsolatedGalaxy")
dd = ds.all_data()
print (list(dd.quantities.keys()))
```
One interesting question is, what are the minimum and maximum values of dinosaur production rates in our isolated galaxy? We can do that by examining the `extrema` quantity -- the exact same way that we would for density, temperature, and so on.
```
print (dd.quantities.extrema("dinosaurs"))
```
We can do the same for the average quantities as well.
```
print (dd.quantities.weighted_average_quantity("dinosaurs", weight="temperature"))
```
## A Few Other Quantities
We can ask other quantities of our data, as well. For instance, this sequence of operations will find the most dense point, center a sphere on it, calculate the bulk velocity of that sphere, calculate the baryonic angular momentum vector, and then the density extrema. All of this is done in a memory conservative way: if you have an absolutely enormous dataset, yt will split that dataset into pieces, apply intermediate reductions and then a final reduction to calculate your quantity.
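The pattern can be sketched in plain Python (an illustration of chunked reduction, not yt's actual implementation):

```python
def chunked_extrema(values, chunk_size=4):
    """Find the global (min, max) by reducing fixed-size chunks first.

    Mimics the intermediate-reduce / final-reduce pattern described above:
    only one chunk needs to be "in memory" at a time.
    """
    partials = []
    for start in range(0, len(values), chunk_size):
        chunk = values[start:start + chunk_size]
        partials.append((min(chunk), max(chunk)))  # intermediate reduction
    lows, highs = zip(*partials)
    return min(lows), max(highs)                   # final reduction

print(chunked_extrema([7, 2, 9, 4, 1, 8, 6, 3, 5]))
```

The same two-stage idea applies to sums, weighted averages, and the other quantities shown below.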
```
sp = ds.sphere("max", (10.0, 'kpc'))
bv = sp.quantities.bulk_velocity()
L = sp.quantities.angular_momentum_vector()
rho_min, rho_max = sp.quantities.extrema("density")
print (bv)
print (L)
print (rho_min, rho_max)
```
## Profiles
yt provides the ability to bin in 1, 2 and 3 dimensions. This means discretizing in one or more dimensions of phase space (density, temperature, etc) and then calculating either the total value of a field in each bin or the average value of a field in each bin.
We do this using the objects `Profile1D`, `Profile2D`, and `Profile3D`. The first two are the most common since they are the easiest to visualize.
This first set of commands manually creates a profile object from the sphere we created earlier, binned into 32 density bins between `rho_min` and `rho_max`, and then takes the mass-weighted average of the fields `temperature` and (previously-defined) `dinosaurs`. We then plot it in a log-log plot.
```
prof = yt.Profile1D(sp, "density", 32, rho_min, rho_max, True, weight_field="mass")
prof.add_fields(["temperature","dinosaurs"])
pylab.loglog(np.array(prof.x), np.array(prof["temperature"]), "-x")
pylab.xlabel('Density $(g/cm^3)$')
pylab.ylabel('Temperature $(K)$')
```
Now we plot the `dinosaurs` field.
```
pylab.loglog(np.array(prof.x), np.array(prof["dinosaurs"]), '-x')
pylab.xlabel('Density $(g/cm^3)$')
pylab.ylabel('Dinosaurs $(K cm / s)$')
```
If we want to see the total mass in every bin, we profile the `mass` field with no weight. Specifying `weight=None` will simply take the total value in every bin and add that up.
```
prof = yt.Profile1D(sp, "density", 32, rho_min, rho_max, True, weight_field=None)
prof.add_fields(["mass"])
pylab.loglog(np.array(prof.x), np.array(prof["mass"].in_units("Msun")), '-x')
pylab.xlabel('Density $(g/cm^3)$')
pylab.ylabel('Cell mass $(M_\odot)$')
```
In addition to the low-level `ProfileND` interface, it's also quite straightforward to quickly create plots of profiles using the `ProfilePlot` class. Let's redo the last plot using `ProfilePlot`
```
prof = yt.ProfilePlot(sp, 'density', 'mass', weight_field=None)
prof.set_unit('mass', 'Msun')
prof.show()
```
## Field Parameters
Field parameters are a method of passing information to derived fields. For instance, you might pass in information about a vector you want to use as a basis for a coordinate transformation. yt often uses things like `bulk_velocity` to identify velocities that should be subtracted off. Here we show how that works:
```
sp_small = ds.sphere("max", (50.0, 'kpc'))
bv = sp_small.quantities.bulk_velocity()
sp = ds.sphere("max", (0.1, 'Mpc'))
rv1 = sp.quantities.extrema("radial_velocity")
sp.clear_data()
sp.set_field_parameter("bulk_velocity", bv)
rv2 = sp.quantities.extrema("radial_velocity")
print (bv)
print (rv1)
print (rv2)
```
# One time pad
In the previous lesson we performed an attack on the monoalphabetic cipher where the attacker (Charlie) only knew that Alice and Bob were communicating in English and that they were using this particular cipher; the ciphertext was therefore leaking information. Can we find a cipher whose ciphertext doesn't leak any information about the original message? We are going to answer this question using the Vigenère cipher.
# Table of contents:
* [Vigenere revisited](#vigenere-revisited)
* [Gathering plain english data](#nineteen-eighty-four)
* [Counting letter frequencies](#counting-frequencies)
* [Frequencies with short key](#counting-frequencies-2)
* [Frequencies with large key](#counting-frequencies3)
* [The One Time Pad](#onetimepad)
* [Why is the one time pad impractical?](#impractical-onetimepad)
Author: [Sebastià Agramunt Puig](https://github.com/sebastiaagramunt) for [OpenMined](https://www.openmined.org/) Privacy ML Series course.
## Vigenère cipher revisited <a class="anchor" id="vigenere-revisited"></a>
First, let's copy the code for the Vigenère cipher that we already wrote in the first notebook.
```
from copy import deepcopy
from random import randrange
import string
def vigenere_key_generator(secret_key_size: int) -> str:
n = len(string.ascii_lowercase)
secret_key = ''
while len(secret_key) < secret_key_size:
secret_key += string.ascii_lowercase[randrange(n)]
return secret_key
def shift_letter(letter: str, shiftby: str, forward: bool=True) -> str:
n = len(string.ascii_lowercase)
letter_int = ord(letter) - 97
shiftby_int = ord(shiftby) - 97
if forward:
return string.ascii_lowercase[(letter_int+shiftby_int)%n]
else:
return string.ascii_lowercase[(letter_int-shiftby_int)%n]
def vigenere_encrypt_decrypt(message: str, secret_key: str, encrypt:bool = True) -> str:
key_len = len(secret_key)
encoded = ''
for i, letter in enumerate(message):
if letter != " ":
encoded += shift_letter(letter, secret_key[i%key_len], forward=encrypt)
else:
encoded += letter
return encoded
```
## Downloading data from the book Nineteen Eighty Four <a class="anchor" id="nineteen-eighty-four"></a>
```
from utils import download_data, process_load_textfile
import string
import os
url = 'http://gutenberg.net.au/ebooks01/0100021.txt'
filename = 'Nineteen-eighty-four_Orwell.txt'
download_path = '/'.join(os.getcwd().split('/')[:-1]) + '/data/'
#download data to specified path
download_data(url, filename, download_path)
#load data and process
data = process_load_textfile(filename, download_path)
```
Have a look at the data
```
data[10000:11000]
```
## Counting letter frequencies <a class="anchor" id="counting-frequencies"></a>
First we write a function to count the frequency of the letters for a given text and outputs a sorted tuple with the letter and the counts of that letter
```
from typing import List, Tuple
from collections import Counter
def letter_count(text: str) -> List[Tuple[str, int]]:
text2 = text.replace(" ", "")
letters = [c for c in text2]
return Counter(letters).most_common()
```
And calculate the frequency for the book nineteen eighty four
```
freq = letter_count(data)
freq
```
And let's write a function that gives a bar plot for the frequencies:
```
import matplotlib.pyplot as plt
def freq_plotter(text: str, title: str) -> plt.figure:
plt.clf()
freq = letter_count(text)
names = [x[0] for x in freq]
values = [x[1] for x in freq]
fig = plt.figure(figsize=(16,7))
plt.bar(names, values)
plt.title(title)
return fig
fig = freq_plotter(data, "Frequencies of letters for Nineteen Eighty Four")
```
And finally let's code another nice utility, a function that randomly draws a random portion of the text
```
from random import randrange, seed
def draw_sample(text: str, size: int) -> str:
n = len(text)
i_init = randrange(n)
i_final = i_init + size
c = ''
for i in range(i_init, i_final):
c += text[i%n]
return c
seed(3)
draw_sample(data, 100)
```
## Counting frequencies with short key <a class="anchor" id="counting-frequencies-2"></a>
Now let's count the letter frequencies in the ciphertext for a randomly sampled text from the book. Let's begin with the shift cipher (i.e. Vigenère with key size 1).
```
seed(10)
message_size = len(data)//4
secret_key_size = 1
print(f"message_size = {message_size}\nsecret_key_size = {secret_key_size}")
# generating random message
message = draw_sample(data, message_size)
# generating secret key
secret_key = vigenere_key_generator(secret_key_size)
# calculating ciphertext that Alice sends to Bob
ciphertext = vigenere_encrypt_decrypt(message, secret_key, encrypt=True)
# just to make sure Vigenere is well coded
assert message==vigenere_encrypt_decrypt(ciphertext, secret_key, encrypt=False), "something went wrong"
fig = freq_plotter(ciphertext, f"Frequencies for ciphertext size {message_size} and key size {secret_key_size}")
```
We observe that the letter frequencies are not uniform. If the attacker knows that Alice and Bob communicate in English, he can deduce that the shift is 1 (i.e. the secret key is "b"): the most frequent letter in English is "e", and it corresponds to the peak at "f", a shift of one position. The ciphertext therefore leaks information about the message.
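This frequency attack can be sketched in a few lines of plain Python (a toy example assuming English-like letter frequencies, where "e" dominates):

```python
import string
from collections import Counter

def guess_shift(ciphertext):
    # Assume the most frequent ciphertext letter is the image of 'e',
    # the most frequent letter in English.
    counts = Counter(c for c in ciphertext if c in string.ascii_lowercase)
    top = counts.most_common(1)[0][0]
    return (ord(top) - ord('e')) % 26

plain = "we see three green trees near the sea and the breeze never ends here"
shift = 1  # i.e. secret key 'b'
cipher = "".join(
    string.ascii_lowercase[(ord(c) - 97 + shift) % 26] if c != " " else c
    for c in plain
)
print(guess_shift(cipher))
```

With a long enough English sample, the recovered value matches the true shift.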
## Counting frequencies with large key <a class="anchor" id="counting-frequencies3"></a>
Instead of having a short key, let's take a super long key, actually the size of our message:
```
seed(10)
message_size = len(data)//4
secret_key_size = message_size
print(f"message_size = {message_size}\nsecret_key_size = {secret_key_size}")
# generating random message
message = draw_sample(data, message_size)
# generating secret key
secret_key = vigenere_key_generator(secret_key_size)
# calculating ciphertext that Alice sends to Bob
ciphertext = vigenere_encrypt_decrypt(message, secret_key, encrypt=True)
# just to make sure Vigenere is well coded
assert message==vigenere_encrypt_decrypt(ciphertext, secret_key, encrypt=False), "something went wrong"
fig = freq_plotter(ciphertext, f"Frequencies for ciphertext size {message_size} and key size {secret_key_size}")
```
Great! The attacker computes the frequency of the letters in the ciphertext and finds that every letter appears with roughly the same probability. In this case we can say that the ciphertext does not reveal any information about the original message.
## The one time pad <a class="anchor" id="onetimepad"></a>
Let's take a deeper look at what we've done in the previous section. First, let's see the frequency of each letter when the key is generated randomly.
```
rdm_secret_keys = [vigenere_key_generator(secret_key_size=1) for _ in range(15000)]
count = Counter(rdm_secret_keys)
count.most_common()
```
These counts are very similar, which means that the probability of generating any letter is almost the same, around 1/26. When we encrypt, we "shift" (or "pad") each character by the key value. As a result, each character of the ciphertext has probability 1/26 independent of what the corresponding character of the message was. We can formalise this using Bayesian statistics:
A cryptosystem has perfect secrecy if, for all possible messages and all possible ciphertexts, the probability of a message is independent of the ciphertext:
$$P(m|c) = P(m)$$
where $P(m)$ is the probability of message $m$ from the corpus of all possible messages $M$ and $P(m|c)$ is the conditional probability for $m$ having observed the ciphertext $c$ belonging to the corpus of all possible ciphertexts $C$.
Equivalently we can write
$$P(m|c) = P(m|c^\prime)$$
for any two arbitrary ciphertexts $c$ and $c^\prime$. This means that the probability of the message $m$ is independent of the ciphertext.
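This can be verified numerically for one-character messages. The sketch below (our own illustration, not from the notebook) takes an arbitrary non-uniform prior over 26 messages and a uniformly random one-character key, and checks via Bayes' rule that $P(m|c) = P(m)$ holds exactly:

```python
from fractions import Fraction

ALPHABET = 26
# an arbitrary non-uniform prior over single-character messages, summing to 1
prior = {m: Fraction(m + 1, ALPHABET * (ALPHABET + 1) // 2) for m in range(ALPHABET)}

def encrypt(m, k):
    # one-time pad on a single character: shift by the key, mod 26
    return (m + k) % ALPHABET

# With a uniform key, P(c) = sum_m P(m) * P(k = c - m) = 1/26 for every c,
# so P(m|c) = P(m) * P(k = c - m) / P(c) = P(m) * (1/26) / (1/26) = P(m).
for c in range(ALPHABET):
    p_c = sum(prior[m] * Fraction(1, ALPHABET) for m in range(ALPHABET))
    for m in range(ALPHABET):
        p_m_given_c = prior[m] * Fraction(1, ALPHABET) / p_c
        assert p_m_given_c == prior[m]
print("perfect secrecy verified: P(m|c) == P(m) for all m, c")
```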
## One time pad is impractical because... <a class="anchor" id="impractical-onetimepad"></a>
* The key has to be at least as long as the message one wants to transmit
* For perfect secrecy one has to use a new key every time.
* Alice and Bob have to make sure that they are the only ones who know the key. They cannot establish a common key by communicating through an insecure channel
# Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Also, some information [here at r2rt](http://r2rt.com/recurrent-neural-networks-in-tensorflow-ii.html) and from [Sherjil Ozair](https://github.com/sherjilozair/char-rnn-tensorflow) on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
```
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
```
First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
```
with open('anna.txt', 'r') as f:
    text = f.read()
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
```
Let's check out the first 100 characters, make sure everything is peachy. According to the [American Book Review](http://americanbookreview.org/100bestlines.asp), this is the 6th best first line of a book ever.
```
text[:100]
```
And we can see the characters encoded as integers.
```
encoded[:100]
```
Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
```
len(vocab)
```
## Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in `encoded`. Let's create a function that will give us an iterator for our batches. I like using [generator functions](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/) to do this. Then we can pass `encoded` into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array `arr`, you divide the length of `arr` by the number of characters per batch ($N \times M$). Once you know the number of batches, you can get the total number of characters to keep.
After that, we need to split `arr` into $N$ sequences. You can do this using `arr.reshape(size)` where `size` is a tuple containing the dimension sizes of the reshaped array. We know we want $N$ sequences (`n_seqs` below), so let's make that the size of the first dimension. For the second dimension, you can use `-1` as a placeholder in the size; it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by `n_steps`. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
```python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
```
where `x` is the input batch and `y` is the target batch.
The way I like to do this window is to use `range` to take steps of size `n_steps` from $0$ to `arr.shape[1]`, the total number of steps in each sequence. That way, the integers you get from `range` always point to the start of a batch, and each window is `n_steps` wide.
```
def get_batches(arr, n_seqs, n_steps):
    '''Create a generator that returns batches of size
       n_seqs x n_steps from arr.

       Arguments
       ---------
       arr: Array you want to make batches from
       n_seqs: Batch size, the number of sequences per batch
       n_steps: Number of sequence steps per batch
    '''
    # Get the number of characters per batch and number of batches we can make
    characters_per_batch = n_seqs * n_steps
    n_batches = len(arr)//characters_per_batch

    # Keep only enough characters to make full batches
    arr = arr[:n_batches * characters_per_batch]

    # Reshape into n_seqs rows
    arr = arr.reshape((n_seqs, -1))

    for n in range(0, arr.shape[1], n_steps):
        # The features
        x = arr[:, n:n+n_steps]
        # The targets, shifted by one
        y = np.zeros_like(x)
        y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
        yield x, y
```
Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
```
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
```
If you implemented `get_batches` correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
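As a quick sanity check (our own addition, not part of the original notebook), the shift property can be verified directly on a toy array:

```python
import numpy as np

x = np.array([[10, 11, 12, 13],
              [20, 21, 22, 23]])

# build targets the same way as in get_batches: shift left by one,
# wrapping the first input character around to the last target position
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]

assert np.array_equal(y, np.array([[11, 12, 13, 10],
                                   [21, 22, 23, 20]]))
print("targets are the inputs shifted over one step")
```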
## Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
### Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called `keep_prob`.
```
def build_inputs(batch_size, num_steps):
    ''' Define placeholders for inputs, targets, and dropout

        Arguments
        ---------
        batch_size: Batch size, number of sequences per batch
        num_steps: Number of sequence steps in a batch
    '''
    # Declare placeholders we'll feed into the graph
    inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
    targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')

    # Keep probability placeholder for drop out layers
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')

    return inputs, targets, keep_prob
```
### LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
```python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
```
where `num_units` is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
```python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
```
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with [`tf.contrib.rnn.MultiRNNCell`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/rnn/MultiRNNCell). With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
```python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
```
This might look a little weird if you know Python well, because it creates a list of references to the same `cell` object. However, TensorFlow 1.0 would still create different weight matrices for each layer. Starting with TensorFlow 1.1, you actually need to create a new cell object for each entry in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob):
    lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    return drop

tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])
```
Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
```python
initial_state = cell.zero_state(batch_size, tf.float32)
```
Below, we implement the `build_lstm` function to create these LSTM cells and the initial state.
```
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
    ''' Build LSTM cell.

        Arguments
        ---------
        keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
        lstm_size: Size of the hidden layers in the LSTM cells
        num_layers: Number of LSTM layers
        batch_size: Batch size
    '''
    ### Build the LSTM Cell
    def build_cell(lstm_size, keep_prob):
        # Use a basic LSTM cell
        lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)

        # Add dropout to the cell
        drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
        return drop

    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)])
    initial_state = cell.zero_state(batch_size, tf.float32)

    return cell, initial_state
```
### RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, with the same weights, for each of the outputs. Then, to make things easier, we reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with `tf.variable_scope(scope_name)`, because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
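The shape bookkeeping here can be sanity-checked with plain NumPy, independent of TensorFlow (this is our own illustration; `outputs` below is just a stand-in array for the LSTM outputs):

```python
import numpy as np

N, M, L = 2, 3, 4  # batch size, number of steps, hidden units
outputs = np.arange(N * M * L).reshape(N, M, L)  # stand-in for the 3D LSTM output

# flatten to one row per (sequence, step) pair, keeping the hidden dimension
flat = outputs.reshape(-1, L)
print(flat.shape)  # (6, 4)

# row k of the flattened array corresponds to sequence k // M at step k % M
assert np.array_equal(flat[1 * M + 2], outputs[1, 2])
```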
```
def build_output(lstm_output, in_size, out_size):
    ''' Build a softmax layer, return the softmax output and logits.

        Arguments
        ---------
        lstm_output: Input tensor, the output from the LSTM cells
        in_size: Size of the input tensor, for example, size of the LSTM cells
        out_size: Size of this softmax layer
    '''

    # Reshape output so it's a bunch of rows, one row for each step for each sequence.
    # That is, the shape should be batch_size*num_steps rows by lstm_size columns
    seq_output = tf.concat(lstm_output, axis=1)
    x = tf.reshape(seq_output, [-1, in_size])

    # Connect the RNN outputs to a softmax layer
    with tf.variable_scope('softmax'):
        softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
        softmax_b = tf.Variable(tf.zeros(out_size))

    # Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
    # of rows of logit outputs, one for each step and sequence
    logits = tf.matmul(x, softmax_w) + softmax_b

    # Use softmax to get the probabilities for predicted characters
    out = tf.nn.softmax(logits, name='predictions')

    return out, logits
```
### Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then we reshape the one-hot targets into a 2D tensor with size $(M*N) \times C$, where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units, so our logits will also have size $(M*N) \times C$.
Then we run the logits and targets through `tf.nn.softmax_cross_entropy_with_logits` and find the mean to get the loss.
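For intuition, the softmax cross-entropy can be reproduced in a few lines of NumPy on a tiny example (a sketch of the math, not the TensorFlow implementation):

```python
import numpy as np

def softmax_cross_entropy(logits, one_hot_targets):
    # numerically stable softmax, then -sum(target * log(prob)) per row
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    return -(one_hot_targets * np.log(probs)).sum(axis=1)

logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.0]])
targets = np.array([[1.0, 0.0, 0.0],   # true class 0
                    [0.0, 1.0, 0.0]])  # true class 1

losses = softmax_cross_entropy(logits, targets)
print(losses.mean())  # the scalar training loss is the mean over all rows
```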
```
def build_loss(logits, targets, lstm_size, num_classes):
    ''' Calculate the loss from the logits and the targets.

        Arguments
        ---------
        logits: Logits from final fully connected layer
        targets: Targets for supervised learning
        lstm_size: Number of LSTM hidden units
        num_classes: Number of classes in targets
    '''
    # One-hot encode targets and reshape to match logits, one row per batch_size per step
    y_one_hot = tf.one_hot(targets, num_classes)
    y_reshaped = tf.reshape(y_one_hot, logits.get_shape())

    # Softmax cross entropy loss
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
    loss = tf.reduce_mean(loss)

    return loss
```
### Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and vanishing. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold: if the global norm of the gradients exceeds that threshold, we scale the gradients down so the norm equals the threshold. This ensures the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
```
def build_optimizer(loss, learning_rate, grad_clip):
    ''' Build optimizer for training, using gradient clipping.

        Arguments:
        loss: Network loss
        learning_rate: Learning rate for optimizer
        grad_clip: Threshold for clipping the global gradient norm
    '''

    # Optimizer for training, using gradient clipping to control exploding gradients
    tvars = tf.trainable_variables()
    grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
    train_op = tf.train.AdamOptimizer(learning_rate)
    optimizer = train_op.apply_gradients(zip(grads, tvars))

    return optimizer
```
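The semantics of `tf.clip_by_global_norm` can be mimicked in NumPy (our own sketch of the behaviour, not the TensorFlow implementation): the norm is computed across *all* gradient tensors, and a single common rescale factor is applied.

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    # global norm across ALL gradient tensors, then one common rescale factor
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = min(1.0, clip_norm / global_norm)
    return [g * scale for g in grads], global_norm

grads = [np.array([3.0, 4.0]), np.array([12.0])]  # global norm = 13
clipped, norm = clip_by_global_norm(grads, clip_norm=5.0)

print(norm)                                           # 13.0
print(np.sqrt(sum(np.sum(g ** 2) for g in clipped)))  # ≈ 5.0 after clipping
```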
### Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use [`tf.nn.dynamic_rnn`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/nn/dynamic_rnn). This function passes the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as `final_state` so we can pass it to the first LSTM cell in the next mini-batch run. For `tf.nn.dynamic_rnn`, we pass in the cell and initial state we get from `build_lstm`, as well as our input sequences. Also, we need to one-hot encode the inputs before they go into the RNN.
```
class CharRNN:

    def __init__(self, num_classes, batch_size=64, num_steps=50,
                 lstm_size=128, num_layers=2, learning_rate=0.001,
                 grad_clip=5, sampling=False):

        # When we're using this network for sampling later, we'll be passing in
        # one character at a time, so providing an option for that
        if sampling:
            batch_size, num_steps = 1, 1

        tf.reset_default_graph()

        # Build the input placeholder tensors
        self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)

        # Build the LSTM cell
        cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)

        ### Run the data through the RNN layers
        # First, one-hot encode the input tokens
        x_one_hot = tf.one_hot(self.inputs, num_classes)

        # Run each sequence step through the RNN and collect the outputs
        outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
        self.final_state = state
        self.x2 = tf.concat(outputs, axis=0)

        # Get softmax predictions and logits
        self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)

        # Loss and optimizer (with gradient clipping)
        self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
        self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
```
## Hyperparameters
Here I'm defining the hyperparameters for the network.
* `batch_size` - Number of sequences running through the network in one pass.
* `num_steps` - Number of characters in the sequence the network is trained on. Larger is typically better; the network can learn longer-range dependencies, but it also takes longer to train. 100 is usually a good number here.
* `lstm_size` - The number of units in the hidden layers.
* `num_layers` - Number of hidden LSTM layers to use
* `learning_rate` - Learning rate for training
* `keep_prob` - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to [where it originally came from](https://github.com/karpathy/char-rnn#tips-and-tricks).
> ## Tips and Tricks
>### Monitoring Validation Loss vs. Training Loss
>If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
> - If your training loss is much lower than validation loss then this means the network might be **overfitting**. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
> - If your training/validation loss are about equal then your model is **underfitting**. Increase the size of your model (either number of layers or the raw number of neurons per layer)
> ### Approximate number of parameters
> The two most important parameters that control the model are `lstm_size` and `num_layers`. I would advise that you always use `num_layers` of either 2/3. The `lstm_size` can be adjusted based on how much data you have. The two important quantities to keep track of here are:
> - The number of parameters in your model. This is printed when you start training.
> - The size of your dataset. 1MB file is approximately 1 million characters.
>These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
> - I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make `lstm_size` larger.
> - I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
> ### Best models strategy
>The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
>It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
>By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
```
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
```
## Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by `save_every_n`) I save a checkpoint.
Here I'm saving checkpoints with the format
`i{iteration number}_l{# hidden layer units}.ckpt`
```
epochs = 20
# Save every N iterations
save_every_n = 200

model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
                lstm_size=lstm_size, num_layers=num_layers,
                learning_rate=learning_rate)

saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Use the line below to load a checkpoint and resume training
    #saver.restore(sess, 'checkpoints/______.ckpt')
    counter = 0
    for e in range(epochs):
        # Train network
        new_state = sess.run(model.initial_state)
        loss = 0
        for x, y in get_batches(encoded, batch_size, num_steps):
            counter += 1
            start = time.time()
            feed = {model.inputs: x,
                    model.targets: y,
                    model.keep_prob: keep_prob,
                    model.initial_state: new_state}
            batch_loss, new_state, _ = sess.run([model.loss,
                                                 model.final_state,
                                                 model.optimizer],
                                                feed_dict=feed)

            end = time.time()
            print('Epoch: {}/{}... '.format(e+1, epochs),
                  'Training Step: {}... '.format(counter),
                  'Training loss: {:.4f}... '.format(batch_loss),
                  '{:.4f} sec/batch'.format((end-start)))

            if (counter % save_every_n == 0):
                saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))

    saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
```
#### Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
```
tf.train.get_checkpoint_state('checkpoints')
```
## Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, and the network predicts the next character. Then we use that new character to predict the one after it, and keep going to generate all new text. I also included some functionality to prime the network with some text, by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
```
def pick_top_n(preds, vocab_size, top_n=5):
    p = np.squeeze(preds)
    p[np.argsort(p)[:-top_n]] = 0
    p = p / np.sum(p)
    c = np.random.choice(vocab_size, 1, p=p)[0]
    return c

def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
    samples = [c for c in prime]
    model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, checkpoint)
        new_state = sess.run(model.initial_state)
        for c in prime:
            x = np.zeros((1, 1))
            x[0,0] = vocab_to_int[c]
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state],
                                        feed_dict=feed)

        c = pick_top_n(preds, len(vocab))
        samples.append(int_to_vocab[c])

        for i in range(n_samples):
            x[0,0] = c
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state],
                                        feed_dict=feed)

            c = pick_top_n(preds, len(vocab))
            samples.append(int_to_vocab[c])

    return ''.join(samples)
```
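To see what `pick_top_n` does with a concrete distribution, here is a small self-contained check (the function body is repeated from the cell above so this snippet runs on its own; `.copy()` is used because `pick_top_n` modifies its input in place):

```python
import numpy as np

def pick_top_n(preds, vocab_size, top_n=5):
    p = np.squeeze(preds)
    p[np.argsort(p)[:-top_n]] = 0   # zero out everything but the top_n probabilities
    p = p / np.sum(p)               # renormalise what is left
    return np.random.choice(vocab_size, 1, p=p)[0]

np.random.seed(0)
preds = np.array([[0.02, 0.40, 0.05, 0.30, 0.03, 0.15, 0.05]])

# with top_n=2 only the two most likely characters (indices 1 and 3) survive
samples = {pick_top_n(preds.copy(), 7, top_n=2) for _ in range(200)}
print(samples)  # only indices 1 and 3 can ever be drawn
```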
Here, pass in the path to a checkpoint and sample from the network.
```
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
```
# Circuit Translation
In this notebook we will introduce a tool of `sqwalk` that is useful to decompose (or translate) a unitary transformation (in our case, the one generated by the walker's Hamiltonian) into a series of gates that can be simulated or even run on quantum hardware. The decomposition method is based on `qiskit`, so we will need it as a dependency, in addition to our usual `SQWalker` class and some QuTiP objects.
Before jumping into the tutorial, it is useful to note that this decomposition, for the sake of generality, is not optimized. While it supports any kind of quantum computer and any kind of quantum walker, it usually takes a lot of gates to implement. To optimize the number of gates, one must resort to specific techniques from the literature that leverage the symmetries and characteristics of particular graphs, and those are not general.
```
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import scipy
from sqwalk import SQWalker
from sqwalk import gate_decomposition
from qiskit import Aer
from qiskit.visualization import *
```
First we create the walker as we have seen in the previous notebooks and we run it for a certain time to have a reference result. In this case we have picked a time of 1000.
```
#Create and run the walker
graph = nx.path_graph(8)
adj = nx.adj_matrix(graph).todense()
walker = SQWalker(np.array(adj))
time_samples = 1000
initial_node = 0
result = walker.run_walker(initial_node, time_samples)
new_state = result.final_state
nodelist = [i for i in range(adj.shape[0])]
plt.bar(nodelist, new_state.diag())
plt.show()
```
Our decomposition, albeit devised as a tool to decompose walkers, can be used with any unitary or Hamiltonian.
Note that since we will use a system of $n$ qubits, our Hamiltonian has to be $2^n$-dimensional; if the problem has the wrong dimensionality, one can zero-pad it to make it work.
The time we used above in `time_samples` has to be rescaled by a factor of $100$, since the timestep of the master equation in `run_walker` is $10^{-2}$.
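Zero-padding can be sketched with NumPy alone. The helpers below (`pad_to_qubits`, `unitary_from_hamiltonian`) are our own illustration, using an eigendecomposition instead of `scipy.linalg.expm`, which is valid because the Hamiltonian is Hermitian:

```python
import numpy as np

def pad_to_qubits(hamiltonian):
    # embed a d-dimensional Hamiltonian into the next 2**n-dimensional space
    d = hamiltonian.shape[0]
    n = int(np.ceil(np.log2(d)))
    padded = np.zeros((2 ** n, 2 ** n), dtype=complex)
    padded[:d, :d] = hamiltonian
    return padded

def unitary_from_hamiltonian(h, t):
    # for Hermitian H, exp(-iHt) = V diag(exp(-i*lam*t)) V^dagger
    lam, v = np.linalg.eigh(h)
    return v @ np.diag(np.exp(-1j * lam * t)) @ v.conj().T

h = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])   # adjacency matrix of a 3-node path graph
u = unitary_from_hamiltonian(pad_to_qubits(h), t=10)

print(u.shape)                                 # (4, 4): now a valid 2-qubit unitary
print(np.allclose(u @ u.conj().T, np.eye(4)))  # True: still unitary
```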
```
#Estract the Hamiltonian from the walker
hamiltonian = walker.quantum_hamiltonian.full()
#Set the time one wants to simulate
time_rescaled = 10
#Compute the respective unitary
unitary = scipy.linalg.expm(-1j*time_rescaled*hamiltonian)
```
Now everything is set up to decompose our walker using the `gate_decomposition` function from `sqwalk`. To decompose it, it is sufficient to pass our unitary to the function, which, leveraging qiskit's transpiler, will give us back the quantum circuit.
The `gate_decomposition` function also accepts two more arguments:
- `topology`: a list of connections between qubits specifying the topology of the particular hardware we want to decompose on; the default topology is fully connected.
- `gates`: a list of allowed gates that can be used to create the decomposition; defaults to single-qubit rotations and CNOT.
The resulting decomposition is a qiskit circuit object that can be exported into QASM instructions to be executed on virtually any device.
```
#Decompose into gates
circuit_decomp = gate_decomposition(unitary)
circuit_decomp.qasm() # port it to whatever hardware
#circuit_decomp.draw()
```
As an example we take a simulator backend from `qiskit` itself (it could be a real device instead of a simulator), we execute the decomposed circuit and plot the result.
```
backend=Aer.get_backend('aer_simulator')
circuit_decomp.measure_all()
result=backend.run(circuit_decomp).result()
counts = result.get_counts(circuit_decomp)
plot_histogram(counts)
```
We can see that the decomposition is perfectly consistent with the quantum walker we have simulated above with SQWalk!
# Analysis of Consumer Healthcare Costs
>Project submission for Applied Statistics course as a part of the PGP-AIML programme
***
#### Author:
>Abhinav Kimothi
#### Project Description:
>With rising healthcare costs, it becomes imperative for a medical insurance provider to carefully analyze costs vis-à-vis the customer profile, in order to better target and price potential customers. This project aims at exploring customer profiles and establishing their relationship with costs
#### Data:
>Medical Costs of individuals along with certain profile attributes
#### Metrics(KPIs):
>Medical Cost
#### Profile(Dimensions):
> - Age
- Sex
- BMI
- Number of Children/Dependents
- Smoking Information
- Region
#### Subsequently in the analysis, we will explore these profiles and try to establish insights into their medical costs
***
Let's first begin by __importing python libraries__ that will help us analyse this data
```
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import seaborn as sns
import os
from matplotlib import pyplot as plt
from scipy.stats import ttest_1samp, ttest_ind, mannwhitneyu, levene, shapiro
from statsmodels.stats.power import ttest_power
from statsmodels.stats.proportion import proportions_ztest
import statsmodels.api as sm
from statsmodels.formula.api import ols
from platform import python_version
print (python_version())
```
Next, we'll __read__ in the csv file __"insurance (2).csv"__ which has the data and store it as a __pandas DataFrame__ called __'Costs'__
```
Costs=pd.read_csv("insurance (2).csv")
```
Let's __preview__ the data before any analysis
```
Costs.head(5) ### shows the first five rows of the data
Costs.dtypes
```
***
> - Looks like there are __four numeric fields__, i.e. _'age'_ & _'children'_ are __integers__ and _'bmi'_ & _'charges'_ are __floats__
> - There's a case to be made here to analyse __'children'__ as a __categorical field__ rather than a continuous one. But, let's wait to see the statistical summary.
> - There are __three string fields (objects)__ i.e. _'sex', 'smoker' & 'region'_
> - We can assume 'age' to be in __years__ & charges to be in __$ US__
***
Let's analyse them a little later, but first - __how many total records are there in the data and how many columns?__ Let's find out
```
Costs.shape
```
***
> - Okay, the data contains records for __1338 individuals__. This might be good enough for us to make some statistical claims later in the analysis
> - We had also seen in the previews that there are __7 columns__ and it is reconfirmed here
***
Now let's first focus on the __numeric variables__ and look at the __count, mean, standard deviation__ and the __5-number summary__
```
print(Costs.describe().T) ### summarizes the distribution of numeric variables
print("Number of missing values for Age = " +str(Costs.age.isna().sum()))
print("Number of missing values for bmi = " +str(Costs.bmi.isna().sum()))
print("Number of missing values for children = " +str(Costs.children.isna().sum()))
print("Number of missing values for charges = " +str(Costs.charges.isna().sum()))
```
***
#### Certain observations can be made by just looking at the above statistics
***
> - We notice that __none__ of the four numeric fields has a missing value
>> - This is important because otherwise we would've had to perform exclusions or missing value imputations - but, none of that here
> - The data is for individuals who are __more than 18 years of age__, and includes individuals up to the __age of 64__. That's quite a __wide range__
> > - Though the __mean__ is around __39 years__, there's a __high standard deviation__ in age indicating a __high spread__
>> - The __mean and the median are fairly close__ to each other which implies a __symmetric distribution__ with __negligible skew__
>> - The 5 number summary suggests __no outliers__
> - Similarly, __BMI__ ranges from __15.96__ to __53.13__
> > - In the case of BMI, the __standard deviation is low__ compared to the mean. BMI must be __fairly centered__ around the mean
>> - The __mean and the median are fairly close__ to each other which implies a __symmetric distribution__ with __negligible skew__
>> - The 5 number summary suggests __outliers on both ends__
> - The __number of children/dependents__ has a small range from __0__ to __5__
>> - It is also noteworthy that though the highest number of children/dependents is five, __at least 3/4th__ of the sample has __two children/dependents or less__ and __at least 1/4th__ of the sample has __no children/dependents__
>> - There is a case to be made here that, since children/dependents has only a few unique values, it should be treated as a categorical variable
> - __Charges__, the metric of importance to us, has a wide range from __about 1100__ to __more than 63000__
>> - We can observe that the __median (around 9382)__ is much __smaller__ compared to the __mean (around 13270)__. This implies a __high right skew__
>> - There is a __high standard deviation__, almost approaching the mean itself. There is __high spread__ amongst charges
>> - The 5 number summary suggests __outliers on the higher end__
***
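On the note above about treating `children` as categorical: switching a column to a categorical dtype is a one-liner in pandas. A minimal sketch on a toy frame (the real call would be on `Costs['children']`):

```python
import pandas as pd

# Toy stand-in for the 'children' column of Costs
toy = pd.DataFrame({"children": [0, 1, 2, 0, 3, 5]})
toy["children"] = toy["children"].astype("category")
print(toy["children"].dtype)  # category
```

Grouped summaries and count plots then treat each value as a discrete level rather than a point on a continuum.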
__It makes sense here to substantiate the above observations through charts and more statistics__
Let's try to __visualize the distributions__ of the three continuous variables and also __validate__ our claims above about the __skew__
```
print("Distribution of Age")
sns.set_style("whitegrid")
sns.set_context("talk")
sns.distplot(Costs.age)
plt.show()
print("Skew of Age distribution is " + str(Costs.age.skew()))
print("Distribution of BMI")
sns.distplot(Costs.bmi)
plt.show()
print("Skew of BMI distribution is " + str(Costs.bmi.skew()))
print("Distribution of Charges")
sns.distplot(Costs.charges)
plt.show()
print("Skew of Charges distribution is " + str(Costs.charges.skew()))
```
***
#### Great! It looks like our earlier inferences on skew hold for the three continuous variables. The visual representation enhances our knowledge about the distribution
***
> __AGE__ : While we do infer that the __skew is minimal (0.05, almost no skew)__, it is also worthwhile to notice that the distribution of age in the data is __not normal__. It's closer to a uniform distribution.
> __BMI__ : The distribution has a __marginal right skew (0.28)__
> __Charges__ : There is a __high degree of right skew (1.5)__
***
#### __Skewed data__ increases the chances of __outliers__, though unskewed data can have outliers too.
__It is always important to understand the outliers in the data, as they might render our inference incorrect. The treatment of outliers shall remain a subjective call__
__Let's now look at a visual box plot that will help us identify the outliers in the data__
```
sns.set_style("dark")

# Outlier counts per the 'median +/- 1.5 * IQR' rule used in this notebook
def count_outliers(series, name):
    iqr = np.percentile(series, 75) - np.percentile(series, 25)
    lower = (series < series.median() - 1.5 * iqr).sum()
    upper = (series > series.median() + 1.5 * iqr).sum()
    print("As per the 'Median +/- 1.5 times IQR' rule")
    print("For " + name + " there are " + str(lower) + " outliers on the lower side")
    print("and " + str(upper) + " outliers on the upper side")

for name in ["Charges", "Age", "BMI"]:
    print("Box plot for " + name)
    sns.boxplot(Costs[name.lower()])
    plt.show()
    count_outliers(Costs[name.lower()], name)
```
#### The box plot reveals important facts about these continuous variables
***
> __AGE__ : There are __no outliers__ on either side
> __BMI__ : There are __thirty four__ values more than __42.99__ and __thirteen__ values below __17.8__
> __Charges__ : There are __no outliers on the lower side__ and __181__ values more than __27231__
***
### That was about the continuous variables, but let's not forget that there are four categorical variables too
> - Sex
> - Region
> - Smoker
> - Number of Children/Dependents
__Let's first look at the distribution of these__
```
print(Costs.sex.value_counts())
print(Costs.region.value_counts())
print(Costs.smoker.value_counts())
print(Costs.children.value_counts())
```
__We can observe here__
> - __Sex__: Almost an __equal distribution__ with 14 more males than females
> - __Region__: __Slightly skewed towards southeast__ with other three regions almost equal
> - __Smokers__: About __20% smokers__ and __80% non-smokers__
> - __Number of children/dependents__ : __43% have no dependents__ and __42% have either 1 or 2 dependents__
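Those shares fall straight out of `value_counts(normalize=True)`; a sketch on a toy series with the same 80/20 split (on the real data it would be `Costs.smoker.value_counts(normalize=True)`, and likewise for the other columns):

```python
import pandas as pd

# Toy series mirroring the ~20% smoker / ~80% non-smoker split described above
smoker = pd.Series(["no"] * 80 + ["yes"] * 20)
shares = smoker.value_counts(normalize=True)
print(shares["yes"])  # 0.2
```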
__The best way to look at this distribution is through count plots__
```
print("Count plot for Sex")
sns.countplot(Costs.sex)
plt.show()
print("Count plot for Region")
sns.countplot(Costs.region)
plt.show()
print("Count plot for Smoking")
sns.countplot(Costs.smoker)
plt.show()
print("Count plot for Number of Children/Dependent")
sns.countplot(Costs.children)
plt.show()
```
#### Our observations above have been represented in a visual manner
****
### Till now we have looked into univariate distributions. Now let's look into bivariate relationships. This will help us in making inferences on our metric of choice
***
__Firstly, let's explore bivariate relationships amongst the numeric fields (including children)__
```
print("Bivariate relationships of all numeric fields")
sns.set_context("notebook")
sns.set_style("white")
sns.pairplot(Costs)
plt.show()
```
__We can use catplots to explore relationships amongst categorical data__
```
print("Between Smoker and Region")
sns.set_context("talk")
sns.catplot(y="region", hue="smoker", kind="count",
palette="pastel", edgecolor=".6",
data=Costs)
plt.show()
print("Between Sex and Region")
sns.catplot(y="region", hue="sex", kind="count",
palette="pastel", edgecolor=".6",
data=Costs)
plt.show()
print("Between Sex and Smoker")
sns.catplot(y="sex", hue="smoker", kind="count",
palette="pastel", edgecolor=".6",
data=Costs)
plt.show()
print("Between Sex and Children")
sns.catplot(y="children", hue="sex", kind="count",
palette="pastel", edgecolor=".6",
data=Costs)
plt.show()
print("Between Region and Children")
sns.catplot(y="region", hue="children", kind="count",
palette="pastel", edgecolor=".6",
data=Costs)
plt.show()
print("Between Smoking and Children")
sns.catplot(y="children", hue="smoker", kind="count",
palette="pastel", edgecolor=".6",
data=Costs)
plt.show()
```
__We will now use barplots to understand the bivariate relationships between one categorical and one continuous variable__
```
print("Between Smoking and Charges")
sns.barplot(Costs.smoker, Costs.charges)
plt.show()
print("Between Region and Charges")
sns.barplot(Costs.region, Costs.charges)
plt.show()
print("Between Sex and Charges")
sns.barplot(Costs.sex, Costs.charges)
plt.show()
print("Between Sex and BMI")
sns.barplot(Costs.sex, Costs.bmi)
plt.show()
print("Between Sex and Age")
sns.barplot(Costs.sex, Costs.age)
plt.show()
print("Between Region and BMI")
sns.barplot(Costs.region, Costs.bmi)
plt.show()
print("Between Region and Age")
sns.barplot(Costs.region, Costs.age)
plt.show()
print("Between Smoking and Age")
sns.barplot(Costs.smoker, Costs.age)
plt.show()
print("Between Smoking and BMI")
sns.barplot(Costs.smoker, Costs.bmi)
plt.show()
```
## There are some valuable observations to be made here
***
> - __Charges increase with age__ in three distinct groups
> - There is __no apparent relationship between age and BMI__
> - __South East Region__ has a __higher__ number of __Smokers__ and __Males__
> - There are __more male smokers__ than females
> - __Smokers__ in total __pay more than three times the charges__ of non-smokers
> - __Charges are higher__ in __South East__ followed by __North East__
> - __Males pay more average charges__ than females
> - __Average age__ is __similar across Sex__
> - __South East Region__ has a __higher average BMI__
> - __Average age__ & __average BMI__ is __similar across Smokers and Non-Smokers__
***
## Now that we have looked at the univariate and the bivariate distributions, it is time to form some hypotheses and test them
### __Let's begin with looking at the relationship between smoking and costs__
#### We saw above in the chart that people who smoke pay more. Now let's test it statistically
__Question__. Do charges of people who smoke differ significantly from the people who don't?
We will compare the __average charges__ of __smoker=yes__ and __smoker=no__
__Null Hypothesis__: mean(charges) of smoker(yes) = mean(charges) of smoker(no)
__Alternate Hypothesis__: mean(charges) of smoker(yes) != mean(charges) of smoker(no)
__We can apply a two sample t-test__ (or, if the samples aren't normally distributed, a __two-sample Wilcoxon test__)
__Smokers__ and __non-smokers__ are __independent groups__
__Let's pull the chart again__
```
sns.barplot(Costs.smoker, Costs.charges)
plt.show()
print(Costs.groupby('smoker')['charges'].agg(['mean','count','std']))
print(Costs.charges.agg(['mean','count','std']))
```
__Let's divide our population charges into smokers and non-smokers__
```
smoker_charges=Costs[Costs['smoker']=='yes']['charges']
non_smoker_charges=Costs[Costs['smoker']=='no']['charges']
sns.distplot(smoker_charges)
plt.show()
print(shapiro(smoker_charges))
sns.distplot(non_smoker_charges)
plt.show()
print(shapiro(non_smoker_charges))
```
#### <font color='blue'>__The two samples are not normally distributed__</font>
__Still, let's apply a two sample t-test__
__Smokers__ and __non-smokers__ are __independent groups__
__Null Hypothesis__: mean(charges) of smoker(yes) = mean(charges) of smoker(no)
__Alternate Hypothesis__: mean(charges) of smoker(yes) != mean(charges) of smoker(no)
__The smaller the p-value, the greater the confidence with which we can reject the null hypothesis and say that the means are different__
```
t_stat, pvalue=ttest_ind(smoker_charges, non_smoker_charges)
print("The p-value is "+ str(pvalue))
```
As we observe here, the __p-value < .0001__, so we can say __with 99.99% confidence that the charges for smokers are not the same as the charges for non-smokers__
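One caveat: `ttest_ind` assumes equal variances by default, and the smoker group has a much larger spread than the non-smoker group. Passing `equal_var=False` runs Welch's t-test, which drops that assumption. A sketch with synthetic stand-ins for the two samples (the real call would reuse `smoker_charges` and `non_smoker_charges`; the means, spreads, and group sizes below are illustrative):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
a = rng.normal(32000, 11000, 274)   # stand-in for smoker charges
b = rng.normal(8400, 6000, 1064)    # stand-in for non-smoker charges
t_stat, pvalue = ttest_ind(a, b, equal_var=False)  # Welch's t-test
print(pvalue < 0.0001)  # True
```

With a gap this large the conclusion is the same either way, but Welch's variant is the safer default when variances differ.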
__But__ we observed from the distributions above that the samples are __not normally distributed__. Let's apply __the two-sample Wilcoxon test__ (Mann-Whitney U) to these samples
```
u, p_value = mannwhitneyu(smoker_charges, non_smoker_charges)
print ("two-sample wilcoxon-test p-value=", p_value)
```
__The samples are not normally distributed, and we can claim with more than 99.99% confidence that the charges for smokers are not the same as the charges for non-smokers__
In fact, by looking at the means, we can claim that __the charges for smokers are higher than the charges for non-smokers__ by more than 3.5 times
#### <font color='red'> In summary, we can claim with more than 99.99% confidence that the charges for smokers are higher than the charges for non-smokers</font>
***
### Now let's look at another relationship, between Sex and BMI
#### We visually observed above in a chart that BMI doesn't vary a lot with Sex. Now let's test it statistically
__Question__. Does bmi of males differ significantly from that of females?
We will compare the __average BMI__ of __sex=male__ and __sex=female__
__Null Hypothesis__: mean(bmi) of sex(male) = mean(bmi) of sex(female)
__Alternate Hypothesis__: mean(bmi) of sex(male) != mean(bmi) of sex(female)
__We can apply a two sample t-test__ (or if the samples aren't normally distributed a __two-sample wilcoxon test__)
__Male__ and __Female__ are __independent groups__
__Let's pull the chart again__
```
print("Between Sex and BMI")
sns.barplot(Costs.sex, Costs.bmi)
plt.show()
print(Costs.groupby('sex')['bmi'].agg(['mean','count','std']))
print(Costs.bmi.agg(['mean','count','std']))
```
__Let's divide our population BMI values into males and females__
```
female_bmi=Costs[Costs['sex']=='female']['bmi']
male_bmi=Costs[Costs['sex']=='male']['bmi']
sns.distplot(female_bmi)
plt.show()
print(shapiro(female_bmi))
sns.distplot(male_bmi)
plt.show()
print(shapiro(male_bmi))
```
#### <font color='blue'>The two samples are not normally distributed</font>
__Still, let's apply a two sample t-test__
__Male__ and __Female__ are __independent groups__
__Null Hypothesis__: mean(bmi) of sex(male) = mean(bmi) of sex(female)
__Alternate Hypothesis__: mean(bmi) of sex(male) != mean(bmi) of sex(female)
__The smaller the p-value, the greater the confidence with which we can reject the null hypothesis and say that the means are different__
```
t_stat, pvalue=ttest_ind(male_bmi, female_bmi)
print("The p-value is "+ str(pvalue))
```
__But__ we observed from the distributions above that the samples are __not normally distributed__. Let's apply __the two-sample Wilcoxon test__ to these samples
```
u, p_value = mannwhitneyu(male_bmi, female_bmi)
print ("two-sample wilcoxon-test p-value=", p_value)
```
As we observe here, the __p-value > .05__, which means __at a 95% confidence level we <font color='blue'>cannot</font> reject the null hypothesis__
> - Therefore, __at a 95% confidence level, the BMI for males and females is <font color='blue'> not different </font>__
However, the __p-value < .051__, which means __at a 94.9% confidence level we <font color='blue'>can</font> reject the null hypothesis__
> - Therefore, __at a 94.9% confidence level, the BMI for males <font color='blue'> is more </font> than the BMI for females__
#### <font color='red'> In summary, we can claim with 94.9% confidence that the BMI for males is higher than the BMI for females. However, the same is not true at a 95% confidence level</font>
***
### Now let's look at another type of relationship, the distribution of proportions. Here, let's evaluate the proportion of smokers across sex
#### We visually observed above in a chart there is a higher proportion of male smokers than females. Now let's test it statistically
__Question__. Is the proportion of smokers significantly different in different genders?
We will compare the __%smokers__ of __sex=male__ and __sex=female__
__Null Hypothesis__: %smoker(yes) of sex(male) = %smoker(yes) of sex(female)
__Alternate Hypothesis__: %smoker(yes) of sex(male) != %smoker(yes) of sex(female)
__We can apply a test of proportions__
__Let's pull the chart again__
```
print("Between Sex and Smoker")
sns.catplot(y="sex", hue="smoker", kind="count",
palette="pastel", edgecolor=".6",
data=Costs)
plt.show()
female_smokers = Costs[Costs['sex'] == 'female'].smoker.value_counts()[1] # number of female smokers
male_smokers = Costs[Costs['sex'] == 'male'].smoker.value_counts()[1] # number of male smokers
n_females = Costs.sex.value_counts()[1] # number of females in the data
n_males = Costs.sex.value_counts()[0] #number of males in the data
print ("Number of Females = ", n_females)
print ("Number of Males = ", n_males)
print ("Number of Female Smokers = ", female_smokers)
print ("Number of Male Smokers = ", male_smokers)
print(f'Proportion of smokers in females, males = {round(female_smokers/n_females,3)*100}%, {round(male_smokers/n_males,3)*100}%, respectively')
```
__Let's apply a test of proportions__
Ho = The proportions are equal
Ha = The two proportions are not equal
```
stat, pval = proportions_ztest([female_smokers, male_smokers] , [n_females, n_males])
print("The pvalue is ",pval)
```
As we observe here, the __p-value < .006__, which means __with 99.4% confidence we <font color='blue'>can</font> reject the null hypothesis__
> - Therefore, __at a 99.4% confidence level, the proportion of smokers in males <font color='blue'> is higher </font> than the proportion of smokers in females__
#### <font color='red'> In summary, we can claim with more than 99.4% confidence that the proportion of smokers amongst males is higher than the proportion of smokers amongst females</font>
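For intuition, the z statistic behind `proportions_ztest` can be computed by hand from the pooled proportion. A sketch using the smoker counts assumed for this dataset (115 of 662 females, 159 of 676 males; verify against the counts printed above):

```python
import math

def two_prop_z(x1, n1, x2, n2):
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)  # pooled proportion under the null
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_prop_z(115, 662, 159, 676)  # female vs male smokers
print(round(z, 3))
```

A |z| near 2.8 corresponds to the two-sided p-value of roughly .005 reported above.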
***
### Finally, let's compare the distributions across more than two groups. Here we'll compare the distribution of BMI in Females across the number of children
#### Let's first visualize this
```
Costs_Fem=Costs[(Costs.sex=='female') & (Costs.children<=2)]
print("BMI across number of children/dependents for Females")
sns.distplot(Costs_Fem[Costs_Fem.children==0]['bmi'], hist=False, rug=False)
sns.distplot(Costs_Fem[Costs_Fem.children==1]['bmi'], hist=False, rug=False)
sns.distplot(Costs_Fem[Costs_Fem.children==2]['bmi'], hist=False, rug=False)
plt.show()
```
__Visually, we can't distinguish much, except that for Children=0 & Children=1 the distributions are almost identical. Let's test this statistically__
First, let's find the mean BMI and the standard deviations in the groups
```
Costs_Fem.groupby('children')['bmi'].agg(['mean', 'count', 'std'])
```
__In order to compare the distributions, we will compare the means of the three groups using a one-way ANOVA test__
Here the __null hypothesis__ is that __all the means are the same__
Therefore, the __alternate hypothesis__ is that __at least one mean is different__
To rely on ANOVA we must check __two assumptions__
> - __Variance__ of the three distributions is equal: We'll check this using __Levene Test__
> - __Normality__ holds in the three distributions: We'll check this using __Shapiro Wilk Test__
```
levene(Costs_Fem[Costs_Fem.children==0]['bmi'],Costs_Fem[Costs_Fem.children==1]['bmi'],Costs_Fem[Costs_Fem.children==2]['bmi'])
```
__Levene's test__ has equality of variances as its null hypothesis. The above result shows a __high p-value__, which means we __cannot reject the null hypothesis__, so we can claim that the __variances of the three samples are not significantly different__
```
print(shapiro(Costs_Fem[Costs_Fem.children==0]['bmi']))
print(shapiro(Costs_Fem[Costs_Fem.children==1]['bmi']))
print(shapiro(Costs_Fem[Costs_Fem.children==2]['bmi']))
```
__At a 99% confidence level, the Shapiro-Wilk test does not reject normality for any of the three samples__
```
mod = ols('bmi ~ C(children)', data = Costs_Fem).fit()  # C() treats children as a categorical factor
aov_table = sm.stats.anova_lm(mod, typ=2)
print(aov_table)
```
__The p-value above is well above .05__, hence we __cannot reject the null hypothesis__
#### <font color='red'>Therefore, we can claim that the means of BMI amongst females with zero, one and two children are not different</font>
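Had the ANOVA rejected the null, a post-hoc test would identify which pair of groups differs. A sketch of statsmodels' Tukey HSD on synthetic BMI-like data (the real call would pass `Costs_Fem.bmi` and `Costs_Fem.children`; the group means and sizes below are invented):

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
bmi = np.concatenate([rng.normal(30.4, 6, 100),   # stand-in: 0 children
                      rng.normal(30.6, 6, 80),    # stand-in: 1 child
                      rng.normal(30.8, 6, 70)])   # stand-in: 2 children
children = np.repeat([0, 1, 2], [100, 80, 70])
res = pairwise_tukeyhsd(bmi, children, alpha=0.05)  # all pairwise group comparisons
print(res.summary())
```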
***
### This brings us to the conclusion of this analysis.
### There are, of course, many other hypotheses that can be made and tested on this data.
### I hope you enjoyed it.
#### Please [drop me a note](mailto:abhinavkimothi145@gmail.com) with your comments
# Preparing for Your Proposal
## Which client/dataset did you select and why?
Client 3: SportsStats (Olympics Dataset - 120 years of data)
SportsStats is a sports analysis firm partnering with local news and elite personal trainers to provide “interesting” insights to help their partners. Insights could be patterns/trends highlighting certain groups/events/countries, etc. for the purpose of developing a news story or discovering key health insights.
I selected the SportsStats client for two reasons:
1) the SportsStats data is smaller than the other datasets, which makes data wrangling and analysis easier; 2) I am interested in sports and in working with sports analytics.
## Describe the steps you took to import and clean the data.
To import the data, I used pandas to read the CSV files, and the built-in pandas `to_sql()` to store the data in a MySQL database. I did not clean the data at this stage, although the dataset has NaN values.
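The `to_sql` round trip looks roughly like this; sketched here against an in-memory SQLite connection as a stand-in for the MySQL engine (the real code would pass a SQLAlchemy MySQL engine instead, and the toy frame is illustrative):

```python
import sqlite3
import pandas as pd

df = pd.DataFrame({"ID": [1, 2], "Name": ["A", "B"]})
con = sqlite3.connect(":memory:")  # stand-in for the MySQL connection
df.to_sql("athlete_events", con, if_exists="replace", index=False)
back = pd.read_sql("SELECT * FROM athlete_events", con)
print(len(back))  # 2
```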
## Importing the data
```
import pandas as pd
from pandasql import sqldf
pysqldf = lambda q: sqldf(q, globals())
athlete_events = pd.read_csv("athlete_events.csv")
noc_regions = pd.read_csv("noc_regions.csv")
```
After importing the data, the next step is to divide the athlete_events table into winter and summer tables. The "Team", "Games", "Season", and "City" columns are not selected, since they provide either repetitive or unnecessary information:
```
summer_events = pysqldf('''SELECT
ID,
Name,
Sex,
Age,
Height,
Weight,
NOC,
Year,
Sport,
Event,
Medal
FROM
athlete_events
WHERE
Season = "Summer"''')
winter_events = pysqldf('''SELECT
ID,
Name,
Sex,
Age,
Height,
Weight,
NOC,
Year,
Sport,
Event,
Medal
FROM
athlete_events
WHERE
Season = "Winter"''')
```
## Initial exploration of the data
Out of 271116 data points, 9474 lack an age value, 60171 lack a height value, and 62875 lack a weight value. Sex values are complete. Games data and its attributes (year, city, etc.) are complete. Since team names can change, using NOC instead is more consistent.
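Those missing-value counts come from `isna().sum()`; a toy sketch (the real call is simply `athlete_events.isna().sum()`):

```python
import numpy as np
import pandas as pd

# Toy frame with the same kind of gaps as athlete_events
toy = pd.DataFrame({"Age": [24, np.nan, 31], "Height": [180.0, 175.0, np.nan]})
missing = toy.isna().sum()
print(missing["Age"], missing["Height"])  # 1 1
```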
```
from IPython.display import Image
Image(filename = "ERD.png", width = 800, height = 400)
```
## Project Proposal
In this project, I would analyze the summer and winter Olympic games data. I try to figure out important changes in terms of diversity and performance. I also try to find correlation between several factors (such as representing the host country) and team performance. This would be beneficial to anyone interested in understanding the demographics of Olympic games.
## Questions
Q1: Is there any correlation between the performance of a country in winter olympics and that in summer olympics?<br>
Q2: Does country performance by year change more in Winter Olympics or Summer Olympics?<br>
Q3: How has the male:female ratio evolved through time?
## Hypotheses
H1: Yes.<br>
H2: Winter Olympics.<br>
H3: Decreased.
## Approach
A1: calculate the Pearson correlation coefficient. <br>
A2: calculate the standard deviation in country performance through the years. A comparison between the average std of the Winter and that of the Summer Olympics will help.<br>
A3: draw a simple histogram.
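For A1, pandas computes the Pearson coefficient directly via `Series.corr`; a sketch on hypothetical per-country medal totals (the country labels and numbers below are invented for illustration):

```python
import pandas as pd

# Hypothetical medal totals per NOC in the two Games
perf = pd.DataFrame({
    "summer_medals": [110, 70, 40, 10],
    "winter_medals": [30, 25, 9, 4],
}, index=["C1", "C2", "C3", "C4"])
r = perf["summer_medals"].corr(perf["winter_medals"])  # Pearson by default
print(round(r, 2))
```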
<a href="https://colab.research.google.com/github/kentokura/ox_2x2_retrograde_analysis/blob/main/ox2x2/makeAllState.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
The three states of a node:
- Undiscovered
  next_node exists in neither unsolved nor solved
- Unvisited
  the node exists in unsolved
- Visited
  the node exists in solved

Procedure:
- Mark the initial state "____" as unvisited.
- Repeat the following until the unvisited queue is empty # start of BFS
  1. Pop the head node of the unvisited queue as current_node.
  1. Check the board for a result.
  1. If the board is decided,
     1. record the result in RESULT.
  1. If the board is not decided,
     1. expand current_node and enumerate all successor nodes as next_nodes.
     1. For each element next_node of next_nodes, do the following.
        1. If next_node is undiscovered,
           1. mark that node unvisited.
        1. Otherwise, if it has already been discovered,
           1. do nothing in particular.
        1. Append current_node to next_node.previous_node.
        1. Append next_node to current_node.next_node.
  1. Mark the node (current_node) as visited.
- Write solved out as a CSV.
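Stripped of the DataFrame bookkeeping, the procedure above is the standard BFS discovery pattern; a minimal sketch with a deque and a discovered set:

```python
from collections import deque

def bfs(start, successors):
    frontier = deque([start])   # the unvisited queue
    discovered = {start}        # everything in unsolved or solved
    visited = []                # the 'solved' order
    while frontier:
        current = frontier.popleft()
        for nxt in successors(current):
            if nxt not in discovered:  # undiscovered -> mark unvisited
                discovered.add(nxt)
                frontier.append(nxt)
        visited.append(current)        # mark current visited
    return visited

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs("A", lambda s: graph[s]))  # ['A', 'B', 'C', 'D']
```

The notebook below implements the same loop, but stores the predecessor/successor lists per node in pandas DataFrames so the whole game tree can be exported.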
# Input
```
# Mount Google Drive
from google.colab import drive
drive.mount('/content/drive')
# Modules for reading CSV files
import pandas as pd
from pandas import DataFrame
import numpy as np
from tabulate import tabulate # module for pretty-printing pandas DataFrames
```
# Processing
```
### Program that builds the game tree with BFS ###
###
### Functions start here ###

# Given a state, return the result for that board
def resultcheck(state: str) -> str:
    # Possible return values:
    # "o_win"
    # "x_win"
    # "draw"
    # ""
    result = ""
    if len(state) == 4:
        state = list(state)
        # attribute the win to the piece that actually formed the pair
        if state[0] != "_" and (state[0] == state[1] or state[0] == state[2] or state[0] == state[3]):
            result = "{}_win".format(state[0])
        elif state[1] != "_" and (state[1] == state[2] or state[1] == state[3]):
            result = "{}_win".format(state[1])
        elif state[2] != "_" and state[2] == state[3]:
            result = "{}_win".format(state[2])
        elif ''.join(state) in ["oooo", "xxxx"]:  # state is a list here, so join before comparing
            result = "draw"
    return result

# Given a state, return its normalized (canonical) form
def normalization(state: str) -> str:
    normalization_state = state
    if state in ["o___", "_o__", "__o_", "___o"]:
        normalization_state = "o___"
    elif state in ["ox__", "o_x_", "_o_x", "xo__", "_x_o", "__xo", "x_o_", "__ox"]:
        normalization_state = "ox__"
    elif state in ["o__x", "_ox_", "x__o", "_xo_"]:  # o and x on a diagonal
        normalization_state = "o__x"
    elif state in ["x_oo", "xo_o", "oxo_", "_xoo", "oox_", "_oxo", "oo_x", "o_ox"]:
        normalization_state = "oxo_"
    elif state in ["xoo_", "ox_o", "o_xo", "_oox"]:
        normalization_state = "ox_o"
    elif state in ["oxox", "xxoo", "xoxo", "ooxx"]:
        normalization_state = "oxox"
    elif state in ["oxxo", "xoox"]:
        normalization_state = "oxxo"
    return normalization_state

# Given a state, return the list of successor states
def nextStates(state: str) -> list:
    next_states = []
    # Decide whose turn it is
    player = "_"
    if state.count('o') <= state.count('x'):
        player = "o"
    else:
        player = "x"
    # Build the successor states
    for i, piece in enumerate(state):
        if piece == "_":
            next_state = list(state)
            next_state[i] = player
            next_states.append(''.join(next_state))
    # print(next_states)
    # return next_states
    # Normalize every state registered in next_states
    normalization_next_states = []
    for next_state in next_states:
        normalization_next_state = normalization(next_state)
        normalization_next_states.append(''.join(normalization_next_state))
    # Remove duplicates
    normalization_next_states = list(set(normalization_next_states))
    return normalization_next_states

### Functions end here

### Settings start ###
printFlag = False
### Settings end ###

### main starts here
# Initialize unsolvedDf and solvedDf
if printFlag:
    print("===")
    print("Program start")
    print("===")
    print()
    print("Initializing the data")
cols = ["PREVIOUS_STATES", "STATE", "NEXT_STATES", "RESULT"]  # [list of previous states, state, list of next states, result]
df = pd.DataFrame(index=[], columns=cols)
df.set_index("STATE")
unsolvedDf = df
solvedDf = df
if printFlag:
    print("Data initialized")
    print()

# Add the initial state "____" to unsolved. Nodes queued in unsolved are unvisited.
if printFlag:
    print("===")
    print("Preparing for BFS")
    print("===")
    print()
    print("Setting the initial state")
init_state = "____"
previous_state = ""
unsolvedDf = unsolvedDf.append(pd.Series([[previous_state], init_state, "unsolved", ""], index=df.columns, name=init_state))
if printFlag:
    print("Initial state set")     # check
    print("Check [UNSOLVED_DF]:")  # check
    print(unsolvedDf)              # check
    print()                        # check

# Repeat the following until unsolved is empty. Start of BFS.
if printFlag:
    print("===")
    print("Starting BFS")
    print("===")
    print()
for _ in range(1000):  # while len(unsolvedDf) > 0:  # a for loop is used during development
    # Pop the head node from unsolvedDf
    if len(unsolvedDf) <= 0:
        break
    current_node = unsolvedDf.iloc[0]                   # extract the head node (current_node)
    unsolvedDf.drop(unsolvedDf.index[0], inplace=True)  # remove the extracted node from unsolved
    # Check the game result
    result = resultcheck(current_node.STATE)
    # If the board is decided
    if result != "":
        current_node.RESULT = result
        current_node.NEXT_STATES = []
    else:  # if the board is not decided
        # Expand the head node (current_node) into its successors (next_states)
        next_states = nextStates(current_node.STATE)  # expansion result
        current_node.NEXT_STATES = next_states        # store the expansion result in current_node
        # For every expanded state, do the following
        if printFlag:
            print("Expanding node '{}' popped from unsolvedDf".format(current_node.STATE))
        if len(next_states) <= 0:
            if printFlag:
                print("  Expansion result: this node is terminal")
        for next_state in next_states:
            # If next_node is undiscovered (it exists in neither unsolved nor solved)
            if (next_state not in unsolvedDf.STATE.values) and (next_state not in solvedDf.STATE.values):
                if next_state == current_node.STATE:  # the successor is identical to the current node
                    if printFlag:
                        print("  Expansion result: identical to the current node '{}'".format(next_state))
                    continue
                else:
                    if printFlag:
                        print("  Expansion result: '{}' is an undiscovered node".format(next_state))
                    # T) Mark the node unvisited (append it to unsolved)
                    previous_state = [current_node.STATE]
                    next_node = pd.Series([previous_state, next_state, "unsolved", ""], index=df.columns, name=next_state)  # build next_node
                    unsolvedDf = unsolvedDf.append(next_node)
            else:  # F) otherwise, it is already discovered
                if printFlag:
                    print("  Expansion result: '{}' is an already discovered node".format(next_state))
                # Append current_node to the previous states of the already registered node
                previous_state = [current_node.STATE]
                if next_state in unsolvedDf.STATE.values:  # it exists in unsolvedDf
                    if printFlag:
                        print("    It exists in unsolved")
                    # append previous_state to unsolvedDf[unsolvedDf.STATE.values == next_state]
                    tmp = unsolvedDf.loc[next_state, "PREVIOUS_STATES"]
                    tmp.append(previous_state[0])
                    unsolvedDf.loc[next_state, "PREVIOUS_STATES"] = tmp
                elif next_state in solvedDf.STATE.values:  # it exists in solvedDf
                    if printFlag:
                        print("    It exists in solved")
                    # append previous_state to solvedDf[solvedDf.STATE.values == next_state]
                    tmp = solvedDf.loc[next_state, "PREVIOUS_STATES"]
                    tmp.append(previous_state[0])
                    solvedDf.loc[next_state, "PREVIOUS_STATES"] = tmp
                else:  # a state that slipped through for some reason
                    print("    Error")
    # Append the current node (current_node) to solvedDf. Nodes in solvedDf are visited.
    solvedDf = solvedDf.append(current_node)
if printFlag:
    print()
    print("BFS finished")
    print()

# Check the results
print("===")
print("Result check")
print("===")
print()
print("Check [unsolvedDf]:")
print()
print(tabulate(unsolvedDf, unsolvedDf.columns, tablefmt='github', showindex=True))
print()
print("Check [solvedDf]:")
print()
print(tabulate(solvedDf, solvedDf.columns, tablefmt='github', showindex=True))
print()
### main ends here
```
### Output
Check [solvedDf]:
| | PREVIOUS_STATES | STATE | NEXT_STATES | RESULT |
|------|-------------------|---------|--------------------------|----------|
| ____ | [''] | ____ | ['o___'] | |
| o___ | ['____'] | o___ | ['o__x', 'o_x_', 'ox__'] | |
| o__x | ['o___'] | o__x | ['oo_x', 'o_ox'] | |
| o_x_ | ['o___'] | o_x_ | ['o_xo', 'oox_'] | |
| ox__ | ['o___'] | ox__ | ['oxo_', 'ox_o'] | |
| oo_x | ['o__x'] | oo_x | [] | o_win |
| o_ox | ['o__x'] | o_ox | [] | o_win |
| o_xo | ['o_x_'] | o_xo | [] | o_win |
| oox_ | ['o_x_'] | oox_ | [] | o_win |
| oxo_ | ['ox__'] | oxo_ | [] | o_win |
| ox_o | ['ox__'] | ox_o | [] | o_win |
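Instead of hand-enumerating every symmetric variant as `normalization` does (which is easy to get wrong), the canonical form can be computed as the lexicographic minimum over the eight symmetries of the 2x2 board. A hedged alternative sketch (not the notebook's function; canonical representatives differ from the hand-picked ones, but states in the same orbit still map to one value):

```python
def canonical(state: str) -> str:
    # Board cells: 0 1 / 2 3; the 8 rotations/reflections as index permutations
    transforms = [(0, 1, 2, 3), (2, 0, 3, 1), (3, 2, 1, 0), (1, 3, 0, 2),
                  (1, 0, 3, 2), (2, 3, 0, 1), (0, 2, 1, 3), (3, 1, 2, 0)]
    return min(''.join(state[i] for i in t) for t in transforms)

print(canonical("ox__") == canonical("__xo"))  # True: same state up to symmetry
```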
# Output
```
# Write solvedDf out under the name ox_output
solvedDf.to_csv('/content/drive/My Drive/ox/workspace/ox_output.csv')
# Check ox_output
solvedDf = pd.read_csv(
    "/content/drive/My Drive/ox/workspace/ox_output.csv",
    index_col=0,      # the first column holds the state names (index)
    encoding="cp932"  # handles Windows-specific characters; harmless boilerplate
)
print(solvedDf)
```
# Boston Housing Prices Dataset
## Contents
0. [Introduction](#intro)
1. [Pre-processing and Splitting Data](#split)
2. [Models for median price predictions](#model)
3. [Stacked model](#stack)
## Introduction <a class="anchor" id="intro"></a>
This notebook illustrates the use of the `Stacker` to conveniently stack models over folds to perform predictions. In this example, the Boston Housing dataset (included in scikit-learn) is used. Two linear models (Ridge Regression and LASSO) are stacked. The single stacker is a Ridge Regression model.
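For orientation, scikit-learn ships a comparable built-in facility. This sketch uses `StackingRegressor` on synthetic data; it is not the `Pancake.Stacker` API used in the rest of the notebook, and the data here is generated rather than the Boston set:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Ridge, Lasso

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
stack = StackingRegressor(
    estimators=[("ridge", Ridge()), ("lasso", Lasso())],  # base models
    final_estimator=Ridge(),  # the stacker
    cv=5)                     # out-of-fold predictions feed the stacker
stack.fit(X, y)
print(round(stack.score(X, y), 2))
```

The key design point in either library is the same: the stacker is trained on out-of-fold predictions from the base models, so it never sees predictions made on data the base models were fit to.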
```
import warnings
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.model_selection import train_test_split, StratifiedKFold, RepeatedKFold, KFold, ParameterGrid, GridSearchCV
from sklearn.linear_model import Ridge, RidgeCV, Lasso, LassoCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR, LinearSVR
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import RobustScaler
# Stacking
from Pancake.Stacker import *
# Data
from sklearn.datasets import load_boston
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
warnings.filterwarnings('ignore')
# Random seed
seed=123
```
## Data Loading and Pre-processing <a class="anchor" id="split"></a>
```
# Get data
boston=load_boston()
X = boston['data']
y = boston['target']
print(boston['DESCR'])
```
Features and target variables:
```
feats = boston["feature_names"]
df_boston = pd.DataFrame(X, columns=feats)
df_boston['MEDV'] = y
# Unique values for each feature
df_boston.apply(lambda x: len(set(x)))
```
The following features benefit from a log transform:
* `CRIM`, `DIS`, `LSTAT`
```
fig, (ax1,ax2,ax3) = plt.subplots(1,3,figsize=(18,6))
sns.distplot(df_boston['CRIM'],ax=ax1)
sns.distplot(df_boston['DIS'], ax=ax2)
sns.distplot(df_boston['LSTAT'],ax=ax3)
plt.suptitle('Original Features')
plt.show()
fig, (ax1,ax2,ax3) = plt.subplots(1,3,figsize=(18,6))
sns.distplot(df_boston['CRIM'].apply(lambda x: np.log10(x)),ax=ax1)
sns.distplot(df_boston['DIS'].apply(lambda x: np.log10(x)), ax=ax2)
sns.distplot(df_boston['LSTAT'].apply(lambda x: np.log10(x)),ax=ax3)
plt.suptitle('Log transformed features')
plt.show()
```
To split the data into train/test sets, we stratify on `MEDV` percentiles. This yields a more balanced distribution of target values between the train and test sets.
```
def quantileClasses(y, percs=[25,50,75]):
quantiles = np.percentile(y, percs)
yq = np.zeros_like(y,dtype=int)
# Categorical yq based on quantiles
    # use >= so boundary values equal to a quantile are not left in class 0
    yq[(y >= quantiles[0]) & (y < quantiles[1])] = 1
    yq[(y >= quantiles[1]) & (y < quantiles[2])] = 2
    yq[y >= quantiles[2]] = 3
return yq
yq = quantileClasses(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=yq, test_size=0.25, random_state=seed)
```
Let's pre-process by log-transforming the selected features:
```
feats_toLog = ['CRIM','DIS','LSTAT']
df_train = pd.DataFrame(X_train, columns=feats)
df_test = pd.DataFrame(X_test, columns=feats)
for f in feats_toLog:
df_train[f] = np.log10(df_train[f])
df_test[f] = np.log10(df_test[f])
```
Let's also rescale the features (except the categorical `CHAS`):
```
feats_to_normalize = [f for f in feats if f != 'CHAS']
X_ = df_train[feats_to_normalize].values
# Scale training data
scaler = RobustScaler()
X_rscl = scaler.fit_transform(X_)
center_, scale_ = scaler.center_, scaler.scale_
```
Training and test sets:
```
# Train
df_train_new = pd.DataFrame(X_rscl, columns=feats_to_normalize)
df_train_new['CHAS'] = df_train['CHAS']
# Test
X_ = df_test[feats_to_normalize].values
X_ = (X_ - center_) / scale_
df_test_new = pd.DataFrame(X_, columns=feats_to_normalize)
df_test_new['CHAS'] = df_test['CHAS']
```
## Modeling <a class="anchor" id="model"></a>
As a simple case, let's use Ridge Regression and LASSO as in-layer models. We will train both models separately on all of the training data, as well as on folds for the stacked model.
```
X_train = df_train_new[feats].values
X_test = df_test_new[feats].values
```
#### Ridge Regression
```
skf = RepeatedKFold(n_repeats=10,n_splits=5,random_state=seed)
regMod_1 = RidgeCV(alphas=np.logspace(-2,2,100), scoring='neg_mean_squared_error', cv=skf)
regMod_1.fit(X_train, y_train)
print("Best hyper-parameter: alpha = {:.4f}".format(regMod_1.alpha_))
# Predict on train/test sets
y_pred_tr = regMod_1.predict(X_train)
mse_tr = mean_squared_error(y_train, y_pred_tr)
y_pred_ts = regMod_1.predict(X_test)
mse_ts = mean_squared_error(y_test, y_pred_ts)
# Performance
print("Training RMSE = {:.4f}".format(np.sqrt(mse_tr)))
print("Test RMSE = {:.4f}".format(np.sqrt(mse_ts)))
```
#### Lasso
```
skf = RepeatedKFold(n_repeats=10,n_splits=5,random_state=seed)
regMod_2 = LassoCV(n_alphas=100, cv=skf)
regMod_2.fit(X_train, y_train)
print("Best hyper-parameter: alpha = {:.4f}".format(regMod_2.alpha_))
# Train/test predictions
y_pred_tr = regMod_2.predict(X_train)
mse_tr = mean_squared_error(y_train, y_pred_tr)
y_pred_ts = regMod_2.predict(X_test)
mse_ts = mean_squared_error(y_test, y_pred_ts)
# Performance
print("Training RMSE = {:.4f}".format(np.sqrt(mse_tr)))
print("Test RMSE = {:.4f}".format(np.sqrt(mse_ts)))
```
## Stacking <a class="anchor" id="stack"></a>
We can now stack predictions from the above models. We choose to re-train the in-layer models over the folds, since this leads to better performance.
```
# Metric to maximize (negative RMSE)
def nrmse(y,y_pred):
return -np.sqrt(mean_squared_error(y,y_pred))
# Folds
splt = KFold(n_splits=5, shuffle=True, random_state=seed)  # random_state requires shuffle=True
# Initiate stacker
stacker = Stacker(X_train, y_train, splitter=splt, evalMetric=nrmse, family="regression")
# Hyper-parameters
hypers = {'alpha':np.logspace(-2,2,100)}
# Add one in-layer model
stacker.addModelIn(Ridge(), trainable = True, hyperParameters = hypers)
stacker.addModelIn(Lasso(), trainable = True, hyperParameters = hypers)
# Add one out-layer model
stacker.addModelOut(Ridge(), hypers)
# Train
predsTrain = stacker.stackTrain()
# Test
predsTest = stacker.stackTest(X_test)
# Train/Test set predictions and performance
mse_tr = mean_squared_error(y_train, predsTrain[0])
rmse_tr = np.sqrt(mse_tr)
print("Ridge Regression RMSE (train) = {:.4f}".format(rmse_tr))
mse_ts = mean_squared_error(y_test, predsTest[0])
rmse_ts = np.sqrt(mse_ts)
print("Ridge Regression RMSE (test) = {:.4f}".format(rmse_ts))
```
This result is better than single models trained on all data. Also, the difference between the training and test set performance is lower.
Let's now use the summary method of the stacker to get some more information:
```
stacker.summary()
```
| github_jupyter |
## Model - Infinite DPM - Chinese Restaurant Process Mixture Model (CRPMM)
#### Dirichlet process mixture model in which the number of clusters is learned.
ref = reference sequence
$N$ = number of reads
$K$ = number of clusters/components
$L$ = genome length (number of positions)
alphabet = {A, C, G, T, -}
no-mutation rate: $\gamma \sim Beta(a,b)$
no-error rate: $\theta \sim Beta(c,d)$
Cluster weights ($K$-dim): $\pi | \alpha \sim Dir(\alpha)$
Cluster assignments ($N$-dim): $z|\pi \sim Categorical(\pi)$
Cluster centers/haplotypes ($K \times L$-dim): $h \mid ref, \gamma \sim Categorical(W)$
with $W(l,i)=
\begin{cases}
\gamma, & \text{if } i = ref[l] \\
\frac{1-\gamma}{4}, & \text{else}
\end{cases}$ for $l \in \{1, \dots, L\}$ and $i \in \{1, \dots, |alphabet|\}$
Likelihood of the reads ($N$-dim): $r \mid z, h, \theta \sim Categorical(E)$
with $E(n,l,i)=
\begin{cases}
\theta, & \text{if } i = h_{z_n}[l] \\
\frac{1-\theta}{4}, & \text{else}
\end{cases}$ for $n \in \{1, \dots, N\}$, $l \in \{1, \dots, L\}$ and $i \in \{1, \dots, |alphabet|\}$
```
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS, DiscreteHMCGibbs, Predictive
from jax import random
import jax
import jax.numpy as jnp
import arviz as az
import matplotlib.pyplot as plt
# Minimal example
reference = jnp.array([0])
reads = jnp.array([[0], [1], [1], [1], [0], [1], [0], [1]])
alphabet ='01'
cluster_num = 5
input_data = reference, reads, len(alphabet)
# Use the following as inspiration
# https://forum.pyro.ai/t/variational-inference-for-dirichlet-process-clustering/98/2
def model_infiniteCRPMM(input_data):
reference, read_data, alphabet_length = input_data
# parameters
read_count = read_data.shape[0]
genome_length = read_data.shape[1]
alphabet_length = alphabet_length
alpha0 = 0.1
haplotypes = {} # sample this lazily
crp_counts = []
# define rates
mutation_rate = numpyro.sample('mutation_rate', dist.Beta(1, 1))
error_rate = numpyro.sample('error_rate', dist.Beta(1, 1))
# create matrix of rates
mutation_rate_matrix = jnp.full((genome_length, alphabet_length), (1 - mutation_rate) / (alphabet_length - 1))
mutation_rate_matrix = custom_put_along_axis(mutation_rate_matrix, reference.reshape(genome_length, 1), mutation_rate, axis=1)
#loc, scale = jnp.zeros(1), jnp.ones(1)*2
#alpha = numpyro.sample("alpha", dist.LogNormal(loc,scale)) # alpha must be more than zero
for n in range(read_count):
print('----')
print('read number ', n)
print('crp_counts ', crp_counts)
# sample from a CRP
weights = jnp.array(crp_counts + [alpha0])
weights /= weights.sum()
print('weights ', weights)
cluster_assignments = numpyro.sample("cluster_assignments"+str(n), dist.Categorical(weights))
print('cluster_assignments', cluster_assignments)
if cluster_assignments >= len(crp_counts):
# new cluster
crp_counts.append(1)
else:
# cluster already exists
crp_counts[cluster_assignments] += 1
# sample haplotypes
# lazily sample cluster mean
if int(cluster_assignments) not in haplotypes.keys():
haplotypes[int(cluster_assignments)] = numpyro.sample("haplotypes"+str(cluster_assignments), dist.Categorical(mutation_rate_matrix))
print('shape haplotypes[int(cluster_assignments)] ', haplotypes[int(cluster_assignments)].shape)
error_rate_matrix = jnp.full((genome_length, alphabet_length), (1 - error_rate) / (alphabet_length - 1))
print('error_rate ', error_rate)
print('shape error_rate_matrix', error_rate_matrix.shape)
print('before ' , type(error_rate_matrix))
print('haplotypes[int(cluster_assignments)] ',haplotypes[int(cluster_assignments)])
error_rate_matrix = custom_put_along_axis(error_rate_matrix, haplotypes[int(cluster_assignments)].reshape(genome_length, 1), error_rate, axis=1)
print('after ',type(error_rate_matrix))
obs = numpyro.sample("obs"+str(n), dist.Categorical(error_rate_matrix), obs=read_data[n])
rng_key = jax.random.PRNGKey(0)
num_warmup, num_samples = 2000, 20000
model = model_infiniteCRPMM
# Run NUTS. How many chains?
kernel = NUTS(model)
mcmc = MCMC(
DiscreteHMCGibbs(kernel),
num_warmup=num_warmup,
num_samples=num_samples,
num_chains=2
)
mcmc.run(rng_key, input_data)
```
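Note: the model above calls `custom_put_along_axis`, which is never defined in this notebook. A plausible stand-in (an assumption, not the notebook's original helper) that emulates `np.put_along_axis` functionally for immutable JAX arrays, assuming `axis=1` and integer indices of shape `(n, 1)`, is:

```python
import jax.numpy as jnp

def custom_put_along_axis(arr, indices, values, axis=1):
    # hypothetical helper: jax arrays are immutable, so emulate
    # np.put_along_axis with an indexed .at[...].set(...) update;
    # only axis=1 with integer indices of shape (n, 1) is handled
    rows = jnp.arange(arr.shape[0])[:, None]
    return arr.at[rows, indices].set(values)
```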
| github_jupyter |
```
import pandas as pd
import datetime
import matplotlib.pyplot as plt
all_o3_df = pd.read_csv("./all_years_o3.csv")
#turn date column elements into datetime objects
all_o3_df["Date"] = pd.to_datetime(all_o3_df["Date"])
all_o3_df = all_o3_df.set_index("Date")
all_pm25_df = pd.read_csv("./all_years_pm25.csv")
#turn date column elements into datetime objects
all_pm25_df["Date"] = pd.to_datetime(all_pm25_df["Date"])
all_pm25_df = all_pm25_df.set_index("Date")
all_pm25_df.head()
#select date range to measure PM2.5 for full shutdown - include all years
#messy right now - turn into a function!
#use Jacksonville as an example
earliest_year = min(all_pm25_df.index.year)
latest_year = max(all_pm25_df.index.year)
shutdown_start_date = (2, 10)#"1/23"
shutdown_end_date = (4, 8)#"4/8"
#mask marks dates strictly before the start date (earlier month, or same month and earlier day)
mask = pd.Series(map(lambda x: (x.month < shutdown_start_date[0]) or
                               (x.month == shutdown_start_date[0] and x.day < shutdown_start_date[1]),
                     all_pm25_df.index.date), index=all_pm25_df.index)
#first get dates after the start date for all years
shutdown_time_period_pm_df = all_pm25_df.loc[~mask, :]
#remove the later months
shutdown_time_period_pm_df = shutdown_time_period_pm_df.loc[shutdown_time_period_pm_df.index.month<=shutdown_end_date[0]]
#now get dates before the end date
mask2 = ((pd.Series(map(lambda x: x.month == shutdown_end_date[0], shutdown_time_period_pm_df.index.date), index=shutdown_time_period_pm_df.index)) &
((pd.Series(map(lambda x: x.day >= shutdown_end_date[1], shutdown_time_period_pm_df.index.date), index=shutdown_time_period_pm_df.index))))
shutdown_time_period_pm_df = shutdown_time_period_pm_df.loc[~mask2, :]
shutdown_time_period_pm_df = shutdown_time_period_pm_df.loc[shutdown_time_period_pm_df["City"] == "Shanghai"]
print(shutdown_time_period_pm_df.head())
shutdown_time_period_pm_df.tail(10)
#inputs for the function are the complete dataframe for one particulate (df), the city name (city), an integer tuple in the
#form of (month, day) for the shutdown date (shutdown_date, for example (1, 23) for 1/23) - NOTE that this is the
#date where the strictest lockdown regulations start for that city, and an integer tuple in the
#form of (month, day) for the reopen date (reopen_date, for example (4, 8) for 4/8) - NOTE that this is the date when the city
#begins to reopen from the strictest lockdown regulations
#
#returns a dataframe with the correct shutdown date ranges for all years in the data set
def shutdownData(df, city, shutdown_date, reopen_date):
    #mask marks dates strictly before the shutdown date (earlier month, or same month and earlier day)
    mask = pd.Series(map(lambda x: (x.month < shutdown_date[0]) or
                                   (x.month == shutdown_date[0] and x.day < shutdown_date[1]),
                         df.index.date), index=df.index)
#first get dates after the start date for all years
shutdown_time_period_df = df.loc[~mask, :]
#remove the later months
shutdown_time_period_df = shutdown_time_period_df.loc[shutdown_time_period_df.index.month<=reopen_date[0]]
#now get dates before the end date
mask2 = ((pd.Series(map(lambda x: x.month == reopen_date[0], shutdown_time_period_df.index.date), index=shutdown_time_period_df.index)) &
((pd.Series(map(lambda x: x.day >= reopen_date[1], shutdown_time_period_df.index.date), index=shutdown_time_period_df.index))))
shutdown_time_period_df = shutdown_time_period_df.loc[~mask2, :]
shutdown_time_period_df = shutdown_time_period_df.loc[shutdown_time_period_df["City"] == city]
return shutdown_time_period_df
test_df = shutdownData(all_pm25_df, "Shanghai", (2, 10), (4, 8))
print("The 'shutdownData' function is working correctly:", test_df.equals(shutdown_time_period_pm_df))
#so to get the ozone information for Shanghai between the shutdown date of 2/10 and the reopening date of 4/8, we
#need to call the function as follows:
Shanghai_o3_shutdown_df = shutdownData(all_o3_df, "Shanghai", (2, 10), (4, 8))
Shanghai_o3_shutdown_df
#get average of medians by year
bar_plot_info = shutdown_time_period_pm_df.groupby(shutdown_time_period_pm_df.index.year).mean()
bar_plot_info
bar_plot_info.plot(kind="bar", y="median (ug/m3)")
plt.savefig("./Shanghai_pm25med.png")
#combine three previous years into an average median value
prior_years_df = bar_plot_info.loc[bar_plot_info.index<2020]
prior_averages = prior_years_df.mean()
prior_averages
summary_bar_plot = pd.DataFrame({"average median during shutdown dates (ug/m3)":[prior_averages["median (ug/m3)"],
bar_plot_info["median (ug/m3)"][2020]]},
index=["Prior Years", "2020"])
summary_bar_plot.plot(kind="bar")
summary_bar_plot.pct_change()
plt.savefig("./Shanghai_pm25combined.png")
line_plot, line_axes = plt.subplots()
Shanghai_2020_pm25_df = all_pm25_df.loc[(all_pm25_df.index.year == 2020) & (all_pm25_df["City"] == "Shanghai")]
Shanghai_line_axes = Shanghai_2020_pm25_df.plot(kind="line", y="median (ug/m3)", ax=line_axes)
#set titles, axes labels
Shanghai_line_axes.set_title("Shanghai Air Quality 2020 - PM2.5")
Shanghai_line_axes.set_ylabel("PM2.5 Average Median (ug/m3)")
Shanghai_line_axes.set_xlabel("Month")
Shanghai_line_axes.get_figure().savefig("./Shanghai_2020_line_plot.png")
Shanghai_2020_shutdown = Shanghai_2020_pm25_df["2/10/20":"4/8/20"]
shutdown_axes = Shanghai_2020_shutdown.plot(y="median (ug/m3)", style="r", ax=line_axes)
shutdown_axes.legend(["median (ug/m3)", "median (ug/m3) during shutdown"])
line_plot
Shanghai_line_axes.get_figure().savefig("./Shanghai_2020_line_plot.png")
#look at o3 values for Shanghai for the same time period by year
Shanghai_avg_o3_df = Shanghai_o3_shutdown_df.groupby(Shanghai_o3_shutdown_df.index.year).mean()
Shanghai_avg_o3_df
prior_years_o3_df = Shanghai_avg_o3_df.loc[Shanghai_avg_o3_df.index<2020]
prior_o3_averages = prior_years_o3_df.mean()
prior_o3_averages
summary_bar_plot = pd.DataFrame({"average median during shutdown dates (ppb)":[prior_o3_averages["median (ppb)"],
Shanghai_avg_o3_df["median (ppb)"][2020]]},
index=["Prior Years", "2020"])
o3_axes = summary_bar_plot.plot(kind="bar")
#set titles, axes labels
o3_axes.set_title("Shanghai Air Quality - Ozone")
o3_axes.set_ylabel("O3 Average Median (ppb)")
o3_axes.set_xlabel("Year")
o3_axes.get_figure().savefig("./Shanghai_3yearmedianchange_o3.png")
summary_bar_plot.pct_change()
```
| github_jupyter |
# Inverse Kinematics tutorial
We'll demonstrate inverse kinematics on a Baxter robot.
## Setup
```
import numpy as np
from pykin.robots.bimanual import Bimanual
from pykin.kinematics.transform import Transform
from pykin.utils import plot_utils as plt
from pykin.utils.transform_utils import compute_pose_error
file_path = '../asset/urdf/baxter/baxter.urdf'
robot = Bimanual(file_path, Transform(rot=[0.0, 0.0, 0.0], pos=[0, 0, 0]))
visible_collision = True
```
You must set the kinematic chain by specifying the base link name and the end-effector link name:
```
robot.setup_link_name("base", "right_wrist")
robot.setup_link_name("base", "left_wrist")
# set the angles you want
head_thetas = np.zeros(1)
right_arm_thetas = np.array([-np.pi/4 , 0, 0, 0, 0 , 0 ,0])
left_arm_thetas = np.array([np.pi/4 , 0, 0, 0, 0 , 0 ,0])
```
### Compute the forward kinematics to set the robot's target pose.
```
thetas = np.concatenate((head_thetas ,right_arm_thetas ,left_arm_thetas))
target_transformations = robot.forward_kin(thetas)
fig, ax = plt.init_3d_figure()
plt.plot_robot(robot,
ax=ax,
transformations=target_transformations,
visible_collision=visible_collision)
ax.legend()
```
## Compute Inverse Kinematics
First, you must set the initial joint angles; here we use `np.random.randn`:
```
init_thetas = np.random.randn(7)
```
### Get the target pose with the target transformations information obtained above.
```
target_pose = { "right": robot.get_eef_pose(target_transformations)["right"],
"left" : robot.get_eef_pose(target_transformations)["left"]}
print(target_pose)
```
The target pose for inverse kinematics has shape `(7,)`: position `(x, y, z)` concatenated with orientation as a quaternion `(w, x, y, z)`.
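As a quick sketch of that layout (the numbers below are placeholders, not values from this robot):

```python
import numpy as np

position = np.array([0.3, -0.2, 0.8])         # x, y, z
orientation = np.array([1.0, 0.0, 0.0, 0.0])  # quaternion w, x, y, z
pose = np.concatenate((position, orientation))
assert pose.shape == (7,)
```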
### Compute the target joints using inverse kinematics.
`maxIter` is the maximum number of iterations used to update the joint angles. The working time reported is how long the function takes to execute. `LM` refers to the Levenberg-Marquardt method and `NR` to Newton-Raphson.
### Levenberg-Marquardt method
```
ik_LM_result = robot.inverse_kin(
init_thetas,
target_pose,
method="LM",
maxIter=100)
print(ik_LM_result)
thetas_LM = np.concatenate((head_thetas, ik_LM_result["right"], ik_LM_result["left"]))
result_fk_LM = robot.forward_kin(thetas_LM)
_, ax = plt.init_3d_figure("LM IK Result")
plt.plot_robot(robot, ax, result_fk_LM, visible_collision=visible_collision)
```
### Newton-Raphson method
```
ik_NR_result = robot.inverse_kin(
init_thetas,
target_pose,
method="NR",
maxIter=100)
print(ik_NR_result)
thetas_NR = np.concatenate((head_thetas, ik_NR_result["right"], ik_NR_result["left"]))
result_fk_NR = robot.forward_kin(thetas_NR)
_, ax = plt.init_3d_figure("NR IK Result")
plt.plot_robot(robot, ax, result_fk_NR, visible_collision=visible_collision)
```
### Error between target pose and pose obtained by solving ik
```
err = {}
for arm in robot.arm_type:
err[arm+"_NR_error"] = compute_pose_error(
target_transformations[robot.eef_name[arm]].h_mat,
result_fk_NR[robot.eef_name[arm]].h_mat)
err[arm+"_LM_error"] = compute_pose_error(
target_transformations[robot.eef_name[arm]].h_mat,
result_fk_LM[robot.eef_name[arm]].h_mat)
for error_name, value in err.items():
print(f"{error_name} : {value}")
plt.show_figure()
```
| github_jupyter |
# Imports
```
import numpy as np
import pandas as pd
import glob
import re
from bs4 import BeautifulSoup
import nltk
nltk.download('punkt')
from nltk.tokenize import word_tokenize as wt
nltk.download('stopwords')
from nltk.corpus import stopwords
```
# Load data
```
def load_reviews(path, columns=["filename", 'review']):
assert len(columns) == 2
l = list()
for filename in glob.glob(path):
# print(filename)
with open(filename, 'r') as f:
review = f.read()
l.append((filename, review))
return pd.DataFrame(l, columns=columns)
def load_labelled_data(path, neg='/neg/',
pos='/pos/', shuffle=True):
neg_df = load_reviews(path + neg + "*.txt")
pos_df = load_reviews(path + pos + "*.txt")
neg_df['sentiment'] = 0
pos_df['sentiment'] = 1
df = pd.concat([neg_df, pos_df], axis=0)
if shuffle:
df = df.sample(frac=1, random_state=42)
return df
train_df = load_labelled_data("./aclImdb/train/")
train_df.info()
```
# Generate WordClouds and Frequencies
```
en_stopw = set(stopwords.words("english"))
def get_words(review, words, stopw=en_stopw):
review = BeautifulSoup(review).text # remove HTML tags
review = re.sub('[^A-Za-z]', ' ', review) # remove non letters
review = review.lower()
tok_rev = wt(review)
rev_word = [word for word in tok_rev if word not in stopw]
words += rev_word
pos_rev = train_df[train_df.sentiment == 1]
pos_rev.head()
pos_words = []
pos_rev.review.apply(get_words, args=(pos_words,))
from wordcloud import WordCloud
import matplotlib.pyplot as plt
pos_words_sen = " ".join(pos_words)
pos_wc = WordCloud(width = 600,height = 512).generate(pos_words_sen)
plt.figure(figsize = (12, 8), facecolor = 'k')
plt.imshow(pos_wc)
plt.axis('off')
plt.tight_layout(pad = 0)
plt.show()
neg_words = []
neg_rev = train_df[train_df.sentiment == 0]
neg_rev.review.apply(get_words, args=(neg_words, ))
len(neg_words)
neg_words_sen = " ".join(neg_words)
neg_wc = WordCloud(width = 600,height = 512).generate(neg_words_sen)
plt.figure(figsize = (12, 8), facecolor = 'k')
plt.imshow(neg_wc)
plt.axis('off')
plt.tight_layout(pad = 0)
plt.show()
from collections import Counter
pos = Counter(pos_words)
neg = Counter(neg_words)
pos.most_common(10)
neg.most_common(10)
for word, count in pos.most_common(1000):
negc = neg[word]
if abs((count-negc)/count) > 0.50:
print(word, count, negc)
```
# Naive-Bayes Classifier
```
# lets try to build a naive bayes model for sentiment classification
tot_words = pos + neg
tot_words.most_common(10)
top2k = [x for (x, y) in tot_words.most_common(2000)]
def featurize(review, topk=top2k, stopw=en_stopw):
review = BeautifulSoup(review).text # remove HTML tags
review = re.sub('[^A-Za-z]', ' ', review) # remove non letters
review = review.lower()
tok_rev = wt(review)
rev_word = [word for word in tok_rev if word not in stopw]
features = {}
    for word in topk:  # use the parameter, not the global top2k
        features['contains({})'.format(word)] = (word in rev_word)
return features
train = [(featurize(rev), senti) for (rev, senti) in zip(train_df.review, train_df.sentiment)]
classifier = nltk.NaiveBayesClassifier.train(train)
# 0: negative sentiment, 1: positive sentiment
classifier.show_most_informative_features(120)
```
| github_jupyter |
# Nonstationary Temporal Matrix Factorization
Taking into account both seasonal differencing and first-order differencing.
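As a quick illustration on a hypothetical series (not part of the notebook's data), applying seasonal differencing followed by first-order differencing looks like:

```python
import numpy as np

def seasonal_then_first_diff(x, season):
    # seasonal differencing: y_t = x_t - x_{t-season}
    y = x[season:] - x[:-season]
    # first-order differencing: z_t = y_t - y_{t-1}
    return y[1:] - y[:-1]

# a quadratic trend is reduced to a constant by the two differencings
x = np.arange(10, dtype=float) ** 2
z = seasonal_then_first_diff(x, season=3)
```

The `Psi` matrices constructed below encode exactly this combined differencing as a linear operator on the temporal factor matrix.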
```
import numpy as np
def compute_mape(var, var_hat):
return np.sum(np.abs(var - var_hat) / var) / var.shape[0]
def compute_rmse(var, var_hat):
return np.sqrt(np.sum((var - var_hat) ** 2) / var.shape[0])
def generate_Psi(T, d, season):
Psi = []
for k in range(0, d + 1):
if k == 0:
Psi.append(np.append(np.zeros((T - d - season, d)),
np.append(-1 * np.eye(T - d - season), np.zeros((T - d - season, season)), axis = 1)
+ np.append(np.zeros((T - d - season, season)), np.eye(T - d - season), axis = 1), axis = 1))
else:
Psi.append(np.append(np.append(np.zeros((T - d - season, d - k)),
np.append(-1 * np.eye(T - d - season), np.zeros((T - d - season, season)), axis = 1)
+ np.append(np.zeros((T - d - season, season)), np.eye(T - d - season), axis = 1), axis = 1),
np.zeros((T - d - season, k)), axis = 1))
return Psi
def update_cg(var, r, q, Aq, rold):
alpha = rold / np.inner(q, Aq)
var = var + alpha * q
r = r - alpha * Aq
rnew = np.inner(r, r)
q = r + (rnew / rold) * q
return var, r, q, rnew
def ell_w(ind, W, X, rho):
return X @ ((W.T @ X) * ind).T + rho * W
def conj_grad_w(sparse_mat, ind, W, X, rho, maxiter = 5):
rank, dim1 = W.shape
w = np.reshape(W, -1, order = 'F')
r = np.reshape(X @ sparse_mat.T - ell_w(ind, W, X, rho), -1, order = 'F')
q = r.copy()
rold = np.inner(r, r)
for it in range(maxiter):
Q = np.reshape(q, (rank, dim1), order = 'F')
Aq = np.reshape(ell_w(ind, Q, X, rho), -1, order = 'F')
w, r, q, rold = update_cg(w, r, q, Aq, rold)
return np.reshape(w, (rank, dim1), order = 'F')
def ell_x(ind, W, X, A, Psi, d, lambda0, rho):
rank, dim2 = X.shape
temp = np.zeros((d * rank, Psi[0].shape[0]))
for k in range(1, d + 1):
temp[(k - 1) * rank : k * rank, :] = X @ Psi[k].T
temp1 = X @ Psi[0].T - A @ temp
temp2 = np.zeros((rank, dim2))
for k in range(d):
temp2 += A[:, k * rank : (k + 1) * rank].T @ temp1 @ Psi[k + 1]
return W @ ((W.T @ X) * ind) + rho * X + lambda0 * (temp1 @ Psi[0] - temp2)
def conj_grad_x(sparse_mat, ind, W, X, A, Psi, d, lambda0, rho, maxiter = 5):
rank, dim2 = X.shape
x = np.reshape(X, -1, order = 'F')
r = np.reshape(W @ sparse_mat - ell_x(ind, W, X, A, Psi, d, lambda0, rho), -1, order = 'F')
q = r.copy()
rold = np.inner(r, r)
for it in range(maxiter):
Q = np.reshape(q, (rank, dim2), order = 'F')
Aq = np.reshape(ell_x(ind, W, Q, A, Psi, d, lambda0, rho), -1, order = 'F')
x, r, q, rold = update_cg(x, r, q, Aq, rold)
return np.reshape(x, (rank, dim2), order = 'F')
def notmf(dense_mat, sparse_mat, rank, d, lambda0, rho, season, maxiter):
dim1, dim2 = sparse_mat.shape
W = 0.01 * np.random.randn(rank, dim1)
X = 0.01 * np.random.randn(rank, dim2)
A = 0.01 * np.random.randn(rank, d * rank)
if np.isnan(sparse_mat).any() == False:
ind = sparse_mat != 0
pos_test = np.where((dense_mat != 0) & (sparse_mat == 0))
elif np.isnan(sparse_mat).any() == True:
pos_test = np.where((dense_mat != 0) & (np.isnan(sparse_mat)))
ind = ~np.isnan(sparse_mat)
sparse_mat[np.isnan(sparse_mat)] = 0
dense_test = dense_mat[pos_test]
del dense_mat
Psi = generate_Psi(dim2, d, season)
Phi = (np.append(np.zeros((dim2 - d - 1 - season, 1)), np.eye(dim2 - d - 1 - season), axis = 1)
- np.append(np.eye(dim2 - d - 1 - season), np.zeros((dim2 - d - 1 - season, 1)), axis = 1))
for k in range(d + 1):
Psi[k] = Phi @ Psi[k]
show_iter = 100
temp = np.zeros((d * rank, dim2 - d - season - 1))
for it in range(maxiter):
W = conj_grad_w(sparse_mat, ind, W, X, rho)
X = conj_grad_x(sparse_mat, ind, W, X, A, Psi, d, lambda0, rho)
for k in range(1, d + 1):
temp[(k - 1) * rank : k * rank, :] = X @ Psi[k].T
A = X @ Psi[0].T @ np.linalg.pinv(temp)
mat_hat = W.T @ X
if (it + 1) % show_iter == 0:
temp_hat = mat_hat[pos_test]
print('Iter: {}'.format(it + 1))
print('MAPE: {:.6}'.format(compute_mape(dense_test, temp_hat)))
print('RMSE: {:.6}'.format(compute_rmse(dense_test, temp_hat)))
print()
return mat_hat, W, X, A
def notmf_dic(obs, W, X, A, d, lambda0, rho, season):
dim1, dim2 = obs.shape
rank = X.shape[0]
if np.isnan(obs).any() == False:
ind = obs != 0
elif np.isnan(obs).any() == True:
ind = ~np.isnan(obs)
obs[np.isnan(obs)] = 0
Psi = generate_Psi(dim2, d, season)
Phi = (np.append(np.zeros((dim2 - d - 1 - season, 1)), np.eye(dim2 - d - 1 - season), axis = 1)
- np.append(np.eye(dim2 - d - 1 - season), np.zeros((dim2 - d - 1 - season, 1)), axis = 1))
for k in range(d + 1):
Psi[k] = Phi @ Psi[k]
X = conj_grad_x(obs, ind, W, X, A, Psi, d, lambda0, rho)
temp = np.zeros((d * rank, dim2 - d - season - 1))
for k in range(1, d + 1):
temp[(k - 1) * rank : k * rank, :] = X @ Psi[k].T
A = X @ Psi[0].T @ np.linalg.pinv(temp)
return X, A
def var4cast(X, A, d, delta, season):
dim1, dim2 = X.shape
X_hat = np.append((X[:, season + 1 : dim2] - X[:, 1 : dim2 - season]
- X[:, season : dim2 - 1] + X[:, 0 : dim2 - season - 1]),
np.zeros((dim1, delta)), axis = 1)
for t in range(delta):
X_hat[:, dim2 - season - 1 + t] = A @ X_hat[:, dim2 - season - 1 + t - np.arange(1, d + 1)].T.reshape(dim1 * d)
X = np.append(X, np.zeros((dim1, delta)), axis = 1)
for t in range(delta):
X[:, dim2 + t] = (X[:, dim2 - season + t] + X[:, dim2 - 1 + t]
- X[:, dim2 - season - 1 + t] + X_hat[:, dim2 - season - 1 + t])
return X
from ipywidgets import IntProgress
from IPython.display import display
def rolling4cast(dense_mat, sparse_mat, pred_step, delta, rank, d, lambda0, rho, season, maxiter):
dim1, T = sparse_mat.shape
start_time = T - pred_step
max_count = int(np.ceil(pred_step / delta))
mat_hat = np.zeros((dim1, max_count * delta))
f = IntProgress(min = 0, max = max_count) # instantiate the bar
display(f) # display the bar
for t in range(max_count):
if t == 0:
_, W, X, A = notmf(dense_mat[:, : start_time], sparse_mat[:, : start_time],
rank, d, lambda0, rho, season, maxiter)
else:
X, A = notmf_dic(sparse_mat[:, : start_time + t * delta], W, X_new, A, d, lambda0, rho, season)
X_new = var4cast(X, A, d, delta, season)
mat_hat[:, t * delta : (t + 1) * delta] = W.T @ X_new[:, - delta :]
f.value = t
small_dense_mat = dense_mat[:, start_time : T]
pos = np.where((small_dense_mat != 0) & (np.invert(np.isnan(small_dense_mat))))
mape = compute_mape(small_dense_mat[pos], mat_hat[pos])
rmse = compute_rmse(small_dense_mat[pos], mat_hat[pos])
print('Prediction MAPE: {:.6}'.format(mape))
print('Prediction RMSE: {:.6}'.format(rmse))
print()
return mat_hat, W, X, A
import numpy as np
dense_mat = np.load('../datasets/NYC-movement-data-set/hourly_speed_mat_2019_1.npz')['arr_0']
for month in range(2, 4):
dense_mat = np.append(dense_mat, np.load('../datasets/NYC-movement-data-set/hourly_speed_mat_2019_{}.npz'.format(month))['arr_0'], axis = 1)
import time
for rank in [10]:
for delta in [1, 2, 3, 6]:
for d in [1, 2, 3, 6]:
start = time.time()
dim1, dim2 = dense_mat.shape
pred_step = 7 * 24
lambda0 = 1
rho = 5
season = 7 * 24
maxiter = 50
mat_hat, W, X, A = rolling4cast(dense_mat[:, : 24 * 7 * 10], dense_mat[:, : 24 * 7 * 10],
pred_step, delta, rank, d, lambda0, rho, season, maxiter)
print('delta = {}'.format(delta))
print('rank R = {}'.format(rank))
print('Order d = {}'.format(d))
end = time.time()
print('Running time: %d seconds'%(end - start))
```
### License
<div class="alert alert-block alert-danger">
<b>This work is released under the MIT license.</b>
</div>
| github_jupyter |
# FTE/BTE Experiment for MNIST & Fashion-MNIST
As an extension of the FTE/BTE experiments demonstrated on the CIFAR and Food-101 datasets, we now examine the performance of progressive learning algorithms on the MNIST and Fashion-MNIST datasets.
Due to their similar structure, with both containing 60,000 training and 10,000 test samples of 28x28 grayscale images, MNIST and Fashion-MNIST are well suited for studying recruitment between two different datasets. We are interested in obtaining benchmarks for how inter-dataset training performs, and do so using the FTE/BTE experiment.
```
import numpy as np
import keras
import functions.fte_bte_mnist_functions as fn
```
**Note:** This notebook tutorial uses functions stored externally within `functions/fte_bte_mnist_functions.py` to simplify presentation of code. These functions are imported above, along with other libraries.
## Benchmark Individual Datasets
Before we compare performance between datasets, we begin by first benchmarking the individual datasets, such that we are able to compare relative performance. We run the FTE/BTE experiments on MNIST and Fashion-MNIST individually.
### Import Data
First, let's import the data. Both the MNIST and Fashion-MNIST datasets can be imported via the `keras` package.
```
(MNIST_x_train, MNIST_y_train), (MNIST_x_test, MNIST_y_test) = keras.datasets.mnist.load_data()
MNIST_x_data = np.concatenate((MNIST_x_train, MNIST_x_test))
MNIST_y_data = np.concatenate((MNIST_y_train, MNIST_y_test))
(FASHION_x_train, FASHION_y_train), (FASHION_x_test, FASHION_y_test) = keras.datasets.fashion_mnist.load_data()
FASHION_x_data = np.concatenate((FASHION_x_train, FASHION_x_test))
FASHION_y_data = np.concatenate((FASHION_y_train, FASHION_y_test))
```
### Define Hyperparameters
Next, let's define the hyperparameters to be used for the experiment, which are as follows:
- `model`: model to be used for FTE/BTE experiment
- `num_tasks`: number of tasks
- `num_trees`: number of trees
- `num_points_per_task`: number of samples to take from the data set for each task
- `reps`: number of repetitions
```
### MAIN HYPERPARAMS ###
model = "uf"
num_tasks = 5
num_trees = 10
num_points_per_task=500
reps = 100
########################
```
By default, for the individual datasets, we are using a forest with `10` trees. From the `5` tasks, each of which contains 2 different labels, we take `500` samples randomly and run the experiment on it. This is repeated `100` times.
### MNIST
First, let's look at MNIST, which contains images of handwritten numerical digits from 0-9. Since we are using 5 tasks, each task contains data for two numbers.
We call the function to run the experiment:
```
accuracy_all_task = fn.run_experiment(MNIST_x_data, MNIST_y_data, num_tasks, num_points_per_task)
```
Next, we calculate the accuracy over tasks, as well as the forward transfer efficiency (FTE), backward transfer efficiency (BTE), and overall transfer efficiency (TE). Given these values, we can plot them as follows:
```
acc, bte, fte, te = fn.calculate_results(accuracy_all_task, num_tasks)
fn.plot_results(acc, bte, fte, te)
```
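For reference, transfer efficiency is conventionally defined as a ratio of generalization errors; the sketch below uses that common definition (an assumption — the exact formulas live in `fn.calculate_results`):

```python
def transfer_efficiency(err_single_task, err_lifelong):
    # TE > 1 means the lifelong learner benefits from the other tasks;
    # TE < 1 means interference.
    return err_single_task / err_lifelong

# Hypothetical errors on task 1: a learner trained on task 1 alone
# vs. a lifelong learner that has also seen the remaining tasks.
print(transfer_efficiency(0.20, 0.16))  # 1.25
```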
### Fashion-MNIST
Next, we do the same for Fashion-MNIST, which contains images of clothing. Each task contains randomly selected images of two pieces of clothing.
We call the function to run the experiment:
```
accuracy_all_task = fn.run_experiment(FASHION_x_data, FASHION_y_data, num_tasks, num_points_per_task)
```
Next, we again calculate the accuracy over tasks, along with the forward transfer efficiency (FTE), backward transfer efficiency (BTE), and overall transfer efficiency (TE), and plot the results:
```
acc, bte, fte, te = fn.calculate_results(accuracy_all_task, num_tasks)
fn.plot_results(acc, bte, fte, te)
```
## FTE/BTE Between Datasets
Now that the individual datasets' transfer capabilities have been evaluated, let's look at how learning transfers between different datasets.
### Update Hyperparameters
For this, we want to use the first dataset as the first task and the second dataset as the second task, which makes it two tasks of 10 labels each. We therefore update the hyperparameters such that `num_tasks = 2`:
```
### MAIN HYPERPARAMS ###
model = "uf"
num_tasks = 2
num_trees = 10
num_points_per_task = 500
reps = 100
########################
```
### Reformat Data
Since we want to train across the two datasets, we concatenate them into a single dataset, shifting the MNIST labels by 10 so that the label sets of the two datasets do not overlap:
```
x_data = np.concatenate((FASHION_x_data, MNIST_x_data))
y_data = np.concatenate((FASHION_y_data, MNIST_y_data + 10))
```
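A quick sanity check shows why the `+ 10` offset matters: it keeps the two datasets' labels disjoint. (The arrays below are small stand-ins for the real label arrays.)

```python
import numpy as np

# Stand-ins for the two label arrays, each with labels 0-9
FASHION_y_data = np.arange(10)
MNIST_y_data = np.arange(10)

y_data = np.concatenate((FASHION_y_data, MNIST_y_data + 10))
print(np.unique(y_data))  # 0..19: Fashion-MNIST keeps 0-9, MNIST becomes 10-19
```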
### MNIST -> Fashion-MNIST
Now, we run the experiment across datasets, calling the function as follows:
```
accuracy_all_task = fn.run_experiment(x_data, y_data, num_tasks, num_points_per_task)
```
Given the resulting accuracies, we calculate the transfer efficiencies and then plot the results:
```
acc, bte, fte, te = fn.calculate_results(accuracy_all_task, num_tasks)
fn.plot_results(acc, bte, fte, te)
```
<a href="https://colab.research.google.com/github/j3nguyen/jupyter_notebooks/blob/master/Optimizing_a_Library_Collection.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Stocking a digital library using combinatorial optimization
## Background
Suppose we are starting a library and want to select the best set of books.
There are so many great books available, but we only have $1000 to spend. Which books do we select?
## Solution
This is an example of the Knapsack problem. We have a knapsack of limited capacity and items of varying sizes and utility; which items do we pack?
In this scenario, the "knapsack" is the $1000 book budget and the items are the books. Each book has an associated "enjoyment" level and an associated cost.
Using combinatorial optimization, we will find the optimal set of books to maximize reader enjoyment and stay within the budget.
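To see the structure of the problem before reaching for a solver, here is a toy dynamic-programming sketch of the 0/1 knapsack recursion (the book data is made up; the real data and solver follow below):

```python
def knapsack(values, costs, budget):
    # dp[b] = best total value achievable with budget b
    dp = [0] * (budget + 1)
    for v, c in zip(values, costs):
        # iterate budgets backwards so each item is used at most once
        for b in range(budget, c - 1, -1):
            dp[b] = max(dp[b], dp[b - c] + v)
    return dp[budget]

# Toy data: enjoyment ratings and prices for four hypothetical books
ratings = [400, 450, 380, 420]
prices = [30, 45, 25, 35]
print(knapsack(ratings, prices, 60))  # 800: books 3 and 4 fit the $60 budget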
The following code is adapted from Google's [OR tools knapsack](https://developers.google.com/optimization/bin/knapsack) example.
```
# install Google's OR tools library
%pip install --upgrade --user ortools
from ortools.algorithms import pywrapknapsack_solver
import pandas as pd
import math
import time
# from google.colab import drive
# drive.mount('/content/drive')
path = '/content/drive/My Drive/Colab Notebooks/'
```
### 1. Load the data
The [Goodreads dataset](https://www.kaggle.com/jealousleopard/goodreadsbooks) will serve as the universe of books to choose from.
For illustrative purposes, we'll assume the cost of each book is proportional to the number of pages, at $0.10/page.
We'll also assume that "best" is proportional to the average user rating. Therefore, we want to select the set of books that has the maximal overall user rating.
```
books = pd.read_csv("books.csv")
books["price"] = books[' num_pages']*0.10  # note: this dataset's column name has a leading space
books.head()
```
### 2. Instantiate the solver
More info on the knapsack solver can be found [here](https://developers.google.com/optimization/reference/python/algorithms/pywrapknapsack_solver).
We also have to convert the ratings and prices to integers since the solver only works for integer values.
Since the ratings range from 0 to 5.00, we'll multiply by 100. For the price, we'll round to the nearest dollar.
```
solver = pywrapknapsack_solver.KnapsackSolver(
pywrapknapsack_solver.KnapsackSolver.
KNAPSACK_MULTIDIMENSION_BRANCH_AND_BOUND_SOLVER, 'KnapsackExample')
# set the limit of the "knapsack"
MAX_BUDGET = [1000]
# declare the value and cost of each book and convert to integers
ratings = list(map(lambda x: int(x*100), books.average_rating.to_list()))
prices = [list(map(lambda x: round(x), books.price.to_list()))]
```
### 3. Run the solver
Call the solver with the ratings and price of each book.
```
solver.Init(ratings, prices, MAX_BUDGET)
computed_value = solver.Solve()
```
### 4. Display the best solution
The solver's `BestSolutionContains` method reports whether the item at a given index is included in the optimal set. Let's see which books it selected.
```
selected_books = []
total_cost = 0
avg_rating = 0
for i in range(len(ratings)):
if solver.BestSolutionContains(i):
selected_books.append((books.iloc[i].title,ratings[i]))
total_cost += prices[0][i]
avg_rating += ratings[i]
# print(i,books.iloc[i].title,ratings[i])
print('The total cost of the library is:', total_cost)
print("The library has %d books" % len(selected_books))
print('Library average rating: %0.1f' % (avg_rating/(100*len(selected_books))))
print('\n')
print('Selected books:')
sorted_books = sorted(selected_books,key=lambda x: x[1],reverse=True)
for book in sorted_books:
print("%s - Rating: %0.1f" % (book[0], book[1]/100.0))
```
## Summary
We've demonstrated a simple example of how to use Google's OR-Tools to optimize a library collection, which didn't require a lot of work to set up. A further extension to this problem is to find optimal library collections for many users. This is the *multi-knapsack problem* and is also available in the OR-Tools package.
# 100 numpy exercises
This is a collection of exercises that have been collected in the numpy mailing list, on Stack Overflow and in the numpy documentation. The goal of this collection is to offer a quick reference for both old and new users but also to provide a set of exercises for those who teach.
#### 1. Import the numpy package under the name `np` (★☆☆)
```
import numpy as np
a=np.array([1,2,3])
b=np.array([4,5,6])
print(np.vstack((a,b)))
```
#### 2. Print the numpy version and the configuration (★☆☆)
```
print(np.__version__)
np.show_config()
```
#### 3. Create a null vector of size 10 (★☆☆)
```
Z = np.zeros(10)
print(Z)
```
#### 4. How to find the memory size of any array (★☆☆)
```
Z = np.zeros((10,10))
print("%d bytes" % (Z.size * Z.itemsize))
```
#### 5. How to get the documentation of the numpy add function from the command line? (★☆☆)
```
%run `python -c "import numpy; numpy.info(numpy.add)"`
```
#### 6. Create a null vector of size 10 but the fifth value which is 1 (★☆☆)
```
Z = np.zeros(10)
Z[4] = 1
print(Z)
```
#### 7. Create a vector with values ranging from 10 to 49 (★☆☆)
```
Z = np.arange(10,50)
print(Z)
```
#### 8. Reverse a vector (first element becomes last) (★☆☆)
```
Z = np.arange(50)
Z = Z[::-1]
print(Z)
```
#### 9. Create a 3x3 matrix with values ranging from 0 to 8 (★☆☆)
```
Z = np.arange(9).reshape(3,3)
print(Z)
```
#### 10. Find indices of non-zero elements from \[1,2,0,0,4,0\] (★☆☆)
```
nz = np.nonzero([1,2,0,0,4,0])
print(nz)
```
#### 11. Create a 3x3 identity matrix (★☆☆)
```
Z = np.eye(3)
print(Z)
```
#### 12. Create a 3x3x3 array with random values (★☆☆)
```
Z = np.random.random((3,3,3))
print(Z)
```
#### 13. Create a 10x10 array with random values and find the minimum and maximum values (★☆☆)
```
Z = np.random.random((10,10))
Zmin, Zmax = Z.min(), Z.max()
print(Zmin, Zmax)
```
#### 14. Create a random vector of size 30 and find the mean value (★☆☆)
```
Z = np.random.random(30)
m = Z.mean()
print(Z)
print(m)
```
#### 15. Create a 2d array with 1 on the border and 0 inside (★☆☆)
```
Z = np.ones((10,10))
Z[1:-1,1:-1] = 0
print(Z)
```
#### 16. How to add a border (filled with 0's) around an existing array? (★☆☆)
```
Z = np.ones((5,5))
Z = np.pad(Z, pad_width=1, mode='constant', constant_values=0)
print(Z)
```
#### 17. What is the result of the following expression? (★☆☆)
```
print(0 * np.nan)
print(np.nan == np.nan)
print(np.inf > np.nan)
print(np.nan - np.nan)
print(0.3 == 3 * 0.1)
```
#### 18. Create a 5x5 matrix with values 1,2,3,4 just below the diagonal (★☆☆)
```
Z = np.diag(1+np.arange(4),k=-1)
print(Z)
```
#### 19. Create a 8x8 matrix and fill it with a checkerboard pattern (★☆☆)
```
Z = np.zeros((8,8),dtype=int)
Z[1::2,::2] = 1
Z[::2,1::2] = 1
print(Z)
```
#### 20. Consider a (6,7,8) shape array, what is the index (x,y,z) of the 100th element?
```
print(np.unravel_index(99,(6,7,8)))  # index 99 is the 100th element (0-based)
```
#### 21. Create a checkerboard 8x8 matrix using the tile function (★☆☆)
```
Z = np.tile( np.array([[0,1],[1,0]]), (4,4))
print(Z)
```
#### 22. Normalize a 5x5 random matrix (★☆☆)
```
Z = np.random.random((5,5))
Zmax, Zmin = Z.max(), Z.min()
Z = (Z - Zmin)/(Zmax - Zmin)
print(Z)
```
#### 23. Create a custom dtype that describes a color as four unsigned bytes (RGBA) (★☆☆)
```
color = np.dtype([("r", np.ubyte),
                  ("g", np.ubyte),
                  ("b", np.ubyte),
                  ("a", np.ubyte)])
```
#### 24. Multiply a 5x3 matrix by a 3x2 matrix (real matrix product) (★☆☆)
```
Z = np.dot(np.ones((5,3)), np.ones((3,2)))
print(Z)
# Alternative solution, in Python 3.5 and above
#Z = np.ones((5,3)) @ np.ones((3,2))
```
#### 25. Given a 1D array, negate all elements which are between 3 and 8, in place. (★☆☆)
```
# Author: Evgeni Burovski
Z = np.arange(11)
Z[(3 < Z) & (Z <= 8)] *= -1
print(Z)
```
#### 26. What is the output of the following script? (★☆☆)
```
# Author: Jake VanderPlas
print(sum(range(5),-1))
from numpy import *
print(sum(range(5),-1))
```
#### 27. Consider an integer vector Z, which of these expressions are legal? (★☆☆)
```
Z**Z
2 << Z >> 2
Z <- Z
1j*Z
Z/1/1
Z<Z>Z
```
#### 28. What are the results of the following expressions?
```
print(np.array(0) / np.array(0))
print(np.array(0) // np.array(0))
print(np.array([np.nan]).astype(int).astype(float))
```
#### 29. How to round away from zero a float array ? (★☆☆)
```
# Author: Charles R Harris
Z = np.random.uniform(-10,+10,10)
print (np.copysign(np.ceil(np.abs(Z)), Z))
```
#### 30. How to find common values between two arrays? (★☆☆)
```
Z1 = np.random.randint(0,10,10)
Z2 = np.random.randint(0,10,10)
print(np.intersect1d(Z1,Z2))
```
#### 31. How to ignore all numpy warnings (not recommended)? (★☆☆)
```
# Suicide mode on
defaults = np.seterr(all="ignore")
Z = np.ones(1) / 0
# Back to sanity
_ = np.seterr(**defaults)
# An equivalent way, with a context manager:
with np.errstate(divide='ignore'):
Z = np.ones(1) / 0
```
#### 32. Is the following expression true? (★☆☆)
```
np.sqrt(-1) == np.emath.sqrt(-1)
```
#### 33. How to get the dates of yesterday, today and tomorrow? (★☆☆)
```
yesterday = np.datetime64('today', 'D') - np.timedelta64(1, 'D')
today = np.datetime64('today', 'D')
tomorrow = np.datetime64('today', 'D') + np.timedelta64(1, 'D')
```
#### 34. How to get all the dates corresponding to the month of July 2016? (★★☆)
```
Z = np.arange('2016-07', '2016-08', dtype='datetime64[D]')
print(Z)
```
#### 35. How to compute ((A+B)\*(-A/2)) in place (without copy)? (★★☆)
```
A = np.ones(3)*1
B = np.ones(3)*2
C = np.ones(3)*3
np.add(A,B,out=B)
np.divide(A,2,out=A)
np.negative(A,out=A)
np.multiply(A,B,out=A)
```
#### 36. Extract the integer part of a random array using 5 different methods (★★☆)
```
Z = np.random.uniform(0,10,10)
print (Z - Z%1)
print (np.floor(Z))
print (np.ceil(Z)-1)
print (Z.astype(int))
print (np.trunc(Z))
```
#### 37. Create a 5x5 matrix with row values ranging from 0 to 4 (★★☆)
```
Z = np.zeros((5,5))
Z += np.arange(5)
print(Z)
```
#### 38. Consider a generator function that generates 10 integers and use it to build an array (★☆☆)
```
def generate():
for x in range(10):
yield x
Z = np.fromiter(generate(),dtype=float,count=-1)
print(Z)
```
#### 39. Create a vector of size 10 with values ranging from 0 to 1, both excluded (★★☆)
```
Z = np.linspace(0,1,11,endpoint=False)[1:]
print(Z)
```
#### 40. Create a random vector of size 10 and sort it (★★☆)
```
Z = np.random.random(10)
Z.sort()
print(Z)
```
#### 41. How to sum a small array faster than np.sum? (★★☆)
```
# Author: Evgeni Burovski
Z = np.arange(10)
np.add.reduce(Z)
```
#### 42. Consider two random array A and B, check if they are equal (★★☆)
```
A = np.random.randint(0,2,5)
B = np.random.randint(0,2,5)
# Assuming identical shape of the arrays and a tolerance for the comparison of values
equal = np.allclose(A,B)
print(equal)
# Checking both the shape and the element values, no tolerance (values have to be exactly equal)
equal = np.array_equal(A,B)
print(equal)
```
#### 43. Make an array immutable (read-only) (★★☆)
```
Z = np.zeros(10)
Z.flags.writeable = False
Z[0] = 1
```
#### 44. Consider a random 10x2 matrix representing cartesian coordinates, convert them to polar coordinates (★★☆)
```
Z = np.random.random((10,2))
X,Y = Z[:,0], Z[:,1]
R = np.sqrt(X**2+Y**2)
T = np.arctan2(Y,X)
print(R)
print(T)
```
#### 45. Create random vector of size 10 and replace the maximum value by 0 (★★☆)
```
Z = np.random.random(10)
Z[Z.argmax()] = 0
print(Z)
```
#### 46. Create a structured array with `x` and `y` coordinates covering the \[0,1\]x\[0,1\] area (★★☆)
```
Z = np.zeros((5,5), [('x',float),('y',float)])
Z['x'], Z['y'] = np.meshgrid(np.linspace(0,1,5),
np.linspace(0,1,5))
print(Z)
```
#### 47. Given two arrays, X and Y, construct the Cauchy matrix C (Cij =1/(xi - yj))
```
# Author: Evgeni Burovski
X = np.arange(8)
Y = X + 0.5
C = 1.0 / np.subtract.outer(X, Y)
print(np.linalg.det(C))
```
#### 48. Print the minimum and maximum representable value for each numpy scalar type (★★☆)
```
for dtype in [np.int8, np.int32, np.int64]:
print(np.iinfo(dtype).min)
print(np.iinfo(dtype).max)
for dtype in [np.float32, np.float64]:
print(np.finfo(dtype).min)
print(np.finfo(dtype).max)
print(np.finfo(dtype).eps)
```
#### 49. How to print all the values of an array? (★★☆)
```
import sys
np.set_printoptions(threshold=sys.maxsize)  # recent numpy requires an int threshold
Z = np.zeros((16,16))
print(Z)
```
#### 50. How to find the closest value (to a given scalar) in a vector? (★★☆)
```
Z = np.arange(100)
v = np.random.uniform(0,100)
index = (np.abs(Z-v)).argmin()
print(Z[index])
```
#### 51. Create a structured array representing a position (x,y) and a color (r,g,b) (★★☆)
```
Z = np.zeros(10, [ ('position', [ ('x', float),
                                  ('y', float)]),
                   ('color',    [ ('r', float),
                                  ('g', float),
                                  ('b', float)])])
print(Z)
```
#### 52. Consider a random vector with shape (100,2) representing coordinates, find point by point distances (★★☆)
```
Z = np.random.random((10,2))
X,Y = np.atleast_2d(Z[:,0], Z[:,1])
D = np.sqrt( (X-X.T)**2 + (Y-Y.T)**2)
print(D)
# Much faster with scipy
import scipy
# Thanks Gavin Heverly-Coulson (#issue 1)
import scipy.spatial
Z = np.random.random((10,2))
D = scipy.spatial.distance.cdist(Z,Z)
print(D)
```
#### 53. How to convert a float (32 bits) array into an integer (32 bits) in place?
```
Z = np.arange(10, dtype=np.int32)
Z = Z.astype(np.float32, copy=False)
print(Z)
```
#### 54. How to read the following file? (★★☆)
```
from io import StringIO
# Fake file
s = StringIO("""1, 2, 3, 4, 5\n
6, , , 7, 8\n
, , 9,10,11\n""")
Z = np.genfromtxt(s, delimiter=",", dtype=int)
print(Z)
```
#### 55. What is the equivalent of enumerate for numpy arrays? (★★☆)
```
Z = np.arange(9).reshape(3,3)
for index, value in np.ndenumerate(Z):
print(index, value)
for index in np.ndindex(Z.shape):
print(index, Z[index])
```
#### 56. Generate a generic 2D Gaussian-like array (★★☆)
```
X, Y = np.meshgrid(np.linspace(-1,1,10), np.linspace(-1,1,10))
D = np.sqrt(X*X+Y*Y)
sigma, mu = 1.0, 0.0
G = np.exp(-( (D-mu)**2 / ( 2.0 * sigma**2 ) ) )
print(G)
```
#### 57. How to randomly place p elements in a 2D array? (★★☆)
```
# Author: Divakar
n = 10
p = 3
Z = np.zeros((n,n))
np.put(Z, np.random.choice(range(n*n), p, replace=False),1)
print(Z)
```
#### 58. Subtract the mean of each row of a matrix (★★☆)
```
# Author: Warren Weckesser
X = np.random.rand(5, 10)
# Recent versions of numpy
Y = X - X.mean(axis=1, keepdims=True)
# Older versions of numpy
Y = X - X.mean(axis=1).reshape(-1, 1)
print(Y)
```
#### 59. How to sort an array by the nth column? (★★☆)
```
# Author: Steve Tjoa
Z = np.random.randint(0,10,(3,3))
print(Z)
print(Z[Z[:,1].argsort()])
```
#### 60. How to tell if a given 2D array has null columns? (★★☆)
```
# Author: Warren Weckesser
Z = np.random.randint(0,3,(3,10))
print((~Z.any(axis=0)).any())
```
#### 61. Find the nearest value from a given value in an array (★★☆)
```
Z = np.random.uniform(0,1,10)
z = 0.5
m = Z.flat[np.abs(Z - z).argmin()]
print(m)
```
#### 62. Considering two arrays with shape (1,3) and (3,1), how to compute their sum using an iterator? (★★☆)
```
A = np.arange(3).reshape(3,1)
B = np.arange(3).reshape(1,3)
it = np.nditer([A,B,None])
for x,y,z in it: z[...] = x + y
print(it.operands[2])
```
#### 63. Create an array class that has a name attribute (★★☆)
```
class NamedArray(np.ndarray):
def __new__(cls, array, name="no name"):
obj = np.asarray(array).view(cls)
obj.name = name
return obj
def __array_finalize__(self, obj):
if obj is None: return
        self.name = getattr(obj, 'name', "no name")
Z = NamedArray(np.arange(10), "range_10")
print (Z.name)
```
#### 64. Consider a given vector, how to add 1 to each element indexed by a second vector (be careful with repeated indices)? (★★★)
```
# Author: Brett Olsen
Z = np.ones(10)
I = np.random.randint(0,len(Z),20)
Z += np.bincount(I, minlength=len(Z))
print(Z)
# Another solution
# Author: Bartosz Telenczuk
np.add.at(Z, I, 1)
print(Z)
```
#### 65. How to accumulate elements of a vector (X) to an array (F) based on an index list (I)? (★★★)
```
# Author: Alan G Isaac
X = [1,2,3,4,5,6]
I = [1,3,9,3,4,1]
F = np.bincount(I,X)
print(F)
```
#### 66. Considering a (w,h,3) image of (dtype=ubyte), compute the number of unique colors (★★★)
```
# Author: Nadav Horesh
w,h = 16,16
I = np.random.randint(0,2,(h,w,3)).astype(np.ubyte)
# Note that we should compute 256*256 first.
# Otherwise numpy will only promote F.dtype to 'uint16' and overflow will occur
F = I[...,0]*(256*256) + I[...,1]*256 +I[...,2]
n = len(np.unique(F))
print(n)
```
#### 67. Considering a four-dimensional array, how to get the sum over the last two axes at once? (★★★)
```
A = np.random.randint(0,10,(3,4,3,4))
# solution by passing a tuple of axes (introduced in numpy 1.7.0)
sum = A.sum(axis=(-2,-1))
print(sum)
# solution by flattening the last two dimensions into one
# (useful for functions that don't accept tuples for axis argument)
sum = A.reshape(A.shape[:-2] + (-1,)).sum(axis=-1)
print(sum)
```
#### 68. Considering a one-dimensional vector D, how to compute means of subsets of D using a vector S of same size describing subset indices? (★★★)
```
# Author: Jaime Fernández del Río
D = np.random.uniform(0,1,100)
S = np.random.randint(0,10,100)
D_sums = np.bincount(S, weights=D)
D_counts = np.bincount(S)
D_means = D_sums / D_counts
print(D_means)
# Pandas solution as a reference due to more intuitive code
import pandas as pd
print(pd.Series(D).groupby(S).mean())
```
#### 69. How to get the diagonal of a dot product? (★★★)
```
# Author: Mathieu Blondel
A = np.random.uniform(0,1,(5,5))
B = np.random.uniform(0,1,(5,5))
# Slow version
np.diag(np.dot(A, B))
# Fast version
np.sum(A * B.T, axis=1)
# Faster version
np.einsum("ij,ji->i", A, B)
```
#### 70. Consider the vector \[1, 2, 3, 4, 5\], how to build a new vector with 3 consecutive zeros interleaved between each value? (★★★)
```
# Author: Warren Weckesser
Z = np.array([1,2,3,4,5])
nz = 3
Z0 = np.zeros(len(Z) + (len(Z)-1)*(nz))
Z0[::nz+1] = Z
print(Z0)
```
#### 71. Consider an array of dimension (5,5,3), how to multiply it by an array with dimensions (5,5)? (★★★)
```
A = np.ones((5,5,3))
B = 2*np.ones((5,5))
print(A * B[:,:,None])
```
#### 72. How to swap two rows of an array? (★★★)
```
# Author: Eelco Hoogendoorn
A = np.arange(25).reshape(5,5)
A[[0,1]] = A[[1,0]]
print(A)
```
#### 73. Consider a set of 10 triplets describing 10 triangles (with shared vertices), find the set of unique line segments composing all the triangles (★★★)
```
# Author: Nicolas P. Rougier
faces = np.random.randint(0,100,(10,3))
F = np.roll(faces.repeat(2,axis=1),-1,axis=1)
F = F.reshape(len(F)*3,2)
F = np.sort(F,axis=1)
G = F.view( dtype=[('p0',F.dtype),('p1',F.dtype)] )
G = np.unique(G)
print(G)
```
#### 74. Given an array C that is a bincount, how to produce an array A such that np.bincount(A) == C? (★★★)
```
# Author: Jaime Fernández del Río
C = np.bincount([1,1,2,3,4,4,6])
A = np.repeat(np.arange(len(C)), C)
print(A)
```
#### 75. How to compute averages using a sliding window over an array? (★★★)
```
# Author: Jaime Fernández del Río
def moving_average(a, n=3) :
ret = np.cumsum(a, dtype=float)
ret[n:] = ret[n:] - ret[:-n]
return ret[n - 1:] / n
Z = np.arange(20)
print(moving_average(Z, n=3))
```
#### 76. Consider a one-dimensional array Z, build a two-dimensional array whose first row is (Z\[0\],Z\[1\],Z\[2\]) and each subsequent row is shifted by 1 (last row should be (Z\[-3\],Z\[-2\],Z\[-1\])) (★★★)
```
# Author: Joe Kington / Erik Rigtorp
from numpy.lib import stride_tricks
def rolling(a, window):
shape = (a.size - window + 1, window)
strides = (a.itemsize, a.itemsize)
return stride_tricks.as_strided(a, shape=shape, strides=strides)
Z = rolling(np.arange(10), 3)
print(Z)
```
#### 77. How to negate a boolean, or to change the sign of a float inplace? (★★★)
```
# Author: Nathaniel J. Smith
Z = np.random.randint(0,2,100)
np.logical_not(Z, out=Z)
Z = np.random.uniform(-1.0,1.0,100)
np.negative(Z, out=Z)
```
#### 78. Consider 2 sets of points P0,P1 describing lines (2d) and a point p, how to compute distance from p to each line i (P0\[i\],P1\[i\])? (★★★)
```
def distance(P0, P1, p):
T = P1 - P0
L = (T**2).sum(axis=1)
U = -((P0[:,0]-p[...,0])*T[:,0] + (P0[:,1]-p[...,1])*T[:,1]) / L
U = U.reshape(len(U),1)
D = P0 + U*T - p
return np.sqrt((D**2).sum(axis=1))
P0 = np.random.uniform(-10,10,(10,2))
P1 = np.random.uniform(-10,10,(10,2))
p = np.random.uniform(-10,10,( 1,2))
print(distance(P0, P1, p))
```
#### 79. Consider 2 sets of points P0,P1 describing lines (2d) and a set of points P, how to compute distance from each point j (P\[j\]) to each line i (P0\[i\],P1\[i\])? (★★★)
```
# Author: Italmassov Kuanysh
# based on distance function from previous question
P0 = np.random.uniform(-10, 10, (10,2))
P1 = np.random.uniform(-10,10,(10,2))
p = np.random.uniform(-10, 10, (10,2))
print(np.array([distance(P0,P1,p_i) for p_i in p]))
```
#### 80. Consider an arbitrary array, write a function that extracts a subpart with a fixed shape and centered on a given element (pad with a `fill` value when necessary) (★★★)
```
# Author: Nicolas Rougier
Z = np.random.randint(0,10,(10,10))
shape = (5,5)
fill = 0
position = (1,1)
R = np.ones(shape, dtype=Z.dtype)*fill
P = np.array(list(position)).astype(int)
Rs = np.array(list(R.shape)).astype(int)
Zs = np.array(list(Z.shape)).astype(int)
R_start = np.zeros((len(shape),)).astype(int)
R_stop = np.array(list(shape)).astype(int)
Z_start = (P-Rs//2)
Z_stop = (P+Rs//2)+Rs%2
R_start = (R_start - np.minimum(Z_start,0)).tolist()
Z_start = (np.maximum(Z_start,0)).tolist()
R_stop = np.maximum(R_start, (R_stop - np.maximum(Z_stop-Zs,0))).tolist()
Z_stop = (np.minimum(Z_stop,Zs)).tolist()
r = tuple(slice(start,stop) for start,stop in zip(R_start,R_stop))
z = tuple(slice(start,stop) for start,stop in zip(Z_start,Z_stop))
R[r] = Z[z]
print(Z)
print(R)
```
#### 81. Consider an array Z = \[1,2,3,4,5,6,7,8,9,10,11,12,13,14\], how to generate an array R = \[\[1,2,3,4\], \[2,3,4,5\], \[3,4,5,6\], ..., \[11,12,13,14\]\]? (★★★)
```
# Author: Stefan van der Walt
Z = np.arange(1,15,dtype=np.uint32)
R = stride_tricks.as_strided(Z,(11,4),(4,4))
print(R)
```
#### 82. Compute a matrix rank (★★★)
```
# Author: Stefan van der Walt
Z = np.random.uniform(0,1,(10,10))
U, S, V = np.linalg.svd(Z) # Singular Value Decomposition
rank = np.sum(S > 1e-10)
print(rank)
```
#### 83. How to find the most frequent value in an array?
```
Z = np.random.randint(0,10,50)
print(np.bincount(Z).argmax())
```
#### 84. Extract all the contiguous 3x3 blocks from a random 10x10 matrix (★★★)
```
# Author: Chris Barker
Z = np.random.randint(0,5,(10,10))
n = 3
i = 1 + (Z.shape[0]-3)
j = 1 + (Z.shape[1]-3)
C = stride_tricks.as_strided(Z, shape=(i, j, n, n), strides=Z.strides + Z.strides)
print(C)
```
#### 85. Create a 2D array subclass such that Z\[i,j\] == Z\[j,i\] (★★★)
```
# Author: Eric O. Lebigot
# Note: only works for 2d array and value setting using indices
class Symetric(np.ndarray):
def __setitem__(self, index, value):
i,j = index
super(Symetric, self).__setitem__((i,j), value)
super(Symetric, self).__setitem__((j,i), value)
def symetric(Z):
return np.asarray(Z + Z.T - np.diag(Z.diagonal())).view(Symetric)
S = symetric(np.random.randint(0,10,(5,5)))
S[2,3] = 42
print(S)
```
#### 86. Consider a set of p matrices with shape (n,n) and a set of p vectors with shape (n,1). How to compute the sum of the p matrix products at once? (result has shape (n,1)) (★★★)
```
# Author: Stefan van der Walt
p, n = 10, 20
M = np.ones((p,n,n))
V = np.ones((p,n,1))
S = np.tensordot(M, V, axes=[[0, 2], [0, 1]])
print(S)
# It works, because:
# M is (p,n,n)
# V is (p,n,1)
# Thus, summing over the paired axes 0 and 0 (of M and V independently),
# and 2 and 1, to remain with a (n,1) vector.
```
#### 87. Consider a 16x16 array, how to get the block-sum (block size is 4x4)? (★★★)
```
# Author: Robert Kern
Z = np.ones((16,16))
k = 4
S = np.add.reduceat(np.add.reduceat(Z, np.arange(0, Z.shape[0], k), axis=0),
np.arange(0, Z.shape[1], k), axis=1)
print(S)
```
#### 88. How to implement the Game of Life using numpy arrays? (★★★)
```
# Author: Nicolas Rougier
def iterate(Z):
# Count neighbours
N = (Z[0:-2,0:-2] + Z[0:-2,1:-1] + Z[0:-2,2:] +
Z[1:-1,0:-2] + Z[1:-1,2:] +
Z[2: ,0:-2] + Z[2: ,1:-1] + Z[2: ,2:])
# Apply rules
birth = (N==3) & (Z[1:-1,1:-1]==0)
survive = ((N==2) | (N==3)) & (Z[1:-1,1:-1]==1)
Z[...] = 0
Z[1:-1,1:-1][birth | survive] = 1
return Z
Z = np.random.randint(0,2,(50,50))
for i in range(100): Z = iterate(Z)
print(Z)
```
#### 89. How to get the n largest values of an array (★★★)
```
Z = np.arange(10000)
np.random.shuffle(Z)
n = 5
# Slow
print (Z[np.argsort(Z)[-n:]])
# Fast
print (Z[np.argpartition(-Z,n)[:n]])
```
#### 90. Given an arbitrary number of vectors, build the cartesian product (every combination of every item) (★★★)
```
# Author: Stefan Van der Walt
def cartesian(arrays):
arrays = [np.asarray(a) for a in arrays]
    shape = tuple(len(a) for a in arrays)
ix = np.indices(shape, dtype=int)
ix = ix.reshape(len(arrays), -1).T
for n, arr in enumerate(arrays):
ix[:, n] = arrays[n][ix[:, n]]
return ix
print (cartesian(([1, 2, 3], [4, 5], [6, 7])))
```
#### 91. How to create a record array from a regular array? (★★★)
```
Z = np.array([("Hello", 2.5, 3),
("World", 3.6, 2)])
R = np.core.records.fromarrays(Z.T,
names='col1, col2, col3',
formats = 'S8, f8, i8')
print(R)
```
#### 92. Consider a large vector Z, compute Z to the power of 3 using 3 different methods (★★★)
```
# Author: Ryan G.
x = np.random.rand(int(5e7))  # rand requires an int size
%timeit np.power(x,3)
%timeit x*x*x
%timeit np.einsum('i,i,i->i',x,x,x)
```
#### 93. Consider two arrays A and B of shape (8,3) and (2,2). How to find rows of A that contain elements of each row of B regardless of the order of the elements in B? (★★★)
```
# Author: Gabe Schwartz
A = np.random.randint(0,5,(8,3))
B = np.random.randint(0,5,(2,2))
C = (A[..., np.newaxis, np.newaxis] == B)
rows = np.where(C.any((3,1)).all(1))[0]
print(rows)
```
#### 94. Considering a 10x3 matrix, extract rows with unequal values (e.g. \[2,2,3\]) (★★★)
```
# Author: Robert Kern
Z = np.random.randint(0,5,(10,3))
print(Z)
# solution for arrays of all dtypes (including string arrays and record arrays)
E = np.all(Z[:,1:] == Z[:,:-1], axis=1)
U = Z[~E]
print(U)
# solution for numerical arrays only, will work for any number of columns in Z
U = Z[Z.max(axis=1) != Z.min(axis=1),:]
print(U)
```
#### 95. Convert a vector of ints into a matrix binary representation (★★★)
```
# Author: Warren Weckesser
I = np.array([0, 1, 2, 3, 15, 16, 32, 64, 128])
B = ((I.reshape(-1,1) & (2**np.arange(8))) != 0).astype(int)
print(B[:,::-1])
# Author: Daniel T. McDonald
I = np.array([0, 1, 2, 3, 15, 16, 32, 64, 128], dtype=np.uint8)
print(np.unpackbits(I[:, np.newaxis], axis=1))
```
#### 96. Given a two dimensional array, how to extract unique rows? (★★★)
```
# Author: Jaime Fernández del Río
Z = np.random.randint(0,2,(6,3))
T = np.ascontiguousarray(Z).view(np.dtype((np.void, Z.dtype.itemsize * Z.shape[1])))
_, idx = np.unique(T, return_index=True)
uZ = Z[idx]
print(uZ)
```
#### 97. Considering 2 vectors A & B, write the einsum equivalent of inner, outer, sum, and mul function (★★★)
```
A = np.random.uniform(0,1,10)
B = np.random.uniform(0,1,10)
np.einsum('i->', A) # np.sum(A)
np.einsum('i,i->i', A, B) # A * B
np.einsum('i,i', A, B) # np.inner(A, B)
np.einsum('i,j->ij', A, B) # np.outer(A, B)
```
#### 98. Considering a path described by two vectors (X,Y), how to sample it using equidistant samples (★★★)?
```
# Author: Bas Swinckels
phi = np.arange(0, 10*np.pi, 0.1)
a = 1
x = a*phi*np.cos(phi)
y = a*phi*np.sin(phi)
dr = (np.diff(x)**2 + np.diff(y)**2)**.5 # segment lengths
r = np.zeros_like(x)
r[1:] = np.cumsum(dr) # integrate path
r_int = np.linspace(0, r.max(), 200) # regularly spaced path lengths
x_int = np.interp(r_int, r, x)       # interpolate x at the regular samples
y_int = np.interp(r_int, r, y)
```
#### 99. Given an integer n and a 2D array X, select from X the rows which can be interpreted as draws from a multinomial distribution with n degrees, i.e., the rows which only contain integers and which sum to n. (★★★)
```
# Author: Evgeni Burovski
X = np.asarray([[1.0, 0.0, 3.0, 8.0],
[2.0, 0.0, 1.0, 1.0],
[1.5, 2.5, 1.0, 0.0]])
n = 4
M = np.logical_and.reduce(np.mod(X, 1) == 0, axis=-1)
M &= (X.sum(axis=-1) == n)
print(X[M])
```
#### 100. Compute bootstrapped 95% confidence intervals for the mean of a 1D array X (i.e., resample the elements of an array with replacement N times, compute the mean of each sample, and then compute percentiles over the means). (★★★)
```
# Author: Jessica B. Hamrick
X = np.random.randn(100) # random 1D array
N = 1000 # number of bootstrap samples
idx = np.random.randint(0, X.size, (N, X.size))
means = X[idx].mean(axis=1)
confint = np.percentile(means, [2.5, 97.5])
print(confint)
```
# Feature Engineering
Feature engineering is an answer to the question, "How can I make the most of the data I have?"
Let's get started, then. How does one do feature engineering?
I'll assume you're familiar with pandas and with the decision-tree pipeline we're using for this project. That's the algorithm we're engineering the data for; not every algorithm wants the data engineered the same way, though a useful feature often benefits many of them.
```
import pandas as pd
# load the data output by src/merger.py
original_data = pd.read_csv('./merger/bigTable.csv')
print(original_data.columns)
print(original_data.shape)
original_data.head()
import sys, os
sys.path.append(os.path.join('src'))
from src import splitter
# Now we run splitter and decision_tree with our original data
splitter.main()
from src import decision_tree
decision_tree.main()
original_validation_score = 0.00268005495579566
```
So now we have a baseline for how well our decision tree performed before we added a feature.
Let's see what happens if we add a `two_weeks_before_christmas` and a `two_weeks_after_christmas` column, as per our Exploratory Analysis discussion.
```
# Re-read the data and use datetime objects for the date
engineered_data = pd.read_csv('./merger/bigTable.csv')
engineered_data.date = pd.to_datetime(engineered_data.date)
# Create a before_christmas_window
start_date = pd.to_datetime('2016-12-11')
end_date = pd.to_datetime('2016-12-25')
before_christmas = (engineered_data['date'] > start_date) & (engineered_data['date'] <= end_date)
# Create an after_christmas_window
start_date = pd.to_datetime('2016-12-25')
end_date = pd.to_datetime('2017-01-08')
after_christmas = (engineered_data['date'] > start_date) & (engineered_data['date'] <= end_date)
engineered_data['two_weeks_before_christmas'] = before_christmas
engineered_data['two_weeks_after_christmas'] = after_christmas
```
#### Just as a spot check, let's look at the date of the first few records in our new columns
```
print(engineered_data[engineered_data.two_weeks_before_christmas == True].date.head())
print(engineered_data[engineered_data.two_weeks_after_christmas == True].date.head())
```
Seems okay to me. Let's see how it changes the results now.
```
engineered_data.to_csv('./merger/bigTable.csv', index=False)
splitter.main()
decision_tree.main()
engineered_validation_score = 0.003692818003915606
print(original_validation_score - engineered_validation_score)
```
So as it turns out, adding a boolean about before/after Christmas slightly hurt our performance.
- Now we should iterate on the features
- for example, maybe two weeks is too wide a window
- or maybe it's time to question the scoring algorithm provided to us by the Kaggle competition
- should we replace NWRMSLE with another error measurement?
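One cheap way to iterate on the window width is to factor the flag construction into a helper and sweep the width. A minimal sketch — the `holiday_window_flags` helper is hypothetical, not part of the project's `src` modules, and the sample dates below are made up:

```python
import pandas as pd

# Hypothetical helper (not part of the project's src/ modules): build
# before/after-holiday flags for an arbitrary window width in days,
# so that different widths can be compared in one sweep.
def holiday_window_flags(dates, holiday, days):
    holiday = pd.to_datetime(holiday)
    delta = pd.Timedelta(days=days)
    before = (dates > holiday - delta) & (dates <= holiday)
    after = (dates > holiday) & (dates <= holiday + delta)
    return before, after

# Tiny made-up sample standing in for bigTable.csv's date column.
df = pd.DataFrame({'date': pd.to_datetime(['2016-12-20', '2016-12-30', '2016-11-01'])})
for days in (7, 14, 21):  # narrower and wider windows than two weeks
    before, after = holiday_window_flags(df['date'], '2016-12-25', days)
    df[f'{days}d_before_christmas'] = before
    df[f'{days}d_after_christmas'] = after
```

Each sweep then gets its own `splitter`/`decision_tree` run, so the validation scores can tell us which width (if any) actually helps.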
```
# default_exp image.color_palette
# hide
from nbdev.showdoc import *
# hide
%reload_ext autoreload
%autoreload 2
```
# Color Palettes
> Tools for generating color palettes for various data-sets.
```
# export
def pascal_voc_palette(num_cls=None):
"""
Generates the PASCAL Visual Object Classes (PASCAL VOC) data-set color palette.
Data-Set URL:
http://host.robots.ox.ac.uk/pascal/VOC/ .
Original source taken from:
https://gluon-cv.mxnet.io/_modules/gluoncv/utils/viz/segmentation.html .
`num_cls`: the number of colors to generate
return: the generated color palette
"""
# by default generate 256 colors
if num_cls is None:
num_cls = 256
palette = [0] * (num_cls * 3)
for j in range(0, num_cls):
lab = j
palette[j*3+0] = 0
palette[j*3+1] = 0
palette[j*3+2] = 0
i = 0
while lab > 0:
palette[j*3+0] |= (((lab >> 0) & 1) << (7-i))
palette[j*3+1] |= (((lab >> 1) & 1) << (7-i))
palette[j*3+2] |= (((lab >> 2) & 1) << (7-i))
i = i + 1
lab >>= 3
return palette
# export
def ade20k_palette(num_cls=None):
"""
Generates the ADE20K data-set color palette.
Data-Set URL:
https://groups.csail.mit.edu/vision/datasets/ADE20K/ .
Color palette definition:
https://docs.google.com/spreadsheets/d/1se8YEtb2detS7OuPE86fXGyD269pMycAWe2mtKUj2W8/edit#gid=0 .
Original source taken from:
https://gluon-cv.mxnet.io/_modules/gluoncv/utils/viz/segmentation.html .
`num_cls`: the number of colors to generate
return: the generated color palette
"""
palette = [
0, 0, 0, 120, 120, 120, 180, 120, 120, 6, 230, 230, 80, 50, 50, 4, 200, 3, 120, 120, 80, 140, 140, 140, 204,
5, 255, 230, 230, 230, 4, 250, 7, 224, 5, 255, 235, 255, 7, 150, 5, 61, 120, 120, 70, 8, 255, 51, 255, 6, 82,
143, 255, 140, 204, 255, 4, 255, 51, 7, 204, 70, 3, 0, 102, 200, 61, 230, 250, 255, 6, 51, 11, 102, 255, 255,
7, 71, 255, 9, 224, 9, 7, 230, 220, 220, 220, 255, 9, 92, 112, 9, 255, 8, 255, 214, 7, 255, 224, 255, 184, 6,
10, 255, 71, 255, 41, 10, 7, 255, 255, 224, 255, 8, 102, 8, 255, 255, 61, 6, 255, 194, 7, 255, 122, 8, 0, 255,
20, 255, 8, 41, 255, 5, 153, 6, 51, 255, 235, 12, 255, 160, 150, 20, 0, 163, 255, 140, 140, 140, 250, 10, 15,
20, 255, 0, 31, 255, 0, 255, 31, 0, 255, 224, 0, 153, 255, 0, 0, 0, 255, 255, 71, 0, 0, 235, 255, 0, 173, 255,
31, 0, 255, 11, 200, 200, 255, 82, 0, 0, 255, 245, 0, 61, 255, 0, 255, 112, 0, 255, 133, 255, 0, 0, 255, 163,
0, 255, 102, 0, 194, 255, 0, 0, 143, 255, 51, 255, 0, 0, 82, 255, 0, 255, 41, 0, 255, 173, 10, 0, 255, 173, 255,
0, 0, 255, 153, 255, 92, 0, 255, 0, 255, 255, 0, 245, 255, 0, 102, 255, 173, 0, 255, 0, 20, 255, 184, 184, 0,
31, 255, 0, 255, 61, 0, 71, 255, 255, 0, 204, 0, 255, 194, 0, 255, 82, 0, 10, 255, 0, 112, 255, 51, 0, 255, 0,
194, 255, 0, 122, 255, 0, 255, 163, 255, 153, 0, 0, 255, 10, 255, 112, 0, 143, 255, 0, 82, 0, 255, 163, 255,
0, 255, 235, 0, 8, 184, 170, 133, 0, 255, 0, 255, 92, 184, 0, 255, 255, 0, 31, 0, 184, 255, 0, 214, 255, 255,
0, 112, 92, 255, 0, 0, 224, 255, 112, 224, 255, 70, 184, 160, 163, 0, 255, 153, 0, 255, 71, 255, 0, 255, 0,
163, 255, 204, 0, 255, 0, 143, 0, 255, 235, 133, 255, 0, 255, 0, 235, 245, 0, 255, 255, 0, 122, 255, 245, 0,
10, 190, 212, 214, 255, 0, 0, 204, 255, 20, 0, 255, 255, 255, 0, 0, 153, 255, 0, 41, 255, 0, 255, 204, 41, 0,
255, 41, 255, 0, 173, 0, 255, 0, 245, 255, 71, 0, 255, 122, 0, 255, 0, 255, 184, 0, 92, 255, 184, 255, 0, 0,
133, 255, 255, 214, 0, 25, 194, 194, 102, 255, 0, 92, 0, 255]
    if num_cls is not None:
        if num_cls * 3 > len(palette):
            raise Exception("Palette Color Definition exceeded.")
        palette = palette[:num_cls*3]
return palette
# export
def cityscapes_palette(num_cls=None):
"""
Generates the Cityscapes data-set color palette.
Data-Set URL:
https://www.cityscapes-dataset.com/
Color palette definition:
https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/helpers/labels.py
Original source taken from:
https://gluon-cv.mxnet.io/_modules/gluoncv/utils/viz/segmentation.html .
`num_cls`: the number of colors to generate
return: the generated color palette
"""
palette = [
128, 64, 128,
244, 35, 232,
70, 70, 70,
102, 102, 156,
190, 153, 153,
153, 153, 153,
250, 170, 30,
220, 220, 0,
107, 142, 35,
152, 251, 152,
0, 130, 180,
220, 20, 60,
255, 0, 0,
0, 0, 142,
0, 0, 70,
0, 60, 100,
0, 80, 100,
0, 0, 230,
119, 11, 32,
]
    if num_cls is not None:
        if num_cls * 3 > len(palette):
            raise Exception("Palette Color Definition exceeded.")
        palette = palette[:num_cls*3]
return palette
# export
def mhp_palette_v1(num_cls=None):
"""
Generates the Multi-Human Parsing (MHP) v1.0 data-set color palette.
Data-Set URL:
https://lv-mhp.github.io/
Color palette definition:
https://lv-mhp.github.io/human_parsing_task
Original source taken from:
https://gluon-cv.mxnet.io/_modules/gluoncv/utils/viz/segmentation.html .
`num_cls`: the number of colors to generate
return: the generated color palette
"""
palette = [
255, 255, 255,
165, 42, 42,
255, 0, 0,
0, 128, 0,
165, 42, 42,
255, 69, 0,
255, 20, 147,
30, 144, 255,
85, 107, 47,
0, 128, 128,
139, 69, 19,
70, 130, 180,
50, 205, 50,
0, 0, 205,
0, 191, 255,
0, 255, 255,
0, 250, 154,
173, 255, 47,
255, 255, 0,
]
    if num_cls is not None:
        if num_cls * 3 > len(palette):
            raise Exception("Palette Color Definition exceeded.")
        palette = palette[:num_cls*3]
return palette
# hide
# for generating scripts from notebook directly
from nbdev.export import notebook2script
notebook2script()
```
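The bit-interleaving loop in `pascal_voc_palette` spreads each base-8 digit of the class index across the three color channels, starting from the most significant bit. As a standalone sanity check (re-implementing the same loop for a single class index, outside the module), it reproduces the familiar first PASCAL VOC colors:

```python
def voc_color(label):
    """Color for one class index, mirroring the bit-interleaving loop above:
    successive base-8 digits of `label` contribute one bit each to the
    R, G, B channels, from the most significant bit down."""
    r = g = b = 0
    lab, i = label, 0
    while lab > 0:
        r |= ((lab >> 0) & 1) << (7 - i)
        g |= ((lab >> 1) & 1) << (7 - i)
        b |= ((lab >> 2) & 1) << (7 - i)
        i += 1
        lab >>= 3
    return (r, g, b)

# The first few PASCAL VOC class colors.
colors = [voc_color(j) for j in range(5)]
```

Class 0 is black, class 1 is dark red (128, 0, 0), class 2 is dark green (0, 128, 0), and so on.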
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
```
## Introduction
In order to get you familiar with graph ideas,
I have deliberately chosen to steer away from
the more pedantic matters
of loading graph data to and from disk.
That said, the following scenario will eventually happen,
where a graph dataset lands on your lap,
and you'll need to load it in memory
and start analyzing it.
Thus, we're going to go through graph I/O,
specifically the APIs on how to convert
graph data that comes to you
into that magical NetworkX object `G`.
Let's get going!
## Graph Data as Tables
Let's recall what we've learned in the introductory chapters.
Graphs can be represented using two **sets**:
- Node set
- Edge set
### Node set as tables
Let's say we had a graph with 3 nodes in it: `A, B, C`.
We could represent it in plain text, computer-readable format:
```csv
A
B
C
```
Suppose the nodes also had metadata.
Then, we could tag on metadata as well:
```csv
A, circle, 5
B, circle, 7
C, square, 9
```
Does this look familiar to you?
Yes, node sets can be stored in CSV format,
with one of the columns being node ID,
and the rest of the columns being metadata.
### Edge set as tables
If, between the nodes, we had 4 edges (this is a directed graph),
we can also represent those edges in plain text, computer-readable format:
```csv
A, C
B, C
A, B
C, A
```
And let's say we also had other metadata,
we can represent it in the same CSV format:
```csv
A, C, red
B, C, orange
A, B, yellow
C, A, green
```
If you've been in the data world for a while,
this should not look foreign to you.
Yes, edge sets can be stored in CSV format too!
Two of the columns represent the nodes involved in an edge,
and the rest of the columns represent the metadata.
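Parsing an edge table like this needs nothing beyond the standard library. A minimal sketch, using the colour-annotated edge set above (NetworkX's `from_pandas_edgelist`, used later in this chapter, is the more convenient route):

```python
import csv
import io

# The edge table from above, exactly as it would appear on disk.
edge_csv = """A, C, red
B, C, orange
A, B, yellow
C, A, green"""

# Parse each row into a (source, target) pair plus a metadata dict.
edges = {}
for source, target, colour in csv.reader(io.StringIO(edge_csv), skipinitialspace=True):
    edges[(source, target)] = {"colour": colour}
```

The `(source, target)` keys are the edge set; everything else in the row becomes edge metadata.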
### Combined Representation
In fact, one might also choose to combine
the node set and edge set tables together in a merged format:
```
n1, n2, colour, shape1, num1, shape2, num2
A, C, red, circle, 5, square, 9
B, C, orange, circle, 7, square, 9
A, B, yellow, circle, 5, circle, 7
C, A, green, square, 9, circle, 5
```
In this chapter, the datasets that we will be looking at
are going to be formatted in both ways.
Let's get going.
## Dataset
We will be working with the Divvy bike sharing dataset.
> Divvy is a bike sharing service in Chicago.
> Since 2013, Divvy has released their bike sharing dataset to the public.
> The 2013 dataset comprises two files:
> - `Divvy_Stations_2013.csv`, containing the stations in the system, and
> - `DivvyTrips_2013.csv`, containing the trips.
Let's dig into the data!
```
from pyprojroot import here
```
Firstly, we need to unzip the dataset:
```
import zipfile
import os
# This block of code checks to make sure that a particular directory is present.
if "divvy_2013" not in os.listdir(here() / 'datasets/'):
print('Unzipping the divvy_2013.zip file in the datasets folder.')
with zipfile.ZipFile(here() / "datasets/divvy_2013.zip","r") as zip_ref:
zip_ref.extractall(here() / 'datasets')
```
Now, let's load in both tables.
First is the `stations` table:
```
import pandas as pd
stations = pd.read_csv(here() / 'datasets/divvy_2013/Divvy_Stations_2013.csv', parse_dates=['online date'], encoding='utf-8')
stations.head()
stations.describe()
```
Now, let's load in the `trips` table.
```
trips = pd.read_csv(here() / 'datasets/divvy_2013/Divvy_Trips_2013.csv',
parse_dates=['starttime', 'stoptime'])
trips.head()
import janitor
trips_summary = (
trips
.groupby(["from_station_id", "to_station_id"])
.count()
.reset_index()
.select_columns(
[
"from_station_id",
"to_station_id",
"trip_id"
]
)
.rename_column("trip_id", "num_trips")
)
trips_summary.head()
```
## Graph Model
Given the data, if we wished to use a graph as a data model
for the number of trips between stations,
then naturally, nodes would be the stations,
and edges would be trips between them.
This graph would be directed,
as one could have more trips from station A to B
and fewer in the reverse direction.
With this definition,
we can begin graph construction!
### Create NetworkX graph from pandas edgelist
NetworkX provides an extremely convenient way
to load data from a pandas DataFrame:
```
import networkx as nx
G = nx.from_pandas_edgelist(
df=trips_summary,
source="from_station_id",
target="to_station_id",
edge_attr=["num_trips"],
create_using=nx.DiGraph
)
```
### Inspect the graph
Once the graph is in memory,
we can inspect it to get out summary graph statistics.
```
print(nx.info(G))
```
You'll notice that the edge metadata have been added correctly: we have recorded in there the number of trips between stations.
```
list(G.edges(data=True))[0:5]
```
However, the node metadata is not present:
```
list(G.nodes(data=True))[0:5]
```
### Annotate node metadata
We have rich station data on hand,
such as the longitude and latitude of each station,
and it would be a pity to discard it,
especially when we can potentially use it as part of the analysis
or for visualization purposes.
Let's see how we can add this information in.
Firstly, recall what the `stations` dataframe looked like:
```
stations.head()
```
The `id` column gives us the node ID in the graph,
so if we set `id` to be the index,
if we then also loop over each row,
we can treat the rest of the columns as dictionary keys
and values as dictionary values,
and add the information into the graph.
Let's see this in action.
```
for node, metadata in stations.set_index("id").iterrows():
for key, val in metadata.items():
G.nodes[node][key] = val
```
Now, our node metadata should be populated.
```
list(G.nodes(data=True))[0:5]
```
In `nxviz`, a `GeoPlot` object is available
that allows you to quickly visualize
a graph that has geographic data.
However, being `matplotlib`-based,
it is going to be quickly overwhelmed
by the sheer number of edges.
As such, we are going to first filter the edges.
### Exercise: Filter graph edges
> Leveraging what you know about how to manipulate graphs,
> now try _filtering_ edges.
>
_Hint: NetworkX graph objects can be deep-copied using `G.copy()`:_
```python
G_copy = G.copy()
```
_Hint: NetworkX graph objects also let you remove edges:_
```python
G.remove_edge(node1, node2) # does not return anything
```
```
def filter_graph(G, minimum_num_trips):
"""
Filter the graph such that
only edges that have minimum_num_trips or more
are present.
"""
G_filtered = G.____()
for _, _, _ in G._____(data=____):
if d[___________] < ___:
G_________.___________(_, _)
return G_filtered
from nams.solutions.io import filter_graph
G_filtered = filter_graph(G, 50)
```
### Visualize using GeoPlot
`nxviz` provides a GeoPlot object
that lets you quickly visualize geospatial graph data.
??? note "Geospatial Viz"
As the creator of `nxviz`,
I would recommend using proper geospatial packages
to build custom geospatial graph viz,
such as [`pysal`](http://pysal.org/).
That said, `nxviz` can probably do what you need
for a quick-and-dirty view of the data.
```
import nxviz
c = nxviz.GeoPlot(G_filtered, node_lat="latitude", node_lon="longitude")
c.draw()
```
Does that look familiar to you? Looks quite a bit like Chicago, I'd say :)
Jesting aside, this visualization does help illustrate
that the majority of trips occur between stations that are
near the city center.
## Pickling Graphs
Since NetworkX graphs are Python objects,
the canonical way to save them is by pickling them.
You can do this using:
```python
nx.write_gpickle(G, file_path)
```
Here's an example in action:
```
nx.write_gpickle(G, "/tmp/divvy.pkl")
```
And just to show that it can be loaded back into memory:
```
G_loaded = nx.read_gpickle("/tmp/divvy.pkl")
```
### Exercise: checking graph integrity
If you get a graph dataset as a pickle,
you should always check it against reference properties
to make sure of its data integrity.
> Write a function that tests that the graph
> has the correct number of nodes and edges inside it.
```
def test_graph_integrity(G):
"""Test integrity of raw Divvy graph."""
# Your solution here
pass
from nams.solutions.io import test_graph_integrity
test_graph_integrity(G)
```
## Other text formats
CSV files and `pandas` DataFrames
give us a convenient way to store graph data,
and if possible, do insist with your data collaborators
that they provide you with graph data that are in this format.
If they don't, however, no sweat!
After all, Python is super versatile.
In this ebook, we have loaded data in
from non-CSV sources,
sometimes by parsing text files raw,
sometimes by treating special characters as delimiters in a CSV-like file,
and sometimes by resorting to parsing JSON.
You can see other examples of how we load data
by browsing through the source file of `load_data.py`
and studying how we construct graph objects.
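For instance, a file that uses `|` instead of commas — the "special characters as delimiters" case mentioned above — can still be handed to the `csv` module by overriding the delimiter. A toy sketch (the data here is made up for illustration):

```python
import csv
import io

# A made-up edge list that uses '|' as the delimiter,
# with an integer weight in the third column.
raw = """A|C|5
B|C|2
C|A|7"""

weighted_edges = [
    (source, target, int(weight))
    for source, target, weight in csv.reader(io.StringIO(raw), delimiter="|")
]
```

Once the rows are parsed, graph construction proceeds exactly as with a comma-delimited file.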
## Solutions
The solutions to this chapter's exercises are below.
```
from nams.solutions import io
import inspect
print(inspect.getsource(io))
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Algorithms/center_pivot_irrigation_detector.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Algorithms/center_pivot_irrigation_detector.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Algorithms/center_pivot_irrigation_detector.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Algorithms/center_pivot_irrigation_detector.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.
The magic command `%%capture` can be used to hide output from a specific cell.
```
# %%capture
# !pip install earthengine-api
# !pip install geehydro
```
Import libraries
```
import ee
import folium
import geehydro
```
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()`
if you are running this notebook for the first time or if you are getting an authentication error.
```
# ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function.
The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
```
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
```
## Add Earth Engine Python script
```
# Center-pivot Irrigation Detector.
#
# Finds circles that are 500m in radius.
Map.setCenter(-106.06, 37.71, 12)
# A nice NDVI palette.
palette = [
'FFFFFF', 'CE7E45', 'DF923D', 'F1B555', 'FCD163', '99B718',
'74A901', '66A000', '529400', '3E8601', '207401', '056201',
'004C00', '023B01', '012E01', '011D01', '011301']
# Just display the image with the palette.
image = ee.Image('LANDSAT/LC08/C01/T1_TOA/LC08_034034_20170608')
ndvi = image.normalizedDifference(['B5','B4'])
Map.addLayer(ndvi, {'min': 0, 'max': 1, 'palette': palette}, 'Landsat NDVI')
# Find the difference between convolution with circles and squares.
# This difference, in theory, will be strongest at the center of
# circles in the image. This region is filled with circular farms
# with radii on the order of 500m.
farmSize = 500 # Radius of a farm, in meters.
circleKernel = ee.Kernel.circle(farmSize, 'meters')
squareKernel = ee.Kernel.square(farmSize, 'meters')
circles = ndvi.convolve(circleKernel)
squares = ndvi.convolve(squareKernel)
diff = circles.subtract(squares)
# Scale by 100 and find the best fitting pixel in each neighborhood.
diff = diff.abs().multiply(100).toByte()
max = diff.focal_max(**{'radius': farmSize * 1.8, 'units': 'meters'})
# If a pixel isn't the local max, set it to 0.
local = diff.where(diff.neq(max), 0)
thresh = local.gt(2)
# Here, we highlight the maximum differences as "Kernel Peaks"
# and draw them in red.
peaks = thresh.focal_max(**{'kernel': circleKernel})
Map.addLayer(peaks.updateMask(peaks), {'palette': 'FF3737'}, 'Kernel Peaks')
# Detect the edges of the features. Discard the edges with lower intensity.
canny = ee.Algorithms.CannyEdgeDetector(ndvi, 0)
canny = canny.gt(0.3)
# Create a "ring" kernel from two circular kernels.
inner = ee.Kernel.circle(farmSize - 20, 'meters', False, -1)
outer = ee.Kernel.circle(farmSize + 20, 'meters', False, 1)
ring = outer.add(inner, True)
# Highlight the places where the feature edges best match the circle kernel.
centers = canny.convolve(ring).gt(0.5).focal_max(**{'kernel': circleKernel})
Map.addLayer(centers.updateMask(centers), {'palette': '4285FF'}, 'Ring centers')
```
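The circle-minus-square trick above can be illustrated outside Earth Engine: when a window's radius matches a disk in the image, the circular mean minus the square mean peaks at the disk's center. A toy NumPy sketch (the image and window helpers here are made up for illustration and are not the Earth Engine API):

```python
import numpy as np

# A 101x101 image containing one filled disk of radius 20 centered at (50, 50).
yy, xx = np.mgrid[0:101, 0:101]
img = ((yy - 50) ** 2 + (xx - 50) ** 2 <= 20 ** 2).astype(float)

def neighborhood_mean(image, cy, cx, radius, circular):
    """Mean of the pixels inside a circular or square window around (cy, cx)."""
    y, x = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    if circular:
        mask = (y - cy) ** 2 + (x - cx) ** 2 <= radius ** 2
    else:
        mask = (np.abs(y - cy) <= radius) & (np.abs(x - cx) <= radius)
    return image[mask].mean()

def circle_minus_square(image, cy, cx, radius=20):
    return (neighborhood_mean(image, cy, cx, radius, True)
            - neighborhood_mean(image, cy, cx, radius, False))

at_center = circle_minus_square(img, 50, 50)   # window lines up with the disk
off_center = circle_minus_square(img, 50, 80)  # window misses most of it
```

At the disk's center the circular window averages only ones while the square window also sweeps in the empty corners, so the difference is large and positive; away from the center the two means nearly cancel.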
## Display Earth Engine data layers
```
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
```
# M² Real Examples
**Scott Prahl**
**Mar 2021**
This notebook demonstrates what happens when the ISO 11146 guidelines are violated.
---
*If* `` laserbeamsize `` *is not installed, uncomment the following cell (i.e., delete the initial #) and execute it with* `` shift-enter ``. *Afterwards, you may need to restart the kernel/runtime before the module will import successfully.*
```
#!pip install --user laserbeamsize
import imageio
import numpy as np
import matplotlib.pyplot as plt
try:
import laserbeamsize as lbs
except ModuleNotFoundError:
print('laserbeamsize is not installed. To install, uncomment and run the cell above.')
print('Once installation is successful, rerun this cell again.')
pixel_size = 3.75e-6 # pixel size in meters (3.75 µm)
pixel_size_mm = pixel_size * 1e3
pixel_size_µm = pixel_size * 1e6
repo = "https://github.com/scottprahl/laserbeamsize/raw/master/docs/"
```
## A simple example from images to M²
Here is an analysis of a set of images that were insufficient for ISO 11146.
```
lambda0 = 632.8e-9 # meters
z10 = np.array([247,251,259,266,281,292])*1e-3 # image location in meters
filenames = [repo + "sb_%.0fmm_10.pgm" % (number*1e3) for number in z10]
# the 12-bit pixel images are stored in high-order bits in 16-bit values
tem10 = [imageio.imread(name)>>4 for name in filenames]
# remove top to eliminate artifact
for i in range(len(z10)):
tem10[i] = tem10[i][200:,:]
# find beam in all the images and create arrays of beam diameters
options = {'pixel_size': 3.75, 'units': "µm", 'crop': [1400,1400], 'z':z10}
dy, dx = lbs.beam_size_montage(tem10, **options) # dy and dx in microns
plt.show()
```
Here is one way to plot the fit using the above diameters:
```
lbs.M2_diameter_plot(z10, dx*1e-6, lambda0, dy=dy*1e-6)
plt.show()
```
In the graph above for the semi-minor axis, the dashed line shows the expected divergence
of a pure Gaussian beam. Since real beams should diverge faster than this (not slower),
there is some problem with the measurements (too few!). On the other hand, the M² value
for the semi-major axis, 2.6±0.7, is consistent with the expected value of 3 for the TEM₁₀ mode.
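Under the hood, fits like these use the ISO 11146 hyperbolic model d(z)² = d₀² + (4M²λ/(πd₀))²(z − z₀)², where d₀ is the waist diameter at z₀. A minimal standalone sketch of that model (plain Python, independent of `laserbeamsize`; the numbers below are illustrative):

```python
import math

# ISO 11146 hyperbolic model for the beam diameter along the axis:
#     d(z)^2 = d0^2 + (4 * M2 * lam / (pi * d0))^2 * (z - z0)^2
def beam_diameter(z, d0, z0, M2, lam):
    divergence = 4 * M2 * lam / (math.pi * d0)  # full-angle far-field divergence
    return math.sqrt(d0 ** 2 + divergence ** 2 * (z - z0) ** 2)

d0, z0, lam = 0.4e-3, 0.5, 632.8e-9   # waist diameter, waist location, wavelength (m)
zR = math.pi * d0 ** 2 / (4 * lam)    # Rayleigh range for an M2 = 1 beam
d_at_zR = beam_diameter(z0 + zR, d0, z0, 1.0, lam)  # sqrt(2) * d0 at one Rayleigh range
```

No physical beam can sit below the M² = 1 curve, which is why an apparent M² < 1 fit signals a measurement problem.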
## Images on only one side of focus
This time images were only collected on one side of the focus. The results show that the center was not located properly because M² < 1!
```
## Some Examples
f=100e-3 # m
lambda6 = 632.8e-9 # m
z6 = np.array([168, 210, 280, 348, 414, 480, 495, 510, 520, 580, 666, 770]) * 1e-3
d6 = np.array([597, 572, 547, 554, 479, 404, 415, 399, 377, 391, 326, 397]) * 1e-6
print(lbs.M2_report(z6,d6,lambda6))
lbs.M2_diameter_plot(z6, d6, lambda6)
plt.show()
lbs.M2_radius_plot(z6, d6, lambda6)
plt.show()
```
## Too many images near focus (1)
The standard requires half the points near the focus and half the points more than two Rayleigh distances away. This was not done for this set of data.
```
lambda7=632.8e-9
z7 = np.array([200,300,400,420,470,490,500,520,540,550,570,590,600,650,700,800]) * 1e-3
d7 = np.array([0.64199014,0.56911747,0.44826505,0.43933241,0.38702287,0.40124416,
0.39901968,0.37773683,0.38849226,0.39409733,0.37727374,0.41093666,
0.40613024,0.45203464,0.5085964,0.65115378]) * 1e-3
lbs.M2_diameter_plot(z7,d7,lambda7)
plt.show()
lbs.M2_radius_plot(z7, d7, lambda7)
plt.show()
```
## Too many images near focus (2)
The standard requires half the points near the focus and half the points more than two Rayleigh distances away. This was not done for this set of data.
```
lambda8 = 633e-9 # m
z8 = np.array([168,210,280,348,414,480,495,510,520,580,666,770]) * 1e-3
dx8 = np.array([160,142,135,136,121,102,105,102,97,103,91,109]) * pixel_size
dy8 = np.array([159,162,156,158,134,112,115,111,105,105,83,102]) * pixel_size
phi8=np.array([0.72030965,0.60364794,0.41548236,0.48140986,0.36119897,0.0289199,0.568598,-0.0810475,-0.13710729,-0.43326888,-0.02038848,0.38256955])
lbs.M2_diameter_plot(z8, dx8, lambda8, dy=dy8)
plt.show()
lbs.M2_radius_plot(z8, dx8, lambda8)
plt.show()
s = lbs.M2_report(z8, dx8, lambda8, dy=dy8, f=100e-3)
print(s)
lbs.M2_diameter_plot(z8[3:], dx8[3:], lambda8, dy=dy8[3:])
```
## Too many images near focus (3)
The standard requires half the points near the focus and half the points more than two Rayleigh distances away. This was not done for this set of data.
```
lambda2=632.8e-9
# array of distances at which images were collected
z2 = np.array([200,300,400,420,470,490,500,520,540,550,570,590,600,650,700,800],dtype=float) #mm
d2 = np.array([0.64199014,0.56911747,0.44826505,0.43933241,0.38702287,0.40124416,
0.39901968,0.37773683,0.38849226,0.39409733,0.37727374,0.41093666,
0.40613024,0.45203464,0.5085964,0.65115378])
z2 *= 1e-3
d2 *= 1e-3
lbs.M2_diameter_plot(z2,d2,lambda2)
```
# Introduction to Numpy
This is a NumPy cheat sheet that is created in the Treehouse course [Introduction to NumPy](https://teamtreehouse.com/library/introduction-to-numpy)
```
import matplotlib.pyplot as plt
import numpy as np
np.__version__
```
## Differences between lists and NumPy Arrays
* An array's size is immutable. You cannot append, insert, or remove elements the way you can with a list.
* All of an array's elements must be of the same [data type](https://docs.scipy.org/doc/numpy-1.14.0/user/basics.types.html).
* A NumPy array behaves in a Pythonic fashion. You can `len(my_array)` just like you would assume.
```
gpas_as_list = [4.0, 3.286, 3.5]
# Can have elements appended to it
gpas_as_list.append(4.0)
# Can have multiple datatypes in it.
gpas_as_list.insert(1, "Whatevs")
# Can have items removed
gpas_as_list.pop(1)
gpas_as_list
gpas = np.array(gpas_as_list)
gpas.dtype
gpas.itemsize
gpas.size
len(gpas)
gpas.nbytes
```
## Multidimensional Arrays
* The data structure is actually called `ndarray`, representing any **n**umber of **d**imensions
* Arrays can have multiple dimensions, you declare them on creation
* Dimensions help define what each element in the array represents. A two dimensional array is just an array of arrays
* **Rank** defines how many dimensions an array contains
* **Shape** defines the length of each of the array's dimensions
* Each dimension is also referred to as an **axis**, and they are zero-indexed. Multiples are called **axes**.
* A 2D array is also known as a **matrix**.
```
students_gpas = np.array([
[4.0, 3.286, 3.5, 4.0],
[3.2, 3.8, 4.0, 4.0],
[3.96, 3.92, 4.0, 4.0]
], np.float16)
students_gpas
students_gpas.ndim
students_gpas.shape
students_gpas.size
len(students_gpas)
students_gpas.itemsize
students_gpas.itemsize * students_gpas.size
%whos ndarray
np.info(students_gpas)
students_gpas[2]
students_gpas[2][3]
students_gpas
```
## Common Routines
* Common [mathematical](https://docs.scipy.org/doc/numpy-1.14.0/reference/routines.math.html) [routines](https://docs.scipy.org/doc/numpy-1.14.0/reference/routines.html) are exposed so the formula can be abstracted away.
* [`mean`](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.mean.html#numpy.mean) is a [statistics](https://docs.scipy.org/doc/numpy-1.14.0/reference/routines.statistics.html) routine used to calculate the average.
* Reduction functions take a dimension and collapse it into a single value.
* These functions define an axis parameter, and you should remember that the function works across the dimension.
```
students_gpas.mean()
students_gpas.mean(axis=1)
plt.boxplot(students_gpas.T)
plt.plot()
```
## About data types
* By choosing the proper [data type](https://docs.scipy.org/doc/numpy-1.14.0/user/basics.types.html) you can greatly reduce the size required to store objects
* Data types are maintained by wrapping values in a [scalar representation](https://docs.scipy.org/doc/numpy-1.14.0/reference/arrays.scalars.html)
* `np.zeros` is a handy way to create an empty array filled with zeros.
```
study_minutes = np.zeros(100, np.uint16)
study_minutes
%whos
60 * 24
study_minutes[0] = 150
first_day_minutes = study_minutes[0]
first_day_minutes
type(first_day_minutes)
# TODO: Add 60 minutes to the second day in the study_minutes array
study_minutes[1] = 60
study_minutes[2:6] = [80, 60, 30, 90]
```
## Creation
* You can create a random but bound grouping of values using the `np.random` package.
* `RandomState` lets you seed your randomness in a way that is repeatable.
* You can append a row in a couple of ways
* You can use the `np.append` method. Make sure the new row is the same shape.
* You can create/reassign a new array by including the existing array as part of the iterable in creation.
## Indexing
* You can use an indexing shortcut by separating dimensions with a comma.
* You can index using a `list` or `np.array`. Values will be pulled out at that specific index. This is known as fancy indexing.
* Resulting array shape matches the index array layout. Be careful to distinguish between the tuple shortcut and fancy indexing.
```
study_minutes = np.array([
study_minutes,
np.zeros(100, np.uint16)
])
study_minutes.shape
# Set round 2 day 1 to 60
study_minutes[1][0] = 60
study_minutes[1, 0]
1, 0  # the comma-separated index above is just this tuple
rand = np.random.RandomState(42)
fake_log = rand.randint(30, 180, size=100, dtype=np.uint16)
fake_log
[fake_log[3], fake_log[8]]
fake_log[[3, 8]]
index = np.array([
[3, 8],
[0, 1]
])
fake_log[index]
study_minutes = np.append(study_minutes, [fake_log], axis=0)
study_minutes[1, 1] = 360
```
## Boolean Array Indexing
* You can create a boolean array by using comparison operators on an array.
* You can use boolean arrays for fancy indexing.
* Boolean arrays can be compared by using bitwise operators (`&`, `|`)
* Do not use the `and` keyword.
* Remember to mind the order of operations when combining comparisons.
* Even though boolean indexing returns a new array, you can update an existing array using a boolean index.
```
fake_log[fake_log < 60]
results = []
for value in fake_log:
if value < 60:
results.append(value)
np.array(results)
study_minutes[study_minutes < 60]
np.array([False, True, True]) & np.array([True, False, True])
study_minutes[(study_minutes < 60) & (study_minutes > 0)]
study_minutes[study_minutes < 60] = 0
study_minutes[2]
```
## Universal Functions - Reduce / Accumulate
* Universal Functions expose a function to [`reduce`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.reduce.html) an array to a single value.
* There is also a function named [`accumulate`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.accumulate.html) which will show the reduction and its accumulation as it happens.
```
np.add.reduce(study_minutes[0])
np.add.accumulate(study_minutes[0])
np.sum(study_minutes[0])
np.sum(study_minutes, axis=1)
# Pass in a one dimensional array of all the minutes that are
# greater than zero
plt.hist(study_minutes[study_minutes > 0])
plt.plot()
```
## Slicing
* Works a lot like normal list slicing.
* You can use commas to separate each dimension slice.
* Always returns a data view **not a copy**
* You can access the base object using the `ndarray.base` property
```
fruit = ["apple", "banana", "cherry", "durian"]
fruit[1:3]
fruit[:3]
fruit[3:]
fruit[:]
copied = fruit[:]
copied[3] = 'cheese'
# Slicing a list returns a copy
fruit, copied
fruit[::2]
fruit[::-1]
np.arange(20)
practice = np.arange(42)
practice.shape = (7, 6)
practice
practice[2:5, 3::2]
# Any slicing of ndarray returns a view and not a copy!
not_copied = practice[:]
not_copied[0, 0] = 90210
practice, not_copied
practice.base is None
not_copied.base is None
not_copied.base is practice
practice.flags['OWNDATA'], not_copied.flags['OWNDATA']
```
## Array Manipulation
* The documentation on [Array Manipulation](https://docs.scipy.org/doc/numpy/reference/routines.array-manipulation.html) is a good one to keep bookmarked.
* `ndarray.reshape` creates a view with a new shape
* You can use `-1` as a value to infer the missing dimension
* `ndarray.ravel` returns a single dimensional view of the array.
* `ndarray.flatten` can be used to make a single dimensional copy.
* `np.lookfor` is great for searching docstrings from within a notebook.
```
practice_view = practice.reshape(3, 14)
practice, practice_view, practice_view.base is practice
practice.reshape(-1, 2).shape
practice.ravel()
```
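The view-versus-copy distinction between `ravel` and `flatten` can be demonstrated directly:

```python
import numpy as np

grid = np.arange(6).reshape(2, 3)

r = grid.ravel()    # a view when the data is contiguous
f = grid.flatten()  # always an independent copy

r[0] = 99   # visible through grid
f[1] = -1   # invisible to grid

grid[0, 0]  # 99
grid[0, 1]  # 1
```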
## Linear Algebra
* There is a module for linear algebra, [linalg](https://docs.scipy.org/doc/numpy/reference/routines.linalg.html)
* You can solve for a system of equations using the [solve function](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.solve.html#numpy.linalg.solve)
* You can create a square two-dimensional coefficient matrix and a constant vector and solve for each variable
* You can double-check the answer using the inner product or [dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html#numpy.dot).
* You can use the `@` operator to produce the dot product of two arrays.
```
orders = np.array([
    [2, 0, 0, 0],
    [4, 1, 2, 2],
    [0, 1, 0, 1],
    [6, 0, 1, 2]
])
totals = np.array([3, 20.50, 10, 14.25])
prices = np.linalg.solve(orders, totals)
prices
# A • B
orders @ prices
orders.dot(prices)
```
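A floating-point solution is best verified with `np.allclose` rather than exact equality; a sketch using the same system as above:

```python
import numpy as np

orders = np.array([[2, 0, 0, 0],
                   [4, 1, 2, 2],
                   [0, 1, 0, 1],
                   [6, 0, 1, 2]], dtype=float)
totals = np.array([3, 20.50, 10, 14.25])

prices = np.linalg.solve(orders, totals)

# A @ x should reproduce b, up to floating-point tolerance
assert np.allclose(orders @ prices, totals)
```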
## Universal Functions
* [ufuncs](https://docs.scipy.org/doc/numpy/reference/ufuncs.html) are commonly needed vectorized functions
* Vectorized functions allow you to operate element by element without using a loop
* The standard math and comparison operations have all been overloaded so that they can make use of vectorization
* Values can be [broadcasted](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html), or stretched to be applied to the ufuncs.
```
a, b = np.split(np.arange(1, 11), 2)
a, b
a + b
a - b
b - a
a * b
a + 2
a + np.repeat(2, 5)
x1 = np.arange(9.0).reshape((3, 3))
x2 = np.arange(3.0)
x1, x2
np.add(x1, x2)
np.add(x1, 2)
```
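Broadcasting can stretch both operands at once when one of them has a length-1 dimension — a sketch that builds an addition table:

```python
import numpy as np

col = np.arange(3.0).reshape(3, 1)  # shape (3, 1)
row = np.arange(4.0)                # shape (4,)

table = col + row  # both operands are stretched to shape (3, 4)

table.shape  # (3, 4)
table[2, 3]  # 5.0
```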
# Energy terms and energy equation
There are several different energy terms that are implemented in `micromagneticmodel`. Here, we will provide a short list of them, together with some basic properties.
## Energy terms
### 1. Exchange energy
The main parameter required for the exchange energy is the exchange energy constant `A`.
```
import micromagneticmodel as mm
exchange = mm.Exchange(A=1e-11)
```
The values of its arguments are
```
exchange.A
exchange.name
```
String and LaTeX representations are
```
repr(exchange)
exchange
```
### 2. Zeeman energy
#### Time-independent
Zeeman energy requires the external magnetic field vector to be provided. Optionally, `name` can be given to the energy term.
```
zeeman = mm.Zeeman(H=(0, 0, 1e6))
```
The values of attributes are
```
zeeman.H
zeeman.name
```
String and LaTeX representations are
```
repr(zeeman)
zeeman
type(zeeman.wave)
```
#### Time-dependent
To define a time-dependent field, either a sine or a sinc wave can be used to multiply `H`. For instance, we can define a time-dependent external field which is a sine wave with $1 \,\text{GHz}$ frequency and a $t_{0} = 2\,\text{ps}$ shift:
```
zeeman_sin = mm.Zeeman(H=(0, 0, 1e6), wave='sin', f=1e9, t0=2e-12)
```
String representation is:
```
repr(zeeman_sin)
```
Similarly, we can define a "sinc" pulse:
```
zeeman_sinc = mm.Zeeman(H=(0, 0, 1e6), wave='sinc', f=1e9, t0=2e-12)
```
LaTeX representation is:
```
zeeman_sinc
```
### 3. Uniaxial anisotropy
This energy term requires the anisotropy constant $K_{1}$ and uniaxial anisotropy axis $\mathbf{u}$ to be passed. As before, `name` is optional as well.
```
uniaxialanisotropy = mm.UniaxialAnisotropy(K=1e5, u=(0, 1, 0))
```
The attributes are:
```
uniaxialanisotropy.K
uniaxialanisotropy.u
```
String and LaTeX representations are
```
repr(uniaxialanisotropy)
uniaxialanisotropy
```
In order to define a higher-order uniaxial anisotropy term, `K1` and `K2` should be passed instead.
```
uniaxialanisotropy = mm.UniaxialAnisotropy(K1=1e5, K2=1e3, u=(0, 1, 0))
uniaxialanisotropy
```
### 4. Demagnetisation energy
Demagnetisation energy does not require any input parameters. If needed, `name` can be passed.
```
demag = mm.Demag()
```
The only attribute is
```
demag.name
```
String and LaTeX representations are
```
repr(demag)
demag
```
### 5. Dzyaloshinskii-Moriya energy
DM energy takes two mandatory input parameters: DM constant $D$ and the crystallographic class `crystalclass`. The allowed crystallographic classes are
1. `Cnv`
2. `T` or `O`
3. `D2d`
For `Cnv` and `D2d` the normal axis must also be specified. Allowed values for `crystalclass` are:
1. `Cnv_x`, `Cnv_y`, or `Cnv_z`
2. `D2d_x`, `D2d_y`, or `D2d_z`
As usual, `name` argument is optional.
```
dmi_cnv_x = mm.DMI(D=5e-3, crystalclass='Cnv_x')
dmi_cnv_y = mm.DMI(D=5e-3, crystalclass='Cnv_y')
dmi_cnv_z = mm.DMI(D=5e-3, crystalclass='Cnv_z')
dmi_t = mm.DMI(D=5e-3, crystalclass='T')
dmi_d2d_x = mm.DMI(D=5e-3, crystalclass='D2d_x')
dmi_d2d_y = mm.DMI(D=5e-3, crystalclass='D2d_y')
dmi_d2d_z = mm.DMI(D=5e-3, crystalclass='D2d_z')
```
Attributes are
```
dmi_cnv_x.D == dmi_cnv_y.D == dmi_cnv_z.D == dmi_t.D == dmi_d2d_x.D == dmi_d2d_y.D == dmi_d2d_z.D
dmi_cnv_x.crystalclass
```
LaTeX representations are different for different crystallographic classes.
```
dmi_cnv_x
dmi_cnv_y
dmi_cnv_z
dmi_t
dmi_d2d_x
dmi_d2d_y
dmi_d2d_z
```
### 6. Cubic anisotropy
Cubic anisotropy energy term requires the anisotropy constant $K_{1}$ and two mutually perpendicular anisotropy axes $\mathbf{u}_{1}$ and $\mathbf{u}_{2}$. Argument `name` is optional.
```
cubicanisotropy = mm.CubicAnisotropy(K=1e5, u1=(1, 0, 0), u2=(0, 1, 0))
```
The attributes are:
```
cubicanisotropy.K
cubicanisotropy.u1
cubicanisotropy.u2
```
String and LaTeX representations are:
```
repr(cubicanisotropy)
cubicanisotropy
```
### 7. RKKY
RKKY energy term requires the sigma constant $\sigma$ and optionally $\sigma_{2}$. In addition, two subregions defined in the mesh must be passed as a list.
```
rkky = mm.RKKY(sigma=-1e-4, subregions=['subregion1', 'subregion2'])
```
The attributes are:
```
rkky.sigma
rkky.subregions
```
String and LaTeX representations are:
```
repr(rkky)
rkky
```
## Energy equation
The energy equation of the micromagnetic system is the sum of energy terms. For instance, if we sum two energy terms, we get:
```
type(exchange + dmi_cnv_z)
```
If we assign this value to a separate variable, we can explore some of its properties.
```
energy = exchange + dmi_cnv_z
```
The string representation is:
```
repr(energy)
```
Similarly, the LaTeX representation is
```
energy
```
This Hamiltonian consists of two energy terms. To add another term to it, the `+=` operator can be used.
```
energy += zeeman
```
The Hamiltonian is now
```
energy
```
### Accessing the individual energy terms from the energy equation
There are two ways of retrieving an individual energy term from the energy equation. Let us say we want to change the value of the Dzyaloshinskii-Moriya constant $D$.
If an energy term with name `myenergy` was added to the Hamiltonian, that term can be accessed by typing `energy.myenergy`. In the case of DMI:
```
energy.dmi
energy.dmi.D
energy.dmi.D = 5e-3
energy.dmi.D
```
Similarly, the exchange energy term is
```
energy.exchange
```
because the term's default name `'exchange'` was assigned at the time of initialisation:
```
exchange.name
```
# Edafa on ImageNet dataset
This notebook shows an example of how to use Edafa to obtain better results on a **classification task**. We use the [ImageNet](http://www.image-net.org/) dataset, which has **1000 classes**. We use *PyTorch* and the pretrained weights of AlexNet. At the end, we compare the results of the same model with and without augmentations.
#### Import dependencies
```
%load_ext autoreload
%autoreload 2
# add our package directory to the path
import sys
sys.path.append('../../')
import torchvision.models as models
import torchvision.transforms as transforms
from torch.autograd import Variable
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
#### Constants
```
# Filename to use for comparison (4 sample files are given in 'data' folder)
FILE = '000559'
# Input size of the model
IN_SIZE = 224
```
#### get labels
```
# Let's get our class labels.
labels = []
with open('labels.txt') as f:
    for line in f:
        labels.append(line.split(': ')[-1][1:-3])
```
#### Now we build our model (using pretrained weights)
```
model = models.alexnet(pretrained=True)
```
#### Read and preprocess image
```
img_path = '../data/images/%s.jpg'%FILE
img = plt.imread(img_path)
plt.imshow(img)
transform_pipeline = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((IN_SIZE, IN_SIZE)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])
x = transform_pipeline(img)
x = x.unsqueeze(0)
x = Variable(x)
```
### Exp1: Predict image without augmentation
```
pred = model(x)
pred_without = pred.data.numpy()
```
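To look beyond the single top class, a small helper can rank the raw logits (the name `top_k_labels` is ours, not part of the notebook or Edafa):

```python
import numpy as np

def top_k_labels(logits, labels, k=3):
    """Return the k label strings with the highest logit scores."""
    logits = np.asarray(logits).ravel()
    top = np.argsort(logits)[::-1][:k]
    return [labels[i] for i in top]

# e.g. top_k_labels(pred_without, labels) would also list the runners-up
```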
### Exp2: Using same model with Edafa
#### step 1: import base class `ClassPredictor`
```
from edafa import ClassPredictor
```
#### step 2: inherit `ClassPredictor` and implement the main virtual function `predict_patches()`
```
class myPredictor(ClassPredictor):
    def __init__(self, model, pipeline, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.model = model
        self.pipe = pipeline

    def predict_patches(self, patches):
        preds = []
        for i in range(patches.shape[0]):
            processed = self.pipe(patches[i])
            processed = processed.unsqueeze(0)
            processed = Variable(processed)
            pred = self.model(processed)
            preds.append(pred.data.numpy())
        return np.array(preds)
```
#### step 3: make an instance of your class with the correct parameters
```
p = myPredictor(model, transform_pipeline, "../../conf/imagenet.json")
```
#### step 4: call predict_images()
```
preds_with = p.predict_images([img])
```
### Compare results of Exp1 and Exp2
```
print('Predicted without augmentation: ', labels[pred_without.argmax()])
print('Predicted with augmentation:', labels[preds_with.argmax()])
```
We can clearly see from the object image that it's a desktop computer.
With *no augmentation*, the top prediction is **Polaroid camera, Polaroid Land camera**.
With *augmentation*, the top prediction is **desktop computer**.
### Conclusion
Results showed that with the exact same model and by applying Edafa we can obtain better results!
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Multi-worker Training with Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/alpha/tutorials/distribute/multi_worker_with_keras"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/distribute/multi_worker_with_keras.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/distribute/multi_worker_with_keras.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
## Overview
This tutorial demonstrates multi-worker distributed training with a Keras model using the `tf.distribute.Strategy` API. With the help of the strategies specifically designed for multi-worker training, a Keras model that was designed to run on a single worker can seamlessly work on multiple workers with minimal code change.
The [Distributed Training in TensorFlow](../../guide/distribute_strategy.ipynb) guide provides an overview of the distribution strategies TensorFlow supports for those interested in a deeper understanding of the `tf.distribute.Strategy` APIs.
## Setup
First, set up TensorFlow and the necessary imports.
```
from __future__ import absolute_import, division, print_function, unicode_literals
!pip install tensorflow==2.0.0-alpha0
import tensorflow_datasets as tfds
import tensorflow as tf
```
## Preparing dataset
Now, let's prepare the MNIST dataset from [TensorFlow Datasets](https://www.tensorflow.org/datasets). The [MNIST dataset](http://yann.lecun.com/exdb/mnist/) comprises 60,000
training examples and 10,000 test examples of the handwritten digits 0–9,
formatted as 28x28-pixel monochrome images.
```
BUFFER_SIZE = 10000
BATCH_SIZE = 64
# Scaling MNIST data from (0, 255] to (0., 1.]
def scale(image, label):
    image = tf.cast(image, tf.float32)
    image /= 255
    return image, label
datasets, info = tfds.load(name='mnist',
                           with_info=True,
                           as_supervised=True)
train_datasets_unbatched = datasets['train'].map(scale).shuffle(BUFFER_SIZE)
train_datasets = train_datasets_unbatched.batch(BATCH_SIZE)
```
## Build the Keras model
Here we use `tf.keras.Sequential` API to build and compile a simple convolutional neural networks Keras model to train with our MNIST dataset.
Note: For a more comprehensive walkthrough of building Keras model, please see [TensorFlow Keras Guide](https://www.tensorflow.org/guide/keras#sequential_model).
```
def build_and_compile_cnn_model():
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')
    ])
    model.compile(
        loss=tf.keras.losses.sparse_categorical_crossentropy,
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
        metrics=['accuracy'])
    return model
```
Let's first try training the model for a small number of epochs and observe the results in single worker to make sure everything works correctly. You should expect to see the loss dropping and accuracy approaching 1.0 as epoch advances.
```
single_worker_model = build_and_compile_cnn_model()
single_worker_model.fit(x=train_datasets, epochs=3)
```
## Multi-worker Configuration
Now let's enter the world of multi-worker training. In TensorFlow, the `TF_CONFIG` environment variable is required for training on multiple machines, each of which possibly has a different role. `TF_CONFIG` is used to specify the cluster configuration on each worker that is part of the cluster.
There are two components of `TF_CONFIG`: `cluster` and `task`. `cluster` provides information about the training cluster, which is a dict consisting of different types of jobs such as `worker`. In multi-worker training, there is usually one `worker` that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular `worker` does. Such worker is referred to as the 'chief' worker, and it is customary that the `worker` with `index` 0 is appointed as the chief `worker` (in fact this is how `tf.distribute.Strategy` is implemented). `task` on the other hand provides information of the current task.
In this example, we set the task `type` to `"worker"` and the task `index` to `0`. This means the machine that has such setting is the first worker, which will be appointed as the chief worker and do more work than other workers. Note that other machines will need to have `TF_CONFIG` environment variable set as well, and it should have the same `cluster` dict, but different task `type` or task `index` depending on what the roles of those machines are.
For illustration purposes, this tutorial shows how one may set a `TF_CONFIG` with a single worker on `localhost`. In practice, users would create multiple workers on external IP addresses/ports, and set `TF_CONFIG` on each worker appropriately.
Warning: Do not execute the following code in Colab. TensorFlow's runtime will attempt to create a gRPC server at the specified IP address and port, which will likely fail.
```
import os
import json

os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["localhost:12345", "localhost:23456"]
    },
    'task': {'type': 'worker', 'index': 0}
})
```
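Since `TF_CONFIG` is plain JSON, each process can inspect its own role; a sketch (the helper name `is_chief` is ours, not a TensorFlow API):

```python
import json
import os

def is_chief():
    """Return True when this process is worker 0, i.e. the chief."""
    config = json.loads(os.environ.get("TF_CONFIG", "{}"))
    task = config.get("task", {})
    return task.get("type") == "worker" and task.get("index") == 0
```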
Note that while the learning rate is fixed in this example, in general it may be necessary to adjust the learning rate based on the global batch size.
## Choose the right strategy
In TensorFlow, distributed training consists of synchronous training, where the steps of training are synced across the workers and replicas, and asynchronous training, where the training steps are not strictly synced.
`MultiWorkerMirroredStrategy`, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.
To train the model, use an instance of `tf.distribute.experimental.MultiWorkerMirroredStrategy`. `MultiWorkerMirroredStrategy` creates copies of all variables in the model's layers on each device across all workers. It uses `CollectiveOps`, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The [`tf.distribute.Strategy` guide](../../guide/distribute_strategy.ipynb) has more details about this strategy.
```
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
```
`MultiWorkerMirroredStrategy` provides multiple implementations via the [`CollectiveCommunication`](https://github.com/tensorflow/tensorflow/blob/a385a286a930601211d78530734368ccb415bee4/tensorflow/python/distribute/cross_device_ops.py#L928) parameter. `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster.
## Train the model with MultiWorkerMirroredStrategy
With the integration of `tf.distribute.Strategy` API into `tf.keras`, the only change you will make to distribute the training to multi-worker is enclosing the model building and `model.compile()` call inside `strategy.scope()`. The distribution strategy's scope dictates how and where the variables are created, and in the case of `MultiWorkerMirroredStrategy`, the variables created are `MirroredVariable`s, and they are replicated on each of the workers.
Note: In this Colab, the following code can run with the expected result; however, this is effectively single-worker training since `TF_CONFIG` is not set. Once you set `TF_CONFIG` in your own example, you should expect a speed-up from training on multiple machines.
```
NUM_WORKERS = 2
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size. Previously we used 64,
# and now this becomes 128.
GLOBAL_BATCH_SIZE = 64 * NUM_WORKERS
train_datasets = train_datasets_unbatched.batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
multi_worker_model = build_and_compile_cnn_model()
multi_worker_model.fit(x=train_datasets, epochs=3)
```
In multi-worker training, sharding data into multiple parts is needed to ensure convergence and performance. However, note that in the above code snippet the datasets are passed directly to `model.fit()` without needing to be sharded; this is because the `tf.distribute.Strategy` API takes care of dataset sharding automatically in multi-worker training.
Another thing to notice is the batch size of the `datasets`. Here we make the batch size twice as large as in the single-worker case, because the effective per-worker batch size is the global batch size (the parameter passed to `tf.data.Dataset.batch()`) divided by the number of workers, and with this change we keep the per-worker batch size the same as before.
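The batch-size arithmetic above can be sketched in plain Python:

```python
NUM_WORKERS = 2
PER_WORKER_BATCH_SIZE = 64  # what a single worker saw before

# The value passed to tf.data.Dataset.batch() is the global batch size:
GLOBAL_BATCH_SIZE = PER_WORKER_BATCH_SIZE * NUM_WORKERS  # 128

# Each worker then receives global / num_workers examples per step:
assert GLOBAL_BATCH_SIZE // NUM_WORKERS == PER_WORKER_BATCH_SIZE
```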
## Performance
You now have a Keras model that is all set up to run in multiple workers with `MultiWorkerMirroredStrategy`. You can try the following techniques to tweak performance of multi-worker training.
* `MultiWorkerMirroredStrategy` provides multiple [collective communication implementations](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/distribute/cross_device_ops.py). `RING` implements ring-based collectives using gRPC as the cross-host communication layer. `NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify a valid value to the `communication` parameter of `MultiWorkerMirroredStrategy`'s constructor, e.g. `communication=tf.distribute.experimental.CollectiveCommunication.NCCL`.
* Cast the variables to `tf.float` if possible. The official ResNet model includes [an example](https://github.com/tensorflow/models/blob/8367cf6dabe11adf7628541706b660821f397dce/official/resnet/resnet_model.py#L466) of how this can be done.
## See also
1. [Distributed Training in TensorFlow](https://www.tensorflow.org/guide/distribute_strategy) guide provides an overview of the available distribution strategies.
2. Official [ResNet50](https://github.com/tensorflow/models/blob/master/official/resnet/imagenet_main.py) model, which can be trained using either `MirroredStrategy` or `MultiWorkerMirroredStrategy`.
# What's this PyTorch business?
You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.
For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, PyTorch (or TensorFlow, if you switch over to that notebook).
### What is PyTorch?
PyTorch is a system for executing dynamic computational graphs over Tensor objects that behave similarly as numpy ndarray. It comes with a powerful automatic differentiation engine that removes the need for manual back-propagation.
### Why?
* Our code will now run on GPUs! Much faster training. When using a framework like PyTorch or TensorFlow you can harness the power of the GPU for your own custom neural network architectures without having to write CUDA code directly (which is beyond the scope of this class).
* We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand.
* We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :)
* We want you to be exposed to the sort of deep learning code you might run into in academia or industry.
### PyTorch versions
This notebook assumes that you are using **PyTorch version 0.4**. Prior to this version, Tensors had to be wrapped in Variable objects to be used in autograd; however Variables have now been deprecated. In addition 0.4 also separates a Tensor's datatype from its device, and uses numpy-style factories for constructing Tensors rather than directly invoking Tensor constructors.
## How will I learn PyTorch?
Justin Johnson has made an excellent [tutorial](https://github.com/jcjohnson/pytorch-examples) for PyTorch.
You can also find the detailed [API doc](http://pytorch.org/docs/stable/index.html) here. If you have other questions that are not addressed by the API docs, the [PyTorch forum](https://discuss.pytorch.org/) is a much better place to ask than StackOverflow.
# Table of Contents
This assignment has 5 parts. You will learn PyTorch on different levels of abstractions, which will help you understand it better and prepare you for the final project.
1. Preparation: we will use CIFAR-10 dataset.
2. Barebones PyTorch: we will work directly with the lowest-level PyTorch Tensors.
3. PyTorch Module API: we will use `nn.Module` to define arbitrary neural network architecture.
4. PyTorch Sequential API: we will use `nn.Sequential` to define a linear feed-forward network very conveniently.
5. CIFAR-10 open-ended challenge: please implement your own network to get as high accuracy as possible on CIFAR-10. You can experiment with any layer, optimizer, hyperparameters or other advanced features.
Here is a table of comparison:
| API | Flexibility | Convenience |
|---------------|-------------|-------------|
| Barebone | High | Low |
| `nn.Module` | High | Medium |
| `nn.Sequential` | Low | High |
# Part I. Preparation
First, we load the CIFAR-10 dataset. This might take a couple minutes the first time you do it, but the files should stay cached after that.
In previous parts of the assignment we had to write our own code to download the CIFAR-10 dataset, preprocess it, and iterate through it in minibatches; PyTorch provides convenient tools to automate this process for us.
```
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torch.utils.data import sampler
import torchvision.datasets as dset
import torchvision.transforms as T
import numpy as np
NUM_TRAIN = 49000
# The torchvision.transforms package provides tools for preprocessing data
# and for performing data augmentation; here we set up a transform to
# preprocess the data by subtracting the mean RGB value and dividing by the
# standard deviation of each RGB value; we've hardcoded the mean and std.
transform = T.Compose([
    T.ToTensor(),
    T.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))
])
# We set up a Dataset object for each split (train / val / test); Datasets load
# training examples one at a time, so we wrap each Dataset in a DataLoader which
# iterates through the Dataset and forms minibatches. We divide the CIFAR-10
# training set into train and val sets by passing a Sampler object to the
# DataLoader telling how it should sample from the underlying Dataset.
cifar10_train = dset.CIFAR10('./cs231n/datasets', train=True, download=True,
                             transform=transform)
loader_train = DataLoader(cifar10_train, batch_size=64,
                          sampler=sampler.SubsetRandomSampler(range(NUM_TRAIN)))

cifar10_val = dset.CIFAR10('./cs231n/datasets', train=True, download=True,
                           transform=transform)
loader_val = DataLoader(cifar10_val, batch_size=64,
                        sampler=sampler.SubsetRandomSampler(range(NUM_TRAIN, 50000)))

cifar10_test = dset.CIFAR10('./cs231n/datasets', train=False, download=True,
                            transform=transform)
loader_test = DataLoader(cifar10_test, batch_size=64)
```
You have an option to **use GPU by setting the flag to True below**. It is not necessary to use GPU for this assignment. Note that if your computer does not have CUDA enabled, `torch.cuda.is_available()` will return False and this notebook will fall back to CPU mode.
The global variables `dtype` and `device` will control the data types throughout this assignment.
```
USE_GPU = True
dtype = torch.float32 # we will be using float throughout this tutorial
if USE_GPU and torch.cuda.is_available():
    device = torch.device('cuda')
else:
    device = torch.device('cpu')
# Constant to control how frequently we print train loss
print_every = 100
print('using device:', device)
```
# Part II. Barebones PyTorch
PyTorch ships with high-level APIs to help us define model architectures conveniently, which we will cover in Parts III and IV of this tutorial. In this section, we will start with the barebones PyTorch elements to understand the autograd engine better. After this exercise, you will come to appreciate the high-level model API more.
We will start with a simple fully-connected ReLU network with two hidden layers and no biases for CIFAR classification.
This implementation computes the forward pass using operations on PyTorch Tensors, and uses PyTorch autograd to compute gradients. It is important that you understand every line, because you will write a harder version after the example.
When we create a PyTorch Tensor with `requires_grad=True`, then operations involving that Tensor will not just compute values; they will also build up a computational graph in the background, allowing us to easily backpropagate through the graph to compute gradients of some Tensors with respect to a downstream loss. Concretely if x is a Tensor with `x.requires_grad == True` then after backpropagation `x.grad` will be another Tensor holding the gradient of x with respect to the scalar loss at the end.
### PyTorch Tensors: Flatten Function
A PyTorch Tensor is conceptually similar to a numpy array: it is an n-dimensional grid of numbers, and like numpy, PyTorch provides many functions to efficiently operate on Tensors. As a simple example, we provide a `flatten` function below which reshapes image data for use in a fully-connected neural network.
Recall that image data is typically stored in a Tensor of shape N x C x H x W, where:
* N is the number of datapoints
* C is the number of channels
* H is the height of the intermediate feature map in pixels
* W is the width of the intermediate feature map in pixels
This is the right way to represent the data when we are doing something like a 2D convolution, that needs spatial understanding of where the intermediate features are relative to each other. When we use fully connected affine layers to process the image, however, we want each datapoint to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data. So, we use a "flatten" operation to collapse the `C x H x W` values per representation into a single long vector. The flatten function below first reads in the N, C, H, and W values from a given batch of data, and then returns a "view" of that data. "View" is analogous to numpy's "reshape" method: it reshapes x's dimensions to be N x ??, where ?? is allowed to be anything (in this case, it will be C x H x W, but we don't need to specify that explicitly).
```
def flatten(x):
    N = x.shape[0]        # read in N, C, H, W
    return x.view(N, -1)  # "flatten" the C * H * W values into a single vector per image

def test_flatten():
    x = torch.arange(12).view(2, 1, 3, 2)
    print('Before flattening: ', x)
    print('After flattening: ', flatten(x))

test_flatten()
```
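For comparison, the same flatten operation can be sketched in NumPy, using `reshape` where PyTorch uses `view`:

```python
import numpy as np

def flatten_np(x):
    n = x.shape[0]           # batch dimension N
    return x.reshape(n, -1)  # collapse the C * H * W values into one axis

batch = np.arange(12).reshape(2, 1, 3, 2)  # N x C x H x W
flatten_np(batch).shape  # (2, 6)
```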
### Barebones PyTorch: Two-Layer Network
Here we define a function `two_layer_fc` which performs the forward pass of a two-layer fully-connected ReLU network on a batch of image data. After defining the forward pass we check that it doesn't crash and that it produces outputs of the right shape by running zeros through the network.
You don't have to write any code here, but it's important that you read and understand the implementation.
```
import torch.nn.functional as F  # useful stateless functions

def two_layer_fc(x, params):
    """
    A fully-connected neural network; the architecture is:
    NN is fully connected -> ReLU -> fully connected layer.
    Note that this function only defines the forward pass;
    PyTorch will take care of the backward pass for us.

    The input to the network will be a minibatch of data, of shape
    (N, d1, ..., dM) where d1 * ... * dM = D. The hidden layer will have H units,
    and the output layer will produce scores for C classes.

    Inputs:
    - x: A PyTorch Tensor of shape (N, d1, ..., dM) giving a minibatch of
      input data.
    - params: A list [w1, w2] of PyTorch Tensors giving weights for the network;
      w1 has shape (D, H) and w2 has shape (H, C).

    Returns:
    - scores: A PyTorch Tensor of shape (N, C) giving classification scores for
      the input data x.
    """
    # first we flatten the image
    x = flatten(x)  # shape: [batch_size, C x H x W]

    w1, w2 = params

    # Forward pass: compute predicted y using operations on Tensors. Since w1 and
    # w2 have requires_grad=True, operations involving these Tensors will cause
    # PyTorch to build a computational graph, allowing automatic computation of
    # gradients. Since we are no longer implementing the backward pass by hand we
    # don't need to keep references to intermediate values.
    # you can also use `.clamp(min=0)`, equivalent to F.relu()
    x = F.relu(x.mm(w1))
    x = x.mm(w2)
    return x

def two_layer_fc_test():
    hidden_layer_size = 42
    x = torch.zeros((64, 50), dtype=dtype)  # minibatch size 64, feature dimension 50
    w1 = torch.zeros((50, hidden_layer_size), dtype=dtype)
    w2 = torch.zeros((hidden_layer_size, 10), dtype=dtype)
    scores = two_layer_fc(x, [w1, w2])
    print(scores.size())  # you should see [64, 10]

two_layer_fc_test()
```
### Barebones PyTorch: Three-Layer ConvNet
Here you will complete the implementation of the function `three_layer_convnet`, which will perform the forward pass of a three-layer convolutional network. Like above, we can immediately test our implementation by passing zeros through the network. The network should have the following architecture:
1. A convolutional layer (with bias) with `channel_1` filters, each with shape `KW1 x KH1`, and zero-padding of two
2. ReLU nonlinearity
3. A convolutional layer (with bias) with `channel_2` filters, each with shape `KW2 x KH2`, and zero-padding of one
4. ReLU nonlinearity
5. Fully-connected layer with bias, producing scores for C classes.
**HINT**: For convolutions: http://pytorch.org/docs/stable/nn.html#torch.nn.functional.conv2d; pay attention to the shapes of convolutional filters!
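As a quick sanity check on those filter shapes (a standalone sketch, not part of the assignment code): each spatial output dimension of a convolution is `floor((H + 2*pad - K) / stride) + 1`, so 5x5 filters with zero-padding of two preserve the input size:

```
import torch
import torch.nn.functional as F

x = torch.zeros(1, 3, 32, 32)    # one RGB image
w = torch.zeros(8, 3, 5, 5)      # 8 filters, shape [out_ch, in_ch, KH, KW]
out = F.conv2d(x, w, padding=2)  # (32 + 2*2 - 5) / 1 + 1 = 32
print(out.shape)                 # torch.Size([1, 8, 32, 32])
```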
```
def three_layer_convnet(x, params):
"""
Performs the forward pass of a three-layer convolutional network with the
architecture defined above.
Inputs:
- x: A PyTorch Tensor of shape (N, 3, H, W) giving a minibatch of images
- params: A list of PyTorch Tensors giving the weights and biases for the
network; should contain the following:
- conv_w1: PyTorch Tensor of shape (channel_1, 3, KH1, KW1) giving weights
for the first convolutional layer
- conv_b1: PyTorch Tensor of shape (channel_1,) giving biases for the first
convolutional layer
- conv_w2: PyTorch Tensor of shape (channel_2, channel_1, KH2, KW2) giving
weights for the second convolutional layer
- conv_b2: PyTorch Tensor of shape (channel_2,) giving biases for the second
convolutional layer
- fc_w: PyTorch Tensor giving weights for the fully-connected layer. Can you
figure out what the shape should be?
- fc_b: PyTorch Tensor giving biases for the fully-connected layer. Can you
figure out what the shape should be?
Returns:
- scores: PyTorch Tensor of shape (N, C) giving classification scores for x
"""
conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b = params
scores = None
################################################################################
# TODO: Implement the forward pass for the three-layer ConvNet. #
################################################################################
scores = F.conv2d(x, conv_w1, conv_b1, padding=2)
scores = F.relu(scores)
scores = F.conv2d(scores, conv_w2, conv_b2, padding=1)
scores = F.relu(scores)
scores = flatten(scores).mm(fc_w) + fc_b
################################################################################
# END OF YOUR CODE #
################################################################################
return scores
```
After defining the forward pass of the ConvNet above, run the following cell to test your implementation.
When you run this function, `scores` should have shape `(64, 10)`.
```
def three_layer_convnet_test():
x = torch.zeros((64, 3, 32, 32), dtype=dtype) # minibatch size 64, image size [3, 32, 32]
conv_w1 = torch.zeros((6, 3, 5, 5), dtype=dtype) # [out_channel, in_channel, kernel_H, kernel_W]
conv_b1 = torch.zeros((6,), dtype=dtype) # out_channel
conv_w2 = torch.zeros((9, 6, 3, 3), dtype=dtype) # [out_channel, in_channel, kernel_H, kernel_W]
conv_b2 = torch.zeros((9,), dtype=dtype) # out_channel
# you must calculate the shape of the tensor after two conv layers, before the fully-connected layer
fc_w = torch.zeros((9 * 32 * 32, 10), dtype=dtype)
fc_b = torch.zeros(10, dtype=dtype)
scores = three_layer_convnet(x, [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b])
print(scores.size()) # you should see [64, 10]
three_layer_convnet_test()
```
### Barebones PyTorch: Initialization
Let's write a couple of utility functions to initialize the weight matrices for our models.
- `random_weight(shape)` initializes a weight tensor with the Kaiming normalization method.
- `zero_weight(shape)` initializes a weight tensor with all zeros. Useful for instantiating bias parameters.
The `random_weight` function uses the Kaiming normal initialization method, described in:
He et al, *Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification*, ICCV 2015, https://arxiv.org/abs/1502.01852
```
def random_weight(shape):
"""
Create random Tensors for weights; setting requires_grad=True means that we
want to compute gradients for these Tensors during the backward pass.
We use Kaiming normalization: sqrt(2 / fan_in)
"""
if len(shape) == 2: # FC weight
fan_in = shape[0]
else:
fan_in = np.prod(shape[1:]) # conv weight [out_channel, in_channel, kH, kW]
# randn is standard normal distribution generator.
w = torch.randn(shape, device=device, dtype=dtype) * np.sqrt(2. / fan_in)
w.requires_grad = True
return w
def zero_weight(shape):
return torch.zeros(shape, device=device, dtype=dtype, requires_grad=True)
# create a weight of shape [3 x 5]
# you should see the type `torch.cuda.FloatTensor` if you use GPU.
# Otherwise it should be `torch.FloatTensor`
random_weight((3, 5))
```
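To convince yourself the scaling is right, you can check the empirical standard deviation of a freshly initialized weight against the theoretical `sqrt(2 / fan_in)`. This is a standalone check: it builds the tensor directly rather than calling `random_weight`, so it doesn't need the notebook's `device` or `dtype` globals.

```
import numpy as np
import torch

shape = (4096, 100)                    # FC weight: fan_in = 4096
w = torch.randn(shape) * np.sqrt(2. / shape[0])
print(w.std().item())                  # close to sqrt(2/4096) ~ 0.0221
```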
### Barebones PyTorch: Check Accuracy
When training the model we will use the following function to check the accuracy of our model on the training or validation sets.
When checking accuracy we don't need to compute any gradients; as a result we don't need PyTorch to build a computational graph for us when we compute scores. To prevent a graph from being built we scope our computation under a `torch.no_grad()` context manager.
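The effect of the context manager is easy to see on a toy tensor: outputs computed under `torch.no_grad()` are detached from the graph even when the inputs require gradients.

```
import torch

w = torch.ones(3, requires_grad=True)
y = w * 2
print(y.requires_grad)       # True: this op was recorded in the graph
with torch.no_grad():
    z = w * 2
print(z.requires_grad)       # False: no graph was built for this op
```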
```
def check_accuracy_part2(loader, model_fn, params):
"""
Check the accuracy of a classification model.
Inputs:
- loader: A DataLoader for the data split we want to check
- model_fn: A function that performs the forward pass of the model,
with the signature scores = model_fn(x, params)
- params: List of PyTorch Tensors giving parameters of the model
Returns: Nothing, but prints the accuracy of the model
"""
split = 'val' if loader.dataset.train else 'test'
print('Checking accuracy on the %s set' % split)
num_correct, num_samples = 0, 0
with torch.no_grad():
for x, y in loader:
x = x.to(device=device, dtype=dtype) # move to device, e.g. GPU
y = y.to(device=device, dtype=torch.int64)
scores = model_fn(x, params)
_, preds = scores.max(1)
num_correct += (preds == y).sum()
num_samples += preds.size(0)
acc = float(num_correct) / num_samples
print('Got %d / %d correct (%.2f%%)' % (num_correct, num_samples, 100 * acc))
```
### Barebones PyTorch: Training Loop
We can now set up a basic training loop to train our network. We will train the model using stochastic gradient descent without momentum. We will use `torch.nn.functional.cross_entropy` to compute the loss; you can [read about it here](http://pytorch.org/docs/stable/nn.html#cross-entropy).
The training loop takes as input the neural network function, a list of initialized parameters (`[w1, w2]` in our example), and the learning rate.
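For intuition, `F.cross_entropy` on raw scores is equivalent to applying `log_softmax` and then the negative log-likelihood loss; a tiny standalone check (the score values here are arbitrary examples):

```
import torch
import torch.nn.functional as F

scores = torch.tensor([[2.0, 0.5, 0.1],
                       [0.2, 3.0, 0.3]])   # (N=2, C=3) raw scores
y = torch.tensor([0, 1])                   # correct class indices
loss = F.cross_entropy(scores, y)
manual = F.nll_loss(F.log_softmax(scores, dim=1), y)
print(torch.allclose(loss, manual))        # True
```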
```
def train_part2(model_fn, params, learning_rate):
"""
Train a model on CIFAR-10.
Inputs:
- model_fn: A Python function that performs the forward pass of the model.
It should have the signature scores = model_fn(x, params) where x is a
PyTorch Tensor of image data, params is a list of PyTorch Tensors giving
model weights, and scores is a PyTorch Tensor of shape (N, C) giving
scores for the elements in x.
- params: List of PyTorch Tensors giving weights for the model
- learning_rate: Python scalar giving the learning rate to use for SGD
Returns: Nothing
"""
for t, (x, y) in enumerate(loader_train):
# Move the data to the proper device (GPU or CPU)
x = x.to(device=device, dtype=dtype)
y = y.to(device=device, dtype=torch.long)
# Forward pass: compute scores and loss
scores = model_fn(x, params)
loss = F.cross_entropy(scores, y)
# Backward pass: PyTorch figures out which Tensors in the computational
# graph have requires_grad=True and uses backpropagation to compute the
# gradient of the loss with respect to these Tensors, and stores the
# gradients in the .grad attribute of each Tensor.
loss.backward()
# Update parameters. We don't want to backpropagate through the
# parameter updates, so we scope the updates under a torch.no_grad()
# context manager to prevent a computational graph from being built.
with torch.no_grad():
for w in params:
w -= learning_rate * w.grad
# Manually zero the gradients after running the backward pass
w.grad.zero_()
if t % print_every == 0:
print('Iteration %d, loss = %.4f' % (t, loss.item()))
check_accuracy_part2(loader_val, model_fn, params)
print()
print('After final iteration:')
check_accuracy_part2(loader_val, model_fn, params)
```
### Barebones PyTorch: Train a Two-Layer Network
Now we are ready to run the training loop. We need to explicitly allocate tensors for the fully connected weights, `w1` and `w2`.
Each minibatch of CIFAR has 64 examples, so the tensor shape is `[64, 3, 32, 32]`.
After flattening, the shape of `x` should be `[64, 3 * 32 * 32]`. This will be the size of the first dimension of `w1`.
The second dimension of `w1` is the hidden layer size, which will also be the first dimension of `w2`.
Finally, the output of the network is a 10-dimensional vector of unnormalized scores, one per class.
You don't need to tune any hyperparameters but you should see accuracies above 40% after training for one epoch.
```
hidden_layer_size = 4000
learning_rate = 1e-2
w1 = random_weight((3 * 32 * 32, hidden_layer_size))
w2 = random_weight((hidden_layer_size, 10))
train_part2(two_layer_fc, [w1, w2], learning_rate)
```
### Barebones PyTorch: Training a ConvNet
Below, you should use the functions defined above to train a three-layer convolutional network on CIFAR. The network should have the following architecture:
1. Convolutional layer (with bias) with 32 5x5 filters, with zero-padding of 2
2. ReLU
3. Convolutional layer (with bias) with 16 3x3 filters, with zero-padding of 1
4. ReLU
5. Fully-connected layer (with bias) to compute scores for 10 classes
You should initialize your weight matrices using the `random_weight` function defined above, and you should initialize your bias vectors using the `zero_weight` function above.
You don't need to tune any hyperparameters, but if everything works correctly you should achieve an accuracy above 42% after one epoch.
```
learning_rate = 3e-3
channel_1 = 32
channel_2 = 16
conv_w1 = None
conv_b1 = None
conv_w2 = None
conv_b2 = None
fc_w = None
fc_b = None
################################################################################
# TODO: Initialize the parameters of a three-layer ConvNet. #
################################################################################
conv_w1 = random_weight((32, 3, 5, 5))
conv_b1 = zero_weight((32,))
conv_w2 = random_weight((16, 32, 3, 3))
conv_b2 = zero_weight((16,))
fc_w = random_weight((16*32*32, 10))
fc_b = zero_weight((10,))
################################################################################
# END OF YOUR CODE #
################################################################################
params = [conv_w1, conv_b1, conv_w2, conv_b2, fc_w, fc_b]
train_part2(three_layer_convnet, params, learning_rate)
```
# Part III. PyTorch Module API
Barebones PyTorch requires that we track all the parameter tensors by hand. This is fine for small networks with a few tensors, but it would be extremely inconvenient and error-prone to track tens or hundreds of tensors in larger networks.
PyTorch provides the `nn.Module` API for you to define arbitrary network architectures while tracking every learnable parameter for you. In Part II, we implemented SGD ourselves. PyTorch also provides the `torch.optim` package, which implements all the common optimizers, such as RMSProp, Adagrad, and Adam. It even supports approximate second-order methods like L-BFGS! You can refer to the [doc](http://pytorch.org/docs/master/optim.html) for the exact specifications of each optimizer.
To use the Module API, follow the steps below:
1. Subclass `nn.Module`. Give your network class an intuitive name like `TwoLayerFC`.
2. In the constructor `__init__()`, define all the layers you need as class attributes. Layer objects like `nn.Linear` and `nn.Conv2d` are themselves `nn.Module` subclasses and contain learnable parameters, so you don't have to instantiate the raw tensors yourself. `nn.Module` will track these internal parameters for you. Refer to the [doc](http://pytorch.org/docs/master/nn.html) to learn more about the dozens of built-in layers. **Warning**: don't forget to call `super().__init__()` first!
3. In the `forward()` method, define the *connectivity* of your network. You should use the attributes defined in `__init__` as function calls that take a tensor as input and output the "transformed" tensor. Do *not* create any new layers with learnable parameters in `forward()`! All of them must be declared upfront in `__init__`.
After you define your Module subclass, you can instantiate it as an object and call it just like the NN forward function in part II.
### Module API: Two-Layer Network
Here is a concrete example of a 2-layer fully connected network:
```
class TwoLayerFC(nn.Module):
def __init__(self, input_size, hidden_size, num_classes):
super().__init__()
# assign layer objects to class attributes
self.fc1 = nn.Linear(input_size, hidden_size)
# nn.init package contains convenient initialization methods
# http://pytorch.org/docs/master/nn.html#torch-nn-init
nn.init.kaiming_normal_(self.fc1.weight)
self.fc2 = nn.Linear(hidden_size, num_classes)
nn.init.kaiming_normal_(self.fc2.weight)
def forward(self, x):
# forward always defines connectivity
x = flatten(x)
scores = self.fc2(F.relu(self.fc1(x)))
return scores
def test_TwoLayerFC():
input_size = 50
x = torch.zeros((64, input_size), dtype=dtype) # minibatch size 64, feature dimension 50
model = TwoLayerFC(input_size, 42, 10)
scores = model(x)
print(scores.size()) # you should see [64, 10]
test_TwoLayerFC()
```
### Module API: Three-Layer ConvNet
It's your turn to implement a three-layer ConvNet (two convolutional layers followed by a fully-connected layer). The network architecture should be the same as in Part II:
1. Convolutional layer with `channel_1` 5x5 filters with zero-padding of 2
2. ReLU
3. Convolutional layer with `channel_2` 3x3 filters with zero-padding of 1
4. ReLU
5. Fully-connected layer to `num_classes` classes
You should initialize the weight matrices of the model using the Kaiming normal initialization method.
**HINT**: http://pytorch.org/docs/stable/nn.html#conv2d
After you implement the three-layer ConvNet, the `test_ThreeLayerConvNet` function will run your implementation; it should print `torch.Size([64, 10])` for the shape of the output scores.
```
class ThreeLayerConvNet(nn.Module):
def __init__(self, in_channel, channel_1, channel_2, num_classes):
super().__init__()
########################################################################
# TODO: Set up the layers you need for a three-layer ConvNet with the #
# architecture defined above. #
########################################################################
self.conv1 = nn.Conv2d(in_channels=in_channel, out_channels=channel_1,
kernel_size=(5, 5), padding=2, bias=True)
nn.init.kaiming_normal_(self.conv1.weight)
nn.init.constant_(self.conv1.bias, 0)
self.conv2 = nn.Conv2d(in_channels=channel_1, out_channels=channel_2,
kernel_size=(3, 3), padding=1, bias=True)
nn.init.kaiming_normal_(self.conv2.weight)
nn.init.constant_(self.conv2.bias, 0)
self.fc = nn.Linear(in_features=channel_2 * 32 * 32,
out_features=num_classes, bias=True)
nn.init.kaiming_normal_(self.fc.weight)
nn.init.constant_(self.fc.bias, 0)
########################################################################
# END OF YOUR CODE #
########################################################################
def forward(self, x):
scores = None
########################################################################
# TODO: Implement the forward function for a 3-layer ConvNet. you #
# should use the layers you defined in __init__ and specify the #
# connectivity of those layers in forward() #
########################################################################
scores = F.relu(self.conv1(x))
scores = F.relu(self.conv2(scores))
scores = self.fc(flatten(scores))
########################################################################
# END OF YOUR CODE #
########################################################################
return scores
def test_ThreeLayerConvNet():
x = torch.zeros((64, 3, 32, 32), dtype=dtype) # minibatch size 64, image size [3, 32, 32]
model = ThreeLayerConvNet(in_channel=3, channel_1=12, channel_2=8, num_classes=10)
scores = model(x)
print(scores.size()) # you should see [64, 10]
test_ThreeLayerConvNet()
```
### Module API: Check Accuracy
Given the validation or test set, we can check the classification accuracy of a neural network.
This version is slightly different from the one in part II. You don't manually pass in the parameters anymore.
```
def check_accuracy_part34(loader, model):
if loader.dataset.train:
print('Checking accuracy on validation set')
else:
print('Checking accuracy on test set')
num_correct = 0
num_samples = 0
model.eval() # set model to evaluation mode
with torch.no_grad():
for x, y in loader:
x = x.to(device=device, dtype=dtype) # move to device, e.g. GPU
y = y.to(device=device, dtype=torch.long)
scores = model(x)
_, preds = scores.max(1)
num_correct += (preds == y).sum()
num_samples += preds.size(0)
acc = float(num_correct) / num_samples
print('Got %d / %d correct (%.2f%%)' % (num_correct, num_samples, 100 * acc))
```
### Module API: Training Loop
We also use a slightly different training loop. Rather than updating the values of the weights ourselves, we use an Optimizer object from the `torch.optim` package, which abstracts the notion of an optimization algorithm and provides implementations of most of the algorithms commonly used to optimize neural networks.
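The `zero_grad()` / `backward()` / `step()` cycle is easiest to see in isolation; here is a minimal sketch with a single scalar parameter (the numbers are illustrative, not from the assignment):

```
import torch
import torch.optim as optim

w = torch.tensor([2.0], requires_grad=True)
optimizer = optim.SGD([w], lr=0.1)

loss = (w ** 2).sum()     # d(loss)/dw = 2w = 4
optimizer.zero_grad()     # clear stale gradients from previous iterations
loss.backward()           # populate w.grad
optimizer.step()          # w <- w - lr * w.grad = 2 - 0.1 * 4
print(w.item())           # 1.6
```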
```
def train_part34(model, optimizer, epochs=1):
"""
Train a model on CIFAR-10 using the PyTorch Module API.
Inputs:
- model: A PyTorch Module giving the model to train.
- optimizer: An Optimizer object we will use to train the model
- epochs: (Optional) A Python integer giving the number of epochs to train for
Returns: Nothing, but prints model accuracies during training.
"""
model = model.to(device=device) # move the model parameters to CPU/GPU
for e in range(epochs):
#print(f'Epoch number {e}')
for t, (x, y) in enumerate(loader_train):
model.train() # put model to training mode
x = x.to(device=device, dtype=dtype) # move to device, e.g. GPU
y = y.to(device=device, dtype=torch.long)
scores = model(x)
loss = F.cross_entropy(scores, y)
# Zero out all of the gradients for the variables which the optimizer
# will update.
optimizer.zero_grad()
# This is the backwards pass: compute the gradient of the loss with
# respect to each parameter of the model.
loss.backward()
# Actually update the parameters of the model using the gradients
# computed by the backwards pass.
optimizer.step()
if t % print_every == 0:
print('Iteration %d, loss = %.4f' % (t, loss.item()))
check_accuracy_part34(loader_val, model)
print()
```
### Module API: Train a Two-Layer Network
Now we are ready to run the training loop. In contrast to part II, we don't explicitly allocate parameter tensors anymore.
Simply pass the input size, hidden layer size, and number of classes (i.e. output size) to the constructor of `TwoLayerFC`.
You also need to define an optimizer that tracks all the learnable parameters inside `TwoLayerFC`.
You don't need to tune any hyperparameters, but you should see model accuracies above 40% after training for one epoch.
```
hidden_layer_size = 4000
learning_rate = 1e-2
model = TwoLayerFC(3 * 32 * 32, hidden_layer_size, 10)
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
train_part34(model, optimizer)
```
### Module API: Train a Three-Layer ConvNet
You should now use the Module API to train a three-layer ConvNet on CIFAR. This should look very similar to training the two-layer network! You don't need to tune any hyperparameters, but you should achieve accuracies above 45% after training for one epoch.
You should train the model using stochastic gradient descent without momentum.
```
learning_rate = 3e-3
channel_1 = 32
channel_2 = 16
model = None
optimizer = None
################################################################################
# TODO: Instantiate your ThreeLayerConvNet model and a corresponding optimizer #
################################################################################
model = ThreeLayerConvNet(3, channel_1, channel_2, 10)
optimizer = optim.SGD(model.parameters(), lr=learning_rate)
################################################################################
# END OF YOUR CODE
################################################################################
train_part34(model, optimizer)
```
# Part IV. PyTorch Sequential API
Part III introduced the PyTorch Module API, which allows you to define arbitrary learnable layers and their connectivity.
For simple models like a stack of feed-forward layers, you still need to go through 3 steps: subclass `nn.Module`, assign layers to class attributes in `__init__`, and call each layer one by one in `forward()`. Is there a more convenient way?
Fortunately, PyTorch provides a container Module called `nn.Sequential`, which merges the above steps into one. It is not as flexible as `nn.Module`, because you cannot specify more complex topology than a feed-forward stack, but it's good enough for many use cases.
### Sequential API: Two-Layer Network
Let's see how to rewrite our two-layer fully connected network example with `nn.Sequential`, and train it using the training loop defined above.
Again, you don't need to tune any hyperparameters here, but you should achieve above 40% accuracy after one epoch of training.
```
# We need to wrap `flatten` function in a module in order to stack it
# in nn.Sequential
class Flatten(nn.Module):
def forward(self, x):
return flatten(x)
hidden_layer_size = 4000
learning_rate = 1e-2
model = nn.Sequential(
Flatten(),
nn.Linear(3 * 32 * 32, hidden_layer_size),
nn.ReLU(),
nn.Linear(hidden_layer_size, 10),
)
# you can use Nesterov momentum in optim.SGD
optimizer = optim.SGD(model.parameters(), lr=learning_rate,
momentum=0.9, nesterov=True)
train_part34(model, optimizer)
```
### Sequential API: Three-Layer ConvNet
Here you should use `nn.Sequential` to define and train a three-layer ConvNet with the same architecture we used in Part III:
1. Convolutional layer (with bias) with 32 5x5 filters, with zero-padding of 2
2. ReLU
3. Convolutional layer (with bias) with 16 3x3 filters, with zero-padding of 1
4. ReLU
5. Fully-connected layer (with bias) to compute scores for 10 classes
You should initialize your weight matrices using the `random_weight` function defined above, and you should initialize your bias vectors using the `zero_weight` function above.
You should optimize your model using stochastic gradient descent with Nesterov momentum 0.9.
Again, you don't need to tune any hyperparameters but you should see accuracy above 55% after one epoch of training.
```
channel_1 = 32
channel_2 = 16
learning_rate = 1e-2
model = None
optimizer = None
################################################################################
# TODO: Rewrite the three-layer ConvNet with bias from Part III with the      #
# Sequential API. #
################################################################################
model = nn.Sequential(
nn.Conv2d(3, channel_1, (5,5), padding=2),
nn.ReLU(),
nn.Conv2d(channel_1, channel_2, (3,3), padding=1),
nn.ReLU(),
Flatten(),
nn.Linear(channel_2*32*32, 10)
)
optimizer = optim.SGD(model.parameters(), lr=learning_rate,
momentum=0.9, nesterov=True)
################################################################################
# END OF YOUR CODE
################################################################################
train_part34(model, optimizer)
```
# Part V. CIFAR-10 open-ended challenge
In this section, you can experiment with whatever ConvNet architecture you'd like on CIFAR-10.
Now it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves **at least 70%** accuracy on the CIFAR-10 **validation** set within 10 epochs. You can use the `check_accuracy` and `train` functions from above. You can use either the `nn.Module` or `nn.Sequential` API.
Describe what you did at the end of this notebook.
Here is the official API documentation for each component. One note: what we call "spatial batch norm" in class is called `BatchNorm2d` in PyTorch.
* Layers in torch.nn package: http://pytorch.org/docs/stable/nn.html
* Activations: http://pytorch.org/docs/stable/nn.html#non-linear-activations
* Loss functions: http://pytorch.org/docs/stable/nn.html#loss-functions
* Optimizers: http://pytorch.org/docs/stable/optim.html
### Things you might try:
- **Filter size**: Above we used 5x5; would smaller filters be more efficient?
- **Number of filters**: Above we used 32 filters. Do more or fewer do better?
- **Pooling vs Strided Convolution**: Do you use max pooling, or just strided convolutions?
- **Batch normalization**: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
- **Network architecture**: The network above has two layers of trainable parameters. Can you do better with a deep network? Good architectures to try include:
- [conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
- [conv-relu-conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
- [batchnorm-relu-conv]xN -> [affine]xM -> [softmax or SVM]
- **Global Average Pooling**: Instead of flattening and then having multiple affine layers, perform convolutions until your feature map gets small (7x7 or so), then apply a global average pooling operation to get a `1 x 1` map of shape `(1, 1, Filter#)`, which is then reshaped into a `(Filter#)` vector. This is used in [Google's Inception Network](https://arxiv.org/abs/1512.00567) (see Table 1 for their architecture).
- **Regularization**: Add L2 weight regularization, or perhaps use dropout.
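To make the batch-norm and global-average-pooling suggestions concrete, here is one possible conv-batchnorm-relu block ending in global average pooling. The channel counts are arbitrary examples, not a recommended architecture; newer PyTorch versions provide `nn.Flatten`, while older versions can substitute the `Flatten` module defined earlier in this notebook.

```
import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1, bias=False),  # bias is redundant before BN
    nn.BatchNorm2d(64),            # spatial batch norm
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),       # global average pool to 1x1
    nn.Flatten(),                  # (N, 64, 1, 1) -> (N, 64)
    nn.Linear(64, 10),
)
scores = block(torch.zeros(2, 3, 32, 32))
print(scores.shape)                # torch.Size([2, 10])
```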
### Tips for training
For each network architecture that you try, you should tune the learning rate and other hyperparameters. When doing this there are a couple of important things to keep in mind:
- If the parameters are working well, you should see improvement within a few hundred iterations
- Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
- Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
- You should use the validation set for hyperparameter search, and save your test set for evaluating your architecture on the best parameters as selected by the validation set.
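A common way to implement the coarse-to-fine search is to sample hyperparameters log-uniformly, so every decade is equally likely; a standalone sketch (the ranges here are illustrative):

```
import numpy as np

rng = np.random.default_rng(0)
# Coarse stage: several decades of learning rates, a few iterations each
coarse_lrs = 10 ** rng.uniform(-5, -1, size=5)
# Fine stage: once e.g. ~1e-3 looks promising, narrow the exponent range around it
fine_lrs = 10 ** rng.uniform(-3.5, -2.5, size=5)
print(coarse_lrs)
print(fine_lrs)
```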
### Going above and beyond
If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are **not required** to implement any of these, but don't miss the fun if you have time!
- Alternative optimizers: you can try Adam, Adagrad, RMSprop, etc.
- Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.
- Model ensembles
- Data augmentation
- New Architectures
- [ResNets](https://arxiv.org/abs/1512.03385) where the input from the previous layer is added to the output.
- [DenseNets](https://arxiv.org/abs/1608.06993) where inputs into previous layers are concatenated together.
- [This blog has an in-depth overview](https://chatbotslife.com/resnets-highwaynets-and-densenets-oh-my-9bb15918ee32)
### Have fun and happy training!
```
def check_accuracy(loader, model):
num_correct = 0
num_samples = 0
model.eval() # set model to evaluation mode
with torch.no_grad():
for x, y in loader:
x = x.to(device=device, dtype=dtype) # move to device, e.g. GPU
y = y.to(device=device, dtype=torch.long)
scores = model(x)
_, preds = scores.max(1)
num_correct += (preds == y).sum()
num_samples += preds.size(0)
return float(num_correct) / num_samples
def train(model, optimizer, print_every=700, epochs=3, verbose=False):
model = model.to(device=device) # move the model parameters to CPU/GPU
best_val_acc = 0.7
model_hist = {}
for e in range(epochs):
if verbose or print_every==-1:
print(f'Epoch number {e+1}')
for t, (x, y) in enumerate(loader_train):
model.train() # put model to training mode
x = x.to(device=device, dtype=dtype) # move to device, e.g. GPU
y = y.to(device=device, dtype=torch.long)
scores = model(x)
loss = F.cross_entropy(scores, y)
# Zero out all of the gradients for the variables which the optimizer
# will update.
optimizer.zero_grad()
# This is the backwards pass: compute the gradient of the loss with
# respect to each parameter of the model.
loss.backward()
# Actually update the parameters of the model using the gradients
# computed by the backwards pass.
optimizer.step()
if (print_every == -1 and t==len(loader_train)-1) or (verbose and t % print_every == 0):
with torch.no_grad():
_, preds = scores.max(1)
tr_acc = float(torch.sum(preds==y)) / len(y)
val_acc = check_accuracy(loader_val, model)
print('Iteration %d, loss = %.4f' % (t, loss.item()))
print(f' training accuracy = {tr_acc:.4f}\nvalidation accuracy = {val_acc:.4f}')
print()
cur_val_acc = check_accuracy(loader_val, model)
if best_val_acc <= cur_val_acc:
best_val_acc = cur_val_acc
model_hist[cur_val_acc] = model
return model_hist
################################################################################
# TODO: #
# Experiment with any architectures, optimizers, and hyperparameters. #
# Achieve AT LEAST 70% accuracy on the *validation set* within 10 epochs. #
# #
# Note that you can use the check_accuracy function to evaluate on either #
# the test set or the validation set, by passing either loader_test or #
# loader_val as the second argument to check_accuracy. You should not touch #
# the test set until you have finished your architecture and hyperparameter #
# tuning, and only run the test set once at the end to report a final value. #
################################################################################
#lr = 1e-3
def init_weights(m):
if type(m)==nn.Linear:
nn.init.kaiming_normal_(m.weight)
print_every = 700
best_model, best_acc = None, -1
arange = [10**(-i) for i in range(10)]
# 7.5e-4
# 1.8e-3
for _ in range(10):
lr, reg = np.random.uniform(1e-3 * 0.9, 1e-3 * 1.5), np.random.uniform(1e-4, 10e-4)
#lr, reg = 1.5e-4, 1e-4
model = nn.Sequential(
nn.BatchNorm2d(3),
nn.ReLU(),
nn.Conv2d(3, 64, (11,11), padding=5, stride=2), # output size 64 x 16 x 16
nn.BatchNorm2d(64),
nn.ReLU(),
nn.Conv2d(64, 48, (7,7), padding=3), # output size 48 x 16 x 16
nn.MaxPool2d(2), # output size 48 x 8 x 8
nn.BatchNorm2d(48),
nn.ReLU(),
nn.Conv2d(48, 32, (5,5), padding=2), # output size 32 x 8 x 8
nn.BatchNorm2d(32),
nn.ReLU(),
nn.Conv2d(32, 16, (3,3), padding=1), # output size 16 x 8 x 8
nn.MaxPool2d(2), # output size 16 x 4 x 4
Flatten(),
nn.Linear(16*4*4, 16*2*2),
nn.ReLU(),
nn.Linear(16*2*2, 10)
)
model.apply(init_weights)
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=reg)
train(model, optimizer, epochs=10, verbose=True)
print(f'\nlearning rate {lr:e} weight decay {reg:e} ')
acc = check_accuracy(loader_val, model)
print(f'validation accuracy {acc:.4f}\n')
if best_acc < acc:
best_acc = acc
best_model = model
print('\ndone!')
################################################################################
# END OF YOUR CODE
################################################################################
# You should get at least 70% accuracy
#train_part34(model, optimizer, epochs=10)
```
## Describe what you did
In the cell below you should write an explanation of what you did, any additional features that you implemented, and/or any graphs that you made in the process of training and evaluating your network.
TODO: Describe what you did
```
model = nn.Sequential(
nn.BatchNorm2d(3),
nn.ReLU(),
    nn.Conv2d(3, 16, (11,11), padding=5), # output size 16 x 32 x 32
    nn.MaxPool2d(2), # output size 16 x 16 x 16
    nn.BatchNorm2d(16),
    nn.ReLU(),
    nn.Conv2d(16, 32, (7,7), padding=3), # output size 32 x 16 x 16
    nn.MaxPool2d(2), # output size 32 x 8 x 8
    nn.BatchNorm2d(32),
    nn.ReLU(),
    nn.Conv2d(32, 64, (5,5), padding=2), # output size 64 x 8 x 8
    nn.MaxPool2d(2), # output size 64 x 4 x 4
    nn.BatchNorm2d(64),
    nn.ReLU(),
    nn.Conv2d(64, 128, (3,3), padding=1), # output size 128 x 4 x 4
    nn.MaxPool2d(2), # output size 128 x 2 x 2
Flatten(),
nn.Linear(128*2*2, 64),
nn.ReLU(),
nn.Dropout(p=0.4),
nn.Linear(64, 128),
nn.ReLU(),
nn.Dropout(p=0.4),
nn.Linear(128, 256),
nn.ReLU(),
nn.Dropout(p=0.4),
nn.Linear(256, 10)
)
model.apply(init_weights)
optimizer = optim.Adam(model.parameters(), lr=1e-03, weight_decay=2e-03)
model_hist = train(model, optimizer, print_every=-1, epochs=60, verbose=False)
check_accuracy(loader_val, model)
class Ensemble(nn.Module):
    def __init__(self, models):
        super().__init__()  # nn.Module must be initialized before assigning attributes
        self.models = models
    def forward(self, x):
        # Average the class scores of all models in the ensemble.
        scores = torch.mean(torch.stack([m(x) for m in self.models.values()]), dim=0)
        return scores
print(f'Ensemble of {len(model_hist)} models with the following validation accuracies:')
print(*model_hist.keys(), sep='\n')
model = Ensemble(model_hist)
#check_accuracy(loader_val, model)
num_correct = 0
num_samples = 0
with torch.no_grad():
for x, y in loader_val:
x = x.to(device=device, dtype=dtype) # move to device, e.g. GPU
y = y.to(device=device, dtype=torch.long)
scores = model.forward(x)
_, preds = scores.max(1)
num_correct += (preds == y).sum()
num_samples += preds.size(0)
print(float(num_correct) / num_samples)
```
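The score-averaging idea behind the `Ensemble` class above can be sketched in plain NumPy. The logits below are made-up values for three hypothetical models scoring two samples over four classes:

```python
import numpy as np

# Hypothetical class scores (logits) from three models for a batch of 2 samples.
scores_per_model = np.array([
    [[2.0, 0.1, 0.1, 0.1], [0.2, 1.5, 0.1, 0.1]],  # model 1
    [[1.8, 0.2, 0.3, 0.1], [0.1, 0.3, 2.0, 0.1]],  # model 2
    [[2.2, 0.1, 0.2, 0.2], [0.1, 1.8, 0.2, 0.1]],  # model 3
])

# Average the scores across models (axis 0), as Ensemble.forward
# does with torch.stack + torch.mean, then predict the argmax class.
mean_scores = scores_per_model.mean(axis=0)
preds = mean_scores.argmax(axis=1)
print(preds)  # [0 1]
```

Even though model 2 votes for class 2 on the second sample, the averaged scores still favor class 1, which is the point of ensembling: individual mistakes get smoothed out.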
## Test set -- run this only once
Now that we've gotten a result we're happy with, we test our final model (which you should have stored in `best_model`) on the test set. Think about how this compares to your validation set accuracy.
```
check_accuracy(loader_test, model)
```
**Chapter 16 – Natural Language Processing with RNNs and Attention**
_This notebook contains all the sample code in chapter 16._
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml2/blob/master/16_nlp_with_rnns_and_attention.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
# Setup
First, let's import a few common modules, ensure Matplotlib plots figures inline, and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated, so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
```
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
!pip install -q -U tensorflow-addons
IS_COLAB = True
except Exception:
IS_COLAB = False
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
if not tf.config.list_physical_devices('GPU'):
print("No GPU was detected. LSTMs and CNNs can be very slow without a GPU.")
if IS_COLAB:
print("Go to Runtime > Change runtime and select a GPU hardware accelerator.")
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
tf.random.set_seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "nlp"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
```
# Char-RNN
## Splitting a sequence into batches of shuffled windows
For example, let's split the sequence 0 to 14 into windows of length 5, each shifted by 2 (e.g.,`[0, 1, 2, 3, 4]`, `[2, 3, 4, 5, 6]`, etc.), then shuffle them, and split them into inputs (the first 4 steps) and targets (the last 4 steps) (e.g., `[2, 3, 4, 5, 6]` would be split into `[[2, 3, 4, 5], [3, 4, 5, 6]]`), then create batches of 3 such input/target pairs:
```
np.random.seed(42)
tf.random.set_seed(42)
n_steps = 5
dataset = tf.data.Dataset.from_tensor_slices(tf.range(15))
dataset = dataset.window(n_steps, shift=2, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(n_steps))
dataset = dataset.shuffle(10).map(lambda window: (window[:-1], window[1:]))
dataset = dataset.batch(3).prefetch(1)
for index, (X_batch, Y_batch) in enumerate(dataset):
print("_" * 20, "Batch", index, "\nX_batch")
print(X_batch.numpy())
print("=" * 5, "\nY_batch")
print(Y_batch.numpy())
```
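The same windowing can be sketched in plain NumPy (shuffling and batching omitted), which makes the input/target split easy to inspect:

```python
import numpy as np

seq = np.arange(15)
n_steps = 5
# Windows of length 5, shifted by 2, dropping any incomplete remainder,
# mirroring dataset.window(n_steps, shift=2, drop_remainder=True).
windows = np.array([seq[i:i + n_steps]
                    for i in range(0, len(seq) - n_steps + 1, 2)])
# Inputs are the first 4 steps of each window, targets the last 4.
X, Y = windows[:, :-1], windows[:, 1:]
print(X[0], Y[0])  # [0 1 2 3] [1 2 3 4]
```

Each target is simply its input shifted one step ahead, which is exactly what the `map(lambda window: (window[:-1], window[1:]))` call produces.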
## Loading the Data and Preparing the Dataset
```
shakespeare_url = "https://raw.githubusercontent.com/karpathy/char-rnn/master/data/tinyshakespeare/input.txt"
filepath = keras.utils.get_file("shakespeare.txt", shakespeare_url)
with open(filepath) as f:
shakespeare_text = f.read()
print(shakespeare_text[:148])
"".join(sorted(set(shakespeare_text.lower())))
tokenizer = keras.preprocessing.text.Tokenizer(char_level=True)
tokenizer.fit_on_texts(shakespeare_text)
tokenizer.texts_to_sequences(["First"])
tokenizer.sequences_to_texts([[20, 6, 9, 8, 3]])
max_id = len(tokenizer.word_index) # number of distinct characters
dataset_size = tokenizer.document_count # total number of characters
[encoded] = np.array(tokenizer.texts_to_sequences([shakespeare_text])) - 1
train_size = dataset_size * 90 // 100
dataset = tf.data.Dataset.from_tensor_slices(encoded[:train_size])
n_steps = 100
window_length = n_steps + 1 # target = input shifted 1 character ahead
dataset = dataset.repeat().window(window_length, shift=1, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(window_length))
np.random.seed(42)
tf.random.set_seed(42)
batch_size = 32
dataset = dataset.shuffle(10000).batch(batch_size)
dataset = dataset.map(lambda windows: (windows[:, :-1], windows[:, 1:]))
dataset = dataset.map(
lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch))
dataset = dataset.prefetch(1)
for X_batch, Y_batch in dataset.take(1):
print(X_batch.shape, Y_batch.shape)
```
## Creating and Training the Model
```
model = keras.models.Sequential([
keras.layers.GRU(128, return_sequences=True, input_shape=[None, max_id],
dropout=0.2, recurrent_dropout=0.2),
keras.layers.GRU(128, return_sequences=True,
dropout=0.2, recurrent_dropout=0.2),
keras.layers.TimeDistributed(keras.layers.Dense(max_id,
activation="softmax"))
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
history = model.fit(dataset, steps_per_epoch=train_size // batch_size,
epochs=10)
```
## Using the Model to Generate Text
```
def preprocess(texts):
X = np.array(tokenizer.texts_to_sequences(texts)) - 1
return tf.one_hot(X, max_id)
X_new = preprocess(["How are yo"])
Y_pred = model.predict_classes(X_new)
tokenizer.sequences_to_texts(Y_pred + 1)[0][-1] # 1st sentence, last char
tf.random.set_seed(42)
tf.random.categorical([[np.log(0.5), np.log(0.4), np.log(0.1)]], num_samples=40).numpy()
def next_char(text, temperature=1):
X_new = preprocess([text])
y_proba = model.predict(X_new)[0, -1:, :]
rescaled_logits = tf.math.log(y_proba) / temperature
char_id = tf.random.categorical(rescaled_logits, num_samples=1) + 1
return tokenizer.sequences_to_texts(char_id.numpy())[0]
tf.random.set_seed(42)
next_char("How are yo", temperature=1)
def complete_text(text, n_chars=50, temperature=1):
for _ in range(n_chars):
text += next_char(text, temperature)
return text
tf.random.set_seed(42)
print(complete_text("t", temperature=0.2))
print(complete_text("t", temperature=1))
print(complete_text("t", temperature=2))
```
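To see why low temperatures produce conservative text and high temperatures produce diverse (and eventually random) text, here is a NumPy-only illustration of how dividing the log-probabilities by the temperature reshapes the sampling distribution. The probabilities are toy values, independent of the model above:

```python
import numpy as np

def rescale(probs, temperature):
    # Divide the log-probabilities by the temperature, then renormalize (softmax).
    logits = np.log(probs) / temperature
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

p = np.array([0.5, 0.4, 0.1])
print(rescale(p, 0.2))  # near one-hot: almost always picks the most likely char
print(rescale(p, 1.0))  # unchanged
print(rescale(p, 2.0))  # flatter: rarer characters get sampled more often
```

At temperature 1 the distribution is recovered unchanged; below 1 the mass concentrates on the most likely character, and above 1 it spreads out.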
## Stateful RNN
```
tf.random.set_seed(42)
dataset = tf.data.Dataset.from_tensor_slices(encoded[:train_size])
dataset = dataset.window(window_length, shift=n_steps, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(window_length))
dataset = dataset.repeat().batch(1)
dataset = dataset.map(lambda windows: (windows[:, :-1], windows[:, 1:]))
dataset = dataset.map(
lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch))
dataset = dataset.prefetch(1)
batch_size = 32
encoded_parts = np.array_split(encoded[:train_size], batch_size)
datasets = []
for encoded_part in encoded_parts:
dataset = tf.data.Dataset.from_tensor_slices(encoded_part)
dataset = dataset.window(window_length, shift=n_steps, drop_remainder=True)
dataset = dataset.flat_map(lambda window: window.batch(window_length))
datasets.append(dataset)
dataset = tf.data.Dataset.zip(tuple(datasets)).map(lambda *windows: tf.stack(windows))
dataset = dataset.repeat().map(lambda windows: (windows[:, :-1], windows[:, 1:]))
dataset = dataset.map(
lambda X_batch, Y_batch: (tf.one_hot(X_batch, depth=max_id), Y_batch))
dataset = dataset.prefetch(1)
model = keras.models.Sequential([
keras.layers.GRU(128, return_sequences=True, stateful=True,
dropout=0.2, recurrent_dropout=0.2,
batch_input_shape=[batch_size, None, max_id]),
keras.layers.GRU(128, return_sequences=True, stateful=True,
dropout=0.2, recurrent_dropout=0.2),
keras.layers.TimeDistributed(keras.layers.Dense(max_id,
activation="softmax"))
])
class ResetStatesCallback(keras.callbacks.Callback):
def on_epoch_begin(self, epoch, logs):
self.model.reset_states()
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
steps_per_epoch = train_size // batch_size // n_steps
history = model.fit(dataset, steps_per_epoch=steps_per_epoch, epochs=50,
callbacks=[ResetStatesCallback()])
```
To use the model with different batch sizes, we need to create a stateless copy. We can get rid of dropout since it is only used during training:
```
stateless_model = keras.models.Sequential([
keras.layers.GRU(128, return_sequences=True, input_shape=[None, max_id]),
keras.layers.GRU(128, return_sequences=True),
keras.layers.TimeDistributed(keras.layers.Dense(max_id,
activation="softmax"))
])
```
To set the weights, we first need to build the model (so the weights get created):
```
stateless_model.build(tf.TensorShape([None, None, max_id]))
stateless_model.set_weights(model.get_weights())
model = stateless_model
tf.random.set_seed(42)
print(complete_text("t"))
```
# Sentiment Analysis
```
tf.random.set_seed(42)
```
You can load the IMDB dataset easily:
```
(X_train, y_train), (X_test, y_test) = keras.datasets.imdb.load_data()
X_train[0][:10]
word_index = keras.datasets.imdb.get_word_index()
id_to_word = {id_ + 3: word for word, id_ in word_index.items()}
for id_, token in enumerate(("<pad>", "<sos>", "<unk>")):
id_to_word[id_] = token
" ".join([id_to_word[id_] for id_ in X_train[0][:10]])
import tensorflow_datasets as tfds
datasets, info = tfds.load("imdb_reviews", as_supervised=True, with_info=True)
datasets.keys()
train_size = info.splits["train"].num_examples
test_size = info.splits["test"].num_examples
train_size, test_size
for X_batch, y_batch in datasets["train"].batch(2).take(1):
for review, label in zip(X_batch.numpy(), y_batch.numpy()):
print("Review:", review.decode("utf-8")[:200], "...")
print("Label:", label, "= Positive" if label else "= Negative")
print()
def preprocess(X_batch, y_batch):
X_batch = tf.strings.substr(X_batch, 0, 300)
X_batch = tf.strings.regex_replace(X_batch, rb"<br\s*/?>", b" ")
X_batch = tf.strings.regex_replace(X_batch, b"[^a-zA-Z']", b" ")
X_batch = tf.strings.split(X_batch)
return X_batch.to_tensor(default_value=b"<pad>"), y_batch
preprocess(X_batch, y_batch)
from collections import Counter
vocabulary = Counter()
for X_batch, y_batch in datasets["train"].batch(32).map(preprocess):
for review in X_batch:
vocabulary.update(list(review.numpy()))
vocabulary.most_common()[:3]
len(vocabulary)
vocab_size = 10000
truncated_vocabulary = [
word for word, count in vocabulary.most_common()[:vocab_size]]
word_to_id = {word: index for index, word in enumerate(truncated_vocabulary)}
for word in b"This movie was faaaaaantastic".split():
print(word_to_id.get(word) or vocab_size)
words = tf.constant(truncated_vocabulary)
word_ids = tf.range(len(truncated_vocabulary), dtype=tf.int64)
vocab_init = tf.lookup.KeyValueTensorInitializer(words, word_ids)
num_oov_buckets = 1000
table = tf.lookup.StaticVocabularyTable(vocab_init, num_oov_buckets)
table.lookup(tf.constant([b"This movie was faaaaaantastic".split()]))
def encode_words(X_batch, y_batch):
return table.lookup(X_batch), y_batch
train_set = datasets["train"].repeat().batch(32).map(preprocess)
train_set = train_set.map(encode_words).prefetch(1)
for X_batch, y_batch in train_set.take(1):
print(X_batch)
print(y_batch)
embed_size = 128
model = keras.models.Sequential([
keras.layers.Embedding(vocab_size + num_oov_buckets, embed_size,
mask_zero=True, # not shown in the book
input_shape=[None]),
keras.layers.GRU(128, return_sequences=True),
keras.layers.GRU(128),
keras.layers.Dense(1, activation="sigmoid")
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
history = model.fit(train_set, steps_per_epoch=train_size // 32, epochs=5)
```
Or using manual masking:
```
K = keras.backend
embed_size = 128
inputs = keras.layers.Input(shape=[None])
mask = keras.layers.Lambda(lambda inputs: K.not_equal(inputs, 0))(inputs)
z = keras.layers.Embedding(vocab_size + num_oov_buckets, embed_size)(inputs)
z = keras.layers.GRU(128, return_sequences=True)(z, mask=mask)
z = keras.layers.GRU(128)(z, mask=mask)
outputs = keras.layers.Dense(1, activation="sigmoid")(z)
model = keras.models.Model(inputs=[inputs], outputs=[outputs])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
history = model.fit(train_set, steps_per_epoch=train_size // 32, epochs=5)
```
## Reusing Pretrained Embeddings
```
tf.random.set_seed(42)
TFHUB_CACHE_DIR = os.path.join(os.curdir, "my_tfhub_cache")
os.environ["TFHUB_CACHE_DIR"] = TFHUB_CACHE_DIR
import tensorflow_hub as hub
model = keras.Sequential([
hub.KerasLayer("https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1",
dtype=tf.string, input_shape=[], output_shape=[50]),
keras.layers.Dense(128, activation="relu"),
keras.layers.Dense(1, activation="sigmoid")
])
model.compile(loss="binary_crossentropy", optimizer="adam",
metrics=["accuracy"])
for dirpath, dirnames, filenames in os.walk(TFHUB_CACHE_DIR):
for filename in filenames:
print(os.path.join(dirpath, filename))
import tensorflow_datasets as tfds
datasets, info = tfds.load("imdb_reviews", as_supervised=True, with_info=True)
train_size = info.splits["train"].num_examples
batch_size = 32
train_set = datasets["train"].repeat().batch(batch_size).prefetch(1)
history = model.fit(train_set, steps_per_epoch=train_size // batch_size, epochs=5)
```
## Automatic Translation
```
tf.random.set_seed(42)
vocab_size = 100
embed_size = 10
import tensorflow_addons as tfa
encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32)
embeddings = keras.layers.Embedding(vocab_size, embed_size)
encoder_embeddings = embeddings(encoder_inputs)
decoder_embeddings = embeddings(decoder_inputs)
encoder = keras.layers.LSTM(512, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_embeddings)
encoder_state = [state_h, state_c]
sampler = tfa.seq2seq.sampler.TrainingSampler()
decoder_cell = keras.layers.LSTMCell(512)
output_layer = keras.layers.Dense(vocab_size)
decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell, sampler,
output_layer=output_layer)
final_outputs, final_state, final_sequence_lengths = decoder(
decoder_embeddings, initial_state=encoder_state,
sequence_length=sequence_lengths)
Y_proba = tf.nn.softmax(final_outputs.rnn_output)
model = keras.models.Model(
inputs=[encoder_inputs, decoder_inputs, sequence_lengths],
outputs=[Y_proba])
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
X = np.random.randint(100, size=10*1000).reshape(1000, 10)
Y = np.random.randint(100, size=15*1000).reshape(1000, 15)
X_decoder = np.c_[np.zeros((1000, 1)), Y[:, :-1]]
seq_lengths = np.full([1000], 15)
history = model.fit([X, X_decoder, seq_lengths], Y, epochs=2)
```
### Bidirectional Recurrent Layers
```
model = keras.models.Sequential([
keras.layers.GRU(10, return_sequences=True, input_shape=[None, 10]),
keras.layers.Bidirectional(keras.layers.GRU(10, return_sequences=True))
])
model.summary()
```
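The `Bidirectional` wrapper runs one copy of the layer left-to-right and a second copy right-to-left, then concatenates their outputs at each time step, which is why the last dimension doubles in the summary above (10 + 10 = 20). A minimal NumPy sketch of that concatenation, using a toy tanh recurrence rather than the actual GRU equations:

```python
import numpy as np

def simple_rnn(x, Wx, Wh):
    # Toy tanh recurrence returning the full output sequence.
    h = np.zeros(Wh.shape[0])
    outputs = []
    for step in x:
        h = np.tanh(step @ Wx + h @ Wh)
        outputs.append(h)
    return np.array(outputs)

rng = np.random.default_rng(0)
x = rng.normal(size=(7, 10))           # 7 time steps, 10 features
Wx_f, Wh_f = rng.normal(size=(10, 10)), rng.normal(size=(10, 10))
Wx_b, Wh_b = rng.normal(size=(10, 10)), rng.normal(size=(10, 10))

forward = simple_rnn(x, Wx_f, Wh_f)
backward = simple_rnn(x[::-1], Wx_b, Wh_b)[::-1]  # process reversed, then re-align
out = np.concatenate([forward, backward], axis=-1)
print(out.shape)  # (7, 20): the feature dimension doubles
```

Note that the backward pass is re-reversed before concatenation so that both halves of each output vector describe the same time step, matching Keras's default merge behavior.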
### Positional Encoding
```
class PositionalEncoding(keras.layers.Layer):
def __init__(self, max_steps, max_dims, dtype=tf.float32, **kwargs):
super().__init__(dtype=dtype, **kwargs)
if max_dims % 2 == 1: max_dims += 1 # max_dims must be even
p, i = np.meshgrid(np.arange(max_steps), np.arange(max_dims // 2))
pos_emb = np.empty((1, max_steps, max_dims))
pos_emb[0, :, ::2] = np.sin(p / 10000**(2 * i / max_dims)).T
pos_emb[0, :, 1::2] = np.cos(p / 10000**(2 * i / max_dims)).T
self.positional_embedding = tf.constant(pos_emb.astype(self.dtype))
def call(self, inputs):
shape = tf.shape(inputs)
return inputs + self.positional_embedding[:, :shape[-2], :shape[-1]]
max_steps = 201
max_dims = 512
pos_emb = PositionalEncoding(max_steps, max_dims)
PE = pos_emb(np.zeros((1, max_steps, max_dims), np.float32))[0].numpy()
i1, i2, crop_i = 100, 101, 150
p1, p2, p3 = 22, 60, 35
fig, (ax1, ax2) = plt.subplots(nrows=2, ncols=1, sharex=True, figsize=(9, 5))
ax1.plot([p1, p1], [-1, 1], "k--", label="$p = {}$".format(p1))
ax1.plot([p2, p2], [-1, 1], "k--", label="$p = {}$".format(p2), alpha=0.5)
ax1.plot(p3, PE[p3, i1], "bx", label="$p = {}$".format(p3))
ax1.plot(PE[:,i1], "b-", label="$i = {}$".format(i1))
ax1.plot(PE[:,i2], "r-", label="$i = {}$".format(i2))
ax1.plot([p1, p2], [PE[p1, i1], PE[p2, i1]], "bo")
ax1.plot([p1, p2], [PE[p1, i2], PE[p2, i2]], "ro")
ax1.legend(loc="center right", fontsize=14, framealpha=0.95)
ax1.set_ylabel("$P_{(p,i)}$", rotation=0, fontsize=16)
ax1.grid(True, alpha=0.3)
ax1.hlines(0, 0, max_steps - 1, color="k", linewidth=1, alpha=0.3)
ax1.axis([0, max_steps - 1, -1, 1])
ax2.imshow(PE.T[:crop_i], cmap="gray", interpolation="bilinear", aspect="auto")
ax2.hlines(i1, 0, max_steps - 1, color="b")
cheat = 2 # need to raise the red line a bit, or else it hides the blue one
ax2.hlines(i2+cheat, 0, max_steps - 1, color="r")
ax2.plot([p1, p1], [0, crop_i], "k--")
ax2.plot([p2, p2], [0, crop_i], "k--", alpha=0.5)
ax2.plot([p1, p2], [i2+cheat, i2+cheat], "ro")
ax2.plot([p1, p2], [i1, i1], "bo")
ax2.axis([0, max_steps - 1, 0, crop_i])
ax2.set_xlabel("$p$", fontsize=16)
ax2.set_ylabel("$i$", rotation=0, fontsize=16)
plt.savefig("positional_embedding_plot")
plt.show()
embed_size = 512; max_steps = 500; vocab_size = 10000
encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
embeddings = keras.layers.Embedding(vocab_size, embed_size)
encoder_embeddings = embeddings(encoder_inputs)
decoder_embeddings = embeddings(decoder_inputs)
positional_encoding = PositionalEncoding(max_steps, max_dims=embed_size)
encoder_in = positional_encoding(encoder_embeddings)
decoder_in = positional_encoding(decoder_embeddings)
```
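The encoding defined above can be checked directly against its closed-form definition: entry $(p, 2i)$ is $\sin(p / 10000^{2i/d})$ and entry $(p, 2i+1)$ is the matching cosine. A NumPy re-derivation, independent of the class above:

```python
import numpy as np

max_steps, max_dims = 201, 512
p, i = np.meshgrid(np.arange(max_steps), np.arange(max_dims // 2))
pos_emb = np.empty((max_steps, max_dims))
# Even feature indices get sines, odd indices get cosines.
pos_emb[:, ::2] = np.sin(p / 10000**(2 * i / max_dims)).T
pos_emb[:, 1::2] = np.cos(p / 10000**(2 * i / max_dims)).T

# Spot-check one entry against the closed-form expression:
# column 10 is the i=5 sine, so the exponent is 2*5/max_dims.
print(np.isclose(pos_emb[22, 10], np.sin(22 / 10000**(10 / max_dims))))  # True
```

Because each feature pair oscillates at a different frequency, every position `p` gets a distinct fingerprint, which is what lets the attention layers below recover word order.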
Here is a (very) simplified Transformer (the actual architecture has skip connections, layer norm, dense nets, and most importantly it uses Multi-Head Attention instead of regular Attention):
```
Z = encoder_in
for N in range(6):
Z = keras.layers.Attention(use_scale=True)([Z, Z])
encoder_outputs = Z
Z = decoder_in
for N in range(6):
Z = keras.layers.Attention(use_scale=True, causal=True)([Z, Z])
Z = keras.layers.Attention(use_scale=True)([Z, encoder_outputs])
outputs = keras.layers.TimeDistributed(
keras.layers.Dense(vocab_size, activation="softmax"))(Z)
```
Here's a basic implementation of the `MultiHeadAttention` layer. One will likely be added to `keras.layers` in the near future. Note that a `Conv1D` layer with `kernel_size=1` (and the default `padding="valid"` and `strides=1`) is equivalent to a `TimeDistributed(Dense(...))` layer.
```
K = keras.backend
class MultiHeadAttention(keras.layers.Layer):
def __init__(self, n_heads, causal=False, use_scale=False, **kwargs):
self.n_heads = n_heads
self.causal = causal
self.use_scale = use_scale
super().__init__(**kwargs)
def build(self, batch_input_shape):
self.dims = batch_input_shape[0][-1]
self.q_dims, self.v_dims, self.k_dims = [self.dims // self.n_heads] * 3 # could be hyperparameters instead
self.q_linear = keras.layers.Conv1D(self.n_heads * self.q_dims, kernel_size=1, use_bias=False)
self.v_linear = keras.layers.Conv1D(self.n_heads * self.v_dims, kernel_size=1, use_bias=False)
self.k_linear = keras.layers.Conv1D(self.n_heads * self.k_dims, kernel_size=1, use_bias=False)
self.attention = keras.layers.Attention(causal=self.causal, use_scale=self.use_scale)
self.out_linear = keras.layers.Conv1D(self.dims, kernel_size=1, use_bias=False)
super().build(batch_input_shape)
def _multi_head_linear(self, inputs, linear):
shape = K.concatenate([K.shape(inputs)[:-1], [self.n_heads, -1]])
projected = K.reshape(linear(inputs), shape)
perm = K.permute_dimensions(projected, [0, 2, 1, 3])
return K.reshape(perm, [shape[0] * self.n_heads, shape[1], -1])
def call(self, inputs):
q = inputs[0]
v = inputs[1]
k = inputs[2] if len(inputs) > 2 else v
shape = K.shape(q)
q_proj = self._multi_head_linear(q, self.q_linear)
v_proj = self._multi_head_linear(v, self.v_linear)
k_proj = self._multi_head_linear(k, self.k_linear)
multi_attended = self.attention([q_proj, v_proj, k_proj])
shape_attended = K.shape(multi_attended)
reshaped_attended = K.reshape(multi_attended, [shape[0], self.n_heads, shape_attended[1], shape_attended[2]])
perm = K.permute_dimensions(reshaped_attended, [0, 2, 1, 3])
concat = K.reshape(perm, [shape[0], shape_attended[1], -1])
return self.out_linear(concat)
Q = np.random.rand(2, 50, 512)
V = np.random.rand(2, 80, 512)
multi_attn = MultiHeadAttention(8)
multi_attn([Q, V]).shape
```
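The equivalence claimed above between a `Conv1D` layer with `kernel_size=1` and a `TimeDistributed(Dense)` layer comes down to both applying the same matrix multiplication independently at every time step. A NumPy sketch of the claim, with random weights standing in for the trained layers:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=(2, 50, 512))      # (batch, time, features)
W = rng.normal(size=(512, 64))         # shared projection weights, no bias

# "Conv1D with kernel_size=1": a length-1 kernel slid over the time axis
# reduces to one matrix product per time step.
conv_out = np.einsum('bti,io->bto', x, W)

# "TimeDistributed(Dense)": apply the same dense layer at each time step.
dense_out = np.stack([x[:, t, :] @ W for t in range(x.shape[1])], axis=1)

print(np.allclose(conv_out, dense_out))  # True
```

This is why the layer above can use `Conv1D(..., kernel_size=1)` for its Q, K, and V projections: it is just a per-time-step linear map written as a convolution.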
# Exercise solutions
## 1. to 7.
See Appendix A.
## 8.
_Exercise:_ Embedded Reber grammars _were used by Hochreiter and Schmidhuber in [their paper](https://homl.info/93) about LSTMs. They are artificial grammars that produce strings such as "BPBTSXXVPSEPE." Check out Jenny Orr's [nice introduction](https://homl.info/108) to this topic. Choose a particular embedded Reber grammar (such as the one represented on Jenny Orr's page), then train an RNN to identify whether a string respects that grammar or not. You will first need to write a function capable of generating a training batch containing about 50% strings that respect the grammar, and 50% that don't._
First we need to build a function that generates strings based on a grammar. The grammar will be represented as a list of possible transitions for each state. A transition specifies the string to output (or a grammar to generate it) and the next state.
```
default_reber_grammar = [
[("B", 1)], # (state 0) =B=>(state 1)
[("T", 2), ("P", 3)], # (state 1) =T=>(state 2) or =P=>(state 3)
[("S", 2), ("X", 4)], # (state 2) =S=>(state 2) or =X=>(state 4)
[("T", 3), ("V", 5)], # and so on...
[("X", 3), ("S", 6)],
[("P", 4), ("V", 6)],
[("E", None)]] # (state 6) =E=>(terminal state)
embedded_reber_grammar = [
[("B", 1)],
[("T", 2), ("P", 3)],
[(default_reber_grammar, 4)],
[(default_reber_grammar, 5)],
[("T", 6)],
[("P", 6)],
[("E", None)]]
def generate_string(grammar):
state = 0
output = []
while state is not None:
index = np.random.randint(len(grammar[state]))
production, state = grammar[state][index]
if isinstance(production, list):
production = generate_string(grammar=production)
output.append(production)
return "".join(output)
```
Let's generate a few strings based on the default Reber grammar:
```
np.random.seed(42)
for _ in range(25):
print(generate_string(default_reber_grammar), end=" ")
```
Looks good. Now let's generate a few strings based on the embedded Reber grammar:
```
np.random.seed(42)
for _ in range(25):
print(generate_string(embedded_reber_grammar), end=" ")
```
Okay, now we need a function to generate strings that do not respect the grammar. We could generate a random string, but the task would be a bit too easy, so instead we will generate a string that respects the grammar, and we will corrupt it by changing just one character:
```
POSSIBLE_CHARS = "BEPSTVX"
def generate_corrupted_string(grammar, chars=POSSIBLE_CHARS):
good_string = generate_string(grammar)
index = np.random.randint(len(good_string))
good_char = good_string[index]
bad_char = np.random.choice(sorted(set(chars) - set(good_char)))
return good_string[:index] + bad_char + good_string[index + 1:]
```
Let's look at a few corrupted strings:
```
np.random.seed(42)
for _ in range(25):
print(generate_corrupted_string(embedded_reber_grammar), end=" ")
```
We cannot feed strings directly to an RNN, so we need to encode them somehow. One option would be to one-hot encode each character. Another option is to use embeddings. Let's go for the second option (but since there are just a handful of characters, one-hot encoding would probably be a good option as well). For embeddings to work, we need to convert each string into a sequence of character IDs. Let's write a function for that, using each character's index in the string of possible characters "BEPSTVX":
```
def string_to_ids(s, chars=POSSIBLE_CHARS):
    return [chars.index(c) for c in s]
string_to_ids("BTTTXXVVETE")
```
We can now generate the dataset, with 50% good strings, and 50% bad strings:
```
def generate_dataset(size):
good_strings = [string_to_ids(generate_string(embedded_reber_grammar))
for _ in range(size // 2)]
bad_strings = [string_to_ids(generate_corrupted_string(embedded_reber_grammar))
for _ in range(size - size // 2)]
all_strings = good_strings + bad_strings
X = tf.ragged.constant(all_strings, ragged_rank=1)
y = np.array([[1.] for _ in range(len(good_strings))] +
[[0.] for _ in range(len(bad_strings))])
return X, y
np.random.seed(42)
X_train, y_train = generate_dataset(10000)
X_valid, y_valid = generate_dataset(2000)
```
Let's take a look at the first training sequence:
```
X_train[0]
```
What class does it belong to?
```
y_train[0]
```
Perfect! We are ready to create the RNN to identify good strings. We build a simple sequence binary classifier:
```
np.random.seed(42)
tf.random.set_seed(42)
embedding_size = 5
model = keras.models.Sequential([
keras.layers.InputLayer(input_shape=[None], dtype=tf.int32, ragged=True),
keras.layers.Embedding(input_dim=len(POSSIBLE_CHARS), output_dim=embedding_size),
keras.layers.GRU(30),
keras.layers.Dense(1, activation="sigmoid")
])
optimizer = keras.optimizers.SGD(lr=0.02, momentum = 0.95, nesterov=True)
model.compile(loss="binary_crossentropy", optimizer=optimizer, metrics=["accuracy"])
history = model.fit(X_train, y_train, epochs=20, validation_data=(X_valid, y_valid))
```
Now let's test our RNN on two tricky strings: the first one is bad while the second one is good. They differ only in the second-to-last character. If the RNN gets this right, it shows that it managed to notice the pattern that the second letter should always be equal to the second-to-last letter. That requires a fairly long short-term memory (which is the reason why we used a GRU cell).
```
test_strings = ["BPBTSSSSSSSXXTTVPXVPXTTTTTVVETE",
"BPBTSSSSSSSXXTTVPXVPXTTTTTVVEPE"]
X_test = tf.ragged.constant([string_to_ids(s) for s in test_strings], ragged_rank=1)
y_proba = model.predict(X_test)
print()
print("Estimated probability that these are Reber strings:")
for index, string in enumerate(test_strings):
print("{}: {:.2f}%".format(string, 100 * y_proba[index][0]))
```
Ta-da! It worked fine. The RNN found the correct answers with very high confidence. :)
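As a model-free sanity check, plain Python confirms that the two test strings differ in exactly one position (the second-to-last character), and that only the second string satisfies the embedded-grammar constraint that the second letter must match the second-to-last one:

```python
bad, good = ("BPBTSSSSSSSXXTTVPXVPXTTTTTVVETE",
             "BPBTSSSSSSSXXTTVPXVPXTTTTTVVEPE")

# The strings differ at exactly one position: the second-to-last character.
diffs = [i for i, (a, b) in enumerate(zip(bad, good)) if a != b]
print(diffs)  # [29], the index of the second-to-last character

# Embedded Reber constraint: second letter == second-to-last letter.
print(bad[1] == bad[-2], good[1] == good[-2])  # False True
```

So any classifier that gets this pair right really has carried the second character across the whole sequence, which is exactly the long-range dependency the exercise is probing.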
## 9.
_Exercise: Train an Encoder–Decoder model that can convert a date string from one format to another (e.g., from "April 22, 2019" to "2019-04-22")._
Let's start by creating the dataset. We will use random days between 1000-01-01 and 9999-12-31:
```
from datetime import date
# cannot use strftime()'s %B format since it depends on the locale
MONTHS = ["January", "February", "March", "April", "May", "June",
"July", "August", "September", "October", "November", "December"]
def random_dates(n_dates):
min_date = date(1000, 1, 1).toordinal()
max_date = date(9999, 12, 31).toordinal()
ordinals = np.random.randint(max_date - min_date, size=n_dates) + min_date
dates = [date.fromordinal(ordinal) for ordinal in ordinals]
x = [MONTHS[dt.month - 1] + " " + dt.strftime("%d, %Y") for dt in dates]
y = [dt.isoformat() for dt in dates]
return x, y
```
Here are a few random dates, displayed in both the input format and the target format:
```
np.random.seed(42)
n_dates = 3
x_example, y_example = random_dates(n_dates)
print("{:25s}{:25s}".format("Input", "Target"))
print("-" * 50)
for idx in range(n_dates):
print("{:25s}{:25s}".format(x_example[idx], y_example[idx]))
```
Let's get the list of all possible characters in the inputs:
```
INPUT_CHARS = "".join(sorted(set("".join(MONTHS)))) + "0123456789, "
INPUT_CHARS
```
And here's the list of possible characters in the outputs:
```
OUTPUT_CHARS = "0123456789-"
```
Let's write a function to convert a string to a list of character IDs, as we did in the previous exercise:
```
def date_str_to_ids(date_str, chars=INPUT_CHARS):
return [chars.index(c) for c in date_str]
date_str_to_ids(x_example[0], INPUT_CHARS)
date_str_to_ids(y_example[0], OUTPUT_CHARS)
def prepare_date_strs(date_strs, chars=INPUT_CHARS):
X_ids = [date_str_to_ids(dt, chars) for dt in date_strs]
X = tf.ragged.constant(X_ids, ragged_rank=1)
return (X + 1).to_tensor() # using 0 as the padding token ID
def create_dataset(n_dates):
x, y = random_dates(n_dates)
return prepare_date_strs(x, INPUT_CHARS), prepare_date_strs(y, OUTPUT_CHARS)
np.random.seed(42)
X_train, Y_train = create_dataset(10000)
X_valid, Y_valid = create_dataset(2000)
X_test, Y_test = create_dataset(2000)
Y_train[0]
```
### First version: a very basic seq2seq model
Let's first try the simplest possible model: we feed in the input sequence, which first goes through the encoder (an embedding layer followed by a single LSTM layer), which outputs a vector, then it goes through a decoder (a single LSTM layer, followed by a dense output layer), which outputs a sequence of vectors, each representing the estimated probabilities for all possible output characters.
Since the decoder expects a sequence as input, we repeat the vector (which is output by the encoder) as many times as the longest possible output sequence.
```
embedding_size = 32
max_output_length = Y_train.shape[1]
np.random.seed(42)
tf.random.set_seed(42)
encoder = keras.models.Sequential([
keras.layers.Embedding(input_dim=len(INPUT_CHARS) + 1,
output_dim=embedding_size,
input_shape=[None]),
keras.layers.LSTM(128)
])
decoder = keras.models.Sequential([
keras.layers.LSTM(128, return_sequences=True),
keras.layers.Dense(len(OUTPUT_CHARS) + 1, activation="softmax")
])
model = keras.models.Sequential([
encoder,
keras.layers.RepeatVector(max_output_length),
decoder
])
optimizer = keras.optimizers.Nadam()
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit(X_train, Y_train, epochs=20,
validation_data=(X_valid, Y_valid))
```
Looks great, we reach 100% validation accuracy! Let's use the model to make some predictions. We will need to be able to convert a sequence of character IDs to a readable string:
```
def ids_to_date_strs(ids, chars=OUTPUT_CHARS):
return ["".join([("?" + chars)[index] for index in sequence])
for sequence in ids]
```
Now we can use the model to convert some dates:
```
X_new = prepare_date_strs(["September 17, 2009", "July 14, 1789"])
ids = model.predict_classes(X_new)
for date_str in ids_to_date_strs(ids):
print(date_str)
```
Perfect! :)
However, since the model was only trained on input strings of length 18 (which is the length of the longest date), it does not perform well if we try to use it to make predictions on shorter sequences:
```
X_new = prepare_date_strs(["May 02, 2020", "July 14, 1789"])
ids = model.predict_classes(X_new)
for date_str in ids_to_date_strs(ids):
print(date_str)
```
Oops! We need to ensure that we always pass sequences of the same length as during training, using padding if necessary. Let's write a little helper function for that:
```
max_input_length = X_train.shape[1]
def prepare_date_strs_padded(date_strs):
X = prepare_date_strs(date_strs)
if X.shape[1] < max_input_length:
X = tf.pad(X, [[0, 0], [0, max_input_length - X.shape[1]]])
return X
def convert_date_strs(date_strs):
X = prepare_date_strs_padded(date_strs)
ids = model.predict_classes(X)
return ids_to_date_strs(ids)
convert_date_strs(["May 02, 2020", "July 14, 1789"])
```
Cool! Granted, there are certainly much easier ways to write a date conversion tool (e.g., using regular expressions or even basic string manipulation), but you have to admit that using neural networks is way cooler. ;-)
However, real-life sequence-to-sequence problems will usually be harder, so for the sake of completeness, let's build a more powerful model.
### Second version: feeding the shifted targets to the decoder (teacher forcing)
Instead of feeding the decoder a simple repetition of the encoder's output vector, we can feed it the target sequence, shifted by one time step to the right. This way, at each time step the decoder will know what the previous target character was. This should help it tackle more complex sequence-to-sequence problems.
Since the first output character of each target sequence has no previous character, we will need a new token to represent the start-of-sequence (sos).
During inference, we won't know the target, so what will we feed the decoder? We can just predict one character at a time, starting with an sos token, then feeding the decoder all the characters that were predicted so far (we will look at this in more detail later in this notebook).
But if the decoder's LSTM expects to get the previous target as input at each step, how shall we pass it the vector output by the encoder? Well, one option is to ignore the output vector, and instead use the encoder's LSTM state as the initial state of the decoder's LSTM (which requires that the encoder's LSTM have the same number of units as the decoder's LSTM).
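The inference loop just described can be sketched in plain Python, with any function standing in for the model (the lambda below is a toy stand-in, not the actual Keras model):

```
def greedy_decode(predict_next, sos_id, max_len):
    """Autoregressive decoding: feed each prediction back as the next input."""
    sequence = [sos_id]
    for _ in range(max_len):
        sequence.append(predict_next(sequence))  # the model sees all tokens so far
    return sequence[1:]  # drop the sos token

# Toy stand-in for the model: always predicts the previous token + 1
print(greedy_decode(lambda seq: seq[-1] + 1, sos_id=0, max_len=4))  # [1, 2, 3, 4]
```

The real version below does exactly this, except that `predict_next` is a trained neural network.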
Now let's create the decoder's inputs (for training, validation and testing). The sos token will be represented using the last possible output character's ID + 1.
```
sos_id = len(OUTPUT_CHARS) + 1
def shifted_output_sequences(Y):
sos_tokens = tf.fill(dims=(len(Y), 1), value=sos_id)
return tf.concat([sos_tokens, Y[:, :-1]], axis=1)
X_train_decoder = shifted_output_sequences(Y_train)
X_valid_decoder = shifted_output_sequences(Y_valid)
X_test_decoder = shifted_output_sequences(Y_test)
```
Let's take a look at the decoder's training inputs:
```
X_train_decoder
```
Now let's build the model. It's not a simple sequential model anymore, so let's use the functional API:
```
encoder_embedding_size = 32
decoder_embedding_size = 32
lstm_units = 128
np.random.seed(42)
tf.random.set_seed(42)
encoder_input = keras.layers.Input(shape=[None], dtype=tf.int32)
encoder_embedding = keras.layers.Embedding(
input_dim=len(INPUT_CHARS) + 1,
output_dim=encoder_embedding_size)(encoder_input)
_, encoder_state_h, encoder_state_c = keras.layers.LSTM(
lstm_units, return_state=True)(encoder_embedding)
encoder_state = [encoder_state_h, encoder_state_c]
decoder_input = keras.layers.Input(shape=[None], dtype=tf.int32)
decoder_embedding = keras.layers.Embedding(
input_dim=len(OUTPUT_CHARS) + 2,
output_dim=decoder_embedding_size)(decoder_input)
decoder_lstm_output = keras.layers.LSTM(lstm_units, return_sequences=True)(
decoder_embedding, initial_state=encoder_state)
decoder_output = keras.layers.Dense(len(OUTPUT_CHARS) + 1,
activation="softmax")(decoder_lstm_output)
model = keras.models.Model(inputs=[encoder_input, decoder_input],
outputs=[decoder_output])
optimizer = keras.optimizers.Nadam()
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit([X_train, X_train_decoder], Y_train, epochs=10,
validation_data=([X_valid, X_valid_decoder], Y_valid))
```
This model also reaches 100% validation accuracy, but it does so even faster.
Let's once again use the model to make some predictions. This time we need to predict characters one by one.
```
sos_id = len(OUTPUT_CHARS) + 1
def predict_date_strs(date_strs):
X = prepare_date_strs_padded(date_strs)
Y_pred = tf.fill(dims=(len(X), 1), value=sos_id)
for index in range(max_output_length):
pad_size = max_output_length - Y_pred.shape[1]
X_decoder = tf.pad(Y_pred, [[0, 0], [0, pad_size]])
Y_probas_next = model.predict([X, X_decoder])[:, index:index+1]
Y_pred_next = tf.argmax(Y_probas_next, axis=-1, output_type=tf.int32)
Y_pred = tf.concat([Y_pred, Y_pred_next], axis=1)
return ids_to_date_strs(Y_pred[:, 1:])
predict_date_strs(["July 14, 1789", "May 01, 2020"])
```
Works fine! :)
### Third version: using TF-Addons's seq2seq implementation
Let's build exactly the same model, but using TF-Addons's seq2seq API. The implementation below is very similar to the TFA example earlier in this notebook, except that it omits the model input that specifies the output sequence length, for simplicity (but you can easily add it back in if you need it for your projects, when the output sequences have very different lengths).
```
import tensorflow_addons as tfa
np.random.seed(42)
tf.random.set_seed(42)
encoder_embedding_size = 32
decoder_embedding_size = 32
units = 128
encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32)
encoder_embeddings = keras.layers.Embedding(
len(INPUT_CHARS) + 1, encoder_embedding_size)(encoder_inputs)
decoder_embedding_layer = keras.layers.Embedding(
    len(OUTPUT_CHARS) + 2, decoder_embedding_size)
decoder_embeddings = decoder_embedding_layer(decoder_inputs)
encoder = keras.layers.LSTM(units, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_embeddings)
encoder_state = [state_h, state_c]
sampler = tfa.seq2seq.sampler.TrainingSampler()
decoder_cell = keras.layers.LSTMCell(units)
output_layer = keras.layers.Dense(len(OUTPUT_CHARS) + 1)
decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell,
sampler,
output_layer=output_layer)
final_outputs, final_state, final_sequence_lengths = decoder(
decoder_embeddings,
initial_state=encoder_state)
Y_proba = keras.layers.Activation("softmax")(final_outputs.rnn_output)
model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs],
outputs=[Y_proba])
optimizer = keras.optimizers.Nadam()
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit([X_train, X_train_decoder], Y_train, epochs=15,
validation_data=([X_valid, X_valid_decoder], Y_valid))
```
And once again, 100% validation accuracy! To use the model, we can just reuse the `predict_date_strs()` function:
```
predict_date_strs(["July 14, 1789", "May 01, 2020"])
```
However, there's a much more efficient way to perform inference. Until now, during inference, we've run the model once for each new character. Instead, we can create a new decoder, based on the previously trained layers, but using a `GreedyEmbeddingSampler` instead of a `TrainingSampler`.
At each time step, the `GreedyEmbeddingSampler` will compute the argmax of the decoder's outputs, and run the resulting token IDs through the decoder's embedding layer. Then it will feed the resulting embeddings to the decoder's LSTM cell at the next time step. This way, we only need to run the decoder once to get the full prediction.
```
inference_sampler = tfa.seq2seq.sampler.GreedyEmbeddingSampler(
embedding_fn=decoder_embedding_layer)
inference_decoder = tfa.seq2seq.basic_decoder.BasicDecoder(
decoder_cell, inference_sampler, output_layer=output_layer,
maximum_iterations=max_output_length)
batch_size = tf.shape(encoder_inputs)[:1]
start_tokens = tf.fill(dims=batch_size, value=sos_id)
final_outputs, final_state, final_sequence_lengths = inference_decoder(
start_tokens,
initial_state=encoder_state,
start_tokens=start_tokens,
end_token=0)
inference_model = keras.models.Model(inputs=[encoder_inputs],
outputs=[final_outputs.sample_id])
```
A few notes:
* The `GreedyEmbeddingSampler` needs the `start_tokens` (a vector containing the start-of-sequence ID for each decoder sequence), and the `end_token` (the decoder will stop decoding a sequence once the model outputs this token).
* We must set `maximum_iterations` when creating the `BasicDecoder`, or else it may run into an infinite loop (if the model never outputs the end token for at least one of the sequences). This would force you to restart the Jupyter kernel.
* The decoder inputs are not needed anymore, since all the decoder inputs are generated dynamically based on the outputs from the previous time step.
* The model's outputs are `final_outputs.sample_id` instead of the softmax of `final_outputs.rnn_output`. This allows us to directly get the argmax of the model's outputs. If you prefer to have access to the logits, you can replace `final_outputs.sample_id` with `final_outputs.rnn_output`.
Now we can write a simple function that uses the model to perform the date format conversion:
```
def fast_predict_date_strs(date_strs):
X = prepare_date_strs_padded(date_strs)
Y_pred = inference_model.predict(X)
return ids_to_date_strs(Y_pred)
fast_predict_date_strs(["July 14, 1789", "May 01, 2020"])
```
Let's check that it really is faster:
```
%timeit predict_date_strs(["July 14, 1789", "May 01, 2020"])
%timeit fast_predict_date_strs(["July 14, 1789", "May 01, 2020"])
```
That's more than a 10x speedup! And it would be even more if we were handling longer sequences.
### Fourth version: using TF-Addons's seq2seq implementation with a scheduled sampler
**Warning**: due to a TF bug, this version only works using TensorFlow 2.2.
When we trained the previous model, at each time step _t_ we gave the model the target token for time step _t_ - 1. However, at inference time, the model did not get the previous target at each time step. Instead, it got the previous prediction. So there is a discrepancy between training and inference, which may lead to disappointing performance. To alleviate this, we can gradually replace the targets with the predictions, during training. For this, we just need to replace the `TrainingSampler` with a `ScheduledEmbeddingTrainingSampler`, and use a Keras callback to gradually increase the `sampling_probability` (i.e., the probability that the decoder will use the prediction from the previous time step rather than the target for the previous time step).
```
import tensorflow_addons as tfa
np.random.seed(42)
tf.random.set_seed(42)
n_epochs = 20
encoder_embedding_size = 32
decoder_embedding_size = 32
units = 128
encoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
decoder_inputs = keras.layers.Input(shape=[None], dtype=np.int32)
sequence_lengths = keras.layers.Input(shape=[], dtype=np.int32)
encoder_embeddings = keras.layers.Embedding(
len(INPUT_CHARS) + 1, encoder_embedding_size)(encoder_inputs)
decoder_embedding_layer = keras.layers.Embedding(
    len(OUTPUT_CHARS) + 2, decoder_embedding_size)
decoder_embeddings = decoder_embedding_layer(decoder_inputs)
encoder = keras.layers.LSTM(units, return_state=True)
encoder_outputs, state_h, state_c = encoder(encoder_embeddings)
encoder_state = [state_h, state_c]
sampler = tfa.seq2seq.sampler.ScheduledEmbeddingTrainingSampler(
sampling_probability=0.,
embedding_fn=decoder_embedding_layer)
# we must set the sampling_probability after creating the sampler
# (see https://github.com/tensorflow/addons/pull/1714)
sampler.sampling_probability = tf.Variable(0.)
decoder_cell = keras.layers.LSTMCell(units)
output_layer = keras.layers.Dense(len(OUTPUT_CHARS) + 1)
decoder = tfa.seq2seq.basic_decoder.BasicDecoder(decoder_cell,
sampler,
output_layer=output_layer)
final_outputs, final_state, final_sequence_lengths = decoder(
decoder_embeddings,
initial_state=encoder_state)
Y_proba = keras.layers.Activation("softmax")(final_outputs.rnn_output)
model = keras.models.Model(inputs=[encoder_inputs, decoder_inputs],
outputs=[Y_proba])
optimizer = keras.optimizers.Nadam()
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
def update_sampling_probability(epoch, logs):
proba = min(1.0, epoch / (n_epochs - 10))
sampler.sampling_probability.assign(proba)
sampling_probability_cb = keras.callbacks.LambdaCallback(
on_epoch_begin=update_sampling_probability)
history = model.fit([X_train, X_train_decoder], Y_train, epochs=n_epochs,
validation_data=([X_valid, X_valid_decoder], Y_valid),
callbacks=[sampling_probability_cb])
```
Not quite 100% validation accuracy, but close enough!
For inference, we could do the exact same thing as earlier, using a `GreedyEmbeddingSampler`. However, just for the sake of completeness, let's use a `SampleEmbeddingSampler` instead. It's almost the same thing, except that instead of using the argmax of the model's output to find the token ID, it treats the outputs as logits and uses them to sample a token ID randomly. This can be useful when you want to generate text. The `softmax_temperature` argument serves the same purpose as when we generated Shakespeare-like text (the higher this argument, the more random the generated text will be).
```
softmax_temperature = tf.Variable(1.)
inference_sampler = tfa.seq2seq.sampler.SampleEmbeddingSampler(
embedding_fn=decoder_embedding_layer,
softmax_temperature=softmax_temperature)
inference_decoder = tfa.seq2seq.basic_decoder.BasicDecoder(
decoder_cell, inference_sampler, output_layer=output_layer,
maximum_iterations=max_output_length)
batch_size = tf.shape(encoder_inputs)[:1]
start_tokens = tf.fill(dims=batch_size, value=sos_id)
final_outputs, final_state, final_sequence_lengths = inference_decoder(
start_tokens,
initial_state=encoder_state,
start_tokens=start_tokens,
end_token=0)
inference_model = keras.models.Model(inputs=[encoder_inputs],
outputs=[final_outputs.sample_id])
def creative_predict_date_strs(date_strs, temperature=1.0):
softmax_temperature.assign(temperature)
X = prepare_date_strs_padded(date_strs)
Y_pred = inference_model.predict(X)
return ids_to_date_strs(Y_pred)
tf.random.set_seed(42)
creative_predict_date_strs(["July 14, 1789", "May 01, 2020"])
```
Dates look good at room temperature. Now let's heat things up a bit:
```
tf.random.set_seed(42)
creative_predict_date_strs(["July 14, 1789", "May 01, 2020"],
temperature=5.)
```
Oops, the dates are overcooked now. Let's call them "creative" dates.
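Temperature scaling itself is easy to see in isolation: dividing the logits by the temperature before the softmax sharpens or flattens the distribution. Here is a standalone NumPy sketch (separate from the TFA sampler):

```
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    scaled = np.asarray(logits, dtype=float) / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

logits = [4.0, 1.0, 0.5]
print(softmax_with_temperature(logits, temperature=1.0))  # peaked: mostly the top logit
print(softmax_with_temperature(logits, temperature=5.0))  # flatter: closer to uniform
```

At very high temperatures every token becomes almost equally likely, which is why the dates above came out "overcooked".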
### Fifth version: using TFA seq2seq, the Keras subclassing API and attention mechanisms
The sequences in this problem are pretty short, but if we wanted to tackle longer sequences, we would probably have to use attention mechanisms. While it's possible to code our own implementation, it's simpler and more efficient to use TF-Addons's implementation instead. Let's do that now, this time using Keras' subclassing API.
**Warning**: due to a TensorFlow bug (see [this issue](https://github.com/tensorflow/addons/issues/1153) for details), the `get_initial_state()` method fails in eager mode, so for now we have to use the subclassing API, as Keras automatically calls `tf.function()` on the `call()` method (so it runs in graph mode).
In this implementation, we've reverted back to using the `TrainingSampler`, for simplicity (but you can easily tweak it to use a `ScheduledEmbeddingTrainingSampler` instead). We also use a `GreedyEmbeddingSampler` during inference, so this class is pretty easy to use:
```
class DateTranslation(keras.models.Model):
def __init__(self, units=128, encoder_embedding_size=32,
decoder_embedding_size=32, **kwargs):
super().__init__(**kwargs)
self.encoder_embedding = keras.layers.Embedding(
input_dim=len(INPUT_CHARS) + 1,
output_dim=encoder_embedding_size)
self.encoder = keras.layers.LSTM(units,
return_sequences=True,
return_state=True)
self.decoder_embedding = keras.layers.Embedding(
input_dim=len(OUTPUT_CHARS) + 2,
output_dim=decoder_embedding_size)
self.attention = tfa.seq2seq.LuongAttention(units)
decoder_inner_cell = keras.layers.LSTMCell(units)
self.decoder_cell = tfa.seq2seq.AttentionWrapper(
cell=decoder_inner_cell,
attention_mechanism=self.attention)
output_layer = keras.layers.Dense(len(OUTPUT_CHARS) + 1)
self.decoder = tfa.seq2seq.BasicDecoder(
cell=self.decoder_cell,
sampler=tfa.seq2seq.sampler.TrainingSampler(),
output_layer=output_layer)
self.inference_decoder = tfa.seq2seq.BasicDecoder(
cell=self.decoder_cell,
sampler=tfa.seq2seq.sampler.GreedyEmbeddingSampler(
embedding_fn=self.decoder_embedding),
output_layer=output_layer,
maximum_iterations=max_output_length)
def call(self, inputs, training=None):
encoder_input, decoder_input = inputs
encoder_embeddings = self.encoder_embedding(encoder_input)
encoder_outputs, encoder_state_h, encoder_state_c = self.encoder(
encoder_embeddings,
training=training)
encoder_state = [encoder_state_h, encoder_state_c]
self.attention(encoder_outputs,
setup_memory=True)
decoder_embeddings = self.decoder_embedding(decoder_input)
decoder_initial_state = self.decoder_cell.get_initial_state(
decoder_embeddings)
decoder_initial_state = decoder_initial_state.clone(
cell_state=encoder_state)
if training:
decoder_outputs, _, _ = self.decoder(
decoder_embeddings,
initial_state=decoder_initial_state,
training=training)
else:
start_tokens = tf.zeros_like(encoder_input[:, 0]) + sos_id
decoder_outputs, _, _ = self.inference_decoder(
decoder_embeddings,
initial_state=decoder_initial_state,
start_tokens=start_tokens,
end_token=0)
return tf.nn.softmax(decoder_outputs.rnn_output)
np.random.seed(42)
tf.random.set_seed(42)
model = DateTranslation()
optimizer = keras.optimizers.Nadam()
model.compile(loss="sparse_categorical_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model.fit([X_train, X_train_decoder], Y_train, epochs=25,
validation_data=([X_valid, X_valid_decoder], Y_valid))
```
Not quite 100% validation accuracy, but close. It took a bit longer to converge this time, but there were also more parameters and more computations per iteration. And we did not use a scheduled sampler.
To use the model, we can write yet another little function:
```
def fast_predict_date_strs_v2(date_strs):
X = prepare_date_strs_padded(date_strs)
X_decoder = tf.zeros(shape=(len(X), max_output_length), dtype=tf.int32)
Y_probas = model.predict([X, X_decoder])
Y_pred = tf.argmax(Y_probas, axis=-1)
return ids_to_date_strs(Y_pred)
fast_predict_date_strs_v2(["July 14, 1789", "May 01, 2020"])
```
There are still a few interesting features from TF-Addons that you may want to look at:
* Using a `BeamSearchDecoder` rather than a `BasicDecoder` for inference. Instead of outputting the character with the highest probability at each step, this decoder keeps track of several candidates and keeps only the most likely sequences of candidates (see chapter 16 in the book for more details).
* Setting masks or specifying `sequence_length` if the input or target sequences may have very different lengths.
* Using a `ScheduledOutputTrainingSampler`, which gives you more flexibility than the `ScheduledEmbeddingTrainingSampler` to decide how to feed the output at time _t_ to the cell at time _t_+1. By default it feeds the outputs directly to the cell, without computing the argmax ID and passing it through an embedding layer. Alternatively, you can specify a `next_inputs_fn` function that will be used to convert the cell outputs to inputs at the next step.
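Beam search itself is easy to sketch in pure Python. Given any function that returns next-token log-probabilities for a partial sequence, we keep only the `beam_width` most likely partial sequences at each step (a toy illustration of the idea, not the TFA `BeamSearchDecoder` API):

```
import numpy as np

def beam_search(next_log_probs, start_token, end_token, beam_width=3, max_len=10):
    """Keep the `beam_width` most likely partial sequences at each step.

    `next_log_probs(seq)` must return a 1D array of log-probabilities
    for the next token, given the partial sequence `seq`.
    """
    beams = [([start_token], 0.0)]  # (sequence, cumulative log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == end_token:  # finished sequences are kept as-is
                candidates.append((seq, score))
                continue
            for token, log_prob in enumerate(next_log_probs(seq)):
                candidates.append((seq + [token], score + log_prob))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
        if all(seq[-1] == end_token for seq, _ in beams):
            break
    return beams

# Toy "model": after token t, token (t + 1) % 4 is most likely; 3 is the end token
def toy_model(seq):
    probs = np.full(4, 0.1)
    probs[(seq[-1] + 1) % 4] = 0.7
    return np.log(probs)

best_sequence, best_score = beam_search(toy_model, start_token=0, end_token=3,
                                        beam_width=2)[0]
print(best_sequence)  # [0, 1, 2, 3]
```

TF-Addons's `BeamSearchDecoder` applies the same idea on top of the decoder cell, with the encoder state tiled `beam_width` times via `tfa.seq2seq.tile_batch`.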
## 10.
_Exercise: Go through TensorFlow's [Neural Machine Translation with Attention tutorial](https://homl.info/nmttuto)._
Simply open the Colab and follow its instructions. Alternatively, if you want a simpler example of using TF-Addons's seq2seq implementation for Neural Machine Translation (NMT), look at the solution to the previous question. The last model implementation will give you a simpler example of using TF-Addons to build an NMT model using attention mechanisms.
## 11.
_Exercise: Use one of the recent language models (e.g., GPT) to generate more convincing Shakespearean text._
The simplest way to use recent language models is to use the excellent [transformers library](https://huggingface.co/transformers/), open sourced by Hugging Face. It provides many modern neural net architectures (including BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet and more) for Natural Language Processing (NLP), including many pretrained models. It relies on either TensorFlow or PyTorch. Best of all: it's amazingly simple to use.
First, let's load a pretrained model. In this example, we will use OpenAI's GPT model, with an additional Language Model on top (just a linear layer with weights tied to the input embeddings). Let's import it and load the pretrained weights (this will download about 445MB of data to `~/.cache/torch/transformers`):
```
from transformers import TFOpenAIGPTLMHeadModel
model = TFOpenAIGPTLMHeadModel.from_pretrained("openai-gpt")
```
Next we will need a specialized tokenizer for this model. This one will try to use the [spaCy](https://spacy.io/) and [ftfy](https://pypi.org/project/ftfy/) libraries if they are installed, or else it will fall back to BERT's `BasicTokenizer` followed by Byte-Pair Encoding (which should be fine for most use cases).
```
from transformers import OpenAIGPTTokenizer
tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
```
Now let's use the tokenizer to tokenize and encode the prompt text:
```
prompt_text = "This royal throne of kings, this sceptred isle"
encoded_prompt = tokenizer.encode(prompt_text,
add_special_tokens=False,
return_tensors="tf")
encoded_prompt
```
Easy! Next, let's use the model to generate text after the prompt. We will generate 5 different sentences, each starting with the prompt text, followed by 40 additional tokens. For an explanation of what all the hyperparameters do, make sure to check out this great [blog post](https://huggingface.co/blog/how-to-generate) by Patrick von Platen (from Hugging Face). You can play around with the hyperparameters to try to obtain better results.
```
num_sequences = 5
length = 40
generated_sequences = model.generate(
input_ids=encoded_prompt,
do_sample=True,
max_length=length + len(encoded_prompt[0]),
temperature=1.0,
top_k=0,
top_p=0.9,
repetition_penalty=1.0,
num_return_sequences=num_sequences,
)
generated_sequences
```
Now let's decode the generated sequences and print them:
```
for sequence in generated_sequences:
text = tokenizer.decode(sequence, clean_up_tokenization_spaces=True)
print(text)
print("-" * 80)
```
You can try more recent (and larger) models, such as GPT-2, CTRL, Transformer-XL or XLNet, which are all available as pretrained models in the transformers library, including variants with Language Models on top. The preprocessing steps vary slightly between models, so make sure to check out this [generation example](https://github.com/huggingface/transformers/blob/master/examples/run_generation.py) from the transformers documentation (this example uses PyTorch, but it will work with very few tweaks, such as adding `TF` at the beginning of the model class name, removing the `.to()` method calls, and using `return_tensors="tf"` instead of `"pt"`).
Hope you enjoyed this chapter! :)
## The Psychology of Growth
The field of positive psychology studies the human behaviours that lead to a great life. You can think of it as the intersection of self-help books and the academic rigor of statistics. One of the famous findings of positive psychology is the **Growth Mindset**. The idea is that people can have a fixed or a growth mindset. If you have a fixed mindset, you believe that abilities are given at birth or in early childhood. As such, intelligence is fixed and can't change throughout life: if you don't have it by now, you can't acquire it. The corollary is that you should not waste time on areas where you don't excel, since you will never learn how to handle them. On the other hand, if you have a growth mindset, you believe that intelligence can be developed. The direct consequence is that you see failure not as a lack of intelligence, but as part of a learning process.
I don't want to debate which of these mindsets is the correct one (it's probably somewhere in the middle). For our purposes, it doesn't matter much. What does matter is that psychologists have found that people with a growth mindset tend to do better in life. They are more likely to achieve what they've set out to.
As versed as we are with causal inference, we've learned to view such statements with skepticism. Is it that a growth mindset causes people to achieve more? Or is it simply that people who achieve more are prone to develop a growth mindset as a result of their success? Which came first, the chicken or the egg? In potential outcome notation, we have reason to believe there is bias in these statements: \\(E[Y_0|T=1]\\) is probably larger than \\(E[Y_0|T=0]\\), which means that those with a growth mindset would have achieved more even if they had had a fixed mindset.
To settle things, researchers designed [The National Study of Learning Mindsets](https://mindsetscholarsnetwork.org/about-the-network/current-initatives/national-mindset-study/#). It is a randomised study conducted in U.S. public high schools which aims to find the impact of a growth mindset. The way it works is that students receive a seminar from the school to instil in them a growth mindset. Then, the students are followed up during their college years to measure how well they performed academically. This measurement was compiled into an achievement score and standardised. The real data on this study is not publicly available, in order to preserve the students' privacy. However, we have a simulated dataset with the same statistical properties, provided by [Athey and Wager](https://arxiv.org/pdf/1902.07409.pdf), so we will use that instead.
```
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
from matplotlib import style
from matplotlib import pyplot as plt
import seaborn as sns
import statsmodels.formula.api as smf
import graphviz as gr
%matplotlib inline
style.use("fivethirtyeight")
pd.set_option("display.max_columns", 6)
```
Besides the treated and outcome variables, the study also recorded some other features:
* schoolid: identifier of the student's school;
* success_expect: self-reported expectations for success in the future, a proxy for prior achievement, measured prior to random assignment;
* ethnicity: categorical variable for student race/ethnicity;
* gender: categorical variable for student identified gender;
* frst_in_family: categorical variable for student first-generation status, i.e. first in family to go to college;
* school_urbanicity: school-level categorical variable for urbanicity of the school, i.e. rural, suburban, etc;
* school_mindset: school-level mean of students’ fixed mindsets, reported prior to random assignment, standardised;
* school_achievement: school achievement level, as measured by test scores and college preparation for the previous 4 cohorts of students, standardised;
* school_ethnic_minority: school racial/ethnic minority composition, i.e., percentage of student body that is Black, Latino, or Native American, standardised;
* school_poverty: school poverty concentration, i.e., percentage of students who are from families whose incomes fall below the federal poverty line, standardised;
* school_size: total number of students in all four grade levels in the school, standardised.
```
data = pd.read_csv("./data/learning_mindset.csv")
data.sample(5, random_state=5)
```
Although the study was randomised, it doesn't seem to be the case that this data is free from confounding. If we look at the additional features, we will notice that they vary systematically between treatment and control. One possible reason is that the treatment variable is measured by the student's receipt of the seminar. So, although the opportunity to participate was random, participation itself is not: we are dealing with a case of non-compliance. One piece of evidence for this is how the student's success expectation is correlated with participation in the seminar. Students with higher self-reported expectations of success are more likely to have joined the growth mindset seminar.
```
data.groupby("success_expect")["intervention"].mean()
```
Still, let's see what the difference in means \\(E[Y|T=1] - E[Y|T=0]\\) looks like. This will be a useful baseline to compare against.
```
smf.ols("achievement_score ~ intervention", data=data).fit().summary().tables[1]
```
Simply comparing those with and without the intervention, we can see that the treated have an achievement score that is, on average, 0.4723 higher than the untreated (the untreated mean is -0.1538, so the treated mean is 0.4723 - 0.1538 = 0.3185). But is this big or small? I know that interpreting standardised outcomes can be challenging, but bear with me for a moment. I think it is worth going through this because it won't be the last time you encounter standardised scores.
The outcome variable being standardised means that it is measured in standard deviations. So the treated mean is 0.3185 standard deviations above the overall mean, while the untreated mean is 0.1538 standard deviations below it. Is that small or big? Let's remember some facts about the normal distribution. We know that 95% of its mass is within 2 standard deviations of the mean, leaving 2.5% on each tail. This also means that if someone is 2 standard deviations above the mean, 97.5% (95% plus the left 2.5% tail) of all the individuals are below that person. By looking at the normal CDF, we also know that about 84% of its mass is below 1 standard deviation and about 62% of its mass is below 0.32 standard deviations. So the mean of the treated is above roughly 62% of the individual achievements, while the untreated mean has only 44% of the individuals below it.
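To see where these two means fall on a standard normal, we can evaluate the CDF directly (pure Python, via the error function):

```
from math import erf, sqrt

def normal_cdf(x):
    """CDF of the standard normal distribution."""
    return 0.5 * (1 + erf(x / sqrt(2)))

untreated_mean = -0.1538
treated_mean = untreated_mean + 0.4723  # = 0.3185

print(normal_cdf(treated_mean))    # ~0.62: the treated mean is above ~62% of individuals
print(normal_cdf(untreated_mean))  # ~0.44: only ~44% of individuals are below the untreated mean
```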
Here is what this looks like in a picture.
```
plt.hist(data["achievement_score"], bins=20, alpha=0.3, label="All")
plt.hist(data.query("intervention==0")["achievement_score"], bins=20, alpha=0.3, color="C2")
plt.hist(data.query("intervention==1")["achievement_score"], bins=20, alpha=0.3, color="C3")
plt.vlines(-0.1538, 0, 300, label="Untreated", color="C2")
plt.vlines(-0.1538+0.4723, 0, 300, label="Treated", color="C3")
plt.legend();
```
Of course, we still think this result is biased. The true difference between treated and untreated is probably smaller than this, because we think the bias is positive. We've already seen that more ambitious people are more willing to go to the seminar, so they probably would have achieved more even if they hadn't attended it. To control for this bias, we could use regression or matching, but it's time to learn about a new technique.
## Propensity Score
The propensity score comes from the realisation that you don't need to directly control for the confounders X to achieve conditional independence \\((Y_1, Y_0) \perp T | X\\). Instead, it is sufficient to control for a balancing score \\(E[T|X]\\). This balancing score is the conditional probability of the treatment, \\(P(T|X)\\), also called the propensity score \\(P(x)\\). The propensity score means that you don't have to condition on the entirety of X to achieve independence of the potential outcomes from the treatment. It is sufficient to condition on this single variable, which is the propensity score:
$
(Y_1, Y_0) \perp T | P(x)
$
There is a formal proof for why this is, but we can set it aside for now and approach the matter more intuitively. The propensity score is the conditional probability of receiving the treatment, right? So we can think of it as some sort of function that converts X into the treatment T. The propensity score is this middle ground between the variable X and the treatment T. If we show this in a causal graph, this is what it looks like.
```
g = gr.Digraph()
g.edge("T", "Y")
g.edge("X", "Y")
g.edge("X", "P(x)")
g.edge("P(x)", "T")
g
```
If I know what P(x) is, X alone tells me nothing more that can help me learn what T would be. This means that controlling for P(x) acts the same way as controlling for X directly. Think of it in terms of our mindset program. Treated and non-treated are initially not comparable, because the more ambitious are both more likely to take the treatment and to achieve more in life. However, if I take two individuals, one treated and one from the control, but with the same probability of receiving the treatment, they are comparable. Think about it: if they have the exact same probability of receiving the treatment, the only reason one of them received it and the other did not is pure chance. Holding the propensity score constant works to make the data look as if it had been randomly assigned.
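A small simulation (my own illustration, not from the original text) makes this concrete: a confounder X drives a Bernoulli treatment, so X is badly imbalanced between groups overall, but once we compare units inside a narrow propensity-score stratum the imbalance all but disappears.

```python
import numpy as np

np.random.seed(0)
n = 200_000
x = np.random.normal(size=n)     # a confounder
p = 1 / (1 + np.exp(-x))         # true propensity score P(x)
t = np.random.binomial(1, p)     # treatment assigned with probability P(x)

# Unconditionally, X differs a lot between treated and untreated...
naive_diff = x[t == 1].mean() - x[t == 0].mean()

# ...but within a narrow propensity-score stratum, X is nearly balanced.
stratum = (p > 0.45) & (p < 0.55)
strat_diff = x[stratum & (t == 1)].mean() - x[stratum & (t == 0)].mean()

print(naive_diff)  # large imbalance
print(strat_diff)  # close to zero
```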
Now that we have the intuition, let's look at the mathematical proof. We want to show that \\((Y_1, Y_0) \perp T | P(x)\\) is equivalent to saying that
$
E[T|P(x), X] = E[T|P(x)]
$
This simply says that once I condition on P(x), X can give me no extra information about T. The proof of this is quite weird. We will show that the equation above is true by converting it to a trivial statement. First take a look at the left hand side \\(E[T|P(x), X]\\).
$
E[T|P(x), X] = E[T|X] = P(x)
$
We use the fact that P(x) is just a function of X, so conditioning on it gives no further information after we've conditioned on X itself. Then, we use the definition of the propensity score, \\(P(x) = E[T|X]\\). For the right hand side, we use the law of iterated expectations, \\(E[A] = E[E[A|B]]\\). This law says that we can compute the expected value of A by looking at the value of A broken down by B and then averaging that.
$
E[T|P(x)] = E[E[T|P(x),X]|P(x)] = E[P(x)|P(x)] = P(x)
$
The first equality comes from the law of iterated expectations. The second comes from what we figured out when dealing with the left hand side. Since both the left and right hand sides equal \\(P(x)\\), the equation is trivially true.
## Propensity Weighting

OK, we got the propensity score. Now what? Like I've said, all we need to do is condition on it. For example, we could run a linear regression that conditions only on the propensity score instead of all the Xs. For now, though, let's look at a technique that uses just the propensity score and nothing else. The idea is to rewrite the conditional difference in means in terms of the propensity score:
$
E[Y|X,T=1]−E[Y|X,T=0] = E\bigg[\dfrac{Y}{P(x)}|X,T=1\bigg]P(x) - E\bigg[\dfrac{Y}{(1-P(x))}|X,T=0\bigg](1-P(x))
$
We can simplify this further, but let's look at it in this form first, because it gives us some nice intuition about what the propensity score is doing. The first term estimates \\(Y_1\\). It takes all those that are treated and scales them by the inverse probability of treatment. What this does is give a high weight to treated units with a very low probability of treatment. This makes sense, right? If someone has a low probability of treatment, that individual looks like the untreated, yet that same individual was treated. This must be interesting: we have a treated unit that looks like the untreated, so we give that entity a high weight. This creates a population with the same size as the original, but where everyone is treated. By the same reasoning, the other term looks at the untreated and gives a high weight to those that look like the treated. This estimator is called Inverse Probability of Treatment Weighting (IPTW), since it scales each unit by the inverse of the probability of receiving the treatment it actually received.
In a picture, here is what this weighting does.

The upper left plot shows the original data. The blue dots are the untreated and the red dots are the treated. The bottom plot shows the propensity score P(x). Notice how it is between 0 and 1 and grows as X increases. Finally, the upper right plot shows the data after weighting. Notice how the red dots (treated) that are more to the left (lower propensity score) have a higher weight. Similarly, the blue dots that are more to the right also have a higher weight.
Now that we have the intuition, we can simplify the terms above to
$
E\bigg[Y \dfrac{T-P(x)}{P(x)(1-P(x))}\bigg|X\bigg]
$
which if we integrate over X becomes our propensity score weighting estimator.
$
E\bigg[Y \dfrac{T-P(x)}{P(x)(1-P(x))}\bigg]
$
Notice that this estimator requires that both \\(P(x)\\) and \\(1-P(x)\\) be strictly positive. In words, this means that everyone needs to have at least some chance of receiving the treatment and of not receiving it. Another way of stating this is that the treated and untreated distributions must overlap. This is the **positivity assumption** of causal inference, and it makes intuitive sense: if treated and untreated don't overlap, they are very different and I won't be able to extrapolate the effect of one group to the other. This extrapolation is not impossible (regression does it), but it is very dangerous. It is like testing a new drug in an experiment where only men receive the treatment and then assuming women will respond to it equally well.
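We can sanity-check this estimator on simulated data where the true effect is known. In the sketch below (an illustration with made-up data, using the true propensity score), the outcome is confounded by X, so the naive comparison is biased upwards, yet the weighting formula recovers the treatment effect of 2:

```python
import numpy as np

np.random.seed(1)
n = 500_000
x = np.random.normal(size=n)
p = 1 / (1 + np.exp(-x))                       # true propensity score
t = np.random.binomial(1, p)
y = 1 + 2 * t + x + np.random.normal(size=n)   # true treatment effect is 2

# the naive difference in means is confounded by x
naive = y[t == 1].mean() - y[t == 0].mean()

# the weighting estimator E[Y (T - P(x)) / (P(x)(1 - P(x)))]
ate = np.mean(y * (t - p) / (p * (1 - p)))

print(naive)  # biased upwards
print(ate)    # close to the true effect of 2
```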
## Propensity Score Estimation
In an ideal world, we would have the true propensity score \\(P(x)\\). However, in practice, the mechanism that assigns the treatment is unknown and we need to replace the true propensity score with an estimate of it, \\(\hat{P}(x)\\). One common way of doing so is with logistic regression, but other machine learning methods, like gradient boosting, can be used as well (although they require some additional steps to avoid overfitting).
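For instance, if you did reach for a flexible learner such as gradient boosting, one standard safeguard is to use out-of-fold predictions, so each unit's propensity score comes from a model that never saw that unit. A hedged sketch with toy data (the dataset and model choices here are my illustration, not the chapter's pipeline):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_predict

# toy stand-in for the confounders X and the treatment T
X_conf, T_toy = make_classification(n_samples=1000, n_features=5, random_state=0)

# out-of-fold propensity scores: each row is scored by a model
# fit on the other folds, which guards against overfitting
ps = cross_val_predict(GradientBoostingClassifier(random_state=0),
                       X_conf, T_toy, cv=5, method="predict_proba")[:, 1]

print(ps.shape)  # one score per unit
```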
Here, I'll stick to logistic regression. This means that I'll have to convert the categorical features in the dataset to dummies.
```
categ = ["ethnicity", "gender", "school_urbanicity"]
cont = ["school_mindset", "school_achievement", "school_ethnic_minority", "school_poverty", "school_size"]
data_with_categ = pd.concat([
    data.drop(columns=categ),  # dataset without the categorical features
    pd.get_dummies(data[categ], columns=categ, drop_first=False)  # categorical features converted to dummies
], axis=1)
print(data_with_categ.shape)
```
Now, let's estimate the propensity score using logistic regression.
```
from sklearn.linear_model import LogisticRegression
T = 'intervention'
Y = 'achievement_score'
X = data_with_categ.columns.drop(['schoolid', T, Y])
ps_model = LogisticRegression(C=1e6).fit(data_with_categ[X], data_with_categ[T])
data_ps = data.assign(propensity_score=ps_model.predict_proba(data_with_categ[X])[:, 1])
data_ps[["intervention", "achievement_score", "propensity_score"]].head()
```
First, we can check that the propensity score weighting indeed reconstructs populations of the original size. Weighting by \\(1/P(x)\\) creates a population where everyone is treated, and weighting by \\(1/(1-P(x))\\) creates a population where everyone is untreated.
```
weight_t = 1/data_ps.query("intervention==1")["propensity_score"]
weight_nt = 1/(1-data_ps.query("intervention==0")["propensity_score"])
print("Original Sample Size", data.shape[0])
print("Treated Population Sample Size", sum(weight_t))
print("Untreated Population Sample Size", sum(weight_nt))
```
We can also use the propensity score to find evidence of confounding. If one segment of the population has a higher propensity score than another, something non-random is driving the treatment. If that same thing also drives the outcome, we have confounding. In our case, we can see that students who reported higher expectations of success also have a higher probability of attending the growth mindset seminar.
```
sns.boxplot(x="success_expect", y="propensity_score", data=data_ps)
plt.title("Confounding Evidence");
```
We also have to check that there is overlap between the treated and untreated populations. To do so, we can look at the empirical distribution of the propensity score among the untreated and among the treated. Looking at the image below, we can see that no one has a propensity score of zero and that even in lower regions of the propensity score we can find both treated and untreated individuals. This is what we call a nicely balanced treated and untreated population.
```
sns.distplot(data_ps.query("intervention==0")["propensity_score"], kde=False, label="Non Treated")
sns.distplot(data_ps.query("intervention==1")["propensity_score"], kde=False, label="Treated")
plt.title("Positivity Check")
plt.legend();
```
Finally, we can use our propensity score weighting estimator to estimate the average treatment effect.
```
weight = ((data_ps["intervention"]-data_ps["propensity_score"]) /
          (data_ps["propensity_score"]*(1-data_ps["propensity_score"])))
y1 = sum(data_ps.query("intervention==1")["achievement_score"]*weight_t) / len(data)
y0 = sum(data_ps.query("intervention==0")["achievement_score"]*weight_nt) / len(data)
ate = np.mean(weight * data_ps["achievement_score"])
print("Y1:", y1)
print("Y0:", y0)
print("ATE:", ate)
```
Propensity score weighting says that we should expect treated individuals to be 0.38 standard deviations above their untreated fellows in terms of achievement. We can also see that if no one got the treatment, we should expect the general level of achievement to be 0.12 standard deviations lower than it is now. By the same reasoning, we should expect the general level of achievement to be 0.25 standard deviations higher if we gave everyone the seminar. Compare this to the 0.47 ATE estimate we got by simply comparing treated and untreated. This is evidence that the bias is indeed positive and that controlling for X gives us a more modest estimate of the impact of the growth mindset.
## Standard Error

To compute the standard error for the IPTW estimator, we can use the formula of the variance of a weighted average.
$
\sigma^2_w = \dfrac{\sum_{i=1}^{n}w_i(y_i-\hat{\mu})^2}{\sum_{i=1}^{n}w_i}
$
However, we can only use this if we have the true propensity score. If we are using the estimated version of it, \\(\hat{P}(x)\\), we need to account for the errors in the estimation process. The easiest way of doing this is by bootstrapping the whole procedure. This is achieved by sampling with replacement from the original data and computing the ATE like we did above. We then repeat this many times to get the distribution of the ATE estimate.
```
from joblib import Parallel, delayed  # for parallel processing

# define function that computes the IPTW estimator
def run_ps(df, X, T, y):
    # estimate the propensity score
    ps = LogisticRegression(C=1e6).fit(df[X], df[T]).predict_proba(df[X])[:, 1]
    weight = (df[T]-ps) / (ps*(1-ps))  # define the weights
    return np.mean(weight * df[y])  # compute the ATE

np.random.seed(88)
# run 1000 bootstrap samples
bootstrap_sample = 1000
ates = Parallel(n_jobs=4)(delayed(run_ps)(data_with_categ.sample(frac=1, replace=True), X, T, Y)
                          for _ in range(bootstrap_sample))
ates = np.array(ates)
```
The ATE is then the mean of the bootstrap samples and the standard error is the standard deviation of these samples.
```
print(f"ATE 95% CI: {ates.mean()} +- {1.96*ates.std()}")
```
We can also have a visual on what the bootstrap samples look like, along with the confidence intervals.
```
sns.distplot(ates, kde=False)
plt.vlines(ates.mean()-1.96*ates.std(), 0, 20, linestyles="dotted")
plt.vlines(ates.mean()+1.96*ates.std(), 0, 20, linestyles="dotted", label="95% CI")
plt.title("ATE Bootstrap Distribution")
plt.legend();
```
## Common Issues with Propensity Score
As a data scientist, I know it can be tempting to use all the power of the machine learning toolkit to make propensity score estimation as precise as possible. You can quickly get carried away by all the AUC optimisation, cross-validation and Bayesian hyper-parameter tuning. I'm not saying you shouldn't do that. In fact, the theory about propensity scores and machine learning is very recent, so there is a lot we don't know yet. But it pays to understand a few things first.
The first thing is that the predictive quality of the propensity score does not translate into its balancing properties. Coming from the field of machine learning, one of the most challenging aspects of getting acquainted with causal inference is letting go of treating everything as a prediction problem. In fact, maximising the predictive power of the propensity score can even hurt the causal inference goal. **The propensity score doesn't need to predict the treatment very well. It just needs to include all the confounding variables.** If we include variables that are very good at predicting the treatment but have no bearing on the outcome, this will actually increase the variance of the propensity score estimator. This is similar to the problem linear regression faces when we include variables correlated with the treatment but not with the outcome.

To see this, consider the following example (adapted from Hernán's book). You have two schools; one of them applies the growth mindset seminar to 99% of its students and the other to 1%. Suppose the school has no impact on the outcome (except through the treatment), so it's not necessary to control for it. If you add the school variable to the propensity score model, it's going to have very high predictive power. However, by chance, we could end up with a sample where everyone in school A got the treatment, leading to a propensity score of 1 for that school, which would in turn lead to infinite variance. This is an extreme example, but let's see how it plays out with simulated data.
```
np.random.seed(42)
school_a = pd.DataFrame(dict(T=np.random.binomial(1, .99, 400), school=0, intercept=1))
school_b = pd.DataFrame(dict(T=np.random.binomial(1, .01, 400), school=1, intercept=1))
ex_data = pd.concat([school_a, school_b]).assign(y = lambda d: np.random.normal(1 + 0.1 * d["T"]))
ex_data.head()
```
Having simulated this data, we run bootstrap with the Propensity Score algorithm twice. The first including school as a feature to the propensity score model. The second time, we don't include school in the model.
```
ate_w_f = np.array([run_ps(ex_data.sample(frac=1, replace=True), ["school"], "T", "y") for _ in range(500)])
ate_wo_f = np.array([run_ps(ex_data.sample(frac=1, replace=True), ["intercept"], "T", "y") for _ in range(500)])
sns.distplot(ate_w_f, kde=False, label="PS W School")
sns.distplot(ate_wo_f, kde=False, label="PS W/O School")
plt.legend();
```
As you can see, the propensity score estimator that adds the school feature has a humongous variance, while the one without it is much better behaved. Also, since school is not a confounder, the model without it is not biased either.
As I've said, this is not about simply predicting the treatment. We need to build the propensity model in a way that controls for confounding, not in a way that maximises treatment prediction. This leads to another problem often encountered with propensity score methods. In our mindset case, the data turned out to be very balanced, but this is not always the case. In some situations, the treated have a much higher probability of treatment than the untreated, and the propensity score distributions don't overlap much.
```
sns.distplot(np.random.beta(4,1,500), kde=False, label="Non Treated")
sns.distplot(np.random.beta(1,3,500), kde=False, label="Treated")
plt.title("Positivity Check")
plt.legend();
```
If this happens, it means positivity is not very strong. If a treated individual has a propensity score of, say, 0.9 and the maximum propensity score among the untreated is 0.7, we won't have any untreated individual to compare to the one with the 0.9 propensity score. This lack of overlap can generate bias, because we will have to extrapolate the treatment effect to unknown regions. Not only that, entities with very high or very low propensity scores have very large weights, which increases variance. As a general rule of thumb, you are in trouble if any weight is higher than 20 (which happens for an untreated unit with a propensity score of 0.95 or a treated unit with a propensity score of 0.05).
An alternative is clipping the weights at a maximum of 20. This decreases the variance, but it generates more bias. To be honest, although this is a common practice to reduce variance, I don't really like it. You will never know if the bias you are inducing with clipping is too much. Also, if the distributions don't overlap, your data is probably not enough to make a causal conclusion anyway. To gain some further intuition about this, we can look at a technique that combines the propensity score and matching.
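To make the rule of thumb and the clipping trade-off concrete, here is a minimal sketch with made-up propensity scores (my illustration, not the mindset data):

```python
import numpy as np

np.random.seed(2)
t = np.random.binomial(1, 0.5, size=1000)
# hypothetical estimated propensity scores, some of them extreme
ps = np.clip(np.random.beta(2, 2, size=1000), 0.01, 0.99)

# IPTW weight: 1/P(x) for the treated, 1/(1-P(x)) for the untreated
weights = np.where(t == 1, 1 / ps, 1 / (1 - ps))

print((weights > 20).sum(), "weights above the rule-of-thumb threshold")

# clipping caps the variance, at the cost of some bias
clipped = np.minimum(weights, 20)
print(weights.max(), clipped.max())
```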
## Propensity Score Matching
As I've said before, you don't need to control for X when you have the propensity score; it suffices to control for the propensity score itself. As such, you can think of the propensity score as performing a kind of dimensionality reduction on the feature space: it condenses all the features in X into a single treatment-assignment dimension. For this reason, we can treat the propensity score as an input feature for other models. Take a regression model, for instance.
```
smf.ols("achievement_score ~ intervention + propensity_score", data=data_ps).fit().summary().tables[1]
```
If we control for the propensity score, we now estimate an ATE of 0.39, which is lower than the 0.47 we got previously with a regression model that didn't control for the propensity score. We can also use matching on the propensity score: this time, instead of trying to find matches that are similar in all the X features, we find matches that simply have the same propensity score.
This is a huge improvement on top of the matching estimator, since it deals with the curse of dimensionality. Also, if a feature is unimportant for the treatment assignment, the propensity score model will learn that and give low importance to it when fitting the treatment mechanism. Matching on the features, on the other hand, would still try to find matches where individuals are similar on this unimportant feature.
```
from causalinference import CausalModel
cm = CausalModel(
Y=data_ps["achievement_score"].values,
D=data_ps["intervention"].values,
X=data_ps[["propensity_score"]].values
)
cm.est_via_matching(matches=1, bias_adj=True)
print(cm.estimates)
```
As we can see, we get an ATE of 0.38, which is in line with what we've seen before with propensity score weighting. Matching on the propensity score also gives us some intuition about why it is dangerous to have little overlap in the propensity score between treated and untreated. If this happens, the propensity score discrepancy between matched units will be large, which will lead to bias.
One final word of caution here is that the above standard errors are wrong, as they don't account for the uncertainty in the estimation of the propensity score. Unfortunately, [bootstrap doesn't work with matching](https://economics.mit.edu/files/11862). Also, the theory above is so recent that there are no Python implementations of propensity score methods with the correct standard errors. For this reason, we don't see a lot of propensity score matching in Python.
## Key Ideas
Here, we've learned that the probability of getting the treatment is called the propensity score and that it can serve as a balancing score. This means that, if we have the propensity score, we don't need to control for the confounders directly: it is sufficient to control for the propensity score in order to identify the causal effect. We also saw how the propensity score acts as a dimensionality reduction on the confounder space.
These properties allowed us to derive a weighting estimator for causal inference. Not only that, we saw how the propensity score can be used alongside other methods to control for confounding bias.
Finally, we looked at some extrapolation problems that we might run into if we are unable to have a good overlap between the treated and untreated propensity score distribution.
## References
I like to think of this entire series as a tribute to Joshua Angrist, Alberto Abadie and Christopher Walters for their amazing Econometrics classes. Most of the ideas here are taken from their courses at the American Economic Association. Watching them is what is keeping me sane during this tough year of 2020.
* [Cross-Section Econometrics](https://www.aeaweb.org/conference/cont-ed/2017-webcasts)
* [Mastering Mostly Harmless Econometrics](https://www.aeaweb.org/conference/cont-ed/2020-webcasts)
I'd also like to reference the amazing books from Angrist. They have shown me that Econometrics, or 'Metrics as they call it, is not only extremely useful but also profoundly fun.
* [Mostly Harmless Econometrics](https://www.mostlyharmlesseconometrics.com/)
* [Mastering 'Metrics](https://www.masteringmetrics.com/)
My final reference is Miguel Hernan and Jamie Robins' book. It has been my trustworthy companion in the most thorny causal questions I had to answer.
* [Causal Inference Book](https://www.hsph.harvard.edu/miguel-hernan/causal-inference-book/)
The data that we used was taken from the article [Estimating Treatment Effects with Causal Forests: An Application](https://arxiv.org/pdf/1902.07409.pdf), by Susan Athey and Stefan Wager.
<a href="https://colab.research.google.com/github/satyajitghana/TSAI-DeepVision-EVA4.0/blob/master/05_CodingDrill/EVA4S5F3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Import Libraries
```
from __future__ import print_function
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
```
## Data Transformations
We first start by defining our data transformations. We need to think about what our data is and how we can augment it to correctly represent images it might not otherwise see.
```
# Train Phase transformations
train_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,)) # The mean and std have to be sequences (e.g., tuples), therefore you should add a comma after the values.
# Note the difference between (0.1307) and (0.1307,)
])
# Test Phase transformations
test_transforms = transforms.Compose([
# transforms.Resize((28, 28)),
# transforms.ColorJitter(brightness=0.10, contrast=0.1, saturation=0.10, hue=0.1),
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
```
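As the comment in the transforms above hints, the trailing comma is what makes the difference: `(0.1307)` is just a parenthesised float, while `(0.1307,)` is a one-element tuple, which is the per-channel sequence `Normalize` expects.

```python
# without the trailing comma, the parentheses do nothing
print(type((0.1307)))   # <class 'float'>
print(type((0.1307,)))  # <class 'tuple'>
```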
# Dataset and Creating Train/Test Split
```
train = datasets.MNIST('./data', train=True, download=True, transform=train_transforms)
test = datasets.MNIST('./data', train=False, download=True, transform=test_transforms)
```
# Dataloader Arguments & Test/Train Dataloaders
```
SEED = 1
# CUDA?
cuda = torch.cuda.is_available()
print("CUDA Available?", cuda)
# For reproducibility
torch.manual_seed(SEED)
if cuda:
    torch.cuda.manual_seed(SEED)
# dataloader arguments - you would usually fetch these from the command line
dataloader_args = dict(shuffle=True, batch_size=128, num_workers=4, pin_memory=True) if cuda else dict(shuffle=True, batch_size=64)
# train dataloader
train_loader = torch.utils.data.DataLoader(train, **dataloader_args)
# test dataloader
test_loader = torch.utils.data.DataLoader(test, **dataloader_args)
```
# Data Statistics
It is important to know your data very well. Let's check some statistics about our data and see what it actually looks like.
```
# We'd need to convert it into Numpy! Remember above we have converted it into tensors already
train_data = train.train_data
train_data = train.transform(train_data.numpy())
print('[Train]')
print(' - Numpy Shape:', train.train_data.cpu().numpy().shape)
print(' - Tensor Shape:', train.train_data.size())
print(' - min:', torch.min(train_data))
print(' - max:', torch.max(train_data))
print(' - mean:', torch.mean(train_data))
print(' - std:', torch.std(train_data))
print(' - var:', torch.var(train_data))
dataiter = iter(train_loader)
images, labels = next(dataiter)
print(images.shape)
print(labels.shape)
# Let's visualize some of the images
%matplotlib inline
import matplotlib.pyplot as plt
plt.imshow(images[0].numpy().squeeze(), cmap='gray_r')
```
## MORE
It is important that we view as many images as possible. This is required to get some idea of the image augmentation we'll use later on.
```
figure = plt.figure()
num_of_images = 60
for index in range(1, num_of_images + 1):
    plt.subplot(6, 10, index)
    plt.axis('off')
    plt.imshow(images[index].numpy().squeeze(), cmap='gray_r')
```
# The model
Let's start with the model we first saw
```
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Input Block
        self.convblock1 = nn.Sequential(
            nn.Conv2d(in_channels=1, out_channels=10, kernel_size=(3, 3), padding=0, bias=False),
            nn.ReLU()
        )  # output_size = 26

        # CONVOLUTION BLOCK 1
        self.convblock2 = nn.Sequential(
            nn.Conv2d(in_channels=10, out_channels=10, kernel_size=(3, 3), padding=0, bias=False),
            nn.ReLU()
        )  # output_size = 24
        self.convblock3 = nn.Sequential(
            nn.Conv2d(in_channels=10, out_channels=20, kernel_size=(3, 3), padding=0, bias=False),
            nn.ReLU()
        )  # output_size = 22

        # TRANSITION BLOCK 1
        self.pool1 = nn.MaxPool2d(2, 2)  # output_size = 11
        self.convblock4 = nn.Sequential(
            nn.Conv2d(in_channels=20, out_channels=10, kernel_size=(1, 1), padding=0, bias=False),
            nn.ReLU()
        )  # output_size = 11

        # CONVOLUTION BLOCK 2
        self.convblock5 = nn.Sequential(
            nn.Conv2d(in_channels=10, out_channels=10, kernel_size=(3, 3), padding=0, bias=False),
            nn.ReLU()
        )  # output_size = 9
        self.convblock6 = nn.Sequential(
            nn.Conv2d(in_channels=10, out_channels=20, kernel_size=(3, 3), padding=0, bias=False),
            nn.ReLU()
        )  # output_size = 7

        # OUTPUT BLOCK
        self.convblock7 = nn.Sequential(
            nn.Conv2d(in_channels=20, out_channels=10, kernel_size=(1, 1), padding=0, bias=False),
            nn.ReLU()
        )  # output_size = 7
        self.convblock8 = nn.Sequential(
            nn.Conv2d(in_channels=10, out_channels=10, kernel_size=(7, 7), padding=0, bias=False),
            # nn.BatchNorm2d(10), NEVER
            # nn.ReLU() NEVER!
        )  # output_size = 1

    def forward(self, x):
        x = self.convblock1(x)
        x = self.convblock2(x)
        x = self.convblock3(x)
        x = self.pool1(x)
        x = self.convblock4(x)
        x = self.convblock5(x)
        x = self.convblock6(x)
        x = self.convblock7(x)
        x = self.convblock8(x)
        x = x.view(-1, 10)
        return F.log_softmax(x, dim=-1)
```
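The `output_size` comments in the model follow the usual convolution arithmetic: for input size n, kernel k, padding p and stride s, the output is ⌊(n + 2p − k)/s⌋ + 1. Here is a quick sketch reproducing the annotated sizes:

```python
def conv_out(n, k, p=0, s=1):
    """Spatial output size of a conv (or pooling) layer."""
    return (n + 2 * p - k) // s + 1

n = 28
for k in (3, 3, 3):      # convblock1..3: 28 -> 26 -> 24 -> 22
    n = conv_out(n, k)
n = conv_out(n, 2, s=2)  # maxpool 2x2: 22 -> 11
n = conv_out(n, 1)       # 1x1 conv keeps the size: 11
for k in (3, 3):         # convblock5..6: 11 -> 9 -> 7
    n = conv_out(n, k)
n = conv_out(n, 1)       # 1x1 conv: 7
n = conv_out(n, 7)       # final 7x7 conv: 7 -> 1
print(n)  # 1
```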
# Model Params
Can't emphasize enough how important viewing the model summary is.
Unfortunately, there is no built-in model visualizer, so we have to take external help.
```
!pip install torchsummary
from torchsummary import summary
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
print(device)
model = Net().to(device)
summary(model, input_size=(1, 28, 28))
```
# Training and Testing
Looking at logs can be boring, so we'll introduce **tqdm** progressbar to get cooler logs.
Let's write train and test functions
```
from tqdm import tqdm
train_losses = []
test_losses = []
train_acc = []
test_acc = []
def train(model, device, train_loader, optimizer, epoch):
    model.train()
    pbar = tqdm(train_loader)
    correct = 0
    processed = 0
    for batch_idx, (data, target) in enumerate(pbar):
        # get samples
        data, target = data.to(device), target.to(device)

        # Init
        optimizer.zero_grad()
        # In PyTorch, we need to set the gradients to zero before starting backpropagation
        # because PyTorch accumulates the gradients on subsequent backward passes.
        # Because of this, when you start your training loop, you should zero out the
        # gradients so that the parameter update is done correctly.

        # Predict
        y_pred = model(data)

        # Calculate loss
        loss = F.nll_loss(y_pred, target)
        train_losses.append(loss)

        # Backpropagation
        loss.backward()
        optimizer.step()

        # Update pbar-tqdm
        pred = y_pred.argmax(dim=1, keepdim=True)  # get the index of the max log-probability
        correct += pred.eq(target.view_as(pred)).sum().item()
        processed += len(data)

        pbar.set_description(desc=f'Loss={loss.item()} Batch_id={batch_idx} Accuracy={100*correct/processed:0.2f}')
        train_acc.append(100*correct/processed)

def test(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss
            pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)
    test_losses.append(test_loss)

    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))

    test_acc.append(100. * correct / len(test_loader.dataset))
```
# Let's Train and test our model
```
model = Net().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
EPOCHS = 20
for epoch in range(EPOCHS):
    print("EPOCH:", epoch)
    train(model, device, train_loader, optimizer, epoch)
    test(model, device, test_loader)
fig, axs = plt.subplots(2,2,figsize=(15,10))
axs[0, 0].plot(train_losses)
axs[0, 0].set_title("Training Loss")
axs[1, 0].plot(train_acc)
axs[1, 0].set_title("Training Accuracy")
axs[0, 1].plot(test_losses)
axs[0, 1].set_title("Test Loss")
axs[1, 1].plot(test_acc)
axs[1, 1].set_title("Test Accuracy")
```
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import torch
import random
device = 'cuda' if torch.cuda.is_available() else 'cpu'
import os,sys
opj = os.path.join
from tqdm import tqdm
import acd
from copy import deepcopy
import torchvision.utils as vutils
import models
from visualize import *
from data import *
sys.path.append('../trim')
from transforms_torch import transform_bandpass, tensor_t_augment, batch_fftshift2d, batch_ifftshift2d
from trim import *
from util import *
from attributions import *
from captum.attr import *
from functools import partial
import warnings
warnings.filterwarnings("ignore")
data_path = './cosmo'
```
# load dataset and model
```
# params
img_size = 256
class_num = 1
# cosmo dataset
transformer = transforms.Compose([ToTensor()])
mnu_dataset = MassMapsDataset(opj(data_path, 'cosmological_parameters.txt'),
                              opj(data_path, 'z1_256'),
                              transform=transformer)
# dataloader
data_loader = torch.utils.data.DataLoader(mnu_dataset, batch_size=32, shuffle=False, num_workers=4)
# load model
model = models.load_model(model_name='resnet18', device=device, inplace=False, data_path=data_path).to(device)
model = model.eval()
# freeze layers
for param in model.parameters():
    param.requires_grad = False
```
# Optimize over mask
```
# test image
X = next(iter(data_loader))['image'][0:1].to(device)
X.requires_grad = True
# output
with torch.no_grad():
    output = model(X).flatten()[1]
class Mask(nn.Module):
    def __init__(self, img_size=256):
        super(Mask, self).__init__()
        self.mask = nn.Parameter(torch.ones(img_size, img_size))
        # self.mask = nn.Parameter(torch.clamp(abs(torch.randn(img_size, img_size)), 0, 1))

    def forward(self, x):
        return torch.mul(self.mask, x)
# mask
mask = Mask().to(device)
# criterion
criterion = nn.MSELoss()
# l1-loss
l1loss = nn.L1Loss()
# Setup Adam optimizer
optimizer = optim.Adam(mask.parameters(), lr=0.05)
# Training Loop
# Lists to keep track of progress
losses = []
num_epochs = 1000
lamb_l1 = 5.0
print("Starting Training Loop...")
# For each epoch
for epoch in range(num_epochs):
    im_mask = mask(X)
    output_ = model(im_mask).flatten()[1]
    # loss: maximise the model output while the L1 term drives the mask to zero
    loss = -output_ + lamb_l1 * l1loss(mask.mask, torch.zeros_like(mask.mask))
    # zero grad
    optimizer.zero_grad()
    # backward
    loss.backward()
    # update the mask
    optimizer.step()
    # projection back into [0, 1]
    mask.mask.data = torch.clamp(mask.mask.data, 0, 1)
    # output training stats
    print('\rTrain Epoch: {}/{}'.format(epoch, num_epochs), end='')
    # save losses for plotting later
    losses.append(loss.item())
plt.plot(losses)
plt.show()
cshow(mask(X).data.cpu().squeeze())
cshow(X.data.cpu().squeeze())
```
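The loop above performs projected gradient descent: it maximises the model output, penalises mask density with an L1 term, and clamps the mask back into [0, 1] after every step. A 1-D NumPy toy version of the same objective (the vector `w` stands in for the frozen model and is purely illustrative):

```python
import numpy as np

# Toy mask objective: maximise w.(m*x) minus an L1 penalty on m,
# projecting m back into [0, 1] after every step (as torch.clamp does above).
x = np.array([3.0, -1.0, 0.5])
w = np.array([1.0, 2.0, -1.0])   # stand-in for the frozen model
m = np.ones_like(x)
lr, lamb = 0.1, 0.5
for _ in range(200):
    grad = -w * x + lamb * np.sign(m)     # d/dm of (-w.(m*x) + lamb*||m||_1)
    m = np.clip(m - lr * grad, 0.0, 1.0)  # projection step
print(m)  # only the coordinate that helps the objective survives: [1. 0. 0.]
```

The same three ingredients appear in the notebook loop: a negated model output, an L1 term weighted by `lamb_l1`, and a clamp back into [0, 1].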
# Autoencoder
```
from keras.layers import Input, Dense
from keras.models import Model
import matplotlib.pyplot as plt
import matplotlib.colors as mcol
from matplotlib import cm
import networkx as nx  # used by graph_colors below
def graph_colors(nx_graph):
    # map node attribute values 0-9 to colours from the 'Set1' colormap
    cnorm = mcol.Normalize(vmin=0, vmax=9)
    cpick = cm.ScalarMappable(norm=cnorm, cmap='Set1')
    cpick.set_array([])
    val_map = {}
    for k, v in nx.get_node_attributes(nx_graph, 'attr').items():
        val_map[k] = cpick.to_rgba(v)
    colors = []
    for node in nx_graph.nodes():
        colors.append(val_map[node])
    return colors
```
##### 1. Write a function that builds a simple autoencoder
The autoencoder must have a single Dense layer with ReLU activation. The number of nodes in the dense layer is a parameter of the function.
The function must return the entire autoencoder model as well as the encoder and the decoder.
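One possible shape for such a function, as a sketch only (it assumes flattened 28×28 inputs, so `input_dim=784`; the activations, optimizer, and loss are illustrative choices, not prescribed by the exercise):

```python
from keras.layers import Input, Dense
from keras.models import Model

def build_autoencoder(encoding_dim, input_dim=784):
    # encoder: input -> Dense(encoding_dim) embedding with ReLU
    inp = Input(shape=(input_dim,))
    encoded = Dense(encoding_dim, activation='relu')(inp)
    # decoder: embedding -> Dense(input_dim) reconstruction
    decoded = Dense(input_dim, activation='sigmoid')(encoded)
    autoencoder = Model(inp, decoded)
    encoder = Model(inp, encoded)
    # standalone decoder that reuses the trained decoding layer
    emb = Input(shape=(encoding_dim,))
    decoder = Model(emb, autoencoder.layers[-1](emb))
    autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
    return autoencoder, encoder, decoder
```

`encoder` and `decoder` share their layers with `autoencoder`, so fitting the full model also trains both halves.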
##### Load the MNIST dataset
##### 2. Build the autoencoder with an embedding size of 32 and print the number of parameters of the model. What do they relate to?
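For question 2 the count can be checked by hand: each Dense layer contributes a weight matrix plus a bias vector. A quick arithmetic sketch (assuming 784-dimensional flattened MNIST inputs):

```python
# Dense(32) encoder and Dense(784) decoder on 784-dim inputs:
input_dim, encoding_dim = 784, 32
encoder_params = input_dim * encoding_dim + encoding_dim   # weights + biases
decoder_params = encoding_dim * input_dim + input_dim      # weights + biases
print(encoder_params, decoder_params, encoder_params + decoder_params)
# 25120 25872 50992
```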
##### 3. Fit the autoencoder for 32 epochs with a batch size of 256
##### 4. Using the history attribute of the autoencoder, write a function that plots the learning curves over the epochs on the train and test sets. What can you say about these learning curves? Also give the final loss on the test set
##### 5. Write a function that plots a fixed number of examples of the original test images as well as their reconstructions
### Nearest neighbours graphs
The goal of this part is to visualize the neighbours graph in the embedding: the graph of the k-nearest neighbours of the embedded elements, using the Euclidean distance.
```
from sklearn.neighbors import kneighbors_graph
import networkx as nx
def plot_nearest_neighbour_graph(encoder, x_test, y_test, ntest=100, p=3):  # to explain
    X = encoder.predict(x_test[1:ntest])
    y = y_test[1:ntest]
    A = kneighbors_graph(X, p, mode='connectivity', include_self=True)
    G = nx.from_numpy_array(A.toarray())
    nx.set_node_attributes(G, dict(zip(range(ntest), y)), 'attr')
    fig, ax = plt.subplots(figsize=(10, 10))
    pos = nx.layout.kamada_kawai_layout(G)
    nx.draw(G, pos=pos,
            with_labels=True,
            labels=nx.get_node_attributes(G, 'attr'),
            node_color=graph_colors(G))
    plt.tight_layout()
    plt.title('Nearest Neighbours Graph', fontsize=15)
    plt.show()
```
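A minimal check of how `kneighbors_graph` behaves inside the function above, on three hypothetical 1-D points:

```python
import numpy as np
import networkx as nx
from sklearn.neighbors import kneighbors_graph

# Three points on a line; p=2 with include_self=True means each point
# keeps itself plus its single nearest neighbour.
X = np.array([[0.0], [1.0], [10.0]])
A = kneighbors_graph(X, 2, mode='connectivity', include_self=True)
G = nx.from_numpy_array(A.toarray())
# 0 and 1 are mutual neighbours; 2's nearest other point is 1, so the
# inter-point edges are exactly (0, 1) and (1, 2).
print(sorted(e for e in G.edges() if e[0] != e[1]))
```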
### Reduce the dimension of the embedding
##### 6. Rerun the previous example using an embedding dimension of 16
## Adding sparsity
##### 7. In this part we will add sparsity over the weights of the embedding layer. Write a function that builds such an autoencoder (using an l1 regularization with a configurable regularization parameter, and the same architecture as before)
# Deep autoencoder
# Convolutional autoencoder
# Application to denoising
```
import os
import json
import matplotlib.pyplot as plt
import math
data_folder = "../../data/convergence_tests"
def load_summary(path):
    # `path` is already a full path; the callers below join the directory themselves
    with open(path) as file:
        return json.load(file)

def load_summaries(directory):
    summary_files = [file for file in os.listdir(directory) if file.endswith(".json")]
    summaries = [load_summary(os.path.join(directory, filename)) for filename in summary_files]
    return summaries
# Draw line depicting convergence rate p
def draw_convergence_line(ax, *, p, x0, y0, label_xoffset):
    # Choose an arbitrary value of x1 in order to determine some point y1
    # which we can give as input to axline
    x1 = 1
    X0 = math.log10(x0)
    Y0 = math.log10(y0)
    X1 = math.log10(x1)
    C = Y0 - p * X0
    Y1 = C + p * X1
    y1 = 10 ** Y1
    ax.axline((x0, y0), (x1, y1), color="Gray", linestyle="--")
    ax.annotate("$O(h^{})$".format(p), (x0 + label_xoffset, y0))
def error_field_from_type(error_type):
    if error_type == 'L2':
        return 'L2_errors'
    elif error_type == 'H1_seminorm':
        return 'H1_seminorm_errors'
    else:
        raise ValueError("Unknown error type")

def label_for_error_type(error_type):
    if error_type == 'L2':
        return '$L^2$ error $|| u - u_h ||_{L^2}$'
    elif error_type == 'H1_seminorm':
        return '$H^1$ seminorm error $| u - u_h |_{H^1}$'
    else:
        raise ValueError("Unknown error type")

def title_for_error_type(error_type):
    if error_type == 'L2':
        return '$L^2$ error'
    elif error_type == 'H1_seminorm':
        return '$H^1$ seminorm error'
    else:
        raise ValueError("Unknown error type")
def prepare_convergence_plot(summaries, error_type='L2'):
    fig = plt.figure(figsize=(8, 6), dpi=200)
    ax = fig.gca()
    for summary in summaries:
        x = summary['resolutions']
        y = summary[error_field_from_type(error_type)]
        ax.loglog(x, y, label=summary['element_name'], marker='o')
    ax.legend(loc='lower right')
    ax.set_xlabel("h")
    ax.set_ylabel(label_for_error_type(error_type))
    ax.set_title(title_for_error_type(error_type))
    ax.grid()
    return fig
```
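The log-log construction in `draw_convergence_line` is equivalent to the closed form y1 = y0 · (x1/x0)^p; a quick sanity check of that identity:

```python
import math

def point_on_rate_line(p, x0, y0, x1):
    # same computation as draw_convergence_line, collapsed into one line:
    # log10(y1) = log10(y0) + p * (log10(x1) - log10(x0))
    return 10 ** (math.log10(y0) + p * (math.log10(x1) - math.log10(x0)))

# On an O(h^2) reference line, halving h divides the error by 4:
print(point_on_rate_line(2, 0.25, 1e-2, 0.125))  # 0.0025 (up to float rounding)
```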
# 2D convergence plots
```
def summary_key_2d(summary):
    element = summary['element_name']
    if element == "Tri3":
        return 0
    elif element == "Quad4":
        return 1
    elif element == "Tri6":
        return 2
    elif element == "Quad9":
        return 3
    else:
        return 10
data_folder_2d = os.path.join(data_folder, "poisson_2d_mms")
# Sort summaries according to order in plot
summaries_2d = sorted(load_summaries(data_folder_2d), key = summary_key_2d)
# L2 error
fig = prepare_convergence_plot(summaries_2d, error_type='L2')
draw_convergence_line(fig.gca(), p=3, x0=0.25, y0=1e-3, label_xoffset = 0.1)
draw_convergence_line(fig.gca(), p=2, x0=0.25, y0=2e-1, label_xoffset = -0.1)
plt.show(fig)
# H1 seminorm errors
fig = prepare_convergence_plot(summaries_2d, error_type='H1_seminorm')
draw_convergence_line(fig.gca(), p=1, x0=0.25, y0=1.25, label_xoffset = -0.1)
draw_convergence_line(fig.gca(), p=2, x0=0.25, y0=3e-2, label_xoffset = 0.1)
plt.show(fig)
```
# 3D convergence plots
```
def summary_key_3d(summary):
    element = summary['element_name']
    if element == 'Tet4':
        return 1
    elif element == 'Hex8':
        return 2
    elif element == 'Tet10':
        return 3
    elif element == 'Hex20':
        return 4
    elif element == 'Hex27':
        return 5
    # sort unknown elements last, matching summary_key_2d
    return 10
data_folder_3d = os.path.join(data_folder, "poisson_3d_mms")
# Sort summaries according to order in plot
summaries_3d = sorted(load_summaries(data_folder_3d), key = summary_key_3d)
fig = prepare_convergence_plot(summaries_3d, error_type='L2')
draw_convergence_line(fig.gca(), p=3, x0=0.25, y0=1e-3, label_xoffset = 0.1)
draw_convergence_line(fig.gca(), p=2, x0=0.25, y0=4e-2, label_xoffset = -0.06)
plt.show(fig)
fig = prepare_convergence_plot(summaries_3d, error_type='H1_seminorm')
draw_convergence_line(fig.gca(), p=1, x0=0.25, y0=1.25, label_xoffset = -0.1)
draw_convergence_line(fig.gca(), p=2, x0=0.25, y0=3e-2, label_xoffset = 0.1)
plt.show(fig)
```
# GPU
```
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
print(gpu_info)
```
# CFG
```
CONFIG_NAME = 'config06.yml'
TITLE = '06t-efficientnet_b4_ns-512'
! git clone https://github.com/raijin0704/cassava.git
# ====================================================
# CFG
# ====================================================
import yaml
CONFIG_PATH = f'./cassava/config/{CONFIG_NAME}'
with open(CONFIG_PATH) as f:
    config = yaml.safe_load(f)  # safe_load: yaml.load without a Loader is deprecated
INFO = config['info']
TAG = config['tag']
CFG = config['cfg']
CFG['train'] = True
CFG['inference'] = False
# CFG['debug'] = True
if CFG['debug']:
    CFG['epochs'] = 1
assert INFO['TITLE'] == TITLE
```
# Environment setup for Colab & Kaggle notebooks
## colab
```
def _colab_kaggle_authority():
    from googleapiclient.discovery import build
    import io, os
    from googleapiclient.http import MediaIoBaseDownload

    drive_service = build('drive', 'v3')
    results = drive_service.files().list(
        q="name = 'kaggle.json'", fields="files(id)").execute()
    kaggle_api_key = results.get('files', [])
    filename = "/root/.kaggle/kaggle.json"
    os.makedirs(os.path.dirname(filename), exist_ok=True)
    request = drive_service.files().get_media(fileId=kaggle_api_key[0]['id'])
    fh = io.FileIO(filename, 'wb')
    downloader = MediaIoBaseDownload(fh, request)
    done = False
    while done is False:
        status, done = downloader.next_chunk()
        print("Download %d%%." % int(status.progress() * 100))
    os.chmod(filename, 0o600)  # octal mode; the original passed decimal 600
def process_colab():
    import subprocess
    # mount Google Drive
    from google.colab import drive
    drive.mount('/content/drive')
    # Google Cloud credentials
    from google.colab import auth
    auth.authenticate_user()
    # Kaggle setup
    # _colab_kaggle_authority()
    # subprocess.run('pip install --upgrade --force-reinstall --no-deps kaggle'.split(' '))
    # libraries
    subprocess.run('pip install --upgrade opencv-python'.split(' '))
    subprocess.run('pip install --upgrade albumentations'.split(' '))
    subprocess.run('pip install timm'.split(' '))
    # path settings
    # DATA_PATH = '/content/drive/Shareddrives/便利用/kaggle/cassava/input/'
    # ! cp -r /content/drive/Shareddrives/便利用/kaggle/cassava/input /content/input
    DATA_PATH = '/content/input/'
    OUTPUT_DIR = './output/'
    NOTEBOOK_PATH = f'/content/drive/Shareddrives/便利用/kaggle/cassava/notebook/{TITLE}.ipynb'
    return DATA_PATH, OUTPUT_DIR, NOTEBOOK_PATH
```
## kaggle notebook
```
def _kaggle_gcp_authority():
    from kaggle_secrets import UserSecretsClient
    user_secrets = UserSecretsClient()
    user_credential = user_secrets.get_gcloud_credential()
    user_secrets.set_tensorflow_credential(user_credential)

def process_kaggle():
    # GCP setup
    _kaggle_gcp_authority()
    # path settings
    DATA_PATH = '../input/cassava-leaf-disease-classification/'
    OUTPUT_DIR = './'
    # NOTEBOOK_PATH = './__notebook__.ipynb'
    NOTEBOOK_PATH = f'/content/drive/Shareddrives/便利用/kaggle/cassava/notebook/{TITLE}.ipynb'
    # system path
    import sys
    sys.path.append('../input/pytorch-image-models/pytorch-image-models-master')
    return DATA_PATH, OUTPUT_DIR, NOTEBOOK_PATH
```
## Common
```
def process_common():
    # libraries
    import subprocess
    subprocess.run('pip install mlflow'.split(' '))
    # environment variables
    import os
    os.environ["GCLOUD_PROJECT"] = INFO['PROJECT_ID']

try:
    from google.colab import auth
except ImportError:
    DATA_PATH, OUTPUT_DIR, NOTEBOOK_PATH = process_kaggle()
else:
    DATA_PATH, OUTPUT_DIR, NOTEBOOK_PATH = process_colab()
finally:
    process_common()
!rm -r /content/input
import os
try:
    from google.colab import auth
except ImportError:
    pass
else:
    ! cp /content/drive/Shareddrives/便利用/kaggle/cassava/input.zip /content/input.zip
    ! unzip input.zip
    ! rm input.zip
train_num = len(os.listdir(DATA_PATH+"/train_images"))
assert train_num == 21397
```
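The cells above pick the environment by whether `google.colab` is importable. The same try/except/else pattern in isolation (function and return values are illustrative names):

```python
def detect_env():
    # Colab is the only environment here where google.colab imports cleanly;
    # an ImportError therefore implies Kaggle (or a local machine).
    try:
        import google.colab  # noqa: F401
    except ImportError:
        return 'kaggle-or-local'
    else:
        return 'colab'

print(detect_env())
```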
# Library
```
# ====================================================
# Library
# ====================================================
import os
import datetime
import math
import time
import random
import shutil
from pathlib import Path
from contextlib import contextmanager
from collections import defaultdict, Counter
import scipy as sp
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
from sklearn import preprocessing
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold
from tqdm.auto import tqdm
from functools import partial
import cv2
from PIL import Image
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.optim import Adam, SGD
import torchvision.models as models
from torch.nn.parameter import Parameter
from torch.utils.data import DataLoader, Dataset
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts, CosineAnnealingLR, ReduceLROnPlateau
from albumentations import (
Compose, OneOf, Normalize, Resize, RandomResizedCrop, RandomCrop, HorizontalFlip, VerticalFlip,
RandomBrightness, RandomContrast, RandomBrightnessContrast, Rotate, ShiftScaleRotate, Cutout,
IAAAdditiveGaussianNoise, Transpose
)
from albumentations.pytorch import ToTensorV2
from albumentations import ImageOnlyTransform
import timm
import mlflow
import warnings
warnings.filterwarnings('ignore')
if CFG['apex']:
    from apex import amp
if CFG['debug']:
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
else:
    device = torch.device('cuda')
start_time = datetime.datetime.now()
start_time_str = start_time.strftime('%m%d%H%M')
```
# Directory settings
```
# ====================================================
# Directory settings
# ====================================================
if os.path.exists(OUTPUT_DIR):
    shutil.rmtree(OUTPUT_DIR)
if not os.path.exists(OUTPUT_DIR):
    os.makedirs(OUTPUT_DIR)
```
# save basic files
```
# with open(f'{OUTPUT_DIR}/{start_time_str}_TAG.json', 'w') as f:
# json.dump(TAG, f, indent=4)
# with open(f'{OUTPUT_DIR}/{start_time_str}_CFG.json', 'w') as f:
# json.dump(CFG, f, indent=4)
import shutil
notebook_path = f'{OUTPUT_DIR}/{start_time_str}_{TITLE}.ipynb'
shutil.copy2(NOTEBOOK_PATH, notebook_path)
```
# Data Loading
```
train = pd.read_csv(f'{DATA_PATH}/train.csv')
test = pd.read_csv(f'{DATA_PATH}/sample_submission.csv')
label_map = pd.read_json(f'{DATA_PATH}/label_num_to_disease_map.json',
                         orient='index')
if CFG['debug']:
    train = train.sample(n=1000, random_state=CFG['seed']).reset_index(drop=True)
```
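With `orient='index'`, the top-level JSON keys become the DataFrame index. A toy version of the label-map load (the disease names here are placeholders, not the real competition labels):

```python
import io
import pandas as pd

js = '{"0": "disease_a", "1": "disease_b"}'
label_map = pd.read_json(io.StringIO(js), orient='index')
# keys "0"/"1" are converted to an integer index; values land in column 0
print(label_map.loc[0, 0])
```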
# Utils
```
# ====================================================
# Utils
# ====================================================
def get_score(y_true, y_pred):
    return accuracy_score(y_true, y_pred)

@contextmanager
def timer(name):
    t0 = time.time()
    LOGGER.info(f'[{name}] start')
    yield
    LOGGER.info(f'[{name}] done in {time.time() - t0:.0f} s.')

def init_logger(log_file=OUTPUT_DIR+'train.log'):
    from logging import getLogger, INFO, FileHandler, Formatter, StreamHandler
    logger = getLogger(__name__)
    logger.setLevel(INFO)
    handler1 = StreamHandler()
    handler1.setFormatter(Formatter("%(message)s"))
    handler2 = FileHandler(filename=log_file)
    handler2.setFormatter(Formatter("%(message)s"))
    logger.addHandler(handler1)
    logger.addHandler(handler2)
    return logger

logger_path = OUTPUT_DIR+f'{start_time_str}_train.log'
LOGGER = init_logger(logger_path)

def seed_torch(seed=42):
    random.seed(seed)
    os.environ['PYTHONHASHSEED'] = str(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.backends.cudnn.deterministic = True

seed_torch(seed=CFG['seed'])
class EarlyStopping:
    """Early stops the training if validation loss doesn't improve after a given patience."""
    def __init__(self, patience=7, verbose=False, save_path='checkpoint.pt',
                 counter=0, best_score=None, save_latest_path=None):
        """
        Args:
            patience (int): How long to wait after last time validation loss improved.
                            Default: 7
            verbose (bool): If True, prints a message for each validation loss improvement.
                            Default: False
            save_path (str): Path for saving a model.
                            Default: 'checkpoint.pt'
        """
        self.patience = patience
        self.verbose = verbose
        self.save_path = save_path
        self.counter = counter
        self.best_score = best_score
        self.save_latest_path = save_latest_path
        self.early_stop = False
        self.val_loss_min = np.Inf

    def __call__(self, val_loss, model, preds, epoch):
        score = -val_loss
        if self.best_score is None:
            self.best_score = score
            self.save_checkpoint(val_loss, model, preds, epoch)
            if self.save_latest_path is not None:
                self.save_latest(val_loss, model, preds, epoch, score)
        elif score >= self.best_score:
            self.counter = 0
            self.best_score = score
            self.save_checkpoint(val_loss, model, preds, epoch)
            if self.save_latest_path is not None:
                self.save_latest(val_loss, model, preds, epoch, score)
        # stop training if the score becomes NaN
        elif math.isnan(score):
            self.early_stop = True
        else:
            self.counter += 1
            if self.save_latest_path is not None:
                self.save_latest(val_loss, model, preds, epoch, score)
            if self.verbose:
                print(f'EarlyStopping counter: {self.counter} out of {self.patience}')
            if self.counter >= self.patience:
                self.early_stop = True

    def save_checkpoint(self, val_loss, model, preds, epoch):
        '''Saves model when validation loss decrease.'''
        if self.verbose:
            print(f'Validation loss decreased ({self.val_loss_min:.10f} --> {val_loss:.10f}). Saving model ...')
        torch.save({'model': model.state_dict(), 'preds': preds,
                    'epoch': epoch, 'best_score': self.best_score, 'counter': self.counter},
                   self.save_path)
        self.val_loss_min = val_loss

    def save_latest(self, val_loss, model, preds, epoch, score):
        '''Saves latest model.'''
        torch.save({'model': model.state_dict(), 'preds': preds,
                    'epoch': epoch, 'score': score, 'counter': self.counter},
                   self.save_latest_path)
        self.val_loss_min = val_loss
```
# CV split
```
folds = train.copy()
Fold = StratifiedKFold(n_splits=CFG['n_fold'], shuffle=True, random_state=CFG['seed'])
for n, (train_index, val_index) in enumerate(Fold.split(folds, folds[CFG['target_col']])):
    folds.loc[val_index, 'fold'] = int(n)
folds['fold'] = folds['fold'].astype(int)
print(folds.groupby(['fold', CFG['target_col']]).size())
```
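`StratifiedKFold` keeps the label distribution of every fold close to that of the full data; a small check with a 2:1 class mix (toy labels, not the cassava data):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

y = np.array([0] * 8 + [1] * 4)            # 2:1 mix of the two classes
skf = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)
for _, val_idx in skf.split(np.zeros(len(y)), y):
    # every validation fold holds 2 zeros and 1 one
    print(np.bincount(y[val_idx]))
```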
# Dataset
```
# ====================================================
# Dataset
# ====================================================
class TrainDataset(Dataset):
    def __init__(self, df, transform=None):
        self.df = df
        self.file_names = df['image_id'].values
        self.labels = df['label'].values
        self.transform = transform

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        file_name = self.file_names[idx]
        file_path = f'{DATA_PATH}/train_images/{file_name}'
        image = cv2.imread(file_path)
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        if self.transform:
            augmented = self.transform(image=image)
            image = augmented['image']
        label = torch.tensor(self.labels[idx]).long()
        return image, label

class TestDataset(Dataset):
    def __init__(self, df, transform=None):
        self.df = df
        self.file_names = df['image_id'].values
        self.transform = transform

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        file_name = self.file_names[idx]
        file_path = f'{DATA_PATH}/test_images/{file_name}'
        image = cv2.imread(file_path)
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        if self.transform:
            augmented = self.transform(image=image)
            image = augmented['image']
        return image
# train_dataset = TrainDataset(train, transform=None)
# for i in range(1):
# image, label = train_dataset[i]
# plt.imshow(image)
# plt.title(f'label: {label}')
# plt.show()
```
# Transforms
```
def _get_augmentations(aug_list):
    process = []
    for aug in aug_list:
        if aug == 'Resize':
            process.append(Resize(CFG['size'], CFG['size']))
        elif aug == 'RandomResizedCrop':
            process.append(RandomResizedCrop(CFG['size'], CFG['size']))
        elif aug == 'Transpose':
            process.append(Transpose(p=0.5))
        elif aug == 'HorizontalFlip':
            process.append(HorizontalFlip(p=0.5))
        elif aug == 'VerticalFlip':
            process.append(VerticalFlip(p=0.5))
        elif aug == 'ShiftScaleRotate':
            process.append(ShiftScaleRotate(p=0.5))
        elif aug == 'Normalize':
            process.append(Normalize(
                mean=[0.485, 0.456, 0.406],
                std=[0.229, 0.224, 0.225],
            ))
        else:
            raise ValueError(f'{aug} is not suitable')
    process.append(ToTensorV2())
    return process

# ====================================================
# Transforms
# ====================================================
def get_transforms(*, data):
    if data == 'train':
        return Compose(
            _get_augmentations(TAG['augmentation'])
        )
    elif data == 'valid':
        return Compose(
            _get_augmentations(['Resize', 'Normalize'])
        )
# train_dataset = TrainDataset(train, transform=get_transforms(data='train'))
# for i in range(1):
# image, label = train_dataset[i]
# plt.imshow(image[0])
# plt.title(f'label: {label}')
# plt.show()
```
# Bi-tempered logistic loss
```
def log_t(u, t):
    """Compute log_t for `u'."""
    if t == 1.0:
        return u.log()
    else:
        return (u.pow(1.0 - t) - 1.0) / (1.0 - t)

def exp_t(u, t):
    """Compute exp_t for `u'."""
    if t == 1:
        return u.exp()
    else:
        return (1.0 + (1.0 - t) * u).relu().pow(1.0 / (1.0 - t))
def compute_normalization_fixed_point(activations, t, num_iters):
    """Returns the normalization value for each example (t > 1.0).
    Args:
      activations: A multi-dimensional tensor with last dimension `num_classes`.
      t: Temperature 2 (> 1.0 for tail heaviness).
      num_iters: Number of iterations to run the method.
    Return: A tensor of same shape as activation with the last dimension being 1.
    """
    mu, _ = torch.max(activations, -1, keepdim=True)
    normalized_activations_step_0 = activations - mu
    normalized_activations = normalized_activations_step_0
    for _ in range(num_iters):
        logt_partition = torch.sum(
            exp_t(normalized_activations, t), -1, keepdim=True)
        normalized_activations = normalized_activations_step_0 * \
            logt_partition.pow(1.0 - t)
    logt_partition = torch.sum(
        exp_t(normalized_activations, t), -1, keepdim=True)
    normalization_constants = - log_t(1.0 / logt_partition, t) + mu
    return normalization_constants
def compute_normalization_binary_search(activations, t, num_iters):
    """Returns the normalization value for each example (t < 1.0).
    Args:
      activations: A multi-dimensional tensor with last dimension `num_classes`.
      t: Temperature 2 (< 1.0 for finite support).
      num_iters: Number of iterations to run the method.
    Return: A tensor of same rank as activation with the last dimension being 1.
    """
    mu, _ = torch.max(activations, -1, keepdim=True)
    normalized_activations = activations - mu
    effective_dim = \
        torch.sum(
            (normalized_activations > -1.0 / (1.0 - t)).to(torch.int32),
            dim=-1, keepdim=True).to(activations.dtype)
    shape_partition = activations.shape[:-1] + (1,)
    lower = torch.zeros(shape_partition, dtype=activations.dtype, device=activations.device)
    upper = -log_t(1.0 / effective_dim, t) * torch.ones_like(lower)
    for _ in range(num_iters):
        logt_partition = (upper + lower) / 2.0
        sum_probs = torch.sum(
            exp_t(normalized_activations - logt_partition, t),
            dim=-1, keepdim=True)
        update = (sum_probs < 1.0).to(activations.dtype)
        lower = torch.reshape(
            lower * update + (1.0 - update) * logt_partition,
            shape_partition)
        upper = torch.reshape(
            upper * (1.0 - update) + update * logt_partition,
            shape_partition)
    logt_partition = (upper + lower) / 2.0
    return logt_partition + mu
class ComputeNormalization(torch.autograd.Function):
    """
    Class implementing custom backward pass for compute_normalization. See compute_normalization.
    """
    @staticmethod
    def forward(ctx, activations, t, num_iters):
        if t < 1.0:
            normalization_constants = compute_normalization_binary_search(activations, t, num_iters)
        else:
            normalization_constants = compute_normalization_fixed_point(activations, t, num_iters)
        ctx.save_for_backward(activations, normalization_constants)
        ctx.t = t
        return normalization_constants

    @staticmethod
    def backward(ctx, grad_output):
        activations, normalization_constants = ctx.saved_tensors
        t = ctx.t
        normalized_activations = activations - normalization_constants
        probabilities = exp_t(normalized_activations, t)
        escorts = probabilities.pow(t)
        escorts = escorts / escorts.sum(dim=-1, keepdim=True)
        grad_input = escorts * grad_output
        return grad_input, None, None
def compute_normalization(activations, t, num_iters=5):
    """Returns the normalization value for each example.
    Backward pass is implemented.
    Args:
      activations: A multi-dimensional tensor with last dimension `num_classes`.
      t: Temperature 2 (> 1.0 for tail heaviness, < 1.0 for finite support).
      num_iters: Number of iterations to run the method.
    Return: A tensor of same rank as activation with the last dimension being 1.
    """
    return ComputeNormalization.apply(activations, t, num_iters)

def tempered_sigmoid(activations, t, num_iters=5):
    """Tempered sigmoid function.
    Args:
      activations: Activations for the positive class for binary classification.
      t: Temperature tensor > 0.0.
      num_iters: Number of iterations to run the method.
    Returns:
      A probabilities tensor.
    """
    internal_activations = torch.stack([activations,
                                        torch.zeros_like(activations)],
                                       dim=-1)
    internal_probabilities = tempered_softmax(internal_activations, t, num_iters)
    return internal_probabilities[..., 0]

def tempered_softmax(activations, t, num_iters=5):
    """Tempered softmax function.
    Args:
      activations: A multi-dimensional tensor with last dimension `num_classes`.
      t: Temperature > 1.0.
      num_iters: Number of iterations to run the method.
    Returns:
      A probabilities tensor.
    """
    if t == 1.0:
        return activations.softmax(dim=-1)
    normalization_constants = compute_normalization(activations, t, num_iters)
    return exp_t(activations - normalization_constants, t)
def bi_tempered_binary_logistic_loss(activations,
                                     labels,
                                     t1,
                                     t2,
                                     label_smoothing=0.0,
                                     num_iters=5,
                                     reduction='mean'):
    """Bi-Tempered binary logistic loss.
    Args:
      activations: A tensor containing activations for class 1.
      labels: A tensor with shape as activations, containing probabilities for class 1
      t1: Temperature 1 (< 1.0 for boundedness).
      t2: Temperature 2 (> 1.0 for tail heaviness, < 1.0 for finite support).
      label_smoothing: Label smoothing
      num_iters: Number of iterations to run the method.
    Returns:
      A loss tensor.
    """
    internal_activations = torch.stack([activations,
                                        torch.zeros_like(activations)],
                                       dim=-1)
    internal_labels = torch.stack([labels.to(activations.dtype),
                                   1.0 - labels.to(activations.dtype)],
                                  dim=-1)
    return bi_tempered_logistic_loss(internal_activations,
                                     internal_labels,
                                     t1,
                                     t2,
                                     label_smoothing=label_smoothing,
                                     num_iters=num_iters,
                                     reduction=reduction)
def bi_tempered_logistic_loss(activations,
                              labels,
                              t1,
                              t2,
                              label_smoothing=0.0,
                              num_iters=5,
                              reduction='mean'):
    """Bi-Tempered Logistic Loss.
    Args:
      activations: A multi-dimensional tensor with last dimension `num_classes`.
      labels: A tensor with shape and dtype as activations (onehot),
        or a long tensor of one dimension less than activations (pytorch standard)
      t1: Temperature 1 (< 1.0 for boundedness).
      t2: Temperature 2 (> 1.0 for tail heaviness, < 1.0 for finite support).
      label_smoothing: Label smoothing parameter between [0, 1). Default 0.0.
      num_iters: Number of iterations to run the method. Default 5.
      reduction: ``'none'`` | ``'mean'`` | ``'sum'``. Default ``'mean'``.
        ``'none'``: No reduction is applied, return shape is shape of
          activations without the last dimension.
        ``'mean'``: Loss is averaged over minibatch. Return shape (1,)
        ``'sum'``: Loss is summed over minibatch. Return shape (1,)
    Returns:
      A loss tensor.
    """
    if len(labels.shape) < len(activations.shape):  # not one-hot
        labels_onehot = torch.zeros_like(activations)
        labels_onehot.scatter_(1, labels[..., None], 1)
    else:
        labels_onehot = labels
    if label_smoothing > 0:
        num_classes = labels_onehot.shape[-1]
        labels_onehot = (1 - label_smoothing * num_classes / (num_classes - 1)) \
            * labels_onehot + \
            label_smoothing / (num_classes - 1)
    probabilities = tempered_softmax(activations, t2, num_iters)
    loss_values = labels_onehot * log_t(labels_onehot + 1e-10, t1) \
        - labels_onehot * log_t(probabilities, t1) \
        - labels_onehot.pow(2.0 - t1) / (2.0 - t1) \
        + probabilities.pow(2.0 - t1) / (2.0 - t1)
    loss_values = loss_values.sum(dim=-1)  # sum over classes
    if reduction == 'none':
        return loss_values
    if reduction == 'sum':
        return loss_values.sum()
    if reduction == 'mean':
        return loss_values.mean()
```
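A scalar sanity check of the tempered pair defined above: for t ≠ 1, `exp_t` inverts `log_t`. These are plain-float re-implementations of the torch versions, for illustration only:

```python
def log_t(u, t):
    # tempered logarithm; recovers math.log(u) in the limit t -> 1
    return (u ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(u, t):
    # tempered exponential, clamped at zero like the .relu() in the torch code
    return max(0.0, 1.0 + (1.0 - t) * u) ** (1.0 / (1.0 - t))

print(exp_t(log_t(2.0, 0.7), 0.7))  # 2.0 up to float rounding
```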
# MODEL
```
# ====================================================
# MODEL
# ====================================================
class CustomModel(nn.Module):
    def __init__(self, model_name, pretrained=False):
        super().__init__()
        self.model = timm.create_model(model_name, pretrained=pretrained)
        if hasattr(self.model, 'classifier'):
            n_features = self.model.classifier.in_features
            self.model.classifier = nn.Linear(n_features, CFG['target_size'])
        elif hasattr(self.model, 'fc'):
            n_features = self.model.fc.in_features
            self.model.fc = nn.Linear(n_features, CFG['target_size'])

    def forward(self, x):
        x = self.model(x)
        return x
model = CustomModel(model_name=TAG['model_name'], pretrained=False)
train_dataset = TrainDataset(train, transform=get_transforms(data='train'))
train_loader = DataLoader(train_dataset, batch_size=4, shuffle=True,
                          num_workers=4, pin_memory=True, drop_last=True)

for image, label in train_loader:
    output = model(image)
    print(output)
    break
```
# Helper functions
```
# ====================================================
# Helper functions
# ====================================================
class AverageMeter(object):
    """Computes and stores the average and current value"""
    def __init__(self):
        self.reset()

    def reset(self):
        self.val = 0
        self.avg = 0
        self.sum = 0
        self.count = 0

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n
        self.count += n
        self.avg = self.sum / self.count

def asMinutes(s):
    m = math.floor(s / 60)
    s -= m * 60
    return '%dm %ds' % (m, s)

def timeSince(since, percent):
    now = time.time()
    s = now - since
    es = s / (percent)
    rs = es - s
    return '%s (remain %s)' % (asMinutes(s), asMinutes(rs))
# ====================================================
# loss
# ====================================================
def get_loss(criterion, y_preds, labels):
    if TAG['criterion'] == 'CrossEntropyLoss':
        loss = criterion(y_preds, labels)
    elif TAG['criterion'] == 'bi_tempered_logistic_loss':
        loss = criterion(y_preds, labels, t1=CFG['bi_tempered_loss_t1'], t2=CFG['bi_tempered_loss_t2'])
    return loss
# ====================================================
# Helper functions
# ====================================================
def train_fn(train_loader, model, criterion, optimizer, epoch, scheduler, device):
    batch_time = AverageMeter()
    data_time = AverageMeter()
    losses = AverageMeter()
    scores = AverageMeter()
    # switch to train mode
    model.train()
    start = end = time.time()
    global_step = 0
    for step, (images, labels) in enumerate(train_loader):
        # measure data loading time
        data_time.update(time.time() - end)
        images = images.to(device)
        labels = labels.to(device)
        batch_size = labels.size(0)
        y_preds = model(images)
        loss = get_loss(criterion, y_preds, labels)
        # record loss
        losses.update(loss.item(), batch_size)
        if CFG['gradient_accumulation_steps'] > 1:
            loss = loss / CFG['gradient_accumulation_steps']
        if CFG['apex']:
            with amp.scale_loss(loss, optimizer) as scaled_loss:
                scaled_loss.backward()
        else:
            loss.backward()
        grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), CFG['max_grad_norm'])
        if (step + 1) % CFG['gradient_accumulation_steps'] == 0:
            optimizer.step()
            optimizer.zero_grad()
            global_step += 1
        # measure elapsed time
        batch_time.update(time.time() - end)
        end = time.time()
        if step % CFG['print_freq'] == 0 or step == (len(train_loader)-1):
            print('Epoch: [{0}][{1}/{2}] '
                  'Data {data_time.val:.3f} ({data_time.avg:.3f}) '
                  'Elapsed {remain:s} '
                  'Loss: {loss.val:.4f}({loss.avg:.4f}) '
                  'Grad: {grad_norm:.4f} '
                  #'LR: {lr:.6f} '
                  .format(
                      epoch+1, step, len(train_loader), batch_time=batch_time,
                      data_time=data_time, loss=losses,
                      remain=timeSince(start, float(step+1)/len(train_loader)),
                      grad_norm=grad_norm,
                      #lr=scheduler.get_lr()[0],
                  ))
    return losses.avg
def valid_fn(valid_loader, model, criterion, device):
batch_time = AverageMeter()
data_time = AverageMeter()
losses = AverageMeter()
scores = AverageMeter()
# switch to evaluation mode
model.eval()
preds = []
start = end = time.time()
for step, (images, labels) in enumerate(valid_loader):
# measure data loading time
data_time.update(time.time() - end)
images = images.to(device)
labels = labels.to(device)
batch_size = labels.size(0)
# compute loss
with torch.no_grad():
y_preds = model(images)
loss = get_loss(criterion, y_preds, labels)
losses.update(loss.item(), batch_size)
# record accuracy
preds.append(y_preds.softmax(1).to('cpu').numpy())
if CFG['gradient_accumulation_steps'] > 1:
loss = loss / CFG['gradient_accumulation_steps']
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
if step % CFG['print_freq'] == 0 or step == (len(valid_loader)-1):
print('EVAL: [{0}/{1}] '
'Data {data_time.val:.3f} ({data_time.avg:.3f}) '
'Elapsed {remain:s} '
'Loss: {loss.val:.4f}({loss.avg:.4f}) '
.format(
step, len(valid_loader), batch_time=batch_time,
data_time=data_time, loss=losses,
remain=timeSince(start, float(step+1)/len(valid_loader)),
))
predictions = np.concatenate(preds)
return losses.avg, predictions
def inference(model, states, test_loader, device):
model.to(device)
tk0 = tqdm(enumerate(test_loader), total=len(test_loader))
probs = []
    for i, images in tk0:
images = images.to(device)
avg_preds = []
for state in states:
# model.load_state_dict(state['model'])
model.load_state_dict(state)
model.eval()
with torch.no_grad():
y_preds = model(images)
avg_preds.append(y_preds.softmax(1).to('cpu').numpy())
avg_preds = np.mean(avg_preds, axis=0)
probs.append(avg_preds)
probs = np.concatenate(probs)
return probs
```
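The gradient-accumulation pattern in `train_fn` above divides each micro-batch loss by `CFG['gradient_accumulation_steps']` and steps the optimizer only when `(step + 1)` is a multiple of that count. The stepping schedule can be sketched framework-free (function name hypothetical):

```python
def optimizer_step_indices(num_batches, accum_steps):
    """Micro-batch indices at which the optimizer steps, mirroring
    the (step + 1) % gradient_accumulation_steps == 0 check in train_fn."""
    return [step for step in range(num_batches) if (step + 1) % accum_steps == 0]

# With 8 micro-batches and accumulation over 4, the optimizer steps twice,
# giving an effective batch size of 4x the loader's batch size.
print(optimizer_step_indices(8, 4))  # [3, 7]
```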
# Train loop
```
# ====================================================
# scheduler
# ====================================================
def get_scheduler(optimizer):
if TAG['scheduler']=='ReduceLROnPlateau':
scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=CFG['factor'], patience=CFG['patience'], verbose=True, eps=CFG['eps'])
elif TAG['scheduler']=='CosineAnnealingLR':
scheduler = CosineAnnealingLR(optimizer, T_max=CFG['T_max'], eta_min=CFG['min_lr'], last_epoch=-1)
elif TAG['scheduler']=='CosineAnnealingWarmRestarts':
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=CFG['T_0'], T_mult=1, eta_min=CFG['min_lr'], last_epoch=-1)
return scheduler
# ====================================================
# criterion
# ====================================================
def get_criterion():
if TAG['criterion']=='CrossEntropyLoss':
criterion = nn.CrossEntropyLoss()
elif TAG['criterion'] == 'bi_tempered_logistic_loss':
criterion = bi_tempered_logistic_loss
return criterion
# ====================================================
# Train loop
# ====================================================
def train_loop(folds, fold):
LOGGER.info(f"========== fold: {fold} training ==========")
if not CFG['debug']:
mlflow.set_tag('running.fold', str(fold))
# ====================================================
# loader
# ====================================================
trn_idx = folds[folds['fold'] != fold].index
val_idx = folds[folds['fold'] == fold].index
train_folds = folds.loc[trn_idx].reset_index(drop=True)
valid_folds = folds.loc[val_idx].reset_index(drop=True)
train_dataset = TrainDataset(train_folds,
transform=get_transforms(data='train'))
valid_dataset = TrainDataset(valid_folds,
transform=get_transforms(data='valid'))
train_loader = DataLoader(train_dataset,
batch_size=CFG['batch_size'],
shuffle=True,
num_workers=CFG['num_workers'], pin_memory=True, drop_last=True)
valid_loader = DataLoader(valid_dataset,
batch_size=CFG['batch_size'],
shuffle=False,
num_workers=CFG['num_workers'], pin_memory=True, drop_last=False)
# ====================================================
# model & optimizer & criterion
# ====================================================
best_model_path = OUTPUT_DIR+f'{TAG["model_name"]}_fold{fold}_best.pth'
latest_model_path = OUTPUT_DIR+f'{TAG["model_name"]}_fold{fold}_latest.pth'
model = CustomModel(TAG['model_name'], pretrained=True)
model.to(device)
    # load saved weights from a previous run, if training was interrupted
if os.path.isfile(latest_model_path):
state_latest = torch.load(latest_model_path)
state_best = torch.load(best_model_path)
model.load_state_dict(state_latest['model'])
epoch_start = state_latest['epoch']+1
# er_best_score = state_latest['score']
er_counter = state_latest['counter']
er_best_score = state_best['best_score']
LOGGER.info(f'Retrain model in epoch:{epoch_start}, best_score:{er_best_score:.3f}, counter:{er_counter}')
    # # for runs that saved only the best checkpoint (remove later)
# elif os.path.isfile(best_model_path):
# state_best = torch.load(best_model_path)
# model.load_state_dict(state_best['model'])
# epoch_start = state_best['epoch']+1
# # er_best_score = state_latest['score']
# er_counter = state_best['counter']
# er_best_score = state_best['best_score']
# LOGGER.info(f'Retrain model in epoch:{epoch_start}, best_score:{er_best_score:.3f}, counter:{er_counter}')
else:
epoch_start = 0
er_best_score = None
er_counter = 0
optimizer = Adam(model.parameters(), lr=CFG['lr'], weight_decay=CFG['weight_decay'], amsgrad=False)
scheduler = get_scheduler(optimizer)
criterion = get_criterion()
# ====================================================
# apex
# ====================================================
if CFG['apex']:
model, optimizer = amp.initialize(model, optimizer, opt_level='O1', verbosity=0)
# ====================================================
# loop
# ====================================================
# best_score = 0.
# best_loss = np.inf
early_stopping = EarlyStopping(
patience=CFG['early_stopping_round'],
verbose=True,
save_path=best_model_path,
counter=er_counter, best_score=er_best_score,
save_latest_path=latest_model_path)
for epoch in range(epoch_start, CFG['epochs']):
start_time = time.time()
# train
avg_loss = train_fn(train_loader, model, criterion, optimizer, epoch, scheduler, device)
# eval
avg_val_loss, preds = valid_fn(valid_loader, model, criterion, device)
valid_labels = valid_folds[CFG['target_col']].values
# early stopping
early_stopping(avg_val_loss, model, preds, epoch)
if early_stopping.early_stop:
print(f'Epoch {epoch+1} - early stopping')
break
if isinstance(scheduler, ReduceLROnPlateau):
scheduler.step(avg_val_loss)
elif isinstance(scheduler, CosineAnnealingLR):
scheduler.step()
elif isinstance(scheduler, CosineAnnealingWarmRestarts):
scheduler.step()
# scoring
score = get_score(valid_labels, preds.argmax(1))
elapsed = time.time() - start_time
LOGGER.info(f'Epoch {epoch+1} - avg_train_loss: {avg_loss:.4f} avg_val_loss: {avg_val_loss:.4f} time: {elapsed:.0f}s')
LOGGER.info(f'Epoch {epoch+1} - Accuracy: {score}')
# log mlflow
if not CFG['debug']:
mlflow.log_metric(f"fold{fold} avg_train_loss", avg_loss, step=epoch)
mlflow.log_metric(f"fold{fold} avg_valid_loss", avg_val_loss, step=epoch)
mlflow.log_metric(f"fold{fold} score", score, step=epoch)
mlflow.log_metric(f"fold{fold} lr", scheduler.get_last_lr()[0], step=epoch)
mlflow.log_artifact(best_model_path)
if os.path.isfile(latest_model_path):
mlflow.log_artifact(latest_model_path)
    check_point = torch.load(best_model_path)
valid_folds[[str(c) for c in range(5)]] = check_point['preds']
valid_folds['preds'] = check_point['preds'].argmax(1)
return valid_folds
# ====================================================
# main
# ====================================================
def get_result(result_df):
preds = result_df['preds'].values
labels = result_df[CFG['target_col']].values
score = get_score(labels, preds)
LOGGER.info(f'Score: {score:<.5f}')
return score
def main():
"""
Prepare: 1.train 2.test 3.submission 4.folds
"""
if CFG['train']:
# train
oof_df = pd.DataFrame()
for fold in range(CFG['n_fold']):
if fold in CFG['trn_fold']:
_oof_df = train_loop(folds, fold)
oof_df = pd.concat([oof_df, _oof_df])
LOGGER.info(f"========== fold: {fold} result ==========")
_ = get_result(_oof_df)
# CV result
LOGGER.info(f"========== CV ==========")
score = get_result(oof_df)
# save result
oof_df.to_csv(OUTPUT_DIR+'oof_df.csv', index=False)
# log mlflow
if not CFG['debug']:
mlflow.log_metric('oof score', score)
mlflow.delete_tag('running.fold')
mlflow.log_artifact(OUTPUT_DIR+'oof_df.csv')
if CFG['inference']:
# inference
model = CustomModel(TAG['model_name'], pretrained=False)
states = [torch.load(OUTPUT_DIR+f'{TAG["model_name"]}_fold{fold}_best.pth') for fold in CFG['trn_fold']]
test_dataset = TestDataset(test, transform=get_transforms(data='valid'))
test_loader = DataLoader(test_dataset, batch_size=CFG['batch_size'], shuffle=False,
num_workers=CFG['num_workers'], pin_memory=True)
predictions = inference(model, states, test_loader, device)
# submission
test['label'] = predictions.argmax(1)
test[['image_id', 'label']].to_csv(OUTPUT_DIR+'submission.csv', index=False)
```
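The `EarlyStopping` class used in `train_loop` above is not defined in this excerpt. Below is a minimal sketch of the interface the loop relies on (`counter`, `best_score`, an `early_stop` flag, and one call per epoch); checkpoint saving is omitted, and every detail beyond those names is an assumption:

```python
class EarlyStopping:
    """Stop when validation loss fails to improve for `patience` consecutive epochs."""
    def __init__(self, patience=5, counter=0, best_score=None):
        self.patience = patience
        self.counter = counter          # epochs since the last improvement
        self.best_score = best_score    # higher is better; we track -val_loss
        self.early_stop = False

    def __call__(self, val_loss, model=None, preds=None, epoch=None):
        score = -val_loss
        if self.best_score is None or score > self.best_score:
            self.best_score = score     # improvement: a real implementation saves a checkpoint here
            self.counter = 0
        else:
            self.counter += 1
            if self.counter >= self.patience:
                self.early_stop = True
```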
# rerun
```
def _load_save_point(run_id):
    # find the fold where the previous run was interrupted
stop_fold = int(mlflow.get_run(run_id=run_id).to_dictionary()['data']['tags']['running.fold'])
    # restrict training to the remaining folds
CFG['trn_fold'] = [fold for fold in CFG['trn_fold'] if fold>=stop_fold]
    # download any saved .pth model files (including in-progress checkpoints)
client = mlflow.tracking.MlflowClient()
artifacts = [artifact for artifact in client.list_artifacts(run_id) if ".pth" in artifact.path]
for artifact in artifacts:
client.download_artifacts(run_id, artifact.path, OUTPUT_DIR)
def check_have_run():
results = mlflow.search_runs(INFO['EXPERIMENT_ID'])
run_id_list = results[results['tags.mlflow.runName']==TITLE]['run_id'].tolist()
    # first run: no previous run_id
if len(run_id_list) == 0:
run_id = None
    # a previous run exists: resume from its save point
else:
assert len(run_id_list)==1
run_id = run_id_list[0]
_load_save_point(run_id)
return run_id
if __name__ == '__main__':
if CFG['debug']:
main()
else:
mlflow.set_tracking_uri(INFO['TRACKING_URI'])
mlflow.set_experiment('single model')
        # resume from the previous run if one exists
run_id = check_have_run()
with mlflow.start_run(run_id=run_id, run_name=TITLE):
if run_id is None:
mlflow.log_artifact(CONFIG_PATH)
mlflow.log_param('device', device)
mlflow.set_tags(TAG)
mlflow.log_params(CFG)
mlflow.log_artifact(notebook_path)
main()
mlflow.log_artifacts(OUTPUT_DIR)
shutil.copytree(OUTPUT_DIR, f'{INFO["SHARE_DRIVE_PATH"]}/{TITLE}')
shutil.copy2(CONFIG_PATH, f'{INFO["SHARE_DRIVE_PATH"]}/{TITLE}/{CONFIG_NAME}')
```
# From Modeling to Evaluation
## Introduction
In this lab, we will continue learning about the data science methodology, and focus on the **Modeling** and **Evaluation** stages.
------------
## Table of Contents
1. [Recap](#0)<br>
2. [Data Modeling](#2)<br>
3. [Model Evaluation](#4)<br>
# Recap <a id="0"></a>
In Lab **From Understanding to Preparation**, we explored the data and prepared it for modeling.
The data was compiled by a researcher named Yong-Yeol Ahn, who scraped tens of thousands of food recipes (cuisines and ingredients) from three different websites, namely:
<img src="https://ibm.box.com/shared/static/4fruwan7wmjov3gywiz3swlojw0srv54.png" width=500>
www.allrecipes.com
<img src="https://ibm.box.com/shared/static/cebfdbr22fjxa47lltp0bs533r103g0z.png" width=500>
www.epicurious.com
<img src="https://ibm.box.com/shared/static/epk727njg7xrz49pbkpkzd05cm5ywqmu.png" width=500>
www.menupan.com
For more information on Yong-Yeol Ahn and his research, you can read his paper on [Flavor Network and the Principles of Food Pairing](http://yongyeol.com/papers/ahn-flavornet-2011.pdf).
<strong> Important note:</strong> Please note that you are not expected to know how to program in Python. This lab is meant to illustrate the stages of modeling and evaluation of the data science methodology, so it is totally fine if you do not understand the individual lines of code. We have a full course on programming in Python, <a href="http://cocl.us/PY0101EN_DS0103EN_LAB4_PYTHON_Coursera"><strong>Python for Data Science</strong></a>, which is also offered on Coursera. So make sure to complete the Python course if you are interested in learning how to program in Python.
### Using this notebook:
To run any of the following code cells, press **Shift + Enter** to execute the code in a cell.
Download the library and dependencies that we will need to run this lab.
```
import pandas as pd # import library to read data into dataframe
pd.set_option("display.max_columns", None)
import numpy as np # import numpy library
import re # import library for regular expression
import random # library for random number generation
```
We already placed the data on an IBM server for your convenience, so let's download it from the server and read it into a dataframe called **recipes**.
```
recipes = pd.read_csv("https://ibm.box.com/shared/static/5wah9atr5o1akuuavl2z9tkjzdinr1lv.csv")
print("Data read into dataframe!") # takes about 30 seconds
```
We will repeat the preprocessing steps that we implemented in Lab **From Understanding to Preparation** in order to prepare the data for modeling. For more details on preparing the data, please refer to Lab **From Understanding to Preparation**.
```
# fix name of the column displaying the cuisine
column_names = recipes.columns.values
column_names[0] = "cuisine"
recipes.columns = column_names
# convert cuisine names to lower case
recipes["cuisine"] = recipes["cuisine"].str.lower()
# make the cuisine names consistent
recipes.loc[recipes["cuisine"] == "austria", "cuisine"] = "austrian"
recipes.loc[recipes["cuisine"] == "belgium", "cuisine"] = "belgian"
recipes.loc[recipes["cuisine"] == "china", "cuisine"] = "chinese"
recipes.loc[recipes["cuisine"] == "canada", "cuisine"] = "canadian"
recipes.loc[recipes["cuisine"] == "netherlands", "cuisine"] = "dutch"
recipes.loc[recipes["cuisine"] == "france", "cuisine"] = "french"
recipes.loc[recipes["cuisine"] == "germany", "cuisine"] = "german"
recipes.loc[recipes["cuisine"] == "india", "cuisine"] = "indian"
recipes.loc[recipes["cuisine"] == "indonesia", "cuisine"] = "indonesian"
recipes.loc[recipes["cuisine"] == "iran", "cuisine"] = "iranian"
recipes.loc[recipes["cuisine"] == "italy", "cuisine"] = "italian"
recipes.loc[recipes["cuisine"] == "japan", "cuisine"] = "japanese"
recipes.loc[recipes["cuisine"] == "israel", "cuisine"] = "jewish"
recipes.loc[recipes["cuisine"] == "korea", "cuisine"] = "korean"
recipes.loc[recipes["cuisine"] == "lebanon", "cuisine"] = "lebanese"
recipes.loc[recipes["cuisine"] == "malaysia", "cuisine"] = "malaysian"
recipes.loc[recipes["cuisine"] == "mexico", "cuisine"] = "mexican"
recipes.loc[recipes["cuisine"] == "pakistan", "cuisine"] = "pakistani"
recipes.loc[recipes["cuisine"] == "philippines", "cuisine"] = "philippine"
recipes.loc[recipes["cuisine"] == "scandinavia", "cuisine"] = "scandinavian"
recipes.loc[recipes["cuisine"] == "spain", "cuisine"] = "spanish_portuguese"
recipes.loc[recipes["cuisine"] == "portugal", "cuisine"] = "spanish_portuguese"
recipes.loc[recipes["cuisine"] == "switzerland", "cuisine"] = "swiss"
recipes.loc[recipes["cuisine"] == "thailand", "cuisine"] = "thai"
recipes.loc[recipes["cuisine"] == "turkey", "cuisine"] = "turkish"
recipes.loc[recipes["cuisine"] == "vietnam", "cuisine"] = "vietnamese"
recipes.loc[recipes["cuisine"] == "uk-and-ireland", "cuisine"] = "uk-and-irish"
recipes.loc[recipes["cuisine"] == "irish", "cuisine"] = "uk-and-irish"
# remove data for cuisines with < 50 recipes:
recipes_counts = recipes["cuisine"].value_counts()
cuisines_indices = recipes_counts > 50
cuisines_to_keep = list(np.array(recipes_counts.index.values)[np.array(cuisines_indices)])
recipes = recipes.loc[recipes["cuisine"].isin(cuisines_to_keep)]
# convert all Yes's to 1's and the No's to 0's
recipes = recipes.replace(to_replace="Yes", value=1)
recipes = recipes.replace(to_replace="No", value=0)
```
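The long run of `.loc` assignments above is a country-to-cuisine name normalization. The same mapping can live in a single dict; the helper below is a hypothetical sketch (only a few entries shown):

```python
country_to_cuisine = {
    "austria": "austrian", "belgium": "belgian", "china": "chinese",
    "france": "french", "india": "indian", "japan": "japanese",
    "mexico": "mexican", "thailand": "thai",  # ...and so on for the rest
}

def normalize_cuisine(name):
    """Lower-case the name and map a country name to its cuisine label."""
    name = name.lower()
    return country_to_cuisine.get(name, name)

print(normalize_cuisine("China"))   # chinese
print(normalize_cuisine("korean"))  # already a cuisine label, so unchanged
```

With pandas, the whole column can be converted at once with `recipes["cuisine"].str.lower().replace(country_to_cuisine)`.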
<hr>
# Data Modeling <a id="2"></a>
<img src="https://ibm.box.com/shared/static/d6fiatxvraho57fq3tfyblsf38fzi61f.png" width=500>
Download and install the additional libraries and dependencies needed to build decision trees.
```
# import decision trees scikit-learn libraries
%matplotlib inline
from sklearn import tree
from sklearn.metrics import accuracy_score, confusion_matrix
import matplotlib.pyplot as plt
!conda install python-graphviz --yes
import graphviz
from sklearn.tree import export_graphviz
import itertools
```
Check the data again!
```
recipes.head()
```
## [bamboo_tree] Only Asian and Indian Cuisines
Here, we create a decision tree for the recipes of just some of the Asian (Korean, Japanese, Chinese, Thai) and Indian cuisines. This is because a decision tree does not perform well when the data is heavily skewed towards one cuisine, in this case American. One option is to exclude the American cuisines from our analysis; another is to build decision trees for different subsets of the data. Let's go with the latter solution.
Let's build our decision tree using the data pertaining to the Asian and Indian cuisines and name our decision tree *bamboo_tree*.
```
# select subset of cuisines
asian_indian_recipes = recipes[recipes.cuisine.isin(["korean", "japanese", "chinese", "thai", "indian"])]
cuisines = asian_indian_recipes["cuisine"]
ingredients = asian_indian_recipes.iloc[:,1:]
bamboo_tree = tree.DecisionTreeClassifier(max_depth=3)
bamboo_tree.fit(ingredients, cuisines)
print("Decision tree model saved to bamboo_tree!")
```
Let's plot the decision tree and examine what it looks like.
```
export_graphviz(bamboo_tree,
feature_names=list(ingredients.columns.values),
out_file="bamboo_tree.dot",
class_names=np.unique(cuisines),
filled=True,
node_ids=True,
special_characters=True,
impurity=False,
label="all",
leaves_parallel=False)
with open("bamboo_tree.dot") as bamboo_tree_image:
bamboo_tree_graph = bamboo_tree_image.read()
graphviz.Source(bamboo_tree_graph)
```
The decision tree learned:
* If a recipe contains *cumin* and *fish* and **no** *yoghurt*, then it is most likely a **Thai** recipe.
* If a recipe contains *cumin* but **no** *fish* and **no** *soy_sauce*, then it is most likely an **Indian** recipe.
You can analyze the remaining branches of the tree to come up with similar rules for determining the cuisine of different recipes.
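Each tree path is a conjunction of ingredient tests, so the two rules above can be written directly as predicates over a recipe's ingredient set (the function name and fallback label are hypothetical):

```python
def classify_recipe(ingredients):
    """Apply the two rules read off the bamboo_tree above."""
    if "cumin" in ingredients and "fish" in ingredients and "yoghurt" not in ingredients:
        return "thai"
    if "cumin" in ingredients and "fish" not in ingredients and "soy_sauce" not in ingredients:
        return "indian"
    return "undetermined"  # branches of the tree not covered by these two rules

print(classify_recipe({"cumin", "fish", "rice"}))  # thai
```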
Feel free to select another subset of cuisines and build a decision tree of their recipes. You can select some European cuisines and build a decision tree to explore the ingredients that differentiate them.
# Model Evaluation <a id="4"></a>
<img src="https://ibm.box.com/shared/static/prc3kksci2a6deks36jpyf4cf4oxh74a.png" width=500>
To evaluate our model of Asian and Indian cuisines, we will split our dataset into a training set and a test set. We will build the decision tree using the training set. Then, we will test the model on the test set and compare the cuisines that the model predicts to the actual cuisines.
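The split described above, holding out a fixed number of recipes per cuisine, can be sketched with the standard library (function name and seed are illustrative):

```python
import random
from collections import defaultdict

def per_class_holdout(labels, n, seed=1234):
    """Randomly pick n indices per label for the test set; the rest are training."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for i, label in enumerate(labels):
        by_label[label].append(i)
    test_idx = set()
    for indices in by_label.values():
        test_idx.update(rng.sample(indices, n))
    train_idx = [i for i in range(len(labels)) if i not in test_idx]
    return sorted(test_idx), train_idx

# hold out 2 per class from 5 "a" and 5 "b" examples
test_idx, train_idx = per_class_holdout(["a"] * 5 + ["b"] * 5, 2)
```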
Let's first create a new dataframe using only the data pertaining to the Asian and the Indian cuisines, and let's call the new dataframe **bamboo**.
```
bamboo = recipes[recipes.cuisine.isin(["korean", "japanese", "chinese", "thai", "indian"])]
```
Let's see how many recipes exist for each cuisine.
```
bamboo["cuisine"].value_counts()
```
Let's remove 30 recipes from each cuisine to use as the test set, and let's name this test set **bamboo_test**.
```
# set sample size
sample_n = 30
```
Create a dataframe containing 30 recipes from each cuisine, selected randomly.
```
# take 30 recipes from each cuisine
np.random.seed(1234) # set NumPy's seed: pandas' .sample draws from NumPy's RNG, not Python's random module
bamboo_test = bamboo.groupby("cuisine", group_keys=False).apply(lambda x: x.sample(sample_n))
bamboo_test_ingredients = bamboo_test.iloc[:,1:] # ingredients
bamboo_test_cuisines = bamboo_test["cuisine"] # corresponding cuisines or labels
```
Check that there are 30 recipes for each cuisine.
```
# check that we have 30 recipes from each cuisine
bamboo_test["cuisine"].value_counts()
```
Next, let's create the training set by removing the test set from the **bamboo** dataset, and let's call the training set **bamboo_train**.
```
bamboo_test_index = bamboo.index.isin(bamboo_test.index)
bamboo_train = bamboo[~bamboo_test_index]
bamboo_train_ingredients = bamboo_train.iloc[:,1:] # ingredients
bamboo_train_cuisines = bamboo_train["cuisine"] # corresponding cuisines or labels
```
Check that there are 30 _fewer_ recipes now for each cuisine.
```
bamboo_train["cuisine"].value_counts()
```
Let's build the decision tree using the training set, **bamboo_train**, and name the generated tree **bamboo_train_tree** for prediction.
```
bamboo_train_tree = tree.DecisionTreeClassifier(max_depth=15)
bamboo_train_tree.fit(bamboo_train_ingredients, bamboo_train_cuisines)
print("Decision tree model saved to bamboo_train_tree!")
```
Let's plot the decision tree and explore it.
```
export_graphviz(bamboo_train_tree,
feature_names=list(bamboo_train_ingredients.columns.values),
out_file="bamboo_train_tree.dot",
class_names=np.unique(bamboo_train_cuisines),
filled=True,
node_ids=True,
special_characters=True,
impurity=False,
label="all",
leaves_parallel=False)
with open("bamboo_train_tree.dot") as bamboo_train_tree_image:
bamboo_train_tree_graph = bamboo_train_tree_image.read()
graphviz.Source(bamboo_train_tree_graph)
```
Since we defined this tree to be deeper (max_depth=15), many more decision nodes are generated.
#### Now let's test our model on the test data.
```
bamboo_pred_cuisines = bamboo_train_tree.predict(bamboo_test_ingredients)
```
To quantify how well the decision tree determines the cuisine of each recipe, we will create a confusion matrix, which summarizes how many recipes from each cuisine were correctly classified. It also sheds light on which cuisines are being confused with which others.
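Underneath the plot, a confusion matrix is just a tally of (true, predicted) pairs, which the standard library can compute directly (the labels here are illustrative):

```python
from collections import Counter

def confusion_counts(true_labels, pred_labels):
    """Count each (true, predicted) pair; these counts fill the matrix cells."""
    return Counter(zip(true_labels, pred_labels))

counts = confusion_counts(["thai", "thai", "indian"], ["thai", "korean", "indian"])
print(counts[("thai", "korean")])  # 1: one Thai recipe misclassified as Korean
```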
So let's go ahead and create the confusion matrix for how well the decision tree is able to correctly classify the recipes in **bamboo_test**.
```
test_cuisines = np.unique(bamboo_test_cuisines)
bamboo_confusion_matrix = confusion_matrix(bamboo_test_cuisines, bamboo_pred_cuisines, labels=test_cuisines)
title = 'Bamboo Confusion Matrix'
cmap = plt.cm.Blues
plt.figure(figsize=(8, 6))
bamboo_confusion_matrix = (
bamboo_confusion_matrix.astype('float') / bamboo_confusion_matrix.sum(axis=1)[:, np.newaxis]
) * 100
plt.imshow(bamboo_confusion_matrix, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(test_cuisines))
plt.xticks(tick_marks, test_cuisines)
plt.yticks(tick_marks, test_cuisines)
fmt = '.2f'
thresh = bamboo_confusion_matrix.max() / 2.
for i, j in itertools.product(range(bamboo_confusion_matrix.shape[0]), range(bamboo_confusion_matrix.shape[1])):
plt.text(j, i, format(bamboo_confusion_matrix[i, j], fmt),
horizontalalignment="center",
color="white" if bamboo_confusion_matrix[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
```
After running the above code, you should get a confusion matrix similar to the following:
<img src="https://ibm.box.com/shared/static/69f5m7txv2u6g47867qe0eypnfylrj4w.png" width=500>
The rows represent the actual cuisines from the dataset and the columns represent the predicted ones. Each row should sum to 100%. According to this confusion matrix, we make the following observations:
* Using the first row in the confusion matrix, 60% of the **Chinese** recipes in **bamboo_test** were correctly classified by our decision tree whereas 37% of the **Chinese** recipes were misclassified as **Korean** and 3% were misclassified as **Indian**.
* Using the Indian row, 77% of the **Indian** recipes in **bamboo_test** were correctly classified by our decision tree and 3% of the **Indian** recipes were misclassified as **Chinese** and 13% were misclassified as **Korean** and 7% were misclassified as **Thai**.
**Please note** that because the test set is drawn by random sampling, you may not get exactly the same results every time you create and evaluate the decision tree, even with the same data. The performance should still be comparable, though, so don't worry if the numbers in your confusion matrix are slightly different from the ones shown above.
Using the reference confusion matrix, how many **Japanese** recipes were correctly classified by our decision tree?
Double-click __here__ for the solution.
<!-- The correct answer is:
36.67%.
-->
Also using the reference confusion matrix, how many **Korean** recipes were misclassified as **Japanese**?
Double-click __here__ for the solution.
<!-- The correct answer is:
3.33%.
-->
What cuisine has the least number of recipes correctly classified by the decision tree using the reference confusion matrix?
Double-click __here__ for the solution.
<!-- The correct answer is:
Japanese cuisine, with 36.67% only.
-->
<br>
<hr>
### Thank you for completing this lab!
This notebook was created by [Alex Aklson](https://www.linkedin.com/in/aklson/). We hope you found this lab session interesting. Feel free to contact us if you have any questions!
This notebook is part of a course on **Coursera** called *Data Science Methodology*. If you accessed this notebook outside the course, you can take this course, online by clicking [here](https://cocl.us/DS0103EN_Coursera_LAB4).
<hr>
Copyright © 2018 [Cognitive Class](https://cognitiveclass.ai/?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
<a href="https://colab.research.google.com/github/lmoroney/dlaicourse/blob/master/TensorFlow%20In%20Practice/Course%203%20-%20NLP/Course%203%20-%20Week%202%20-%20Exercise%20-%20Question.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import csv
import tensorflow as tf
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/bbc-text.csv \
-O /tmp/bbc-text.csv
vocab_size = # YOUR CODE HERE
embedding_dim = # YOUR CODE HERE
max_length = # YOUR CODE HERE
trunc_type = # YOUR CODE HERE
padding_type = # YOUR CODE HERE
oov_tok = # YOUR CODE HERE
training_portion = .8
sentences = []
labels = []
stopwords = [ "a", "about", "above", "after", "again", "against", "all", "am", "an", "and", "any", "are", "as", "at", "be", "because", "been", "before", "being", "below", "between", "both", "but", "by", "could", "did", "do", "does", "doing", "down", "during", "each", "few", "for", "from", "further", "had", "has", "have", "having", "he", "he'd", "he'll", "he's", "her", "here", "here's", "hers", "herself", "him", "himself", "his", "how", "how's", "i", "i'd", "i'll", "i'm", "i've", "if", "in", "into", "is", "it", "it's", "its", "itself", "let's", "me", "more", "most", "my", "myself", "nor", "of", "on", "once", "only", "or", "other", "ought", "our", "ours", "ourselves", "out", "over", "own", "same", "she", "she'd", "she'll", "she's", "should", "so", "some", "such", "than", "that", "that's", "the", "their", "theirs", "them", "themselves", "then", "there", "there's", "these", "they", "they'd", "they'll", "they're", "they've", "this", "those", "through", "to", "too", "under", "until", "up", "very", "was", "we", "we'd", "we'll", "we're", "we've", "were", "what", "what's", "when", "when's", "where", "where's", "which", "while", "who", "who's", "whom", "why", "why's", "with", "would", "you", "you'd", "you'll", "you're", "you've", "your", "yours", "yourself", "yourselves" ]
print(len(stopwords))
# Expected Output
# 153
with open("/tmp/bbc-text.csv", 'r') as csvfile:
# YOUR CODE HERE
print(len(labels))
print(len(sentences))
print(sentences[0])
# Expected Output
# 2225
# 2225
# tv future hands viewers home theatre systems plasma high-definition tvs digital video recorders moving living room way people watch tv will radically different five years time. according expert panel gathered annual consumer electronics show las vegas discuss new technologies will impact one favourite pastimes. us leading trend programmes content will delivered viewers via home networks cable satellite telecoms companies broadband service providers front rooms portable devices. one talked-about technologies ces digital personal video recorders (dvr pvr). set-top boxes like us s tivo uk s sky+ system allow people record store play pause forward wind tv programmes want. essentially technology allows much personalised tv. also built-in high-definition tv sets big business japan us slower take off europe lack high-definition programming. not can people forward wind adverts can also forget abiding network channel schedules putting together a-la-carte entertainment. us networks cable satellite companies worried means terms advertising revenues well brand identity viewer loyalty channels. although us leads technology moment also concern raised europe particularly growing uptake services like sky+. happens today will see nine months years time uk adam hume bbc broadcast s futurologist told bbc news website. likes bbc no issues lost advertising revenue yet. pressing issue moment commercial uk broadcasters brand loyalty important everyone. will talking content brands rather network brands said tim hanlon brand communications firm starcom mediavest. reality broadband connections anybody can producer content. added: challenge now hard promote programme much choice. means said stacey jolna senior vice president tv guide tv group way people find content want watch simplified tv viewers. means networks us terms channels take leaf google s book search engine future instead scheduler help people find want watch. 
kind channel model might work younger ipod generation used taking control gadgets play them. might not suit everyone panel recognised. older generations comfortable familiar schedules channel brands know getting. perhaps not want much choice put hands mr hanlon suggested. end kids just diapers pushing buttons already - everything possible available said mr hanlon. ultimately consumer will tell market want. 50 000 new gadgets technologies showcased ces many enhancing tv-watching experience. high-definition tv sets everywhere many new models lcd (liquid crystal display) tvs launched dvr capability built instead external boxes. one example launched show humax s 26-inch lcd tv 80-hour tivo dvr dvd recorder. one us s biggest satellite tv companies directtv even launched branded dvr show 100-hours recording capability instant replay search function. set can pause rewind tv 90 hours. microsoft chief bill gates announced pre-show keynote speech partnership tivo called tivotogo means people can play recorded programmes windows pcs mobile devices. reflect increasing trend freeing multimedia people can watch want want.
train_size = # YOUR CODE HERE
train_sentences = # YOUR CODE HERE
train_labels = # YOUR CODE HERE
validation_sentences = # YOUR CODE HERE
validation_labels = # YOUR CODE HERE
print(train_size)
print(len(train_sentences))
print(len(train_labels))
print(len(validation_sentences))
print(len(validation_labels))
# Expected output (if training_portion=.8)
# 1780
# 1780
# 1780
# 445
# 445
tokenizer = # YOUR CODE HERE
tokenizer.fit_on_texts(# YOUR CODE HERE)
word_index = # YOUR CODE HERE
train_sequences = # YOUR CODE HERE
train_padded = # YOUR CODE HERE
print(len(train_sequences[0]))
print(len(train_padded[0]))
print(len(train_sequences[1]))
print(len(train_padded[1]))
print(len(train_sequences[10]))
print(len(train_padded[10]))
# Expected Output
# 449
# 120
# 200
# 120
# 192
# 120
validation_sequences = # YOUR CODE HERE
validation_padded = # YOUR CODE HERE
print(len(validation_sequences))
print(validation_padded.shape)
# Expected output
# 445
# (445, 120)
label_tokenizer = # YOUR CODE HERE
label_tokenizer.fit_on_texts(# YOUR CODE HERE)
training_label_seq = # YOUR CODE HERE
validation_label_seq = # YOUR CODE HERE
print(training_label_seq[0])
print(training_label_seq[1])
print(training_label_seq[2])
print(training_label_seq.shape)
print(validation_label_seq[0])
print(validation_label_seq[1])
print(validation_label_seq[2])
print(validation_label_seq.shape)
# Expected output
# [4]
# [2]
# [1]
# (1780, 1)
# [5]
# [4]
# [3]
# (445, 1)
model = tf.keras.Sequential([
# YOUR CODE HERE
])
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
# Expected Output
# Layer (type)                 Output Shape              Param #
# =================================================================
# embedding (Embedding)        (None, 120, 16)           16000
# _________________________________________________________________
# global_average_pooling1d (Gl (None, 16)                0
# _________________________________________________________________
# dense (Dense)                (None, 24)                408
# _________________________________________________________________
# dense_1 (Dense)              (None, 6)                 150
# =================================================================
# Total params: 16,558
# Trainable params: 16,558
# Non-trainable params: 0
num_epochs = 30
history = model.fit(# YOUR CODE HERE)
import matplotlib.pyplot as plt
def plot_graphs(history, string):
    plt.plot(history.history[string])
    plt.plot(history.history['val_'+string])
    plt.xlabel("Epochs")
    plt.ylabel(string)
    plt.legend([string, 'val_'+string])
    plt.show()
plot_graphs(history, "acc")
plot_graphs(history, "loss")
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_sentence(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])
e = model.layers[0]
weights = e.get_weights()[0]
print(weights.shape) # shape: (vocab_size, embedding_dim)
# Expected output
# (1000, 16)
import io
out_v = io.open('vecs.tsv', 'w', encoding='utf-8')
out_m = io.open('meta.tsv', 'w', encoding='utf-8')
for word_num in range(1, vocab_size):
    word = reverse_word_index[word_num]
    embeddings = weights[word_num]
    out_m.write(word + "\n")
    out_v.write('\t'.join([str(x) for x in embeddings]) + "\n")
out_v.close()
out_m.close()
try:
    from google.colab import files
except ImportError:
    pass
else:
    files.download('vecs.tsv')
    files.download('meta.tsv')
```
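One way the split placeholders at the top of this notebook could be filled in. This is a sketch, not the official exercise solution; `sentences`, `labels`, and `training_portion` stand in for the variables defined in the earlier (unseen) cells, so dummy data is used here.

```python
# Dummy stand-ins for the exercise's `sentences`, `labels`, `training_portion`.
sentences = [f"sample text {i}" for i in range(10)]
labels = ["tech", "sport"] * 5
training_portion = 0.8

# The first `training_portion` of the data becomes the training set,
# the remainder the validation set.
train_size = int(len(sentences) * training_portion)

train_sentences = sentences[:train_size]
train_labels = labels[:train_size]
validation_sentences = sentences[train_size:]
validation_labels = labels[train_size:]

print(train_size)                 # 8
print(len(train_sentences))       # 8
print(len(validation_sentences))  # 2
```

With the real dataset of 2225 sentences and `training_portion=.8`, the same slicing yields the expected 1780/445 split shown above.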
# A modest proposal for dataset
```
from IPython.display import Image
from IPython.core.display import HTML
Image(url= "https://falexwolf.de/img/scanpy/anndata.svg")
```
## Imports
```
import pinot
import numpy as np
import pandas as pd
```
## Munging and flattening stuff.
```
ds = pinot.data.moonshot_mixed()
# flatten cs and ys
for idx, d in enumerate(ds.ds):
    d['cs'] = []
    d['ys'] = []
    if isinstance(d['cs_multiple'], list):
        d['cs'].extend(d['cs_multiple'])
        d['ys'].extend(d['ys_multiple'])
    d['cs'].extend(d['cs_single'])
    d['ys'].extend(d['ys_single'])
    del d['cs_multiple']
    del d['ys_multiple']
    del d['cs_single']
    del d['ys_single']
# gather cs and ys into dictionary
cs = [d['cs'] for d in ds]
ys = [d['ys'] for d in ds]
ys_dicts = [dict(zip(d['cs'], d['ys'])) for d in ds]
```
Preparing to create combined dataframe.
```
# first, get all concentrations
unique = lambda array: set(x for l in array for x in l)
concentrations = sorted(list(unique(cs)))
# for each concentration, check if it is in cs
datalist = []
for idx, ys_dict in enumerate(ys_dicts):
    # create row of dataframe
    row_y = []
    for concentration in concentrations:
        if concentration in ys_dict:
            # if it is, add it to the row
            row_y.append(ys_dict[concentration])
        else:
            # else, add a dummy value
            row_y.append(np.nan)
    # append row to dataframe
    datalist.append(row_y)
```
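As an aside, the NaN-filling loop above can also be expressed directly in pandas: constructing a `DataFrame` from a list of dicts aligns the keys into columns and fills missing entries with NaN automatically. A sketch with toy dictionaries standing in for `ys_dicts`:

```python
import numpy as np
import pandas as pd

# Toy stand-ins for `ys_dicts`: each dict maps concentration -> measurement.
ys_dicts = [{0.1: 1.0, 1.0: 2.0},
            {1.0: 3.0, 10.0: 4.0}]

# pandas aligns the dict keys into columns and inserts NaN wherever a
# compound has no measurement at that concentration.
df = pd.DataFrame(ys_dicts)
df = df[sorted(df.columns)]  # same column order as the sorted-concentrations loop

print(df)
```

The manual loop is kept in the notebook because it makes the row construction explicit, but the one-liner is equivalent for this data shape.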
## Creating AnnData object.
```
# create dataframe
df = pd.DataFrame(datalist, columns=concentrations)
df.index = [str(i) for i in df.index]
df.index.name = 'compound'
df.columns = [str(c) for c in df.columns]
df.columns.name = 'concentration'
# create AnnData object
import anndata as ad
adata = ad.AnnData(df)
```
Fill metadata.
```
# add observation metadata
adata.obs['SMILES'] = [d['SMILES'] for d in ds]
adata.obs['graph'] = [d['g'] for d in ds]
# add data-level metadata
adata.layers['measurement'] = adata.X
# for example, replicate annotation
replicate_annotations = np.random.choice([0., 1.], adata.shape)
replicate_annotations[np.isnan(adata.X)] = np.nan
adata.layers['replicate'] = replicate_annotations
# could add dataset-level information
adata.uns['n_measurements'] = (~np.isnan(adata.X)).sum()
adata.uns['n_unique_graphs'] = len(adata.obs)
adata
adata.obs
```
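The NaN-masking step used for the `replicate` layer above — copying the missingness pattern of `X` onto a same-shaped annotation array — can be illustrated on its own. Toy arrays here, not the actual assay data:

```python
import numpy as np

# Toy measurement matrix with two missing entries.
X = np.array([[1.0, np.nan],
              [np.nan, 4.0]])

# Random annotations, then blank out positions where no measurement exists,
# so the annotation layer has exactly the same missingness pattern as X.
annotations = np.random.choice([0., 1.], X.shape)
annotations[np.isnan(X)] = np.nan

print(np.isnan(annotations))
```

This keeps every layer of the AnnData object consistent: a cell is either observed in all layers or missing in all layers.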
# Mini-batches demo
Setting up a loader.
```
# adding requisite columns
a_df = adata.to_df()
a_df['compound'] = range(len(a_df))
a_df['graph'] = adata.obs['graph']
# melting to long format
df_loader_unfiltered = pd.melt(a_df, id_vars=['compound', 'graph'])
df_loader = df_loader_unfiltered[~np.isnan(df_loader_unfiltered['value'])]
df_loader.columns = ['id', 'g', 'c', 'y']
df_loader
```
Iterating.
```
for batch in df_loader.itertuples(index=False):
    print(batch.g)
    print(batch.c)
    print(batch.y)
    break
```
Note that `itertuples` is relatively slow for iteration at scale. Moreover, in practice we would want to wrap this in a proper torch dataset.
This is just to demonstrate the idea.
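For comparison, a common faster pattern is to pull the columns out as arrays once and `zip` over them, avoiding the per-row namedtuple construction. A sketch assuming a `df_loader`-like long-format frame, built here from toy data:

```python
import pandas as pd

# Toy stand-in for the `df_loader` long-format frame.
df_loader = pd.DataFrame({'id': [0, 1],
                          'g': ['graph0', 'graph1'],
                          'c': [0.1, 1.0],
                          'y': [1.5, 2.5]})

# Extract each column once as a NumPy array, then iterate over plain tuples.
rows = list(zip(df_loader['g'].to_numpy(),
                df_loader['c'].to_numpy(),
                df_loader['y'].to_numpy()))

for g, c, y in rows:
    print(g, c, y)
```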
# Weight Sampling Tutorial
If you want to fine-tune one of the trained original SSD models on your own dataset, chances are that your dataset doesn't have the same number of classes as the trained model you're trying to fine-tune.
This notebook explains a few options for how to deal with this situation. In particular, one solution is to sub-sample (or up-sample) the weight tensors of all the classification layers so that their shapes correspond to the number of classes in your dataset.
This notebook explains how this is done.
## 0. Our example
I'll use a concrete example to make the process clear, but of course the process explained here is the same for any dataset.
Consider the following example. You have a dataset on road traffic objects. Let this dataset contain annotations for the following object classes of interest:
`['car', 'truck', 'pedestrian', 'bicyclist', 'traffic_light', 'motorcycle', 'bus', 'stop_sign']`
That is, your dataset contains annotations for 8 object classes.
You would now like to train an SSD300 on this dataset. However, instead of going through all the trouble of training a new model from scratch, you would instead like to use the fully trained original SSD300 model that was trained on MS COCO and fine-tune it on your dataset.
The problem is: The SSD300 that was trained on MS COCO predicts 80 different classes, but your dataset has only 8 classes. The weight tensors of the classification layers of the MS COCO model don't have the right shape for your model that is supposed to learn only 8 classes. Bummer.
So what options do we have?
### Option 1: Just ignore the fact that we need only 8 classes
The perhaps not so obvious, but simplest, option is: We could just ignore the fact that the trained MS COCO model predicts 80 different classes while we only want to fine-tune it on 8 classes. We could simply map the 8 classes in our annotated dataset to any 8 indices out of the 80 that the MS COCO model predicts. The class IDs in our dataset could be indices 1-8, they could be the indices `[0, 3, 8, 1, 2, 10, 4, 6, 12]` (nine entries here, because index 0, the background class, comes along), or any other 8 out of the 80. Whatever we would choose them to be. The point is that we would be training only 8 out of every 80 neurons that predict the class for a given box, and the other 72 would simply not be trained. Nothing would happen to them, because the gradient for them would always be zero, since those indices never appear in our dataset.
This would work, and it wouldn't even be a terrible option. Since only 8 out of the 80 classes would get trained, the model might get gradually worse at predicting the other 72 classes, but we don't care about them anyway, at least not right now. And if we ever realize that we now want to predict more than 8 different classes, our model would be expandable in that sense. Any new class we want to add could just get any one of the remaining free indices as its ID. We wouldn't need to change anything about the model, it would just be a matter of having the dataset annotated accordingly.
Still, in this example we don't want to take this route. We don't want to carry around the computational overhead of having overly complex classifier layers, 90 percent of which we don't use anyway, but still their whole output needs to be computed in every forward pass.
So what else could we do instead?
### Option 2: Just ignore those weights that are causing problems
We could build a new SSD300 with 8 classes and load into it the weights of the MS COCO SSD300 for all layers except the classification layers. Would that work? Yes, that would work. The only conflict is with the weights of the classification layers, and we can avoid this conflict by simply ignoring them. While this solution would be easy, it has a significant downside: If we're not loading trained weights for the classification layers of our new SSD300 model, then they will be initialized randomly. We'd still benefit from the trained weights for all the other layers, but the classifier layers would need to be trained from scratch.
Not the end of the world, but we like pre-trained stuff, because it saves us a lot of training time. So what else could we do?
### Option 3: Sub-sample the weights that are causing problems
Instead of throwing the problematic weights away like in option 2, we could also sub-sample them. If the weight tensors of the classification layers of the MS COCO model don't have the right shape for our new model, we'll just **make** them have the right shape. This way we can still benefit from the pre-trained weights in those classification layers. Seems much better than option 2.
The great thing in this example is: MS COCO happens to contain all of the eight classes that we care about. So when we sub-sample the weight tensors of the classification layers, we won't just do so randomly. Instead, we'll pick exactly those elements from the tensor that are responsible for the classification of the 8 classes that we care about.
However, even if the classes in your dataset were entirely different from the classes in any of the fully trained models, it would still make a lot of sense to use the weights of the fully trained model. Any trained weights are always a better starting point for the training than random initialization, even if your model will be trained on entirely different object classes.
And of course, in case you happen to have the opposite problem, where your dataset has **more** classes than the trained model you would like to fine-tune, then you can simply do the same thing in the opposite direction: Instead of sub-sampling the classification layer weights, you would then **up-sample** them. Works just the same way as what we'll be doing below.
Let's get to it.
```
import h5py
import numpy as np
import shutil
from misc_utils.tensor_sampling_utils import sample_tensors
```
## 1. Load the trained weights file and make a copy
First, we'll load the HDF5 file that contains the trained weights that we need (the source file). In our case this is "`VGG_coco_SSD_300x300_iter_400000.h5`" (download link available in the README of this repo), which are the weights of the original SSD300 model that was trained on MS COCO.
Then, we'll make a copy of that weights file. That copy will be our output file (the destination file).
```
# TODO: Set the path for the source weights file you want to load.
weights_source_path = '../../trained_weights/SSD/VGG_coco_SSD_300x300_iter_400000.h5'
# TODO: Set the path and name for the destination weights file
# that you want to create.
weights_destination_path = '../../trained_weights/SSD/VGG_coco_SSD_300x300_iter_400000_subsampled_8_classes.h5'
# Make a copy of the weights file.
shutil.copy(weights_source_path, weights_destination_path)
# Load both the source weights file and the copy we made.
# We will load the original weights file in read-only mode so that we can't mess up anything.
weights_source_file = h5py.File(weights_source_path, 'r')
weights_destination_file = h5py.File(weights_destination_path, 'a')  # read/write mode; recent h5py versions no longer default to this
```
## 2. Figure out which weight tensors we need to sub-sample
Next, we need to figure out exactly which weight tensors we need to sub-sample. As mentioned above, the weights for all layers except the classification layers are fine, we don't need to change anything about those.
So which are the classification layers in SSD300? Their names are:
```
classifier_names = ['conv4_3_norm_mbox_conf',
'fc7_mbox_conf',
'conv6_2_mbox_conf',
'conv7_2_mbox_conf',
'conv8_2_mbox_conf',
'conv9_2_mbox_conf']
```
## 3. Figure out which slices to pick
The following section is optional. I'll look at one classification layer and explain what we want to do, just for your understanding. If you don't care about that, just skip ahead to the next section.
We know which weight tensors we want to sub-sample, but we still need to decide which (or at least how many) elements of those tensors we want to keep. Let's take a look at the first of the classifier layers, "`conv4_3_norm_mbox_conf`". Its two weight tensors, the kernel and the bias, have the following shapes:
```
conv4_3_norm_mbox_conf_kernel = weights_source_file[classifier_names[0]][classifier_names[0]]['kernel:0']
conv4_3_norm_mbox_conf_bias = weights_source_file[classifier_names[0]][classifier_names[0]]['bias:0']
print("Shape of the '{}' weights:".format(classifier_names[0]))
print()
print("kernel:\t", conv4_3_norm_mbox_conf_kernel.shape)
print("bias:\t", conv4_3_norm_mbox_conf_bias.shape)
```
So the last axis has 324 elements. Why is that?
- MS COCO has 80 classes, but the model also has one 'background' class, so that makes 81 classes effectively.
- The 'conv4_3_norm_mbox_loc' layer predicts 4 boxes for each spatial position, so the 'conv4_3_norm_mbox_conf' layer has to predict one of the 81 classes for each of those 4 boxes.
That's why the last axis has 4 * 81 = 324 elements.
So how many elements do we want in the last axis for this layer?
Let's do the same calculation as above:
- Our dataset has 8 classes, but our model will also have a 'background' class, so that makes 9 classes effectively.
- We need to predict one of those 9 classes for each of the four boxes at each spatial position.
That makes 4 * 9 = 36 elements.
Now we know that we want to keep 36 elements in the last axis and leave all other axes unchanged. But which 36 elements out of the original 324 elements do we want?
Should we just pick them randomly? If the object classes in our dataset had absolutely nothing to do with the classes in MS COCO, then choosing those 36 elements randomly would be fine (and the next section covers this case, too). But in our particular example case, choosing these elements randomly would be a waste. Since MS COCO happens to contain exactly the 8 classes that we need, instead of sub-sampling randomly, we'll just take exactly those elements that were trained to predict our 8 classes.
Here are the indices of the 9 classes in MS COCO that we are interested in:
`[0, 1, 2, 3, 4, 6, 8, 10, 12]`
The indices above represent the following classes in the MS COCO dataset:
`['background', 'person', 'bicycle', 'car', 'motorcycle', 'bus', 'truck', 'traffic_light', 'stop_sign']`
How did I find out those indices? I just looked them up in the annotations of the MS COCO dataset.
While these are the classes we want, we don't want them in this order. In our dataset, the classes happen to be in the following order as stated at the top of this notebook:
`['background', 'car', 'truck', 'pedestrian', 'bicyclist', 'traffic_light', 'motorcycle', 'bus', 'stop_sign']`
For example, '`traffic_light`' is class ID 5 in our dataset but class ID 10 in the SSD300 MS COCO model. So the order in which I actually want to pick the 9 indices above is this:
`[0, 3, 8, 1, 2, 10, 4, 6, 12]`
So out of every 81 in the 324 elements, I want to pick the 9 elements above. This gives us the following 36 indices:
```
n_classes_source = 81
classes_of_interest = [0, 3, 8, 1, 2, 10, 4, 6, 12]
subsampling_indices = []
for i in range(int(324/n_classes_source)):
    indices = np.array(classes_of_interest) + i * n_classes_source
    subsampling_indices.append(indices)
subsampling_indices = list(np.concatenate(subsampling_indices))
print(subsampling_indices)
```
These are the indices of the 36 elements that we want to pick from both the bias vector and from the last axis of the kernel tensor.
This was the detailed example for the '`conv4_3_norm_mbox_conf`' layer. And of course we haven't actually sub-sampled the weights for this layer yet, we have only figured out which elements we want to keep. The piece of code in the next section will perform the sub-sampling for all the classifier layers.
## 4. Sub-sample the classifier weights
The code in this section iterates over all the classifier layers of the source weights file and performs the following steps for each classifier layer:
1. Get the kernel and bias tensors from the source weights file.
2. Compute the sub-sampling indices for the last axis. The first three axes of the kernel remain unchanged.
3. Overwrite the corresponding kernel and bias tensors in the destination weights file with our newly created sub-sampled kernel and bias tensors.
The second step does what was explained in the previous section.
In case you want to **up-sample** the last axis rather than sub-sample it, simply set the `classes_of_interest` variable below to the length you want it to have. The added elements will be initialized either randomly or optionally with zeros. Check out the documentation of `sample_tensors()` for details.
```
# TODO: Set the number of classes in the source weights file. Note that this number must include
# the background class, so for MS COCO's 80 classes, this must be 80 + 1 = 81.
n_classes_source = 81
# TODO: Set the indices of the classes that you want to pick for the sub-sampled weight tensors.
# In case you would like to just randomly sample a certain number of classes, you can just set
# `classes_of_interest` to an integer instead of the list below. Either way, don't forget to
# include the background class. That is, if you set an integer, and you want `n` positive classes,
# then you must set `classes_of_interest = n + 1`.
classes_of_interest = [0, 3, 8, 1, 2, 10, 4, 6, 12]
# classes_of_interest = 9 # Uncomment this in case you want to just randomly sub-sample the last axis instead of providing a list of indices.
for name in classifier_names:
    # Get the trained weights for this layer from the source HDF5 weights file.
    # (Slicing with `[:]` reads the dataset into a NumPy array and works in both
    # old and new h5py versions, unlike the deprecated `.value` attribute.)
    kernel = weights_source_file[name][name]['kernel:0'][:]
    bias = weights_source_file[name][name]['bias:0'][:]
    # Get the shape of the kernel. We're interested in sub-sampling
    # the last dimension, 'o'.
    height, width, in_channels, out_channels = kernel.shape
    # Compute the indices of the elements we want to sub-sample.
    # Keep in mind that each classification predictor layer predicts multiple
    # bounding boxes for every spatial location, so we want to sub-sample
    # the relevant classes for each of these boxes.
    if isinstance(classes_of_interest, (list, tuple)):
        subsampling_indices = []
        for i in range(int(out_channels/n_classes_source)):
            indices = np.array(classes_of_interest) + i * n_classes_source
            subsampling_indices.append(indices)
        subsampling_indices = list(np.concatenate(subsampling_indices))
    elif isinstance(classes_of_interest, int):
        subsampling_indices = int(classes_of_interest * (out_channels/n_classes_source))
    else:
        raise ValueError("`classes_of_interest` must be either an integer or a list/tuple.")
    # Sub-sample the kernel and bias.
    # The `sample_tensors()` function used below provides extensive
    # documentation, so don't hesitate to read it if you want to know
    # what exactly is going on here.
    new_kernel, new_bias = sample_tensors(weights_list=[kernel, bias],
                                          sampling_instructions=[height, width, in_channels, subsampling_indices],
                                          axes=[[3]], # The one bias dimension corresponds to the last kernel dimension.
                                          init=['gaussian', 'zeros'],
                                          mean=0.0,
                                          stddev=0.005)
    # Delete the old weights from the destination file.
    del weights_destination_file[name][name]['kernel:0']
    del weights_destination_file[name][name]['bias:0']
    # Create new datasets for the sub-sampled weights.
    weights_destination_file[name][name].create_dataset(name='kernel:0', data=new_kernel)
    weights_destination_file[name][name].create_dataset(name='bias:0', data=new_bias)
# Make sure all data is written to our output file before this sub-routine exits.
weights_destination_file.flush()
```
That's it, we're done.
Let's just quickly inspect the shapes of the weights of the '`conv4_3_norm_mbox_conf`' layer in the destination weights file:
```
conv4_3_norm_mbox_conf_kernel = weights_destination_file[classifier_names[0]][classifier_names[0]]['kernel:0']
conv4_3_norm_mbox_conf_bias = weights_destination_file[classifier_names[0]][classifier_names[0]]['bias:0']
print("Shape of the '{}' weights:".format(classifier_names[0]))
print()
print("kernel:\t", conv4_3_norm_mbox_conf_kernel.shape)
print("bias:\t", conv4_3_norm_mbox_conf_bias.shape)
```
Nice! Exactly what we wanted, 36 elements in the last axis. Now the weights are compatible with our new SSD300 model that predicts 8 positive classes.
This is the end of the relevant part of this tutorial, but we can do one more thing and verify that the sub-sampled weights actually work. Let's do that in the next section.
## 5. Verify that our sub-sampled weights actually work
In our example case above we sub-sampled the fully trained weights of the SSD300 model trained on MS COCO from 80 classes to just the 8 classes that we needed.
We can now create a new SSD300 with 8 classes, load our sub-sampled weights into it, and see how the model performs on a few test images that contain objects for some of those 8 classes. Let's do it.
```
from keras.optimizers import Adam
from keras import backend as K
from keras.models import load_model
from models.keras_ssd300 import ssd_300
from keras_loss_function.keras_ssd_loss import SSDLoss
from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes
from keras_layers.keras_layer_DecodeDetections import DecodeDetections
from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast
from keras_layers.keras_layer_L2Normalization import L2Normalization
from data_generator.object_detection_2d_data_generator import DataGenerator
from data_generator.object_detection_2d_photometric_ops import ConvertTo3Channels
from data_generator.object_detection_2d_patch_sampling_ops import RandomMaxCropFixedAR
from data_generator.object_detection_2d_geometric_ops import Resize
```
### 5.1. Set the parameters for the model.
As always, set the parameters for the model. We're going to set the configuration for the SSD300 MS COCO model.
```
img_height = 300 # Height of the input images
img_width = 300 # Width of the input images
img_channels = 3 # Number of color channels of the input images
subtract_mean = [123, 117, 104] # The per-channel mean of the images in the dataset
swap_channels = [2, 1, 0] # The color channel order in the original SSD is BGR, so the model reverses the color channel order of the input images.
# TODO: Set the number of classes.
n_classes = 8 # Number of positive classes, e.g. 20 for Pascal VOC, 80 for MS COCO
scales = [0.07, 0.15, 0.33, 0.51, 0.69, 0.87, 1.05] # The anchor box scaling factors used in the original SSD300 for the MS COCO datasets.
# scales = [0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05] # The anchor box scaling factors used in the original SSD300 for the Pascal VOC datasets.
aspect_ratios = [[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5, 3.0, 1.0/3.0],
[1.0, 2.0, 0.5],
[1.0, 2.0, 0.5]] # The anchor box aspect ratios used in the original SSD300; the order matters
two_boxes_for_ar1 = True
steps = [8, 16, 32, 64, 100, 300] # The space between two adjacent anchor box center points for each predictor layer.
offsets = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5] # The offsets of the first anchor box center points from the top and left borders of the image as a fraction of the step size for each predictor layer.
clip_boxes = False # Whether or not you want to limit the anchor boxes to lie entirely within the image boundaries
variances = [0.1, 0.1, 0.2, 0.2] # The variances by which the encoded target coordinates are scaled as in the original implementation
normalize_coords = True
```
### 5.2. Build the model
Build the model and load our newly created, sub-sampled weights into it.
```
# 1: Build the Keras model
K.clear_session() # Clear previous models from memory.
model = ssd_300(image_size=(img_height, img_width, img_channels),
n_classes=n_classes,
mode='inference',
l2_regularization=0.0005,
scales=scales,
aspect_ratios_per_layer=aspect_ratios,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
normalize_coords=normalize_coords,
subtract_mean=subtract_mean,
divide_by_stddev=None,
swap_channels=swap_channels,
confidence_thresh=0.5,
iou_threshold=0.45,
top_k=200,
nms_max_output_size=400,
return_predictor_sizes=False)
print("Model built.")
# 2: Load the sub-sampled weights into the model.
# Load the weights that we've just created via sub-sampling.
weights_path = weights_destination_path
model.load_weights(weights_path, by_name=True)
print("Weights file loaded:", weights_path)
# 3: Instantiate an Adam optimizer and the SSD loss function and compile the model.
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0)
model.compile(optimizer=adam, loss=ssd_loss.compute_loss)
```
### 5.3. Load some images to test our model on
We sub-sampled some of the road traffic categories from the trained SSD300 MS COCO weights, so let's try out our model on a few road traffic images. The Udacity road traffic dataset linked to in the `ssd7_training.ipynb` notebook lends itself to this task. Let's instantiate a `DataGenerator` and load the Udacity dataset. Everything here is preset already, but if you'd like to learn more about the data generator and its capabilities, take a look at the detailed tutorial in [this](https://github.com/pierluigiferrari/data_generator_object_detection_2d) repository.
```
dataset = DataGenerator()
# TODO: Set the paths to your dataset here.
images_path = '../../datasets/Udacity_Driving/driving_dataset_consolidated_small/'
labels_path = '../../datasets/Udacity_Driving/driving_dataset_consolidated_small/labels.csv'
dataset.parse_csv(images_dir=images_path,
labels_filename=labels_path,
input_format=['image_name', 'xmin', 'xmax', 'ymin', 'ymax', 'class_id'], # This is the order of the first six columns in the CSV file that contains the labels for your dataset. If your labels are in XML format, maybe the XML parser will be helpful, check the documentation.
include_classes='all',
random_sample=False)
print("Number of images in the dataset:", dataset.get_dataset_size())
```
Make sure the batch generator generates images of size `(300, 300)`. We'll first randomly crop the largest possible patch with aspect ratio 1.0 and then resize to `(300, 300)`.
```
convert_to_3_channels = ConvertTo3Channels()
random_max_crop = RandomMaxCropFixedAR(patch_aspect_ratio=img_width/img_height)
resize = Resize(height=img_height, width=img_width)
generator = dataset.generate(batch_size=1,
shuffle=True,
transformations=[convert_to_3_channels,
random_max_crop,
resize],
returns={'processed_images',
'processed_labels',
'filenames'},
keep_images_without_gt=False)
# Generate samples
batch_images, batch_labels, batch_filenames = next(generator)
i = 0 # Which batch item to look at
print("Image:", batch_filenames[i])
print()
print("Ground truth boxes:\n")
print(batch_labels[i])
```
### 5.4. Make predictions and visualize them
```
# Make a prediction
y_pred = model.predict(batch_images)
# Decode the raw prediction.
i = 0
confidence_threshold = 0.5
y_pred_thresh = [y_pred[k][y_pred[k,:,1] > confidence_threshold] for k in range(y_pred.shape[0])]
np.set_printoptions(precision=2, suppress=True, linewidth=90)
print("Predicted boxes:\n")
print(' class conf xmin ymin xmax ymax')
print(y_pred_thresh[0])
# Visualize the predictions.
from matplotlib import pyplot as plt
%matplotlib inline
plt.figure(figsize=(20,12))
plt.imshow(batch_images[i])
current_axis = plt.gca()
classes = ['background', 'car', 'truck', 'pedestrian', 'bicyclist',
'traffic_light', 'motorcycle', 'bus', 'stop_sign'] # Just so we can print class names onto the image instead of IDs
# Draw the predicted boxes in blue
for box in y_pred_thresh[i]:
    class_id = box[0]
    confidence = box[1]
    xmin = box[2]
    ymin = box[3]
    xmax = box[4]
    ymax = box[5]
    label = '{}: {:.2f}'.format(classes[int(class_id)], confidence)
    current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color='blue', fill=False, linewidth=2))
    current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor':'blue', 'alpha':1.0})
# Draw the ground truth boxes in green (omit the label for more clarity)
for box in batch_labels[i]:
    class_id = box[0]
    xmin = box[1]
    ymin = box[2]
    xmax = box[3]
    ymax = box[4]
    label = '{}'.format(classes[int(class_id)])
    current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color='green', fill=False, linewidth=2))
    #current_axis.text(box[1], box[3], label, size='x-large', color='white', bbox={'facecolor':'green', 'alpha':1.0})
```
Seems as if our sub-sampled weights were doing a good job, sweet. Now we can fine-tune this model on our dataset with 8 classes.
```
## Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
#Importing the dataset
dataset = pd.read_csv(r'E:\Python\data\Wine.csv')  # raw string so backslashes in the Windows path aren't treated as escapes
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
#Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
#Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
#Applying PCA
from sklearn.decomposition import PCA
pca = PCA(n_components = 2)
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)
#Training the Logistic Regression model on the Training set
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)
#Making the Confusion Matrix
from sklearn.metrics import confusion_matrix, accuracy_score
y_pred = classifier.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
#Visualising the Training set results
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c = ListedColormap(('red', 'green', 'blue'))(i), label = j)
plt.title('Logistic Regression (Training set)')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.legend()
plt.show()
#Visualising the Test set results
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c = ListedColormap(('red', 'green', 'blue'))(i), label = j)
plt.title('Logistic Regression (Test set)')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.legend()
plt.show()
```
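A natural follow-up is to check how much of the total variance the two principal components retain. The sketch below uses scikit-learn's bundled wine dataset as a stand-in for the local `Wine.csv` (an assumption, since that file is not available here):

```
from sklearn.datasets import load_wine
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Bundled wine data as a stand-in for the local CSV
X, y = load_wine(return_X_y=True)
X_scaled = StandardScaler().fit_transform(X)

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_scaled)

# Fraction of the total variance captured by each principal component
print(pca.explained_variance_ratio_)
print('Total retained: %.2f' % pca.explained_variance_ratio_.sum())
```

If the retained fraction is low, the 2D decision regions plotted earlier may not be representative of the full 13-dimensional data.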
# The generic Broker class
```
from abc import abstractmethod
class Broker(object):
def __init__(self, host, port):
self.host = host
self.port = port
self.__price_event_handler = None
self.__order_event_handler = None
self.__position_event_handler = None
@property
def on_price_event(self):
"""
Listeners will receive:
symbol, bid, ask
"""
return self.__price_event_handler
@on_price_event.setter
def on_price_event(self, event_handler):
self.__price_event_handler = event_handler
@property
def on_order_event(self):
"""
Listeners will receive:
transaction_id
"""
return self.__order_event_handler
@on_order_event.setter
def on_order_event(self, event_handler):
self.__order_event_handler = event_handler
@property
def on_position_event(self):
"""
Listeners will receive:
symbol, is_long, units, unrealized_pnl, pnl
"""
return self.__position_event_handler
@on_position_event.setter
def on_position_event(self, event_handler):
self.__position_event_handler = event_handler
@abstractmethod
def get_prices(self, symbols=[]):
"""
Query market prices from a broker
:param symbols: list of symbols recognized by your broker
"""
raise NotImplementedError('Method is required!')
@abstractmethod
def stream_prices(self, symbols=[]):
""""
Continuously stream prices from a broker.
:param symbols: list of symbols recognized by your broker
"""
raise NotImplementedError('Method is required!')
@abstractmethod
def send_market_order(self, symbol, quantity, is_buy):
raise NotImplementedError('Method is required!')
```
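The event-handler wiring can be exercised without any network access. The stub below is purely illustrative (it is not part of the original code); a compressed stand-in for the base class is defined so the demo runs on its own without touching the real `Broker`:

```
class DemoBrokerBase(object):
    # Compressed stand-in for the Broker base class above, kept under a
    # different name so it does not interfere with the real definition
    def __init__(self, host, port):
        self.host, self.port = host, port
        self.on_price_event = None

class StubBroker(DemoBrokerBase):
    """Illustrative broker that replays a canned quote instead of calling an API."""
    def get_prices(self, symbols=[]):
        for symbol in symbols:
            # Fire the registered price listener, as a real broker would on a response
            self.on_price_event(symbol, 1.1000, 1.1002)

quotes = []
stub = StubBroker('localhost', 443)
stub.on_price_event = lambda symbol, bid, ask: quotes.append((symbol, bid, ask))
stub.get_prices(symbols=['EUR_USD'])
print(quotes)
```

Any callable with the right signature can be assigned to the event properties, which is exactly how the traders later in this notebook hook themselves up to a broker.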
# Oanda Broker class
```
import v20
class OandaBroker(Broker):
PRACTICE_API_HOST = 'api-fxpractice.oanda.com'
PRACTICE_STREAM_HOST = 'stream-fxpractice.oanda.com'
LIVE_API_HOST = 'api-fxtrade.oanda.com'
LIVE_STREAM_HOST = 'stream-fxtrade.oanda.com'
PORT = '443'
def __init__(self, accountid, token, is_live=False):
if is_live:
host = self.LIVE_API_HOST
stream_host = self.LIVE_STREAM_HOST
else:
host = self.PRACTICE_API_HOST
stream_host = self.PRACTICE_STREAM_HOST
super(OandaBroker, self).__init__(host, self.PORT)
self.accountid = accountid
self.token = token
self.api = v20.Context(host, self.port, token=token)
self.stream_api = v20.Context(stream_host, self.port, token=token)
def get_prices(self, symbols=[]):
response = self.api.pricing.get(
self.accountid,
instruments=",".join(symbols),
snapshot=True,
includeUnitsAvailable=False
)
body = response.body
prices = body.get('prices', [])
for price in prices:
self.process_price(price)
def process_price(self, price):
symbol = price.instrument
if not symbol:
print('Price symbol is empty!')
return
bids = price.bids or []
bid = bids[0].price if bids else 0
asks = price.asks or []
ask = asks[0].price if asks else 0
self.on_price_event(symbol, bid, ask)
def stream_prices(self, symbols=[]):
response = self.stream_api.pricing.stream(
self.accountid,
instruments=",".join(symbols),
snapshot=True
)
for msg_type, msg in response.parts():
if msg_type == "pricing.Heartbeat":
continue
elif msg_type == "pricing.ClientPrice":
self.process_price(msg)
def send_market_order(self, symbol, quantity, is_buy):
response = self.api.order.market(
self.accountid,
units=abs(quantity) * (1 if is_buy else -1),
instrument=symbol,
type='MARKET',
)
if response.status != 201:
self.on_order_event(symbol, quantity, is_buy, None, 'NOT_FILLED')
return
body = response.body
if 'orderCancelTransaction' in body:
self.on_order_event(symbol, quantity, is_buy, None, 'NOT_FILLED')
return
transaction_id = body.get('lastTransactionID', None)
self.on_order_event(symbol, quantity, is_buy, transaction_id, 'FILLED')
def get_positions(self):
response = self.api.position.list(self.accountid)
body = response.body
positions = body.get('positions', [])
for position in positions:
symbol = position.instrument
unrealized_pnl = position.unrealizedPL
pnl = position.pl
long = position.long
short = position.short
if short.units:
self.on_position_event(
symbol, False, short.units, unrealized_pnl, pnl)
elif long.units:
self.on_position_event(
symbol, True, long.units, unrealized_pnl, pnl)
else:
self.on_position_event(
symbol, None, 0, unrealized_pnl, pnl)
```
# Getting prices
```
# Replace these 2 values with your own!
ACCOUNT_ID = '101-001-1374173-001'
API_TOKEN = '6ecf6b053262c590b78bb8199b85aa2f-d99c54aecb2d5b4583a9f707636e8009'
broker = OandaBroker(ACCOUNT_ID, API_TOKEN)
SYMBOL = 'EUR_USD'
import datetime as dt
def on_price_event(symbol, bid, ask):
print(
dt.datetime.now(), '[PRICE]',
symbol, 'bid:', bid, 'ask:', ask
)
broker.on_price_event = on_price_event
broker.get_prices(symbols=[SYMBOL])
```
# Sending a simple market order
```
def on_order_event(symbol, quantity, is_buy, transaction_id, status):
print(
dt.datetime.now(), '[ORDER]',
'transaction_id:', transaction_id,
'status:', status,
'symbol:', symbol,
'quantity:', quantity,
'is_buy:', is_buy,
)
broker.on_order_event = on_order_event
broker.send_market_order(SYMBOL, 1, True)
```
# Getting position updates
```
def on_position_event(symbol, is_long, units, upnl, pnl):
print(
dt.datetime.now(), '[POSITION]',
'symbol:', symbol,
'is_long:', is_long,
'units:', units,
'upnl:', upnl,
'pnl:', pnl
)
broker.on_position_event = on_position_event
broker.get_positions()
```
# Building a mean-reverting algorithmic trading system
```
import datetime as dt
import pandas as pd
class MeanReversionTrader(object):
def __init__(
self, broker, symbol=None, units=1,
resample_interval='60s', mean_periods=5
):
"""
A trading platform that trades on one side
based on a mean-reverting algorithm.
:param broker: Broker object
:param symbol: A str object recognized by the broker for trading
:param units: Number of units to trade
:param resample_interval:
Frequency for resampling price time series
:param mean_periods: Number of resampled intervals
for calculating the average price
"""
self.broker = self.setup_broker(broker)
self.resample_interval = resample_interval
self.mean_periods = mean_periods
self.symbol = symbol
self.units = units
self.df_prices = pd.DataFrame(columns=[symbol])
self.pnl, self.upnl = 0, 0
self.bid_price, self.ask_price = 0, 0
self.position = 0
self.is_order_pending = False
self.is_next_signal_cycle = True
def setup_broker(self, broker):
broker.on_price_event = self.on_price_event
broker.on_order_event = self.on_order_event
broker.on_position_event = self.on_position_event
return broker
def on_price_event(self, symbol, bid, ask):
print(dt.datetime.now(), '[PRICE]', symbol, 'bid:', bid, 'ask:', ask)
self.bid_price = bid
self.ask_price = ask
self.df_prices.loc[pd.Timestamp.now(), symbol] = (bid + ask) / 2.
self.get_positions()
self.generate_signals_and_think()
self.print_state()
def get_positions(self):
try:
self.broker.get_positions()
except Exception as ex:
print('get_positions error:', ex)
def on_order_event(self, symbol, quantity, is_buy, transaction_id, status):
print(
dt.datetime.now(), '[ORDER]',
'transaction_id:', transaction_id,
'status:', status,
'symbol:', symbol,
'quantity:', quantity,
'is_buy:', is_buy,
)
if status == 'FILLED':
self.is_order_pending = False
self.is_next_signal_cycle = False
self.get_positions() # Update positions before thinking
self.generate_signals_and_think()
def on_position_event(self, symbol, is_long, units, upnl, pnl):
if symbol == self.symbol:
self.position = abs(units) * (1 if is_long else -1)
self.pnl = pnl
self.upnl = upnl
self.print_state()
def print_state(self):
print(
dt.datetime.now(), self.symbol, self.position_state,
abs(self.position), 'upnl:', self.upnl, 'pnl:', self.pnl
)
@property
def position_state(self):
if self.position == 0:
return 'FLAT'
if self.position > 0:
return 'LONG'
if self.position < 0:
return 'SHORT'
def generate_signals_and_think(self):
df_resampled = self.df_prices\
.resample(self.resample_interval)\
.ffill()\
.dropna()
resampled_len = len(df_resampled.index)
if resampled_len < self.mean_periods:
print(
'Insufficient data size to calculate logic. Need',
self.mean_periods - resampled_len, 'more.'
)
return
mean = df_resampled.tail(self.mean_periods).mean()[self.symbol]
# Signal flag calculation
is_signal_buy = mean > self.ask_price
is_signal_sell = mean < self.bid_price
print(
'is_signal_buy:', is_signal_buy,
'is_signal_sell:', is_signal_sell,
'average_price: %.5f' % mean,
'bid:', self.bid_price,
'ask:', self.ask_price
)
self.think(is_signal_buy, is_signal_sell)
def think(self, is_signal_buy, is_signal_sell):
if self.is_order_pending:
return
if self.position == 0:
self.think_when_position_flat(is_signal_buy, is_signal_sell)
elif self.position > 0:
self.think_when_position_long(is_signal_sell)
elif self.position < 0:
self.think_when_position_short(is_signal_buy)
def think_when_position_flat(self, is_signal_buy, is_signal_sell):
if is_signal_buy and self.is_next_signal_cycle:
print('Opening position, BUY',
self.symbol, self.units, 'units')
self.is_order_pending = True
self.send_market_order(self.symbol, self.units, True)
return
if is_signal_sell and self.is_next_signal_cycle:
print('Opening position, SELL',
self.symbol, self.units, 'units')
self.is_order_pending = True
self.send_market_order(self.symbol, self.units, False)
return
if not is_signal_buy and not is_signal_sell:
self.is_next_signal_cycle = True
def think_when_position_long(self, is_signal_sell):
if is_signal_sell:
print('Closing position, SELL',
self.symbol, self.units, 'units')
self.is_order_pending = True
self.send_market_order(self.symbol, self.units, False)
def think_when_position_short(self, is_signal_buy):
if is_signal_buy:
print('Closing position, BUY',
self.symbol, self.units, 'units')
self.is_order_pending = True
self.send_market_order(self.symbol, self.units, True)
def send_market_order(self, symbol, quantity, is_buy):
self.broker.send_market_order(symbol, quantity, is_buy)
def run(self):
self.broker.stream_prices(symbols=[self.symbol])
```
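The resample-and-average signal step inside `generate_signals_and_think` can be illustrated standalone with a synthetic price series (all numbers below are made up):

```
import pandas as pd

# Synthetic mid prices, one per second, drifting upward
idx = pd.date_range('2021-01-01 09:00', periods=300, freq='s')
df_prices = pd.DataFrame(
    {'EUR_USD': [1.10 + i * 1e-5 for i in range(300)]}, index=idx)

# Same logic as the trader: resample, forward-fill, average the last few bars
mean_periods = 5
df_resampled = df_prices.resample('60s').ffill().dropna()
mean = df_resampled.tail(mean_periods).mean()['EUR_USD']

bid, ask = 1.1030, 1.1032
is_signal_buy = mean > ask   # current price below the recent average: expect reversion up
is_signal_sell = mean < bid  # current price above the recent average: expect reversion down
print('mean: %.5f' % mean, 'buy:', is_signal_buy, 'sell:', is_signal_sell)
```

Because the prices here have drifted above the recent average, the mean-reversion rule produces a sell signal.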
WARNING! Running the code below blocks the main thread! You will have to restart the kernel to stop it.
```
trader = MeanReversionTrader(
broker,
resample_interval='60s',
symbol='EUR_USD',
units=1
)
trader.run()
```
# Building a trend-following trading platform
```
class TrendFollowingTrader(MeanReversionTrader):
def __init__(
self, *args, long_mean_periods=10,
buy_threshold=1.0, sell_threshold=1.0, **kwargs
):
super(TrendFollowingTrader, self).__init__(*args, **kwargs)
self.long_mean_periods = long_mean_periods
self.buy_threshold = buy_threshold
self.sell_threshold = sell_threshold
def generate_signals_and_think(self):
df_resampled = self.df_prices\
.resample(self.resample_interval)\
.ffill().dropna()
resampled_len = len(df_resampled.index)
if resampled_len < self.long_mean_periods:
print(
'Insufficient data size to calculate logic. Need',
self.long_mean_periods - resampled_len, 'more.'
)
return
mean_short = df_resampled\
.tail(self.mean_periods).mean()[self.symbol]
mean_long = df_resampled\
.tail(self.long_mean_periods).mean()[self.symbol]
beta = mean_short / mean_long
# Signal flag calculation
is_signal_buy = beta > self.buy_threshold
is_signal_sell = beta < self.sell_threshold
print(
'is_signal_buy:', is_signal_buy,
'is_signal_sell:', is_signal_sell,
'beta:', beta,
'bid:', self.bid_price,
'ask:', self.ask_price
)
self.think(is_signal_buy, is_signal_sell)
```
WARNING! Running the code below blocks the main thread! You will have to restart the kernel to stop it.
```
trader = TrendFollowingTrader(
broker,
resample_interval='60s',
symbol='EUR_USD',
units=1,
mean_periods=5,
long_mean_periods=10,
buy_threshold=1.000010,
sell_threshold=0.99990,
)
trader.run()
```
# VaR for risk management
```
"""
Download the all-time AAPL dataset
"""
from alpha_vantage.timeseries import TimeSeries
# Update your Alpha Vantage API key here...
ALPHA_VANTAGE_API_KEY = 'PZ2ISG9CYY379KLI'
ts = TimeSeries(key=ALPHA_VANTAGE_API_KEY, output_format='pandas')
df, meta_data = ts.get_daily_adjusted(symbol='AAPL', outputsize='full')
df.info()
import datetime as dt
import pandas as pd
# Define the date range
start = dt.datetime(2017, 1, 1)
end = dt.datetime(2017, 12, 31)
# Cast indexes as DateTimeIndex objects
df.index = pd.to_datetime(df.index)
closing_prices = df['5. adjusted close']
prices = closing_prices.loc[start:end]
import numpy as np
from scipy.stats import norm
def calculate_daily_var(
portfolio, prob, mean,
stdev, days_per_year=252.
):
alpha = 1-prob
u = mean/days_per_year
sigma = stdev/np.sqrt(days_per_year)
norminv = norm.ppf(alpha, u, sigma)
return portfolio - portfolio*(norminv+1)
import numpy as np
portfolio = 100000000.00
confidence = 0.95
daily_returns = prices.pct_change().dropna()
mu = np.mean(daily_returns)
sigma = np.std(daily_returns)
VaR = calculate_daily_var(
portfolio, confidence, mu, sigma, days_per_year=252.)
print('Value-at-Risk: %.2f' % VaR)
```
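The parametric formula can be sanity-checked with made-up numbers: with zero mean return, the 95% one-day VaR should come out at roughly 1.645 daily standard deviations of the portfolio value. The function is re-stated so the check runs standalone:

```
import numpy as np
from scipy.stats import norm

def calculate_daily_var(portfolio, prob, mean, stdev, days_per_year=252.):
    # Same parametric VaR formula as above
    alpha = 1 - prob
    u = mean / days_per_year
    sigma = stdev / np.sqrt(days_per_year)
    norminv = norm.ppf(alpha, u, sigma)
    return portfolio - portfolio * (norminv + 1)

portfolio = 100000000.00
annual_sigma = 0.20  # hypothetical 20% annualised volatility
var95 = calculate_daily_var(portfolio, 0.95, 0.0, annual_sigma)

# With zero mean, VaR reduces to z(0.95) * daily sigma * portfolio
expected = norm.ppf(0.95) * annual_sigma / np.sqrt(252.) * portfolio
print('VaR: %.0f' % var95, 'expected: %.0f' % expected)
```

Agreement between the two numbers confirms the formula is doing what the textbook expression says.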
# Imports
```
import matplotlib.pyplot as plt
import numpy as np
import os
import PIL
import tensorflow as tf
# Keras
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import pathlib
```
# Dataset
```
data_dir = pathlib.Path('App/res/images_AI/')
image_count = len(list(data_dir.glob('*/*.png')))
print(f"The number of images is :{image_count}")
```
# Creating a dataset for Keras
```
# Parameters
batch_size = 32
img_height = 160
img_width = 160
input_shape=(img_height, img_width, 3)
# Training dataset
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
# Validation dataset
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="validation",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
```
## Checking classes
```
class_names = train_ds.class_names
print(class_names)
```
# Visualize the data
```
#plt.figure(figsize=(10, 10))
#for images, labels in train_ds.take(1):
# for i in range(9):
# ax = plt.subplot(3, 3, i + 1)
# plt.imshow(images[i].numpy().astype("uint8"))
# plt.title(class_names[labels[i]])
# plt.axis("off")
```
## Manual retrieving of images by batch
```
#for image_batch, labels_batch in train_ds:
# print(image_batch.shape)
# print(labels_batch.shape)
# break
```
## Performance
```
AUTOTUNE = tf.data.experimental.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
```
# Data normalization example
```
normalization_layer = layers.experimental.preprocessing.Rescaling(1./255)
normalized_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
image_batch, labels_batch = next(iter(normalized_ds))
first_image = image_batch[0]
# Notice the pixels values are now in `[0,1]`.
print(np.min(first_image), np.max(first_image))
```
# Model creation
```
num_classes = 20
model = Sequential([
layers.experimental.preprocessing.Rescaling(1./255, input_shape=input_shape),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
```
## Compile the model
```
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# model.summary()
```
# Model training
```
epochs=10
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
```
# Visualize training results
```
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
```
# Testing the basic model
```
img_dir = "App/res/images_AI/"
img_names = {'green.png', 'purple.png', 'red.png', 'yellow.png'}
# images vars
#img_dir = "App/res/images/";
#img_names = {'jaune.png', 'rouge.png', 'vert.jpg', 'violet.png'}
cpt_green = 0
cpt_purple = 0
cpt_red = 0
cpt_yellow = 0
for i in range(1, 21):
for name in img_names:
img = keras.preprocessing.image.load_img(
img_dir + str(i) + '/' + str(i) + name, target_size=(img_height, img_width)
)
# draw the chart containing the image with boxes
#plt.imshow(img)
#plt.title(name)
#plt.show()
img_array = keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0) # Create a batch
predictions = model.predict(img_array)
score = tf.nn.softmax(predictions[0])
number = class_names[np.argmax(score)]
print(
"This image most likely belongs to {} with a {:.2f} percent confidence."
.format(class_names[np.argmax(score)], 100 * np.max(score))
)
if int(number) == i:
if name == 'green.png':
cpt_green += 1
elif name == 'purple.png':
cpt_purple += 1
elif name == 'red.png':
cpt_red += 1
elif name == 'yellow.png':
cpt_yellow += 1
print("Number of correct red = " + str(cpt_red))
print("Number of correct purple = " + str(cpt_purple))
print("Number of correct green = " + str(cpt_green))
print("Number of correct yellow = " + str(cpt_yellow))
```
# Data augmentation
Doesn't work: adding `input_shape` to the augmentation layers raises a `NotImplementedError` (apparently triggered by NumPy calls inside `keras.Sequential`).
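A hedged sketch of a workaround (untested against this exact dataset): on recent TF versions the preprocessing layers live directly under `tf.keras.layers`, so the `experimental` path is not needed, and `input_shape` is passed only to the first layer.

```
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical augmentation stack; input_shape is given only to the first layer
data_augmentation = tf.keras.Sequential([
    layers.RandomFlip('horizontal', input_shape=(160, 160, 3)),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# Augmentation is only active in training mode; at inference it is a no-op
images = tf.random.uniform((4, 160, 160, 3))
augmented = data_augmentation(images, training=True)
print(augmented.shape)
```

If this runs, `data_augmentation` can be placed as the first element of the `Sequential` model in place of the bare `Rescaling` layer.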
# Dropout without data augmentation
```
num_classes = 20
model = Sequential([
layers.experimental.preprocessing.Rescaling(1./255, input_shape=input_shape),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Dropout(0.2),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
```
# Dropout with Data augmentation
Skipped: it depends on the data augmentation step above, which fails.
# Recompiling the model
```
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
```
## Retraining the model
```
epochs = 15
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
```
# Visualize the new results
```
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
```
# Testing the model
```
img_dir = "App/res/images_AI/"
img_names = {'green.png', 'purple.png', 'red.png', 'yellow.png'}
# images vars
#img_dir = "App/res/images/";
#img_names = {'jaune.png', 'rouge.png', 'vert.jpg', 'violet.png'}
cpt_green = 0
cpt_purple = 0
cpt_red = 0
cpt_yellow = 0
for i in range(1, 21):
for name in img_names:
img = keras.preprocessing.image.load_img(
img_dir + str(i) + '/' + str(i) + name, target_size=(img_height, img_width)
)
# draw the chart containing the image with boxes
#plt.imshow(img)
#plt.title(name)
#plt.show()
img_array = keras.preprocessing.image.img_to_array(img)
img_array = tf.expand_dims(img_array, 0) # Create a batch
predictions = model.predict(img_array)
score = tf.nn.softmax(predictions[0])
number = class_names[np.argmax(score)]
print(
"This image most likely belongs to {} with a {:.2f} percent confidence."
.format(class_names[np.argmax(score)], 100 * np.max(score))
)
if int(number) == i:
if name == 'green.png':
cpt_green += 1
elif name == 'purple.png':
cpt_purple += 1
elif name == 'red.png':
cpt_red += 1
elif name == 'yellow.png':
cpt_yellow += 1
print("Number of correct red = " + str(cpt_red))
print("Number of correct purple = " + str(cpt_purple))
print("Number of correct green = " + str(cpt_green))
print("Number of correct yellow = " + str(cpt_yellow))
```
# Saving the model
```
model.save('AI_Token_Recognition')
```
# Loading the model
```
new_model = keras.models.load_model('AI_Token_Recognition')
# Check its architecture
new_model.summary()
```
# Converting the model to Tensorflow Lite
To use in a Python file
## From a SavedModel on the disk
## From a Keras model trained in the same file
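Both conversion paths can be sketched as follows. A tiny stand-in model is built (named `demo_model` so it does not clobber the trained one) so the snippet runs on its own; for the real model, point the converter at the `AI_Token_Recognition` folder instead.

```
import tensorflow as tf

# Tiny stand-in model; replace with the trained token-recognition model
demo_model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])

# From a Keras model held in memory
converter = tf.lite.TFLiteConverter.from_keras_model(demo_model)
tflite_model = converter.convert()

# From a SavedModel directory on disk (e.g. the 'AI_Token_Recognition' folder)
tf.saved_model.save(demo_model, 'tflite_demo_model')
converter = tf.lite.TFLiteConverter.from_saved_model('tflite_demo_model')
tflite_from_disk = converter.convert()

# Write the flatbuffer out for use from the TFLite interpreter in a Python file
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```

The resulting `.tflite` file can then be loaded with `tf.lite.Interpreter` from a plain Python script.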
# A brief demonstration of the double spike toolbox
This is a quick guide to the main features of the double spike toolbox for Python. The package uses the NumPy, SciPy, and Matplotlib libraries.
```
import doublespike as ds
import matplotlib.pyplot as plt
import numpy as np
```
# IsoData
A python object called `IsoData` is used to store data on the different isotope systems. It can be invoked with a string giving the element name. The object is initialised with a number of sensible default values, but these can then be altered by the user. Information in `IsoData` includes the isotope numbers, the atomic masses, the standard composition, and the default error model coefficients. The default information stored about Fe is shown below.
```
isodata_fe = ds.IsoData("Fe")
print(isodata_fe)
```
The stored data can be accessed and set through the object attributes. For example, shown below for Fe are the isotope numbers, standard composition, the atomic masses, and the third and fourth Oak Ridge National Labs (ORNL) spike compositions (corresponding to the 57Fe and 58Fe spikes). Note that all values are expressed as proportions of each isotope, rather than as isotopic ratios. Note also that python indexing starts from 0, so that the index 2 corresponds to the third spike.
```
print(isodata_fe.isonum)
print(isodata_fe.standard)
print(isodata_fe.mass)
print(isodata_fe.rawspike[2, :]) # third ORNL single spike composition
print(isodata_fe.rawspike[3, :]) # fourth ORNL single spike composition
```
## 2D Error surfaces 1
The function `errorcurve2d` plots a 2D error surface as a contour plot, showing how the error varies with both double spike to sample mixture proportions and the proportions in which the two single spikes are mixed to make the double spike. The example below is for a 57Fe-58Fe double spike, using pure spikes.
```
ds.errorcurve2d(isodata_fe, "pure", [57, 58])
```
## 2D Error surfaces 2
Calculations can also be performed using real spikes rather than hypothetical pure spikes. Below is an example using the third and fourth ORNL spikes (corresponding to 57Fe and 58Fe spikes). This produces Figure 2 in the manuscript. Spike purity usually has only a little effect.
```
ds.errorcurve2d(isodata_fe, "real", [2, 3])
```
## 2D Error surfaces 3
Some elements have more than four isotopes, and in these cases the isotopes to use in the inversion must be specified. Here is an example for Ca, with a 42Ca-48Ca double spike and 40Ca, 42Ca, 44Ca, 48Ca used in the inversion. The second and sixth ORNL spikes are the 42Ca and 48Ca spikes. This produces Figure 5 in the manuscript.
```
isodata_ca = ds.IsoData("Ca")
print("spike 2:", isodata_ca.rawspike[1, :])
print("spike 6:", isodata_ca.rawspike[5, :])
ds.errorcurve2d(isodata_ca, "real", [1, 5], [40, 42, 44, 48])
```
## Error curves 1
The function `errorcurve` plots the error in either the fractionation factor $\alpha$ or a chosen ratio as a function of the double spike to sample proportion. Note again that all compositions are specified by proportions of each isotope rather than by ratios. Here we look at the error curve for a double spike which is 50% 57Fe and 50% 58Fe.
```
spike = [0, 0, 0.5, 0.5]
ds.errorcurve(isodata_fe, spike)
```
## Error curves 2
The function `errorcurve2` plots the error in either the fractionation factor alpha or a chosen ratio as a function of the proportion of the two single spikes that make the double spike. The proportion in which double spike and sample are mixed must be specified, as must the single spikes to use. We give an example here for a pure 57Fe-58Fe double spike with 50% double spike to 50% sample.
```
ds.errorcurve2(isodata_fe, "pure", 0.5, [57, 58])
```
## Optimal spikes 1
The function `optimalspike` finds double spike compositions which minimise the error on alpha or a chosen ratio. This can be done either for pure spikes or with the real spikes in the `rawspike` attribute. The following example finds the best 57Fe-58Fe spike using the real spikes available from ORNL; the third and fourth spikes correspond to the 57Fe and 58Fe spikes. The optimal double spike turns out to be quite close to a 50-50 mix of the available spikes (`optspikeprop`). The actual double spike compositions are in `optspike`, the optimal double spike-sample mixing proportions in `optprop`, the error estimates in `opterr`, rescaled error estimates in `optppmperamu`, and the isotopes used in the inversion in `optisoinv`.
```
ds.optimalspike(isodata_fe, "real", [2, 3])
```
## Optimal spikes 2
If the isotopes to spike are not specified, the `optimalspike` function checks all possible combinations. An example for Fe pure spikes is shown below. This produces Table 1 in the manuscript.
```
ds.optimalspike(isodata_fe, "pure")
```
## Optimal spikes 3
An example for Fe ORNL spikes is shown below. This produces Table 2 in the manuscript.
```
ds.optimalspike(isodata_fe, "real")
```
## Optimal spikes 4
By default, `optimalspike` minimises the error on alpha, but for radiogenic work we often wish to minimise the error on a particular ratio. An example of this is Pb. Shown below is the result of minimising the error on 206Pb/204Pb. This produces part of Table 3 in the manuscript.
```
isodata_pb = ds.IsoData("Pb")
ds.optimalspike(isodata_pb, "pure", errorratio=[206, 204])
```
## Optimal spikes 5
For elements with more than four isotopes, such as Ca, `optimalspike` tries all combinations of four isotopes in the inversion. This produces Table 4 in the manuscript. We only display the first 31 rows of the optimal spike composition.
```
os = ds.optimalspike(isodata_ca, "pure")
print(os["optspike"][0:31, :])
```
## Optimal error curves 1
The function `errorcurveoptimalspike` calculates the `optimalspike` and then plots the `errorcurve`.
```
ds.errorcurveoptimalspike(isodata_fe, "real", [2, 3])
```
## Optimal error curves 2
If the isotopes to spike are not specified, all possible combinations are shown. This produces Figure 3 in the manuscript.
```
ds.errorcurveoptimalspike(isodata_fe, "real")
```
## Monte Carlo fake mass spec runs
`monterun` performs Monte Carlo fake mass spec runs.
The example below uses a 50-50 spike-sample mix, the pure 50-50 57Fe-58Fe spike used earlier, a natural fractionation of -0.2, an instrumental fractionation of 1.8, with 1000 Monte Carlo samples. The first 10 mixture measurements are shown.
```
measured = ds.monterun(isodata_fe, 0.5, spike, -0.2, 1.8, 1000)
measured[0:10, :]
```
## Double spike inversions
`dsinversion` performs the double spike inversion. Here we run the double spike inversion on the Monte-Carlo generated data with the chosen spike. We then produce a figure showing how the value of alpha varies over the run.
```
out = ds.dsinversion(isodata_fe, measured, spike)
plt.plot(out["alpha"])
plt.ylabel(r"$\alpha$");
```
## Error estimates
The `errorestimate` routine estimates the errors by linear error propagation. Here we compare the error obtained from the Monte-Carlo simulation with that predicted by linear error propagation. The fact that these are close is a good validation of the linear error propagation method.
```
monteerror = np.std(out["alpha"])
predictederror = ds.errorestimate(isodata_fe, 0.5, spike, alpha=-0.2, beta=1.8)[0]
print("Monte Carlo error:", monteerror, "\nPredicted error:", predictederror)
```
## Error model 1
The coefficients of the error model are contained in the `errormodel` attribute of the `IsoData` object. The coefficients can be specified for the measured, standard (unspiked run), and double spike compositions. See Appendix C of the manuscript for their definition. The choice of error model is a key part of determining what makes an optimal spike, and the user should carefully consider the assumptions implicit in any error model and decide whether they are appropriate for their system.
```
isodata_fe.set_errormodel(R=1e11, deltat=8)
isodata_fe.errormodel["measured"]
```
The error model set above fixes the total intensity of the mixture measurement to be 10 V, and the coefficients $a$, $b$, and $c$ correspond to those in (34) in the manuscript. The errors incorporate Johnson–Nyquist noise and counting statistics for a resistance $R = 10^{11}$ Ω and an integration time $\Delta t=8$ s.
## Error model 2
The function `set_errormodel` can be used to simply change the intensity for the error model. In the example below, a doubling of the intensity decreases the error by roughly a factor of $1/\sqrt{2}$.
```
isodata_fe.set_errormodel(intensity=10)
error1 = ds.errorestimate(isodata_fe, 0.5, spike, alpha=-0.2, beta=1.8)[0]
print("Error 1:", error1)
isodata_fe.set_errormodel(intensity=20)
error2 = ds.errorestimate(isodata_fe, 0.5, spike, alpha=-0.2, beta=1.8)[0]
print("Error 2:", error2)
print("Error 2/Error 1:", error2 / error1)
isodata_fe.set_errormodel() # return error model to defaults
```
## Error model 3
The default error model of the double spike toolbox fixes the total voltage of the beams for the overall mixture at a given level. When sample-limited, it may be more appropriate to consider an error model where the voltage for the sample is fixed (see [John 2012, J. Anal. At. Spectrom.](https://doi.org/10.1039/C2JA30215B)). In the toolbox this is achieved by changing the error model type from `'fixed-total'` to `'fixed-sample'`. The code below recreates Figure 8 of [Klaver and Coath 2019, Geostand. Geoanal. Res.](https://doi.org/10.1111/ggr.12248).
```
isodata_ni = ds.IsoData("Ni")
isoinv = [58, 60, 61, 62]
spike6062 = [0.0132, 0.3295, 0.0014, 0.6547, 0.0012]
spike6162 = [0.0109, 0.0081, 0.4496, 0.5297, 0.0017]
# fix the sample intensity at 0.5 V for all runs
isodata_ni.set_errormodel(intensity=0.5, measured_type="fixed-sample")
ds.errorcurve(isodata_ni, spike6062, isoinv, label="$^{60}$Ni-$^{62}$Ni spike")
ds.errorcurve(isodata_ni, spike6162, isoinv, label="$^{61}$Ni-$^{62}$Ni spike")
plt.ylim([0, 0.06])
plt.legend()
isodata_ni.set_errormodel() # return errormodel to defaults
```