```
# default_exp infousa
```
# Info-USA Intake and Operations
> This notebook uses Info-USA data to generate a portion of BNIA's Vital Signs report.
Todo:
- Wrap as Function
#### __Indicators Used__
- 131 artbusXX Arts and Culture
- 132 artempXX Arts and Culture
- 143 numbusXX Workforce and Economic Development
- 144 totempXX Workforce and Economic Development
- 145 smlbusXX Workforce and Economic Development
- 150 biz1_XX Workforce and Economic Development
- 151 biz2_XX Workforce and Economic Development
- 152 biz4_XX Workforce and Economic Development
- 157 neiindXX Workforce and Economic Development
- 158 neibusXX Workforce and Economic Development
- 159 neiempXX Workforce and Economic Development
- 201 cebusXX Arts and Culture
- 202 ceempXX Arts and Culture
This colab and more can be found at https://github.com/BNIA/vitalsigns.
The following example pulls point geodata from a Postgres database.
We will pull the Postgres point data in two ways:
- An SQL query that uses ST_Transform(the_geom, 4326) to transform the_geom's CRS from the database's binary encoding into standard lat/longs.
- A plain SQL query, pulling the data in as EPSG:2248 with gpd.io.sql.read_postgis() and converting the CRS using .to_crs(epsg=4326).
- These examples will not work in Colab, as there is no local database to connect to; they have been commented out for that reason.
```
pip install psycopg2
import psycopg2
# This Notebook can be downloaded to connect to a database
# CISJFIDB.cis.ubalt.edu -> 192.168.2.43
conn = psycopg2.connect(host='localhost', dbname='dbname', user='jfi', password='pass', port='port')
# DB Import Method One
sql1 = 'SELECT the_geom, gid, geogcode, ooi, address, addrtyp, city, block, lot, desclu, existing FROM housing.mdprop_2017v2 limit 100;'
pointData = gpd.io.sql.read_postgis(sql1, conn, geom_col='the_geom', crs=2248)
pointData = pointData.to_crs(epsg=4326)
# DB Import Method Two
sql2 = 'SELECT ST_Transform(the_geom,4326) as the_geom, ooi, desclu, address FROM housing.mdprop_2017v2;'
pointData = gpd.GeoDataFrame.from_postgis(sql2, conn, geom_col='the_geom', crs=4326)
pointData.head()
pointData.plot()
```
## About this Tutorial:
### What's Inside?
#### __The Tutorial__
This notebook was made to create Vital Signs Indicators from an Info-USA geographic dataset.
#### __Objectives__
- Reading in data (points/ geoms)
# Guided Walkthrough
## SETUP:
### Import Modules
```
%%capture
! pip install -U -q PyDrive
! pip install geopy
! pip install geopandas
! pip install geoplot
!apt install libspatialindex-dev
!pip install rtree
%%capture
# These imports will handle everything
import os
import sys
import csv
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import geopandas as gpd
from geopandas import GeoDataFrame
import psycopg2
import pyproj
from pyproj import Proj, transform
# conda install -c conda-forge proj4
from shapely.geometry import Point
from shapely import wkb
from shapely.wkt import loads
# https://pypi.org/project/geopy/
from geopy.geocoders import Nominatim
# In case file is KML, enable support
import fiona
fiona.drvsupport.supported_drivers['kml'] = 'rw'
fiona.drvsupport.supported_drivers['KML'] = 'rw'
from IPython.display import clear_output
clear_output(wait=True)
import ipywidgets as widgets
from ipywidgets import interact, interact_manual
```
### Configure Environment
```
# This will just beautify the output
pd.set_option('display.expand_frame_repr', False)
pd.set_option('display.precision', 2)
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# pd.set_option('display.expand_frame_repr', False)
# pd.set_option('display.precision', 2)
# pd.reset_option('max_colwidth')
pd.set_option('max_colwidth', 20)
# pd.reset_option('max_colwidth')
```
### (Optional) Google Drive Access
```
# (Optional) Run this cell to gain access to Google Drive (Colabs only)
from google.colab import drive
# Colabs operates in a virtualized environment
# Colabs default directory is at ~/content.
# We mount Drive into a temporary folder at '~/content/drive'
drive.mount('/content/drive')
cd drive/'My Drive'/colabs/DATA
cd ../
cd postgres
ls
```
# Permits
#### TPOP CSA and Baltimore
Get CSAs
```
#collapse_output
#collapse_input
csa = "https://services1.arcgis.com/mVFRs7NF4iFitgbY/ArcGIS/rest/services/Tpop/FeatureServer/0/query?where=1%3D1&outFields=*&returnGeometry=true&f=pgeojson"
csa = gpd.read_file(csa);
csa.head(1)
```
Get Baltimore City
```
url2 = "https://services1.arcgis.com/mVFRs7NF4iFitgbY/ArcGIS/rest/services/Tpop/FeatureServer/1/query?where=1%3D1&outFields=*&returnGeometry=true&f=pgeojson"
csa2 = gpd.read_file(url2);
csa2['CSA2010'] = csa2['City_1']
csa2['OBJECTID'] = 56
csa2 = csa2.drop(columns=['City_1'])
csa2.head()
```
Append (do not prepend) the Baltimore City record. We put it at the bottom of the df because the point-on-polygon match returns only the last matching CSA label.
```
# DataFrame.append was removed in pandas 2.0; concat appends csa2 at the bottom
csa = pd.concat([csa, csa2]).reset_index(drop=True)
csa.head(3)
csa.tail(3)
csa.head()
```
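The walkthrough below calls two helpers, getPolygonOnPoints and getPointsInPolygons, that are used but never defined in this notebook. A minimal sketch of what they might look like, with signatures inferred from the call sites and built on geopandas spatial joins (the 'pointsinpolygon' column name matches its use a few cells down; note that one later cell expects a 'number of points' column instead):

```python
import geopandas as gpd

def getPolygonOnPoints(points, polygons, ptGeomCol, polyGeomCol, label):
    # Tag each point with the label of the polygon that contains it
    joined = gpd.sjoin(points, polygons[[label, polyGeomCol]],
                       how='left', predicate='within')
    return joined.drop(columns=['index_right'])

def getPointsInPolygons(points, polygons, ptGeomCol, polyGeomCol):
    # Tally how many points fall inside each polygon
    joined = gpd.sjoin(polygons, points[[ptGeomCol]],
                       how='inner', predicate='contains')
    counts = joined.groupby(level=0).size()
    out = polygons.copy()
    out['pointsinpolygon'] = counts.reindex(out.index, fill_value=0)
    return out
```

Both are sketches only; the production versions live outside this notebook and may differ.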
## Import
```
permits = gpd.read_file("Permits_2018.shp");
permits.columns
permits.crs
permits.head(5)
# Convert to EPSG:4326
permits = permits.to_crs(epsg=4326)
permits.crs
# Convert Geom to Coords
permits['x'] = permits.geometry.x
permits['y'] = permits.geometry.y
permits.head(5)
permits = permits[ permits.geometry.y > 38 ]
# Reference: All Points
base = csa.plot(color='white', edgecolor='black')
permits.plot(ax=base, marker='o', color='green', markersize=5);
permits.columns
# Get CSA Labels for all Points.
permitsCsa = getPolygonOnPoints(permits, csa, 'geometry', 'geometry', 'CSA2010' )
# permitsCsa = permitsCsa.drop('geometry',axis=1)
permitsCsa.head(1)
```
## Processing
All
```
permitsAll = permits
# Reference: All Points
base = csa.plot(color='white', edgecolor='black')
permitsAll.plot(ax=base, marker='o', color='green', markersize=5);
permits = permitsAll
# y < 0
permitsLessThanZero = permits[ permits.geometry.y < 0 ]
print('Y<0: ', permitsLessThanZero.size, '\n')
permitsLessThanZero.plot()
# y > 0
permitsGreaterThanZero = permits[ permits.geometry.y > 0 ]
print('Y>0: ', permitsGreaterThanZero.size, '\n')
permitsGreaterThanZero.plot();
# 0 < y < 38
permitsUnder38 = permits[ (permits.geometry.y > 0) & (permits.geometry.y < 38) ]
print('0 < y < 38: ', permitsUnder38.size, '\n')
permitsUnder38.plot();
# y > 38
permitsOver38 = permits[ permits.geometry.y > 38 ]
print('Y>38: ', permitsOver38.size, '\n')
permitsOver38.plot();
```
# InfoUsa
#### Read in Data Directly
If you are using GeoPandas, direct imports only work with GeoJSON and shapefiles.
```
gdf = gpd.read_file("InfoUSA_2018.shp");
gdf.head(5)
gdf['prim_naics_short'] = gdf.prim_naics.astype(str).str[:-2].astype(np.int64)
# All but 'geometry', prim_naics, prim_sic, 'empl_size', 'X', 'Y'
# ['coname', 'empl_rng', 'sales_vol', 'sales_rng', 'psic_dsc', 'scnd_sic1', 'scnd_dsc1', 'scnd_sic2', 'scnd_dsc2',
# 'cr_a_score', 'cr_n_score', 'headqtr', 'first_year', 'sq_foot', 'firm_indv', 'fleetsize', 'specialty1',
# 'specialty2', 'pnaics_dsc', 'acct_exp', 'ad_exp', 'offsup_exp', 'pay_exp', 'rent_exp', 'tech_exp', 'tele_exp',
# 'ins_exp', 'legal_exp', 'pckg_exp', 'pirnt_exp', 'prof_exp', 'templbrexp', 'util_exp']"""
gdf.columns
"""
gdf = gdf.drop(['Status', 'Score', 'Match_type', 'Side', 'Match_addr',
'ARC_Street', 'recorddate', 'recordobs', 'recordobs_', 'recordobs1',
'source', 'address', 'city', 'state', 'zipcode', 'mc_route',
'md_barcode', 'loc_addr', 'loc_city', 'loc_state', 'loc_zip',
'locbarcode', 'loc_route', 'county', 'phn_nbr', 'web_addr', 'last_name',
'first_name', 'ctct_title', 'ctct_prof', 'ctct_gen',
'headqtr', 'ofc_size', 'sq_foot', 'pub_pvt',
'ind_code', 'yellowpage', 'metro_area', 'infousa_id', 'latitude',
'longitude', 'match_code'], axis=1)
"""
gdf = gdf.drop(['Status', 'Score', 'Match_type', 'Side', 'Match_addr',
'ARC_Street', 'recorddate', 'recordobs', 'recordobs_', 'recordobs1',
'source', 'address', 'city', 'state', 'zipcode', 'mc_route',
'md_barcode', 'loc_addr', 'loc_city', 'loc_state', 'loc_zip',
'locbarcode', 'loc_route', 'county', 'phn_nbr', 'web_addr', 'last_name',
'first_name', 'ctct_title', 'ctct_prof', 'ctct_gen',
'sales_vol', 'sales_rng',
'scnd_sic1', 'scnd_dsc1', 'scnd_sic2', 'scnd_dsc2', 'cr_a_score',
'cr_n_score', 'headqtr', 'ofc_size', 'sq_foot',
'firm_indv', 'pub_pvt', 'fleetsize', 'specialty1', 'specialty2',
'ind_code', 'yellowpage', 'metro_area', 'infousa_id', 'latitude',
'longitude', 'match_code', 'acct_exp',
'ad_exp', 'offsup_exp', 'pay_exp', 'rent_exp', 'tech_exp', 'tele_exp',
'ins_exp', 'legal_exp', 'pckg_exp', 'pirnt_exp', 'prof_exp',
'templbrexp', 'util_exp'], axis=1)
gdf.columns
gdf = gdf[ gdf['Y'] > 0 ]
gdf = gdf.drop(['X','Y'],axis=1)
# Convert to EPSG:4326
gdf = gdf.to_crs(epsg=4326)
gdf.crs
# Reference: All Points
base = csa.plot(color='white', edgecolor='black')
gdf.plot(ax=base, marker='o', color='green', markersize=5);
# Number of Records
gdf.head()
# Get CSA Labels for all Points.
infoUsaCsa = getPolygonOnPoints(gdf, csa, 'geometry', 'geometry', 'CSA2010' )
infoUsaCsa = infoUsaCsa.drop('geometry',axis=1)
infoUsaCsa.head(1)
# Get counts of points in polygons. This function returns CSA's with a tally of points within it.
infoUsaCsaTotals = getPointsInPolygons(gdf, csa, 'geometry', 'geometry')
infoUsaCsaTotals = infoUsaCsaTotals.drop('geometry',axis=1)
infoUsaCsaTotals = pd.concat([infoUsaCsaTotals, pd.DataFrame([{'CSA2010': 'Baltimore City', 'tpop10': 620961, 'pointsinpolygon': infoUsaCsaTotals['pointsinpolygon'].sum()}])], ignore_index=True)  # DataFrame.append was removed in pandas 2.0
infoUsaCsaTotals['numbus'] = infoUsaCsaTotals['pointsinpolygon']
infoUsaCsaTotals = infoUsaCsaTotals.drop('pointsinpolygon',axis=1)
infoUsaCsaTotals.tail()
infoUsaCsaTotals.to_csv('numbus18.csv', index=False)
```
##### Process Data
```
# Check the current CRS
gdf.crs
# This is good data
gdfg = gdf[ gdf['Y'] > 0 ]
gdfg.head(1)
gdfg.size
# This is missing its GIS coordinates
gdfz = gdf[ gdf['Y'] == 0 ]
gdfz.head(1)
gdfz.size
# Convert to EPSG:4326
gdf = gdf.to_crs(epsg=4326)
gdf.crs
gdfleft = gdf[ gdf['Y'] >= 0 ]
gdfleft.plot()
# Number of Records
gdfleft.size/len(gdfleft.columns)
```
#### Explore Data
```
import os
import folium
import numpy as np
# from folium import plugins
from folium.plugins import HeatMap
def map_points(df, lat_col='latitude', lon_col='longitude',
               popup='latitude', zoom_start=11, plot_points=False,
               pt_radius=15, draw_heatmap=False, heat_map_weights_col=None,
               heat_map_weights_normalize=True, heat_map_radius=15):
    """Creates a map given a dataframe of points. Can also produce a heatmap overlay.
    Args:
        df: dataframe containing points to map
        lat_col: Column containing latitude (string)
        lon_col: Column containing longitude (string)
        zoom_start: Integer representing the initial zoom of the map
        plot_points: Add points to map (boolean)
        pt_radius: Size of each point
        draw_heatmap: Add heatmap to map (boolean)
        heat_map_weights_col: Column containing heatmap weights
        heat_map_weights_normalize: Normalize heatmap weights (boolean)
        heat_map_radius: Size of heatmap point
    Returns:
        folium map object
    """
    ## center the map in the middle of the points
    if isinstance(df, GeoDataFrame):
        middle_lat = df['geometry'].y.median()
        middle_lon = df['geometry'].x.median()
    else:
        middle_lat = df[lat_col].median()
        middle_lon = df[lon_col].median()
    print(middle_lat, middle_lon)
    # https://python-visualization.github.io/folium/modules.html
    curr_map = folium.Map(location=[middle_lat, middle_lon],
                          width=750, height=500,
                          zoom_start=zoom_start)
    # add points to map
    if plot_points:
        for _, row in df.iterrows():
            if isinstance(df, GeoDataFrame):
                coords = [row['geometry'].y, row['geometry'].x]
            else:
                coords = [row[lat_col], row[lon_col]]
            folium.CircleMarker(coords,
                                radius=pt_radius,
                                popup=row[popup],
                                fill_color="#3db7e4",  # divvy color
                                ).add_to(curr_map)
    # add heatmap
    if draw_heatmap:
        # convert to (n, 2) or (n, 3) matrix format
        if heat_map_weights_col is None:
            cols_to_pull = [lat_col, lon_col]
        elif heat_map_weights_normalize:  # if we have to normalize
            df[heat_map_weights_col] = \
                df[heat_map_weights_col] / df[heat_map_weights_col].sum()
            cols_to_pull = [lat_col, lon_col, heat_map_weights_col]
        else:
            cols_to_pull = [lat_col, lon_col, heat_map_weights_col]
        if isinstance(df, GeoDataFrame):
            stations = df['geometry'].apply(lambda p: [p.y, p.x]).tolist()
        else:
            stations = df[cols_to_pull].values
        curr_map.add_child(HeatMap(stations, radius=heat_map_radius))
    return curr_map
```
###### Explore
```
gdfleft.head()
```
https://towardsdatascience.com/interactive-controls-for-jupyter-notebooks-f5c94829aee6
```
# Points
@interact
def show_articles_more_than(column= gdfleft.columns ):
    return gdfleft.plot( column=column, legend=True)
# Heatmap
@interact
def show_articles_more_than(column= gdfleft.columns ):
    return map_points(gdfleft.head(500), lat_col='Y', lon_col='X', popup=column, zoom_start=11, plot_points=False, pt_radius=15,
                      draw_heatmap=column, heat_map_weights_col=None, heat_map_weights_normalize=True, heat_map_radius=15)
# MarkerCluster.ipynb
# https://github.com/python-visualization/folium/blob/master/examples/MarkerCluster.ipynb
from folium.plugins import MarkerCluster
m = folium.Map(location=[39.28759453969165, -76.61278931706487], zoom_start=12)
marker_cluster = MarkerCluster().add_to(m)
stations = gdfleft.head(1000)['geometry'].apply(lambda p: folium.Marker( location=[p.y,p.x], popup='Add popup text here.', icon=None ).add_to(marker_cluster) )
m
# Interact with specification of arguments
@interact
def show_articles_more_than(column = gdfleft.columns ):
    return gpd.overlay(csa, gdfleft.head(), how='difference').plot(alpha=0.5, edgecolor='k', column=column, cmap='magma', legend=True);
```
##### Choropleth Timeslider
```
import os
import folium
import geopandas as gpd
import pandas as pd
import numpy as np
from branca.colormap import linear
from folium.plugins import TimeSliderChoropleth
#TimeSliderChoropleth.ipynb
# https://github.com/python-visualization/folium/blob/master/examples/TimeSliderChoropleth.ipynb
gdf = csa.copy()
%matplotlib inline
ax = gdf.plot(figsize=(10, 10))
```
To simulate data sampled at different times, we randomly generate data for n_periods rows. __Note__ that the geodata and the randomly sampled data are linked through the feature_id, which is the index of the GeoDataFrame.
```
periods = 10
datetime_index = pd.date_range('2010', periods=periods, freq='Y')
dt_index_epochs = (datetime_index.astype(int) // 10**9).astype('U10')
datetime_index
# Style each boundary with randomness.
styledata = {}
for country in gdf.index:
    df = pd.DataFrame(
        {'color': np.random.normal(size=periods),
         'opacity': [1, 2, 3, 4, 5, 6, 7, 8, 9, 1]},
        index=dt_index_epochs
    )
    df = df.cumsum()
    styledata[country] = df
ax = df.plot()
df.head()
```
We see that we generated two series of data for each country; one for color and one for opacity. Let's plot them to see what they look like.
```
max_color, min_color, max_opacity, min_opacity = 0, 0, 0, 0
for country, data in styledata.items():
    max_color = max(max_color, data['color'].max())
    min_color = min(min_color, data['color'].min())
    max_opacity = max(max_opacity, data['opacity'].max())
    min_opacity = min(min_opacity, data['opacity'].min())
linear.PuRd_09.scale(min_color, max_color)
```
We want to map the column named color to a hex color. To do this we use a normal colormap. To create the colormap, we calculate the maximum and minimum values over all the timeseries. We also need the max/min of the opacity column, so that we can map that column into a range [0,1].
```
max_color, min_color, max_opacity, min_opacity = 0, 0, 0, 0
for country, data in styledata.items():
    max_color = max(max_color, data['color'].max())
    min_color = min(min_color, data['color'].min())
    max_opacity = max(max_opacity, data['opacity'].max())
    min_opacity = min(min_opacity, data['opacity'].min())
from branca.colormap import linear
cmap = linear.PuRd_09.scale(min_color, max_color)
def norm(x): return (x - x.min()) / (x.max() - x.min())
for country, data in styledata.items():
    data['color'] = data['color'].apply(cmap)
    data['opacity'] = norm(data['opacity'])
styledata
```
Finally we use pd.DataFrame.to_dict() to convert each dataframe into a dictionary, and place each of these in a map from country id to data.
```
from folium.plugins import TimeSliderChoropleth
m = folium.Map([39.28759453969165, -76.61278931706487], zoom_start=12)
g = TimeSliderChoropleth(
    gdf.to_json(),
    styledict={
        str(country): data.to_dict(orient='index')
        for country, data in styledata.items()
    }
).add_to(m)
m
```
##### Points and Polygons. Difference, Intersection.
```
csa = gpd.read_file("https://opendata.arcgis.com/datasets/b738a8587b6d479a8824d937892701d8_0.geojson");
from geopandas import GeoSeries
# The hard way
points = list()
for _, row in gdfleft.iterrows(): points.append( Point( row['geometry'].x, row['geometry'].y ) )
points = GeoSeries( points )
# The easy way
circles = gdfleft.geometry.buffer(.001)
circles.plot()
# collapse these circles into a single shapely MultiPolygon geometry with unary_union
mp = circles.unary_union
csa['geometry'].intersection( mp ).plot()
csa['geometry'].difference( mp ).plot()
mp.area / csa.geometry.area
```
##### Geometric Manipulations
```
# Draw tool. Create and export your own boundaries
from folium.plugins import Draw
m = folium.Map()
draw = Draw()
draw.add_to(m)
m = folium.Map(location=[-27.23, -48.36], zoom_start=12)
draw = Draw(export=True)
draw.add_to(m)
# m.save(os.path.join('results', 'Draw1.html'))
m
newcsa = csa.copy()
newcsa['geometry'] = csa.boundary
newcsa.plot(column='CSA2010' )
newcsa = csa.copy()
newcsa['geometry'] = csa.envelope
newcsa.plot(column='CSA2010' )
newcsa = csa.copy()
newcsa['geometry'] = csa.convex_hull
newcsa.plot(column='CSA2010' )
# , cmap='OrRd', scheme='quantiles'
# newcsa.boundary.plot( )
newcsa = csa.copy()
newcsa['geometry'] = csa.simplify(30)
newcsa.plot(column='CSA2010' )
newcsa = csa.copy()
newcsa['geometry'] = csa.buffer(0.01)
newcsa.plot(column='CSA2010' )
newcsa = csa.copy()
newcsa['geometry'] = csa.rotate(30)
newcsa.plot(column='CSA2010' )
newcsa = csa.copy()
newcsa['geometry'] = csa.scale(3, 2)
newcsa.plot(column='CSA2010' )
newcsa = csa.copy()
newcsa['geometry'] = csa.skew(1, 10)
newcsa.plot(column='CSA2010' )
```
### Points in CSAs
```
# Reference: All Points
base = csa.plot(color='white', edgecolor='black')
infoUsaCsa.plot(ax=base, marker='o', color='green', markersize=5);
```
Generate a GeoSeries containing points.
Note that this can be simplified a bit, since geometry is available as an attribute on a GeoDataFrame, and the intersection and difference methods are implemented with the "&" and "-" operators, respectively. For example, the latter could have been expressed simply as csa.geometry - mp.
It's easy to do things like calculate the fractional area in each CSA that is in the holes:
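The operator shorthand can be illustrated on a toy GeoSeries (the names here are hypothetical, echoing the boros example from the geopandas docs):

```python
from shapely.geometry import Point, Polygon
import geopandas as gpd

# One square "borough" and a circular hole punched through it
boros = gpd.GeoSeries([Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])])
mp = Point(2, 2).buffer(1)  # stand-in for the merged circles

holes = boros - mp    # same as boros.difference(mp)
overlap = boros & mp  # same as boros.intersection(mp)
frac = overlap.area / boros.area  # fractional area inside the holes
```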
```
gdf.head()
csa.head()
gdfleft.head()
gdfleft[ gdfleft.coname == 'Us Army Corps Of Engineers' ]
gdf = gdfleft.copy()
csaUrl = "https://services1.arcgis.com/mVFRs7NF4iFitgbY/ArcGIS/rest/services/Tpop/FeatureServer/0/query?where=1%3D1&objectIds=&time=&geometry=&geometryType=esriGeometryEnvelope&inSR=&spatialRel=esriSpatialRelIntersects&resultType=none&distance=0.0&units=esriSRUnit_Meter&returnGeodetic=false&outFields=tpop10%2C+CSA2010&returnGeometry=true&returnCentroid=false&featureEncoding=esriDefault&multipatchOption=xyFootprint&maxAllowableOffset=&geometryPrecision=&outSR=&datumTransformation=&applyVCSProjection=false&returnIdsOnly=false&returnUniqueIdsOnly=false&returnCountOnly=false&returnExtentOnly=false&returnQueryGeometry=false&returnDistinctValues=false&cacheHint=false&orderByFields=&groupByFieldsForStatistics=&outStatistics=&having=&resultOffset=&resultRecordCount=&returnZ=false&returnM=false&returnExceededLimitFeatures=true&quantizationParameters=&sqlFormat=none&f=pgeojson&token="
csa = gpd.read_file(csaUrl);
csa.head()
```
# Arts and Culture
##### 131 Artbus
The rate of businesses (both for-profit and non-profit) that are directly related to arts and culture per 1,000 residents.
Arts-related businesses are defined as belonging to industries that allow for the consumption and enjoyment of arts and culture.
The following industries are identified by their primary NAICS code:
music, literary, and visual arts-related retail/supplies (451140, 451211, 451220);
art dealers (453920); libraries (519120); motion picture and film (521310, 532230); art schools (611610);
performing arts (711110, 711120, 711130, 711190); independent artists, writers, and performers (711510);
museums (712110); historical sites (712120); and zoos, gardens and nature parks (712130, 712190).
The following industries are identified by their primary SIC codes:
designers (152106); art publishers (274101),
music, literary, and visual arts-related retail/supplies (393101, 519202, 573608, 573609, 593201, 594201, 594205, 594501, 594520, 594601, 599965, 769969);
art galleries, dealers, and consultants (599969, 599988, 599989); photography (722121); calligraphers (733607); embroidery (738942); theatres (783201, 792207);
theatrical support (792211, 792212); musical and live entertainment (792903, 792905, 792906, 792908, 792917, 792918, 792927); parks (799951);
art and music instruction (804958, 829915, 829919); libraries (823111); museums (841201); arts organizations (841202); zoos (842201); writers (899903);
visual artists (899907, 899912); art restoring (899908); and music arrangers and composers (899921).
```
naicCodes = [451140, 451211, 451220, 453920, 519120, 521310, 532230, 611610, 711110, 711120,
711130, 711190, 711510, 712110, 712120, 712130, 712190]
sicCodes = [152106, 274101, 393101, 519202, 573608, 573609, 593201, 594201, 594205, 594501,
594520, 594601, 599965, 769969, 599969, 599988, 599989, 722121, 733607, 738942,
783201, 792207, 792211, 792212, 792903, 792905, 792906, 792908, 792917, 792918,
792927, 799951, 804958, 829915, 829919, 823111, 841201, 841202, 842201, 899903,
899907, 899912, 899908, 899921]
artbus = infoUsaCsa[ ( infoUsaCsa['prim_naics_short'].isin( naicCodes ) ) | ( infoUsaCsa.prim_sic.isin( sicCodes ) ) ]
# Aggregate Numeric Values by Sum
artbus = artbus[ ['CSA2010'] ]
artbus['artbusCount'] = 1
artbus = artbus.groupby('CSA2010').sum(numeric_only=True)
artbus = artbus.merge( csa[ ['CSA2010','tpop10'] ], left_on='CSA2010', right_on='CSA2010' )
artbus = pd.concat([artbus, pd.DataFrame([{'CSA2010': 'Baltimore City', 'tpop10': 620961, 'artbusCount': artbus['artbusCount'].sum()}])], ignore_index=True)  # DataFrame.append was removed in pandas 2.0
# Create the Indicator
artbus['artbus'] = artbus['artbusCount'] * 1000 / artbus['tpop10']
artbus.to_csv('artbus18.csv', index=False)
artbus.tail()
import json
def artbus(bounds, df):
    """
    131 - artbus
    with tbl AS (
      select (sum(
        case
          when ((prim_naics::text like any (select * from vital_signs.artbus_naics_vals)
            or prim_sic::text like any (select * from vital_signs.artbus_sic_vals)) and coname != 'Us Army Corps Of Engineers')
          then 1
          else 0
        end)::numeric
        * 1000 )/the_pop as result, csa
      from vital_signs.match_csas_and_bc_by_geom('economy.infousa_2016', 'gid', 'the_geom') a
      left join economy.infousa_2016 b on a.gid = b.gid
      group by csa, the_pop
    )
    update vital_signs.data
    set artbus = result from tbl where data.csa = tbl.csa and data_year = '2016';
    """
    # Filter rows
    # https://www.naics.com/code-search/?sictrms=art
    # https://www.naics.com/code-search/?naicstrms=art
    naicCodes = [451140, 451211, 451220, 453920, 519120, 521310, 532230, 611610, 711110, 711120,
                 711130, 711190, 711510, 712110, 712120, 712130, 712190]
    sicCodes = [152106, 274101, 393101, 519202, 573608, 573609, 593201, 594201, 594205, 594501,
                594520, 594601, 599965, 769969, 599969, 599988, 599989, 722121, 733607, 738942,
                783201, 792207, 792211, 792212, 792903, 792905, 792906, 792908, 792917, 792918,
                792927, 799951, 804958, 829915, 829919, 823111, 841201, 841202, 842201, 899903,
                899907, 899912, 899908, 899921]
    # sum rows: increment by 1 if the row matches the code lists, else 0
    # (prim_naics like any [] or prim_sic like any []) and coname != 'Us Army Corps Of Engineers'
    df['prim_naics_short'] = df.prim_naics.astype(str).str[:-2].astype(np.int64)
    filtered_df = df[ df['prim_naics_short'].isin( naicCodes ) | df.prim_sic.isin( sicCodes ) ]  # & (df.coname != 'Us Army Corps Of Engineers')
    # Point in Polygons
    csasWithCounts = getPointsInPolygons(filtered_df, bounds, 'geometry', 'geometry')
    # Group By CSA so that they may be operated on
    groupedCounts = csasWithCounts.groupby('CSA2010')
    # Aggregate Numeric Values by Sum
    groupedCounts = groupedCounts.sum(numeric_only=True)
    # groupedCounts = groupedCounts.merge(bounds, left_on='CSA2010', right_on='CSA2010')
    print(groupedCounts.columns)
    groupedCounts['numOfBusinesses'] = groupedCounts['pointsinpolygon']
    groupedCounts = groupedCounts.drop(['pointsinpolygon'], axis=1)
    # groupedCounts = pd.concat([groupedCounts, pd.DataFrame([{'CSA2010': 'Baltimore City', 'tpop10': 620961, 'numOfBusinesses': groupedCounts['numOfBusinesses'].sum()}])], ignore_index=True)
    print({'CSA2010': 'Baltimore City', 'tpop10': 620961, 'numOfBusinesses': groupedCounts['numOfBusinesses'].sum()})
    groupedCounts['artbus'] = groupedCounts['numOfBusinesses'] * 1000 / groupedCounts['tpop10']
    return groupedCounts
artbus_vals = artbus(csaComms, gdfleft)
artbus_vals.to_csv('artbus18.csv')
artbus_vals
```
##### 132 Artemp
```
naicCodes = [451140, 451211, 451220, 453920, 519120, 521310, 532230, 611610, 711110, 711120,
711130, 711190, 711510, 712110, 712120, 712130, 712190]
sicCodes = [152106, 274101, 393101, 519202, 573608, 573609, 593201, 594201, 594205, 594501,
594520, 594601, 599965, 769969, 599969, 599988, 599989, 722121, 733607, 738942,
783201, 792207, 792211, 792212, 792903, 792905, 792906, 792908, 792917, 792918,
792927, 799951, 804958, 829915, 829919, 823111, 841201, 841202, 842201, 899903,
899907, 899912, 899908, 899921]
artemp = infoUsaCsa[ ( infoUsaCsa['prim_naics_short'].isin( naicCodes ) ) | ( infoUsaCsa.prim_sic.isin( sicCodes ) ) ]
# Aggregate Numeric Values by Sum
artemp = artemp[ ['CSA2010', 'empl_size'] ]
artemp = artemp.groupby('CSA2010').sum(numeric_only=True)
artemp = artemp.merge( csa[ ['CSA2010','tpop10'] ], left_on='CSA2010', right_on='CSA2010' )
artemp = pd.concat([artemp, pd.DataFrame([{'CSA2010': 'Baltimore City', 'tpop10': 620961, 'empl_size': artemp['empl_size'].sum()}])], ignore_index=True)  # DataFrame.append was removed in pandas 2.0
# Create the Indicator
artemp['artemp'] = artemp['empl_size']
artemp = artemp.drop('empl_size', axis=1)
artemp.to_csv('artemp18.csv', index=False)
artemp.tail()
def artemp(bounds, df, pop):
    """
    132 - artemp
    with tbl AS (
      select (sum(
        case
          when ((prim_naics::text like any (select * from vital_signs.artbus_naics_vals)
            or prim_sic::text like any (select * from vital_signs.artbus_sic_vals)) and coname != 'Us Army Corps Of Engineers')
          then empl_size
          else 0
        end)
      ) as result, csa
      from vital_signs.match_csas_and_bc_by_geom('economy.infousa_2017', 'gid', 'the_geom') a
      left join economy.infousa_2017 b on a.gid = b.gid
      group by csa, the_pop
    )
    select * from tbl where 1 = 1 ORDER BY csa ASC;
    """
    # Filter rows
    # https://www.naics.com/code-search/?sictrms=art
    # https://www.naics.com/code-search/?naicstrms=art
    naicCodes = [451140, 451211, 451220, 453920, 519120, 521310, 532230, 611610, 711110, 711120,
                 711130, 711190, 711510, 712110, 712120, 712130, 712190]
    sicCodes = [152106, 274101, 393101, 519202, 573608, 573609, 593201, 594201, 594205, 594501,
                594520, 594601, 599965, 769969, 599969, 599988, 599989, 722121, 733607, 738942,
                783201, 792207, 792211, 792212, 792903, 792905, 792906, 792908, 792917, 792918,
                792927, 799951, 804958, 829915, 829919, 823111, 841201, 841202, 842201, 899903,
                899907, 899912, 899908, 899921]
    # sum rows: add empl_size if the row matches the code lists, else 0
    # (prim_naics like any [] or prim_sic like any []) and coname != 'Us Army Corps Of Engineers'
    df['numOfBusinesses'] = 1
    df['prim_naics_short'] = df.prim_naics.astype(str).str[:-2].astype(np.int64)
    filtered_df = df[ df['prim_naics_short'].isin( naicCodes ) | df.prim_sic.isin( sicCodes ) ]  # & (df.coname != 'Us Army Corps Of Engineers')
    filtered_df.to_csv('artbus_filtered_points.csv')
    # Point in Polygons
    csasWithCounts = getPolygonOnPoints(filtered_df, bounds, 'geometry', 'geometry', 'CSA2010')
    # Group By CSA so that they may be operated on
    groupedCounts = csasWithCounts.groupby('CSA2010')
    # Aggregate Numeric Values by Sum
    groupedCounts = groupedCounts.sum(numeric_only=True)
    groupedCounts = groupedCounts.merge(pop, left_on='CSA2010', right_on='CSA2010')
    groupedCounts['artemp'] = groupedCounts['empl_size']
    groupedCounts = groupedCounts.drop(['empl_size', 'X', 'Y'], axis=1)
    groupedCounts.to_csv('artemp.csv')
    return groupedCounts
# prim_naics
population = pd.read_csv('population.csv')
csaComms = csa[ ['CSA2010', 'geometry'] ].copy()
artemp_Vals = artemp(csaComms, gdfleft, population )
artemp_Vals.to_csv('artemp18_csasWithCountsAndTPop.csv')
artemp_Vals
```
##### 201 CEBUS
The rate of businesses (both for-profit and non-profit) that are in the creative economy per 1,000 residents.
The creative economy is defined as industries that use and support artistic and cultural skillsets to attract and generate capital, knowledge, and information.
Arts-based businesses are included in the creative economy.
In addition to the industries included in the rate of arts-based businesses indicator, the following industries are identified by their primary NAICS code:
Textiles (313220); Commercial Printing (323111, 323113); Book Printers and Publishers (323117, 511130); Print Media (451212, 511110, 511120, 511199, 519110);
Motion Picture & Video Production (512110); Music Publishers (512230); Sound Recording (512240); Radio (515112); Architecture (541310, 541320);
Interior Design (541410); Graphic Design (541430); Advertising (541810, 541890); and Photography (541921, 541922).
In addition to the industries included in the rate of arts-based businesses indicator,
the following industries are identified by their primary SIC code:
Print Media (271101, 271102, 271198, 272101, 272102, 272104, 273101, 273198, 596302, 599401); Publishers (273298, 274104, 274105, 874205);
Printers (275202, 275902, 275998); Bookbinders (278902); Radio (483201); Television (483301, 484101, 792205, 824911); Textiles (513122, 594904);
Advertising (519917, 731101, 731115, 731305, 731999); Fashion Designers (569901, 594408); Photography (722101, 722113, 722120, 733501, 738401);
Graphic Design (733603); Commercial Artists (733604); Website Design (737311); General Media (738301); Interior Design (738902);
Restoration (764112); Landscape Design (781030); Motion Picture and Video Support (781205, 781211, 781901);
Architecture (871202, 871207, 871209, 874892); and Business Writers (899902).
Across data vintages, the empl_size column arrives typed differently:
- 15 -> empl_size integer
- 16 -> empl_size character varying(254)
- 17 -> empl_size bigint
Convert Column StringToInt:
```
CREATE OR REPLACE FUNCTION pc_chartoint(chartoconvert character varying)
RETURNS integer AS
$BODY$
SELECT CASE WHEN trim($1) SIMILAR TO '[0-9]+'
THEN CAST(trim($1) AS integer)
ELSE NULL END;
$BODY$
LANGUAGE 'sql' IMMUTABLE STRICT;
ALTER TABLE economy.infousa_2016 ALTER COLUMN empl_size TYPE integer USING pc_chartoint(empl_size);
```
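For reference, a pandas analogue of this coercion (sample values are hypothetical) when empl_size arrives as text:

```python
import pandas as pd

# Coerce digit-strings to integers and everything else to missing,
# mirroring pc_chartoint's SIMILAR TO '[0-9]+' check
empl_size = pd.Series(['12', ' 7 ', 'n/a'])
empl_size = pd.to_numeric(empl_size.str.strip(), errors='coerce').astype('Int64')
```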
```
import json
clear_output(wait=True)
def cebus(bounds, df, pop):
"""
201 - cebusXX
with tbl AS (
select (sum(
case
when ((prim_naics::text like any (select * from vital_signs.cebus_naics_vals)
or prim_sic::text like any (select * from vital_signs.cebus_sic_vals))
and coname != 'Us Army Corps Of Engineers')
then 1
else 0
end)::numeric
* 1000 )/the_pop as result, csa
from vital_signs.match_csas_and_bc_by_geom('economy.infousa_2016', 'gid', 'the_geom') a
left join economy.infousa_2016 b on a.gid = b.gid
group by csa, the_pop
)
update vital_signs.data
set cebus = result from tbl where data.csa = tbl.csa and data_year = '2016';
"""
# Filter rows
# https://www.naics.com/code-search/?sictrms=art
# https://www.naics.com/code-search/?naicstrms=art
naicCodes = [323111, 323113, 451140, 451211, 451212, 453920, 511110, 511120, 511130, 511199,
512110, 519110, 519120, 541310, 541320, 541410, 541430, 541810, 541890, 541921,
541922, 611610, 711110, 711130, 711190, 711510, 712110, 712120, 712130, 712190,
313220, 323117, 511130, 512230, 512240, 515112 ]
sicCodes = [271101, 271102, 271198, 272101, 272102, 272104, 273101, 273198, 596302, 599401,
273298, 274104, 274105, 874205, 275202, 275902, 275998, 278902, 483201,
483301, 484101, 792205, 824911, 513122, 594904, 519917, 731101, 731115, 731305,
731999, 569901, 594408, 722101, 722113, 722120, 733501, 738401, 733603, 733604,
737311, 738301, 738902, 764112, 781030, 781205, 781211, 781901, 871202, 871207,
871209, 874892, 899902, 451220, 521310, 532230, 711120]
fromArtbusNaicsNotFoundInCebusNaics = [451220, 521310, 532230, 711120]
# sum rows: increment by 1 if row = () else 0
# (prim_naics: like any [ ] or prim_sic like any []) and coname != 'Us Army Corps Of Engineers')
df['prim_naics_short'] = df.prim_naics.astype(str).str[:-2].astype(np.int64)
filtered_df = df[ ( df.prim_naics_short.isin( naicCodes ) | df.prim_sic.isin( sicCodes ) ) ] #& df.coname != 'Us Army Corps Of Engineers' ]
filtered_df.to_csv('cebus_points.csv')
# Point in Polygons
csasWithCounts = getPointsInPolygons(filtered_df, bounds, 'geometry', 'geometry')
# Aggregate by CSA
# Group by CSA so that they may be operated on
groupedCounts = csasWithCounts.groupby('CSA2010')
# Aggregate Numeric Values by Sum
groupedCounts = groupedCounts.sum(numeric_only=True)
groupedCounts = groupedCounts.merge(pop, left_on='CSA2010', right_on='CSA2010')
groupedCounts['countOfBusinesses'] = groupedCounts['number of points']
groupedCounts['cebus'] = groupedCounts['number of points'] * 1000 / groupedCounts['tpop10']
groupedCounts = groupedCounts.drop(['number of points'], axis=1)
groupedCounts.to_csv('cebus.csv', index=False)
return groupedCounts
naicCodes = [323111, 323113, 451140, 451211, 451212, 453920, 511110, 511120, 511130, 511199,
512110, 519110, 519120, 541310, 541320, 541410, 541430, 541810, 541890, 541921,
541922, 611610, 711110, 711130, 711190, 711510, 712110, 712120, 712130, 712190,
313220, 323117, 511130, 512230, 512240, 515112 ]
sicCodes = [271101, 271102, 271198, 272101, 272102, 272104, 273101, 273198, 596302, 599401,
273298, 274104, 274105, 874205, 275202, 275902, 275998, 278902, 483201,
483301, 484101, 792205, 824911, 513122, 594904, 519917, 731101, 731115, 731305,
731999, 569901, 594408, 722101, 722113, 722120, 733501, 738401, 733603, 733604,
737311, 738301, 738902, 764112, 781030, 781205, 781211, 781901, 871202, 871207,
871209, 874892, 899902]
cebus = infoUsaCsa[ ( infoUsaCsa['prim_naics_short'].isin( naicCodes ) ) | ( infoUsaCsa.prim_sic.isin( sicCodes ) ) ]
print( cebus.size / len(cebus.columns) )
# Aggregate Numeric Values by Sum
cebus = cebus[ ['CSA2010'] ]
cebus['cebusCount'] = 1
cebus = cebus.groupby('CSA2010').sum(numeric_only=True)
cebus = cebus.merge( csa[ ['CSA2010','tpop10'] ], left_on='CSA2010', right_on='CSA2010' )
cebus = cebus.append({'CSA2010': 'Baltimore City' , 'tpop10' : 620961, 'cebusCount': cebus['cebusCount'].sum() } , ignore_index=True)
# Create the Indicator
cebus['cebus'] = cebus['cebusCount'] * 1000 / cebus['tpop10']
cebus.to_csv('cebus18.csv', index=False)
cebus.tail()
population = pd.read_csv('population.csv')
csaComms = csa[ ['CSA2010', 'geometry'] ].copy()
# csaComms = csaComms.drop('tpop10', axis=1)
cebus_vals = cebus(csaComms, gdfleft, population )
cebus_vals
```
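The `prim_naics_short` column above recovers the six-digit NAICS code by dropping the last two characters of InfoUSA's eight-digit value. A minimal sketch with made-up codes — note this assumes a uniformly eight-digit vintage, since `str[:-2]` would mangle shorter codes:

```python
import pandas as pd

naics = pd.Series([32311101, 51211099, 71111000])  # hypothetical 8-digit InfoUSA-style codes
short = naics.astype(str).str[:-2].astype("int64")  # drop the 2-digit suffix
print(short.tolist())  # [323111, 512110, 711110]
```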
##### 202 CEEMP
```
naicCodes = [323111, 323113, 451140, 451211, 451212, 453920, 511110, 511120, 511130, 511199,
512110, 519110, 519120, 541310, 541320, 541410, 541430, 541810, 541890, 541921,
541922, 611610, 711110, 711130, 711190, 711510, 712110, 712120, 712130, 712190,
313220, 323117, 511130, 512230, 512240, 515112 ]
sicCodes = [271101, 271102, 271198, 272101, 272102, 272104, 273101, 273198, 596302, 599401,
273298, 274104, 274105, 874205, 275202, 275902, 275998, 278902, 483201,
483301, 484101, 792205, 824911, 513122, 594904, 519917, 731101, 731115, 731305,
731999, 569901, 594408, 722101, 722113, 722120, 733501, 738401, 733603, 733604,
737311, 738301, 738902, 764112, 781030, 781205, 781211, 781901, 871202, 871207,
871209, 874892, 899902, 451220, 521310, 532230, 711120]
ceemp = infoUsaCsa[ ( infoUsaCsa['prim_naics_short'].isin( naicCodes ) ) | ( infoUsaCsa.prim_sic.isin( sicCodes ) ) ]
# Aggregate Numeric Values by Sum
ceemp = ceemp[ ['CSA2010', 'empl_size'] ]
ceemp = ceemp.groupby('CSA2010').sum(numeric_only=True)
ceemp = ceemp.merge( csa[ ['CSA2010','tpop10'] ], left_on='CSA2010', right_on='CSA2010' )
ceemp = ceemp.append({'CSA2010': 'Baltimore City' , 'tpop10' : 620961, 'empl_size': ceemp['empl_size'].sum() } , ignore_index=True)
# Create the Indicator
ceemp['ceemp'] = ceemp['empl_size']
ceemp = ceemp.drop('empl_size', axis=1)
ceemp.to_csv('ceemp18.csv', index=False)
ceemp.tail()
def ceemp(bounds, df, pop):
"""
202 - ceempXX
with tbl AS (
select (sum(
case
when ((prim_naics::text like any (select * from vital_signs.cebus_naics_vals)
or prim_sic::text like any (select * from vital_signs.cebus_sic_vals))
and coname != 'Us Army Corps Of Engineers')
then empl_size
else 0
end)
) as result, csa
from vital_signs.match_csas_and_bc_by_geom('economy.infousa_2016', 'gid', 'the_geom') a
left join economy.infousa_2016 b on a.gid = b.gid
group by csa, the_pop
)
update vital_signs.data
set ceemp = result from tbl where data.csa = tbl.csa and data_year = '2016';
"""
# Filter rows
# https://www.naics.com/code-search/?sictrms=art
# https://www.naics.com/code-search/?naicstrms=art
naicCodes = [323111, 323113, 451140, 451211, 451212, 453920, 511110, 511120, 511130, 511199,
512110, 519110, 519120, 541310, 541320, 541410, 541430, 541810, 541890, 541921,
541922, 611610, 711110, 711130, 711190, 711510, 712110, 712120, 712130, 712190,
313220, 323117, 511130, 512230, 512240, 515112 ]
sicCodes = [271101, 271102, 271198, 272101, 272102, 272104, 273101, 273198, 596302, 599401,
273298, 274104, 274105, 874205, 275202, 275902, 275998, 278902, 483201,
483301, 484101, 792205, 824911, 513122, 594904, 519917, 731101, 731115, 731305,
731999, 569901, 594408, 722101, 722113, 722120, 733501, 738401, 733603, 733604,
737311, 738301, 738902, 764112, 781030, 781205, 781211, 781901, 871202, 871207,
871209, 874892, 899902]
# sum rows: increment by 1 if row = () else 0
# (prim_naics: like any [ ] or prim_sic like any []) and coname != 'Us Army Corps Of Engineers')
df['numOfBusinesses'] = 1
df['prim_naics_short'] = df.prim_naics.astype(str).str[:-2].astype(np.int64)
filtered_df = df[ ( df.prim_naics_short.isin( naicCodes ) | df.prim_sic.isin( sicCodes ) ) ] #& df.coname != 'Us Army Corps Of Engineers' ]
# Point in Polygons
csasWithCounts = getPolygonOnPoints(filtered_df, bounds, 'geometry', 'geometry', 'CSA2010')
# Aggregate by CSA
# Group by CSA so that they may be operated on
groupedCounts = csasWithCounts.groupby('CSA2010')
# Aggregate Numeric Values by Sum
groupedCounts = groupedCounts.sum(numeric_only=True)
groupedCounts = groupedCounts.merge(pop, left_on='CSA2010', right_on='CSA2010')
groupedCounts['ceemp'] = groupedCounts['empl_size']
# groupedCounts = groupedCounts.drop(['number of points'], axis=1)
groupedCounts.to_csv('ceemp.csv')
return groupedCounts
population = pd.read_csv('population.csv')
csaComms = csa[ ['CSA2010', 'geometry'] ].copy()
ceemp_vals = ceemp(csaComms, gdfleft, population )
ceemp_vals
```
# Workforce and Development
##### 143 numbus
```
# https://bniajfi.org/indicators/Workforce%20and%20Economic%20Development/numbus/2017
original_SQL_Query = """
143 - numbusXX
WITH tbl AS (
SELECT ( SUM( case WHEN csa_present THEN 1 ELSE 0 END )::numeric ) AS result, a.csa
FROM vital_signs.match_csas_and_bc_by_geom('economy.infousa_2017', 'gid', 'the_geom') a
LEFT JOIN economy.infousa_2017 b
ON a.gid = b.gid
GROUP BY a.csa, the_pop
)
update vital_signs.data SET numbus = result FROM tbl WHERE data.csa = tbl.csa AND data_year = '2017';
"""
Translation = """
For Each Community
Count number of points
Show in a table with the_pop
"""
# 143 - numbusXX
infoUsaCsaTotals.tail()
#export
infoUsaCsaTotals.to_csv('numbus18.csv', index=False)
```
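`getPointsInPolygons` is a BNIA helper (effectively a geopandas spatial join). As a library-free illustration of the numbus counting step, a classic ray-casting point-in-polygon test over hypothetical coordinates:

```python
def point_in_polygon(x, y, poly):
    """Ray casting: count how many polygon edges a ray from (x, y) to +infinity crosses."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical CSA boundary (unit square) and business points
csa = [(0, 0), (1, 0), (1, 1), (0, 1)]
points = [(0.5, 0.5), (2.0, 2.0), (0.1, 0.9)]
numbus = sum(point_in_polygon(x, y, csa) for x, y in points)
print(numbus)  # 2
```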
##### 144 totemp
```
DO_NOT_PROCESS_SQL = """144 - totempXX
SELECT bAll.csa AS Bound, SUM(bQuery.totemp17) AS totemp17
FROM boundaries.csa2010 bAll
LEFT JOIN (
SELECT bounds.csa AS Boundary, ( SUM(Tables.empl_size ::numeric(20,4))::numeric(20,2)) AS totemp17
FROM economy.infousa_2017 AS Tables
JOIN boundaries.csa2010 AS bounds
ON st_contains ( bounds.the_geom,Tables.the_geom )
GROUP BY bounds.csa
ORDER BY bounds.csa
)
bQuery ON bAll.csa = bQuery.Boundary GROUP BY Bound ORDER BY Bound;
"""
infoUsaCsa.head()
#export
# Aggregate Numeric Values by Sum
totemp = infoUsaCsa.groupby('CSA2010')[ ['CSA2010','empl_size'] ].sum(numeric_only=True)
totemp = totemp.merge( csa[ ['CSA2010','tpop10'] ], left_on='CSA2010', right_on='CSA2010' )
totemp = totemp.append( {'CSA2010': 'Baltimore City' , 'tpop10' : 620961, 'empl_size': totemp['empl_size'].sum() }, ignore_index=True)
totemp['totemp'] = totemp['empl_size']
totemp = totemp.drop('empl_size', axis=1)
totemp.tail()
# Save
totemp.to_csv('totemp18.csv', index=False)
```
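`DataFrame.append`, used for the Baltimore City total rows throughout this notebook, was removed in pandas 2.0. A hedged sketch of the same pattern with `pd.concat` (the numbers are illustrative, not real CSA values):

```python
import pandas as pd

totemp = pd.DataFrame({"CSA2010": ["CSA A", "CSA B"], "empl_size": [100, 250]})

# Append a citywide row: the city total is the sum over all CSAs.
city_row = pd.DataFrame([{"CSA2010": "Baltimore City", "empl_size": totemp["empl_size"].sum()}])
totemp = pd.concat([totemp, city_row], ignore_index=True)

print(totemp["empl_size"].tolist())  # [100, 250, 350]
```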
##### 145 smlbus
```
# https://bniajfi.org/indicators/Workforce%20and%20Economic%20Development/smlbus/2017
smlbus_SQL = """ 145 - smlbusXX
WITH tbl AS (
SELECT( SUM( case WHEN
empl_rng = '1 to 4'
OR empl_rng = '5 to 9'
OR empl_rng = '10 to 19'
OR empl_rng = '20 to 49'
THEN 1 ELSE 0 END
)::numeric ) AS result, a.csa
FROM vital_signs.match_csas_and_bc_by_geom('economy.infousa_2017', 'gid', 'the_geom') a
LEFT JOIN economy.infousa_2017 b
ON a.gid = b.gid
GROUP BY a.csa, the_pop
ORDER BY a.csa
)
UPDATE vital_signs.data SET smlbus = result FROM tbl WHERE data.csa = tbl.csa AND data_year = '2017';
"""
Translation = """
CSA Points in Polygons.
For Each Community
For Each Point
if Point in Community
if empl_rng = '1 to 4'
OR empl_rng = '5 to 9'
OR empl_rng = '10 to 19'
OR empl_rng = '20 to 49'
tally one to communityCount
Show in a table with the_pop
"""
# 145 - smlbusXX
infoUsaCsa.head()
#export
sml = infoUsaCsa.copy()
smlbus = sml[ ( sml['empl_rng'].isin(['1 to 4']) ) ]
smlbus.to_csv('smlbus_empl_rng1 to 4.csv')
print('empl_rng 1 to 4: ', smlbus.size / len(smlbus.columns) )
#export
smlbus = sml[ ( sml['empl_rng'].isin(['5 to 9']) ) ]
smlbus.to_csv('smlbus_empl_rng5 to 9.csv')
print('empl_rng 5 to 9: ', smlbus.size / len(smlbus.columns) )
#export
smlbus = sml[ ( sml['empl_rng'].isin(['10 to 19']) ) ]
smlbus.to_csv('smlbus_empl_rng10 to 19.csv')
print('empl_rng 10 to 19: ', smlbus.size / len(smlbus.columns) )
#export
smlbus = sml[ ( sml['empl_rng'].isin(['20 to 49']) ) ]
smlbus.to_csv('smlbus_empl_rng20 to 49.csv')
print('empl_rng 20 to 49: ', smlbus.size / len(smlbus.columns) )
#export
# Filter for small businesses
smlbus = sml[ ( sml['empl_rng'].isin(['1 to 4', '5 to 9', '10 to 19', '20 to 49']) ) ]
smlbus.to_csv('smlbus18_filtered_points.csv')
print('empl_rng 1 to 49: ', smlbus.size / len(smlbus.columns) )
#export
# Aggregate Numeric Values by Sum
smlbus['smlbus'] = 1
smlbus = smlbus.groupby('CSA2010')[ ['CSA2010','smlbus'] ].sum(numeric_only=True)
smlbus = smlbus.merge( csa[ ['CSA2010','tpop10'] ], left_on='CSA2010', right_on='CSA2010' )
smlbus = smlbus.append( {'CSA2010': 'Baltimore City' , 'tpop10' : 620961, 'smlbus': smlbus['smlbus'].sum() }, ignore_index=True)
smlbus.tail()
# Save
smlbus.to_csv('smlbus18.csv', index=False)
```
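The four per-range exports above can be cross-checked in one pass: the combined `isin` filter should count exactly the rows covered by the individual employment bands. A small sketch over made-up `empl_rng` values:

```python
import pandas as pd

sml = pd.DataFrame({"empl_rng": ["1 to 4", "5 to 9", "100 to 249", "20 to 49", "1 to 4"]})
small_ranges = ["1 to 4", "5 to 9", "10 to 19", "20 to 49"]

per_range = sml["empl_rng"].value_counts()           # count per employment band
combined = sml["empl_rng"].isin(small_ranges).sum()  # all small businesses at once

print(int(combined))  # 4
```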
##### 150 biz1
```
# https://bniajfi.org/indicators/Workforce%20and%20Economic%20Development/biz1/2017
biz1_SQL = """150 - biz1_XX
with numerator as (
select sum( case when first_year LIKE '2016' then 1 else 0 end)::numeric as result, csa
from vital_signs.match_csas_and_bc_by_geom('economy.infousa_2017', 'gid', 'the_geom') a
left join economy.infousa_2017 b on a.gid = b.gid
group by csa
),
denominator AS (
select (sum( case when csa_present then 1 else NULL end)::numeric ) as result, csa
from vital_signs.match_csas_and_bc_by_geom('economy.infousa_2017', 'gid', 'the_geom') a
left join economy.infousa_2016 b on a.gid = b.gid
group by csa
),
tbl AS (
select vital_signs.div_zero (numerator.result, denominator.result)*(100::numeric) as result, numerator.csa
from numerator left join denominator on numerator.csa = denominator.csa
)
update vital_signs.data
set biz1_ = result from tbl where data.csa = tbl.csa and data_year = '2017';
"""
Translation = """
CSA Points in Polygons.
Numerator = 0
For Each Community
For Each Point
if Point in Community and first_year LIKE '2017'
tally one to Numerator.csa
Denominator = 0
For Each Community
For Each Point
if Point in Community
tally one to Denominator.csa
biz1 = (Numerator / Denominator) * 100
Show in a table with the_pop
"""
#export
# 150 - biz1_XX
# Filter for small businesses
biz1 = infoUsaCsa[ ( infoUsaCsa['first_year'].isin( ['2018'] ) ) ]
print('Count: first_year == 2018: ', biz1.size / len(biz1.columns) )
biz1 = biz1[ ['CSA2010'] ]
#numerator.to_csv('biz18_numerator_csasWithCounts.csv')
biz1['biz1Count'] = 1
#export
# Aggregate Numeric Values by Sum
biz1 = biz1.groupby('CSA2010').sum(numeric_only=True)
biz1 = biz1.merge( csa[ ['CSA2010','tpop10'] ], left_on='CSA2010', right_on='CSA2010' )
biz1 = biz1.append( {'CSA2010': 'Baltimore City' , 'tpop10' : 620961, 'biz1Count': biz1['biz1Count'].sum() }, ignore_index=True)
biz1.tail(1)
#export
# Create the Indicator
biz1['biz1'] = biz1['biz1Count'] / infoUsaCsaTotals['numbus'] * 100
biz1.head()
# Save
biz1.to_csv('biz1_18.csv', index=False)
biz1.head()
```
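`vital_signs.div_zero` is BNIA's safe-division helper; a pandas version of the numerator/denominator ratio can guard against empty CSAs by turning zero denominators into NaN. A sketch with illustrative column names and values:

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({"biz1Count": [5, 0, 3], "numbus": [50, 0, 0]})

# Safe percentage: NaN where the denominator is zero, mirroring vital_signs.div_zero.
df["biz1"] = df["biz1Count"] / df["numbus"].replace(0, np.nan) * 100

print(df["biz1"].iloc[0])  # 10.0
```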
##### 151 biz2
```
# https://bniajfi.org/indicators/Workforce%20and%20Economic%20Development/biz2/2017
biz2_SQL = """ 151 - biz2_XX
with numerator as (
select sum(
case
when first_year LIKE '2016' OR first_year LIKE '2015'
then 1
else 0
end)::numeric as result, csa
from vital_signs.match_csas_and_bc_by_geom('economy.infousa_2016', 'gid', 'the_geom') a
left join economy.infousa_2016 b on a.gid = b.gid
group by csa
),
denominator AS (
select (sum(
case
when csa_present
then 1
else NULL
end)::numeric
) as result, csa
from vital_signs.match_csas_and_bc_by_geom('economy.infousa_2016', 'gid', 'the_geom') a
left join economy.infousa_2016 b on a.gid = b.gid
group by csa
),
tbl AS (
select vital_signs.div_zero (numerator.result, denominator.result)*(100::numeric) as result, numerator.csa
from numerator left join denominator on numerator.csa = denominator.csa
)
update vital_signs.data
set biz2_ = result from tbl where data.csa = tbl.csa and data_year = '2016';
with numerator as (
select sum(
case
when first_year LIKE '2017' OR first_year LIKE '2016' OR first_year LIKE '2015'
then 1
else 0
end)::numeric as result, csa
from vital_signs.match_csas_and_bc_by_geom('economy.infousa_2017', 'gid', 'the_geom') a
left join economy.infousa_2017 b on a.gid = b.gid
group by csa
),
denominator AS (
select (sum(
case
when csa_present
then 1
else NULL
end)::numeric
) as result, csa
from vital_signs.match_csas_and_bc_by_geom('economy.infousa_2017', 'gid', 'the_geom') a
left join economy.infousa_2017 b on a.gid = b.gid
group by csa
),
tbl AS (
select vital_signs.div_zero (numerator.result, denominator.result)*(100::numeric) as result, numerator.csa
from numerator left join denominator on numerator.csa = denominator.csa
)
select * from tbl where 1 = 1 ORDER BY csa ASC;
"""
#export
# 151 - biz2XX
# Filter for small businesses
biz2 = infoUsaCsa[ ( infoUsaCsa['first_year'].isin( ['2016', '2017', '2018'] ) ) ]
print('Count: first_year == 2018, 2017, 2016: ', biz2.size / len(biz2.columns) )
biz2 = biz2[ ['CSA2010'] ]
#numerator.to_csv('biz18_numerator_csasWithCounts.csv')
biz2['biz2Count'] = 1
#export
# Aggregate Numeric Values by Sum
biz2 = biz2.groupby('CSA2010').sum(numeric_only=True)
biz2 = biz2.merge( csa[ ['CSA2010','tpop10'] ], left_on='CSA2010', right_on='CSA2010' )
biz2 = biz2.append( {'CSA2010': 'Baltimore City' , 'tpop10' : 620961, 'biz2Count': biz2['biz2Count'].sum() }, ignore_index=True)
biz2.tail(1)
# Create the Indicator
biz2['biz2'] = biz2['biz2Count'] / infoUsaCsaTotals['numbus'] * 100
# Save
biz2.to_csv('biz2_18.csv', index=False)
biz2.head()
```
##### 152 biz4
2016 -> first_year character varying(254),
2017 -> first_year bigint,
Convert Column IntToString
CREATE OR REPLACE FUNCTION pc_inttochar(chartoconvert bigint)
RETURNS character AS
$BODY$
SELECT CASE WHEN 1 = 1
THEN CAST($1 AS character(254))
ELSE NULL END;
$BODY$
LANGUAGE 'sql' IMMUTABLE STRICT;
ALTER TABLE economy.infousa_2017 ALTER COLUMN first_year TYPE character varying(254) USING pc_inttochar(first_year);
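The equivalent harmonization at load time in pandas is simply a string cast, so the 2016 (varchar) and 2017 (bigint) vintages compare consistently. Frame and column names here are illustrative:

```python
import pandas as pd

infousa_2017 = pd.DataFrame({"first_year": [2016, 2017, 2018]})  # bigint vintage

# Match the 2016 varchar vintage so LIKE/isin comparisons behave the same way.
infousa_2017["first_year"] = infousa_2017["first_year"].astype(str)
print(infousa_2017["first_year"].tolist())  # ['2016', '2017', '2018']
```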
```
biz4_SQL = """ 152 - biz4_XX
with numerator as (
select sum(
case
when first_year LIKE '2016' OR first_year LIKE '2015' OR first_year LIKE '2014' OR first_year LIKE '2013'
then 1
else 0
end)::numeric as result, csa
from vital_signs.match_csas_and_bc_by_geom('economy.infousa_2016', 'gid', 'the_geom') a
left join economy.infousa_2016 b on a.gid = b.gid
group by csa
),
denominator AS (
select (sum(
case
when csa_present
then 1
else NULL
end)::numeric
) as result, csa
from vital_signs.match_csas_and_bc_by_geom('economy.infousa_2016', 'gid', 'the_geom') a
left join economy.infousa_2016 b on a.gid = b.gid
group by csa
),
tbl AS (
select vital_signs.div_zero (numerator.result, denominator.result)*(100::numeric) as result, numerator.csa
from numerator left join denominator on numerator.csa = denominator.csa
)
update vital_signs.data
set biz4_ = result from tbl where data.csa = tbl.csa and data_year = '2016';
with numerator as (
select sum(
case
when first_year LIKE '2017' OR first_year LIKE '2016' OR first_year LIKE '2015' OR first_year LIKE '2014' OR first_year LIKE '2013'
then 1
else 0
end)::numeric as result, csa
from vital_signs.match_csas_and_bc_by_geom('economy.infousa_2017', 'gid', 'the_geom') a
left join economy.infousa_2017 b on a.gid = b.gid
group by csa
),
denominator AS (
select (sum(
case
when csa_present
then 1
else NULL
end)::numeric
) as result, csa
from vital_signs.match_csas_and_bc_by_geom('economy.infousa_2017', 'gid', 'the_geom') a
left join economy.infousa_2017 b on a.gid = b.gid
group by csa
),
tbl AS (
select vital_signs.div_zero (numerator.result, denominator.result)*(100::numeric) as result, numerator.csa
from numerator left join denominator on numerator.csa = denominator.csa
)
select * from tbl where 1 = 1 ORDER BY csa ASC;
"""
Translation = """
"""
#export
# 152 - biz4XX
# Filter for small businesses
biz4 = infoUsaCsa[ ( infoUsaCsa['first_year'].isin( ['2015', '2016', '2017', '2018'] ) ) ]
print('Count: first_year == 2018, 2017, 2016, 2015: ', biz4.size / len(biz4.columns) )
biz4 = biz4[ ['CSA2010'] ]
#numerator.to_csv('biz18_numerator_csasWithCounts.csv')
biz4['biz4Count'] = 1
#export
# Aggregate Numeric Values by Sum
biz4 = biz4.groupby('CSA2010').sum(numeric_only=True)
biz4 = biz4.merge( csa[ ['CSA2010','tpop10'] ], left_on='CSA2010', right_on='CSA2010' )
biz4 = biz4.append( {'CSA2010': 'Baltimore City' , 'tpop10' : 620961, 'biz4Count': biz4['biz4Count'].sum() }, ignore_index=True)
biz4.tail(1)
# Create the Indicator
biz4['biz4'] = biz4['biz4Count'] / infoUsaCsaTotals['numbus'] * 100
# Save
biz4.to_csv('biz4_18.csv', index=False)
biz4.head()
```
##### 157 neiind
2016 -> prim_naics character varying(254),
2017 -> prim_naics bigint,
Convert Column IntToString
CREATE OR REPLACE FUNCTION pc_inttochar(chartoconvert bigint)
RETURNS character AS
$BODY$
SELECT CASE WHEN 1 = 1
THEN CAST($1 AS character(254))
ELSE NULL END;
$BODY$
LANGUAGE 'sql' IMMUTABLE STRICT;
ALTER TABLE economy.infousa_2017 ALTER COLUMN prim_naics TYPE character varying(254) USING pc_inttochar(prim_naics);
```
# https://bniajfi.org/indicators/Workforce%20and%20Economic%20Development/neiind/2017
neiind_SQL = """157 - neiindXX
with tbl AS (
select (sum(
case
when prim_naics LIKE '44%' OR prim_naics LIKE '45%' OR prim_naics LIKE '52%' OR prim_naics LIKE '54%' OR
prim_naics LIKE '62%' OR prim_naics LIKE '71%' OR prim_naics LIKE '72%' OR prim_naics LIKE '81%'
then 1
else 0
end)::numeric(20,2)
) as result, csa
from vital_signs.match_csas_and_bc_by_geom('economy.infousa_2016', 'gid', 'the_geom') a
left join economy.infousa_2016 b on a.gid = b.gid
group by csa, the_pop
)
update vital_signs.data
set neiind = result from tbl where data.csa = tbl.csa and data_year = '2016';
"""
Translation = """
"""
infoUsaCsa.head()
#export
# 157 - neiindXX
# Filter for small businesses
neiind = infoUsaCsa.copy()
neiind['naics_extra_short'] = neiind.prim_naics.astype(str).str[:-6].astype(np.int64)
neiind = neiind[ ( neiind['naics_extra_short'].isin( [44, 45, 52, 54, 62, 71, 72, 81] ) ) ]
print('Count of Naics Starting With: 44, 45, 52, 54, 62, 71, 72, 81: ', neiind.size / len(neiind.columns) )
neiind = neiind[ ['CSA2010'] ]
#numerator.to_csv('biz18_numerator_csasWithCounts.csv')
neiind['neiind'] = 1
#export
# Aggregate Numeric Values by Sum
neiind = neiind.groupby('CSA2010').sum(numeric_only=True)
neiind = neiind.merge( csa[ ['CSA2010','tpop10'] ], left_on='CSA2010', right_on='CSA2010' )
neiind = neiind.append( {'CSA2010': 'Baltimore City' , 'tpop10' : 620961, 'neiind': neiind['neiind'].sum() }, ignore_index=True)
neiind.tail(1)
# Save
neiind.to_csv('neiind18.csv', index=False)
neiind.head()
```
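The `str[:-6]` trick above assumes every `prim_naics` is exactly eight digits. A hedged, length-independent alternative mirrors the SQL's `LIKE '44%'` with a two-character prefix match:

```python
import pandas as pd

naics = pd.Series([44511001, 52211000, 32311101])  # hypothetical 8-digit codes
sectors = {"44", "45", "52", "54", "62", "71", "72", "81"}

mask = naics.astype(str).str[:2].isin(sectors)  # prefix match, works for any code length
print(mask.tolist())  # [True, True, False]
```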
##### 158 neibus
prim_naics in the 2017 table is converted from bigint to character varying exactly as in the neiind section above.
```
# https://bniajfi.org/indicators/Workforce%20and%20Economic%20Development/neibus/2017
neibus_SQL = """
158 - neibusXX
with tbl AS (
select (sum(
case
when prim_naics LIKE '44%' OR prim_naics LIKE '45%' OR prim_naics LIKE '52%' OR prim_naics LIKE '54%' OR
prim_naics LIKE '62%' OR prim_naics LIKE '71%' OR prim_naics LIKE '72%' OR prim_naics LIKE '81%'
then 1
else 0
end)::numeric
*1000)/the_pop as result, csa
from vital_signs.match_csas_and_bc_by_geom('economy.infousa_2016', 'gid', 'the_geom') a
left join economy.infousa_2016 b on a.gid = b.gid
group by csa, the_pop
)
update vital_signs.data
set neibus = result from tbl where data.csa = tbl.csa and data_year = '2016';
"""
Translation = """
"""
infoUsaCsa.head()
#export
# 158 - neibus
# Filter for small businesses
neibus = infoUsaCsa.copy()
neibus['naics_extra_short'] = neibus.prim_naics.astype(str).str[:-6].astype(np.int64)
neibus = neibus[ ( neibus['naics_extra_short'].isin( [44, 45, 52, 54, 62, 71, 72, 81] ) ) ]
print('Count of Naics Starting With: 44, 45, 52, 54, 62, 71, 72, 81: ', neibus.size / len(neibus.columns) )
neibus = neibus[ ['CSA2010'] ]
#numerator.to_csv('biz18_numerator_csasWithCounts.csv')
neibus['neibus'] = 1
neibus.head()
#export
# Aggregate Numeric Values by Sum
neibus = neibus.groupby('CSA2010').sum(numeric_only=True)
neibus = neibus.merge( csa[ ['CSA2010','tpop10'] ], left_on='CSA2010', right_on='CSA2010' )
neibus = neibus.append( {'CSA2010': 'Baltimore City' , 'tpop10' : 620961, 'neibus': neibus['neibus'].sum() }, ignore_index=True)
neibus['neibus'] = neibus['neibus'] * 1000 / neibus['tpop10']
neibus.tail(1)
# Save
neibus.to_csv('neibus18.csv', index=False)
neibus.head()
```
##### 159 neiemp
prim_naics in the 2017 table is converted from bigint to character varying exactly as in the neiind section above.
```
# https://bniajfi.org/indicators/Workforce%20and%20Economic%20Development/neiemp/2017
neiemp_SQL = """ 159 - neiempXX
with tbl AS (
select (sum(
case
when prim_naics LIKE '44%' OR prim_naics LIKE '45%' OR prim_naics LIKE '52%' OR prim_naics LIKE '54%' OR
prim_naics LIKE '62%' OR prim_naics LIKE '71%' OR prim_naics LIKE '72%' OR prim_naics LIKE '81%'
then empl_size
else 0
end)
) as result, csa
from vital_signs.match_csas_and_bc_by_geom('economy.infousa_2016', 'gid', 'the_geom') a
left join economy.infousa_2016 b on a.gid = b.gid
group by csa, the_pop
)
update vital_signs.data
set neiemp = result from tbl where data.csa = tbl.csa and data_year = '2016';
"""
Translation = """
"""
infoUsaCsa.head()
#export
# 159 - neiempXX
# Filter for small businesses
neiemp = infoUsaCsa.copy()
neiemp['naics_extra_short'] = neiemp.prim_naics.astype(str).str[:-6].astype(np.int64)
neiemp = neiemp[ ( neiemp['naics_extra_short'].isin( [44, 45, 52, 54, 62, 71, 72, 81] ) ) ]
print('Count of Naics Starting With: 44, 45, 52, 54, 62, 71, 72, 81: ', neiemp.size / len(neiemp.columns) )
#numerator.to_csv('biz18_numerator_csasWithCounts.csv')
#export
# Aggregate Numeric Values by Sum
neiemp = neiemp.groupby('CSA2010')[ ['CSA2010','empl_size'] ].sum(numeric_only=True)
neiemp = neiemp.merge( csa[ ['CSA2010','tpop10'] ], left_on='CSA2010', right_on='CSA2010' )
neiemp = neiemp.append( {'CSA2010': 'Baltimore City' , 'tpop10' : 620961, 'empl_size': neiemp['empl_size'].sum() }, ignore_index=True)
neiemp['neiemp'] = neiemp['empl_size']
neiemp = neiemp.drop('empl_size', axis=1)
neiemp.tail()
# Save
neiemp.to_csv('neiemp18.csv', index=False)
neiemp.head()
```
# Catch that asteroid!
```
import matplotlib.pyplot as plt
plt.ion()
from astropy import units as u
from astropy.time import Time
from astropy.utils.data import conf
conf.dataurl
conf.remote_timeout
```
First, we need to increase the timeout to allow the data download to complete properly.
```
conf.remote_timeout = 10000
from astropy.coordinates import solar_system_ephemeris
solar_system_ephemeris.set("jpl")
from poliastro.bodies import *
from poliastro.twobody import Orbit
from poliastro.plotting import OrbitPlotter, plot
EPOCH = Time("2017-09-01 12:05:50", scale="tdb")
earth = Orbit.from_body_ephem(Earth, EPOCH)
earth
plot(earth, label=Earth)
from poliastro.neos import neows
florence = neows.orbit_from_name("Florence")
florence
```
Two problems: the epoch is not the one we desire, and the inclination is with respect to the ecliptic!
```
florence.epoch
florence.epoch.iso
florence.inc
```
We first propagate:
```
florence = florence.propagate(EPOCH)
florence.epoch.tdb.iso
```
And now we have to convert to another reference frame, using http://docs.astropy.org/en/stable/coordinates/.
```
from astropy.coordinates import (
ICRS, GCRS,
CartesianRepresentation, CartesianDifferential
)
from poliastro.frames import HeliocentricEclipticJ2000
```
The NASA servers give the orbital elements of the asteroids in an Heliocentric Ecliptic frame. Fortunately, it is already defined in Astropy:
```
florence_heclip = HeliocentricEclipticJ2000(
x=florence.r[0], y=florence.r[1], z=florence.r[2],
v_x=florence.v[0], v_y=florence.v[1], v_z=florence.v[2],
representation=CartesianRepresentation,
differential_type=CartesianDifferential,
obstime=EPOCH
)
florence_heclip
```
Now we just have to convert to ICRS, which is the "standard" reference in which poliastro works:
```
florence_icrs_trans = florence_heclip.transform_to(ICRS)
florence_icrs_trans.representation = CartesianRepresentation
florence_icrs_trans
florence_icrs = Orbit.from_vectors(
Sun,
r=[florence_icrs_trans.x, florence_icrs_trans.y, florence_icrs_trans.z] * u.km,
v=[florence_icrs_trans.v_x, florence_icrs_trans.v_y, florence_icrs_trans.v_z] * (u.km / u.s),
epoch=florence.epoch
)
florence_icrs
florence_icrs.rv()
```
Let us compute the distance between Florence and the Earth:
```
from poliastro.util import norm
norm(florence_icrs.r - earth.r) - Earth.R
```
<div class="alert alert-success">This value is consistent with what ESA says! $7\,060\,160$ km</div>
```
from IPython.display import HTML
HTML(
"""<blockquote class="twitter-tweet" data-lang="en"><p lang="es" dir="ltr">La <a href="https://twitter.com/esa_es">@esa_es</a> ha preparado un resumen del asteroide <a href="https://twitter.com/hashtag/Florence?src=hash">#Florence</a> 😍 <a href="https://t.co/Sk1lb7Kz0j">pic.twitter.com/Sk1lb7Kz0j</a></p>— AeroPython (@AeroPython) <a href="https://twitter.com/AeroPython/status/903197147914543105">August 31, 2017</a></blockquote>
<script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script>"""
)
```
And now we can plot!
```
frame = OrbitPlotter()
frame.plot(earth, label="Earth")
frame.plot(Orbit.from_body_ephem(Mars, EPOCH))
frame.plot(Orbit.from_body_ephem(Venus, EPOCH))
frame.plot(Orbit.from_body_ephem(Mercury, EPOCH))
frame.plot(florence_icrs, label="Florence")
```
The difference between doing it well and doing it wrong is clearly visible:
```
frame = OrbitPlotter()
frame.plot(earth, label="Earth")
frame.plot(florence, label="Florence (Ecliptic)")
frame.plot(florence_icrs, label="Florence (ICRS)")
```
And now let's do something more complicated: express our orbit with respect to the Earth! For that, we will use GCRS, with care of setting the correct observation time:
```
florence_gcrs_trans = florence_heclip.transform_to(GCRS(obstime=EPOCH))
florence_gcrs_trans.representation = CartesianRepresentation
florence_gcrs_trans
florence_hyper = Orbit.from_vectors(
Earth,
r=[florence_gcrs_trans.x, florence_gcrs_trans.y, florence_gcrs_trans.z] * u.km,
v=[florence_gcrs_trans.v_x, florence_gcrs_trans.v_y, florence_gcrs_trans.v_z] * (u.km / u.s),
epoch=EPOCH
)
florence_hyper
```
Notice that the ephemeris of the Moon is also given in ICRS, and therefore yields a weird hyperbolic orbit!
```
moon = Orbit.from_body_ephem(Moon, EPOCH)
moon
moon.a
moon.ecc
```
So we have to convert again.
```
moon_icrs = ICRS(
x=moon.r[0], y=moon.r[1], z=moon.r[2],
v_x=moon.v[0], v_y=moon.v[1], v_z=moon.v[2],
representation=CartesianRepresentation,
differential_type=CartesianDifferential
)
moon_icrs
moon_gcrs = moon_icrs.transform_to(GCRS(obstime=EPOCH))
moon_gcrs.representation = CartesianRepresentation
moon_gcrs
moon = Orbit.from_vectors(
Earth,
[moon_gcrs.x, moon_gcrs.y, moon_gcrs.z] * u.km,
[moon_gcrs.v_x, moon_gcrs.v_y, moon_gcrs.v_z] * (u.km / u.s),
epoch=EPOCH
)
moon
```
And finally, we plot the Moon:
```
plot(moon, label=Moon)
plt.gcf().autofmt_xdate()
```
And now for the final plot:
```
frame = OrbitPlotter()
# This first plot sets the frame
frame.plot(florence_hyper, label="Florence")
# And then we add the Moon
frame.plot(moon, label=Moon)
plt.xlim(-1000000, 8000000)
plt.ylim(-5000000, 5000000)
plt.gcf().autofmt_xdate()
```
<div style="text-align: center; font-size: 3em;"><em>Per Python ad astra!</em></div>
This notebook shows how to train ARedSum, a family of extractive summarization models, on the ThaiSum dataset.
# Introduction to ARedSumSentRank
Quoting the abstract of ["AREDSUM: Adaptive Redundancy-Aware Iterative Sentence Ranking for Extractive Document Summarization"](https://arxiv.org/abs/2004.06176) by Keping Bi, Rahul Jha, W. Bruce Croft, and Asli Celikyilmaz (2020):
"...Building on the state-of-the-art encoding methods for summarization, we present two adaptive learning models: AREDSUM-SEQ that jointly considers salience and novelty during sentence selection; and a two-step AREDSUM-CTX that scores salience first, then learns to balance salience and redundancy, enabling the measurement of the impact of each aspect...."
# Install requirements
```
%%capture
!git clone https://github.com/nakhunchumpolsathien/ThaiSum.git
%%capture
!pip install torch==1.1.0 torchvision==0.3.0
!pip install -q pyrouge
!pip install -q pytorch_transformers
!pip install -q tensorboardX
!pip install -q pyrouge
!pip install pytorch_pretrained_bert
!pyrouge_set_rouge_path "/content/ThaiSum/BertSum/ROUGE-1.5.5"
!apt update
!apt install -q libxml-parser-perl
%cd "/content/ThaiSum/BertSum/ROUGE-1.5.5/data"
!perl WordNet-2.0-Exceptions/buildExeptionDB.pl ./WordNet-2.0-Exceptions ./smart_common_words.txt ./WordNet-2.0.exc.db
```
# Pre-processing
```
%cd '/content/ThaiSum/ARedSum/src'
!python preprocess.py -mode format_to_bert -raw_path "/content/ThaiSum/ARedSum/js_data" -save_path "/content/ThaiSum/ARedSum/bert_data" -oracle_mode greedy -n_cpus 1 -log_file ../logs/preprocess.log
```
# Training the model
```
# Train (ARedSum-Base) Train a Salience Ranker
!python train.py -bert_data_path "/content/ThaiSum/ARedSum/bert_data/thaisum" -visible_gpus 0 -gpu_ranks 0 -accum_count 2 -report_every 50 -save_checkpoint_steps 2000 -decay_method noam -mode train -model_name base -label_format soft -result_path "/content/ThaiSum/ARedSum/results/aredsum_base" -model_path "/content/ThaiSum/ARedSum/model_checkpoint/ARedSum_base"
# Train (ARedSum-CTX) Train a Ranker for Selection
!python train.py -fix_scorer -train_from /path/to/the/best/salience/ranker.pt -bert_data_path /path/to/cnndm_or_nyt50/bert_data/ -visible_gpus 2 -gpu_ranks 0 -accum_count 2 -report_every 50 -save_checkpoint_steps 2000 -decay_method noam -model_name ctx -max_epoch 2 -train_steps 50000 -label_format soft -use_rouge_label t -valid_by_rouge t -rand_input_thre 1.0 -temperature 20 -seg_count 30 -ngram_seg_count 20,20,20 -bilinear_out 20 -result_path /path/to/where/you/want/to/save/the/predicted/summaries -model_path /path/to/where/you/want/to/save/the/models
# Train (ARedSum-SEQ) Train a Sequence Generation Model
!python train.py -bert_data_path /path/to/cnndm_or_nyt50/bert_data/ -visible_gpus 2 -gpu_ranks 0 -accum_count 2 -report_every 50 -save_checkpoint_steps 2000 -decay_method noam -model_name seq -max_epoch 2 -train_steps 50000 -label_format soft -use_rouge_label t -valid_by_rouge t -rand_input_thre 0.8 -temperature 20 -result_path /path/to/where/you/want/to/save/the/predicted/summaries -model_path /path/to/where/you/want/to/save/the/models
```
# Evaluation
## Evaluate by ROUGE Score
```
# Evaluate ARedSum-Base
!python train.py -bert_data_path "/content/ThaiSum/ARedSum/bert_data/thaisum" -visible_gpus 0 -gpu_ranks 0 -accum_count 2 -report_every 50 -save_checkpoint_steps 2000 -decay_method noam -mode test -model_name base -label_format soft -result_path "/content/ThaiSum/ARedSum/results/aredsum_base" -test_from "/content/ThaiSum/ARedSum/model_checkpoint/ARedSum_base.pt"
```
ROUGE Scores are shown here
```
[2020-11-14 08:31:40,328 INFO] [PERF]Rouges at step 0: RG1-P:42.77 RG1-R:56.72 RG1-F:45.16 RG2-P:20.05 RG2-R:29.11 RG2-F:21.83 RGL-P:42.71 RGL-R:56.59 RGL-F:45.08
```
Therefore ROUGE-F1 results are: R1=45.16, R2=21.83, RL=45.08.
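As an aside, the `[PERF]` log line above is easy to tabulate programmatically. The following sketch (the regex and variable names are our own, purely illustrative) pulls the scores out of such a line:

```python
import re

# Parse an ARedSum "[PERF]Rouges ..." log line into a {metric: value} dict.
# The log string below is copied from the output shown above.
log = ("[PERF]Rouges at step 0: RG1-P:42.77 RG1-R:56.72 RG1-F:45.16 "
       "RG2-P:20.05 RG2-R:29.11 RG2-F:21.83 "
       "RGL-P:42.71 RGL-R:56.59 RGL-F:45.08")
scores = {name: float(val)
          for name, val in re.findall(r"(RG[12L]-[PRF]):([\d.]+)", log)}
print(scores["RG1-F"], scores["RG2-F"], scores["RGL-F"])  # 45.16 21.83 45.08
```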
## Evaluate by BertScore
```
!pip install -q bert_score
import bert_score
from bert_score import score
import logging
import transformers
transformers.tokenization_utils.logger.setLevel(logging.ERROR)
transformers.configuration_utils.logger.setLevel(logging.ERROR)
transformers.modeling_utils.logger.setLevel(logging.ERROR)
with open("/content/ThaiSum/ARedSum/results/aredsum_base_step0_initial.candidate") as f: # Output Summary
cands = [line.strip() for line in f]
with open("/content/ThaiSum/ARedSum/results/aredsum_base_step0_initial.gold") as f: # Reference Summary
refs = [line.strip() for line in f]
P, R, F1 = score(cands, refs, lang='th', verbose=False)
print(f"System level F1 score: {F1.mean()*100:.3f}") ## *100 to make it simpler to read, similar to ROUGE.
print(f"System level P score: {P.mean()*100:.3f}")
print(f"System level R score: {R.mean()*100:.3f}")
import matplotlib.pyplot as plt
plt.hist(F1, bins=20)
plt.xlabel("score")
plt.ylabel("counts")
plt.show()
```
# Bayesian Models
We are now going to dig further into a specific type of **Probabilistic Graphical Model**, specifically **Bayesian Networks**. We will discuss the following:
1. What are Bayesian Models
2. Independencies in Bayesian Networks
3. How a Bayesian model encodes the joint distribution
4. How we do inference with Bayesian models
---
## 1. What are Bayesian Models?
A Bayesian Network is a probabilistic graphical model (a type of statistical model) that represents a set of **random variables** and their **conditional dependencies** via a **directed acyclic graph** (DAG). Bayesian networks are often used when we want to represent *causal relationships* between the random variables. They are parameterized by using **Conditional Probability Distributions** (CPD). Each node in the network is parameterized using:
$$P(node|Pa(node))$$
Where $Pa(node)$ represents the parents of the nodes in the network. We can dig into this further by looking at the following student model:
<img src="images/student_full_param.png">
If we use the library **pgmpy**, then we can create the above model as follows:
> 1. Define network structure (or learn it from data)
2. Define CPD's between nodes (random variables)
3. Associated CPD's with structure
We can see this implemented below.
### 1.1 Implementation
```
# Imports needed from pgmpy
from pgmpy.models import BayesianModel
from pgmpy.factors.discrete import TabularCPD
```
### 1.1.1 Set the Structure
So, with our imports taken care of, we start by defining the model structure. We are able to define this by passing in a list of edges. Note, these edges are *directional*; for example, we have the tuple `(D, G)`, which means that `difficulty` influences `grade`.
```
student_model = BayesianModel([('difficulty', 'grade'),
('intelligence', 'grade'),
('grade', 'letter'),
('intelligence', 'sat')])
```
### 1.1.2 Setup the relationships (CPDs)
We then want to set up our relationships in the form of CPDs. A few things to note:
> 1. `variable_card`: this represents the number of discrete states the random variable can take on.
2. `evidence`: this refers to the parents of the random variable, i.e. $Pa(node)$.
```
difficulty_cpd = TabularCPD(variable='difficulty',
variable_card=2,
values=[[0.6, 0.4]])
intelligence_cpd = TabularCPD(variable='intelligence',
variable_card=2,
values=[[0.7, 0.3]])
grade_cpd = TabularCPD(variable='grade',
variable_card=3,
values=[[0.3, 0.05, 0.9, 0.5],
[0.4, 0.25, 0.08, 0.3],
[0.3, 0.7, 0.02, 0.2]],
evidence=['intelligence', 'difficulty'],
evidence_card=[2, 2])
letter_cpd = TabularCPD(variable='letter', variable_card=2,
values=[[0.1, 0.4, 0.99],
[0.9, 0.6, 0.01]],
evidence=['grade'],
evidence_card=[3])
sat_cpd = TabularCPD(variable='sat', variable_card=2,
values=[[0.95, 0.2],
[0.05, 0.8]],
evidence=['intelligence'],
evidence_card=[2])
```
### 1.1.3 Add the relationships (CPDs) to the Model
The next step is to actually add our CPDs to our model. The way in which pgmpy specifies models is highly modular, which is great because it allows us to add and remove different CPDs very easily.
```
student_model.add_cpds(difficulty_cpd, intelligence_cpd, grade_cpd, letter_cpd, sat_cpd)
```
At this point we can actually check our model: this validates the network structure and CPDs, and verifies that the CPDs are correctly defined and sum to 1.
```
student_model.check_model()
```
### 1.1.4 Examine the Structure of the Graph
We can see our model with the respective CPD's incorporated:
```
student_model.get_cpds()
```
And we can examine specific nodes to ensure that the corresponding distributions are correct.
```
print(student_model.get_cpds('difficulty'))
print(student_model.get_cpds('intelligence'))
print(student_model.get_cpds('grade'))
```
---
## 2. Independencies in Bayesian Networks
Independencies implied by the structure of our Bayesian network can be categorized into 2 types:
> 1. **Local Independencies:** Any variable in the network is independent of its non-descendants given its parents. Mathematically this can be written as:<br>
<br>
$$X \perp NonDesc(X)|Pa(X)$$
where $NonDesc(X)$ is the set of variables that are not descendants of $X$ and $Pa(X)$ is the set of parents of $X$.
2. **Global Independencies:** For discussing global independencies in Bayesian networks we need to look at the various possible network structures. Starting with the case of 2 nodes, there are only 2 possible ways for them to be connected:
<img src="images/two_nodes.png">
In the above two cases it is obvious that a change in either node will affect the other. For the first case we can take the example of $difficulty \rightarrow grade$. If we increase the difficulty of the course, the probability of getting a higher grade decreases. For the second case we can take the example of $ SAT \leftarrow Intel $. If we increase the probability of getting a good SAT score, that implies the student is intelligent, hence increasing the probability of $ i_1 $. Therefore in both cases shown above, any change in one variable leads to a change in the other variable.
Now, there are four possible ways of connection between 3 nodes:
<img src="images/three_nodes.png">
We will now look at the flow of influence from $ A $ to $ C $ in each of the above cases.
1. **Causal**: In the general case when we make any changes in the variable $ A $, it will have an effect on variable $ B $ (as we discussed above) and this change in $ B $ will change the values in $ C $. One other possible case can be when $ B $ is observed i.e. we know the value of $ B $. So, in this case any change in $ A $ won't affect $ B $ since we already know the value. And hence there won't be any change in $ C $ as it depends only on $ B $. Mathematically we can say that:
$$ (A \perp C | B) $$
2. **Evidential**: Similarly in this case also observing $ B $ renders $ C $ independent of $ A $. Otherwise when $ B $ is not observed the influence flows from $ A $ to $ C $. Hence:
$$ (A \perp C | B) $$
3. **Common Cause**: The influence flows from $ A $ to $ C $ when $ B $ is not observed. But when $ B $ is observed and change in $ A $ doesn't affect $ C $ since it's only dependent on $ B $. Hence here also:
$$ ( A \perp C | B) $$
4. **Common Evidence**: This case is a bit different from the others. When $ B $ is not observed, any change in $ A $ reflects some change in $ B $ but not in $ C $. Let's take the example of $ D \rightarrow G \leftarrow I $. In this case, if we increase the difficulty of the course, the probability of getting a higher grade decreases, but this has no effect on the intelligence of the student. But when $ B $ is observed, say the student got a good grade, then increasing the difficulty of the course increases the probability that the student is intelligent, since we already know that they got a good grade. Hence in this case
$$ (A \perp C) $$
and
$$ ( A \not\perp C | B) $$
This structure is also commonly known as **V structure**.
We can see this in greater detail by utilizing pgmpy.
### 2.1 Find Local Independencies
We can look at the independencies for specific nodes.
```
student_model.local_independencies('difficulty')
student_model.local_independencies('grade')
student_model.local_independencies(['difficulty', 'intelligence', 'sat', 'grade', 'letter'])
student_model.get_independencies()
```
### 2.2 Find Active Trail Nodes
We can also look for **active trail nodes**. We can think of active trail nodes as path's of influence; what can give you information about something else?
```
student_model.active_trail_nodes('difficulty')
student_model.active_trail_nodes('grade')
```
Notice that for `grade` everything was returned. This is because everything provides information about grade, meaning grade is dependent on all the other random variables.
We can also see how the active trails to difficulty change when we observe `grade`.
```
student_model.active_trail_nodes('difficulty')
student_model.active_trail_nodes('difficulty', observed='grade')
```
---
## 3. Inference in Bayesian Models
So far we have only discussed representing Bayesian networks. Now let's see how we can do inference in a Bayesian model and use it to predict values for new data points in machine learning tasks. In this section we assume that we already have our model (structure and parameters).
In inference we try to answer probability queries over the network given some other variables. For example, we might want to know the probable grade of an intelligent student in a difficult class given that they scored well on the SAT. To compute these values from a joint distribution, we first reduce over the given variables, that is:
$$ I = 1, D = 1, S = 1 $$
and then marginalize over the remaining variables, that is
$$ L $$
to get
$$ P(G | I=1, D=1, S=1) $$
But carrying out marginalization and reduction on the complete joint distribution is computationally expensive, since we need to iterate over the whole table for each operation, and the table is exponential in size in the number of variables. In graphical models, however, we exploit the independencies to break these operations into smaller parts, making them much faster.
One of the very basic methods of inference in Graphical Models is **Variable Elimination**.
### 3.1 Variable Elimination
We know that:
$$ P(D, I, G, L, S) = P(L|G) * P(S|I) * P(G|D, I) * P(D) * P(I) $$
Now let's say we just want to compute the probability of G. For that we will need to marginalize over all the other variables.
$$ P(G) = \sum_{D, I, L, S} P(D, I, G, L, S) $$
$$ P(G) = \sum_{D, I, L, S} P(L|G) * P(S|I) * P(G|D, I) * P(D) * P(I) $$
$$ P(G) = \sum_D \sum_I \sum_L \sum_S P(L|G) * P(S|I) * P(G|D, I) * P(D) * P(I) $$
Now since not all the conditional distributions depend on all the variables we can push the summations inside:
$$ P(G) = \sum_D P(D) \sum_I P(G|D, I) * P(I) \sum_S P(S|I) \sum_L P(L|G) $$
So, by pushing the summations inside we have saved a lot of computation because we have to now iterate over much smaller tables.
```
from pgmpy.inference import VariableElimination
infer = VariableElimination(student_model)
print(infer.query(['grade']) ['grade'])
```
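The marginal printed above can also be checked by hand: since each CPD column is normalized, the sums over $L$ and $S$ are 1, and $P(G)$ reduces to $\sum_{D,I} P(G|D,I)\,P(D)\,P(I)$. A quick numpy sketch (our own check, not part of the pgmpy workflow), using the CPD values defined earlier:

```python
import numpy as np

# CPDs copied from the student model above.
p_d = np.array([0.6, 0.4])   # P(D)
p_i = np.array([0.7, 0.3])   # P(I)
# P(G | I, D): rows are grades; columns are ordered (i0,d0), (i0,d1), (i1,d0), (i1,d1)
p_g_id = np.array([[0.3, 0.05, 0.9, 0.5],
                   [0.4, 0.25, 0.08, 0.3],
                   [0.3, 0.7, 0.02, 0.2]]).reshape(3, 2, 2)  # axes (G, I, D)

# Sums over L and S equal 1, so P(G) = sum_{D,I} P(G|D,I) P(I) P(D).
p_g = np.einsum('gid,i,d->g', p_g_id, p_i, p_d)
print(p_g)  # ≈ [0.362  0.2884 0.3496]
```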
There can be cases in which we want to compute a conditional distribution, say
$$ P(G | D=0, I=1) $$
In such cases we need to modify our equations a bit:
$$ P(G | D=0, I=1) = \sum_L \sum_S P(L|G) * P(S| I=1) * P(G| D=0, I=1) * P(D=0) * P(I=1) $$
$$ P(G | D=0, I=1) = P(D=0) * P(I=1) * P(G | D=0, I=1) * \sum_L P(L | G) * \sum_S P(S | I=1) $$
In pgmpy we will just need to pass an extra argument in the case of conditional distributions:
```
print(infer.query(['grade'],
evidence={'difficulty': 0,
'intelligence': 1}) ['grade'])
```
**Predicting values from new data points** <br>
Predicting values from new data points is quite similar to computing the conditional probabilities. We need to query for the variable that we want to predict, given all the other features. The only difference is that rather than getting the probability distribution, we are interested in the most probable state of the variable.
In pgmpy this is known as MAP query. Here's an example:
```
infer.map_query(['grade'])
infer.map_query(['grade'],
evidence={'difficulty': 0,
'intelligence': 1})
infer.map_query(['grade'],
evidence={'difficulty': 0,
'intelligence': 1,
'letter': 1,
'sat': 1})
```
# What is AXON
[AXON](http://intellimath.bitbucket.org/axon) is a notation for the serialized representation of objects, documents, and data in text form. It combines the *simplicity* of [JSON](http://www.json.org), the *extensibility* of [XML](http://www.w3.org/xml), and the *readability* of [YAML](http://www.yaml.org).
There is a [pyaxon](http://pypi.python.org/pypi/pyaxon) project for [python](http://python.org) that you can play with. It was designed not to lag far behind the [json](http://docs.python.org/3.5/library/json.html) module in speed, so it can also serve for real work.
#### Why AXON?
`AXON` arose as an attempt to deal with the shortcomings and inconveniences of `JSON` and `XML` while preserving their strengths and capabilities, additionally adding the readability inherent to `YAML`.
#### AXON contains an "improved" variant of JSON
**1.** `JSON` has two inconveniences:
* attribute/key names that are identifiers must be enclosed in quotes;
* it is easy to forget a comma when inserting a new *key : value* pair.
`AXON` removes these inconveniences as follows:
* names that are *identifiers* *need not be quoted*;
* separating *commas* are *omitted entirely*; only whitespace characters separate elements.
The result is a more compact representation that is easier to read when formatted.
For comparison:
**`JSON`**
```
{ "name": "Alex",
"birth": "1979-12-25",
"email": "mail@example.com"}
[ "Alex"
"1979-12-25"
"mail@example.com"]
```
**`AXON`**
```
{ name: "Alex"
birth: ^1979-12-25
email: "mail@example.com"}
[ "Alex"
^1979-12-25
"mail@example.com"]
```
**2.** `JSON` does not guarantee that after loading
```
{ "name": "Alex",
"birth": "1979-12-25",
"email": "mail@example.com"}
```
the order of the keys/attributes is preserved.
`AXON` states that
```
{ name: "Alex"
birth: ^1979-12-25
email: "mail@example.com"}
```
is converted to a `mapping` *without preserving* key order.
At the same time, it states that
```
[ name: "Alex"
birth: ^1979-12-25
email: "mail@example.com"]
```
is converted to a `mapping` *preserving* key order.
**3.** `AXON` supports syntax for representing dates and times in an `ISO`-like format:
* dates
```
^2010-12-31
```
* times
```
^12:30
^12:30:15
^12:30+03:00
^12:30:15-04:30
```
* dates with times
```
^2010-12-31T12:30
^2010-12-31T12:30:05.0125
^2010-12-31T12:30+04:00
^2010-12-31T12:30:05.0123-04:00
```
as well as for representing decimal numbers:
```
1D 123456789D
3.14D 1.23e-6D
```
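Since these `^`-prefixed literals are ISO-like, Python's standard library can parse many of them once the caret is stripped. A rough sketch (the function name is our own, not the pyaxon API):

```python
from datetime import date, datetime

def parse_axon_temporal(token):
    """Parse a ^-prefixed AXON date or date-time literal (illustrative only)."""
    body = token.lstrip("^")
    # Try the stricter date parser first, then fall back to datetime.
    for parser in (date.fromisoformat, datetime.fromisoformat):
        try:
            return parser(body)
        except ValueError:
            pass
    raise ValueError("not an ISO-like literal: " + token)

print(parse_axon_temporal("^2010-12-31"))        # a datetime.date
print(parse_axon_temporal("^2010-12-31T12:30"))  # a datetime.datetime
```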
**4.** `AXON` also lets you define *labels* for non-atomic values and use them as internal *references*. When needed, this avoids creating copies of repeatedly used non-atomic values during serialization/deserialization.
For example:
``` javascript
[ { prev: &a (2012-12-30 10:00)
next: &c (2012-01-01 12:00) }
{ prev: &b (2012-12-31 13:00)
next: *a }
{ prev: *c
next: *b } ]
```
A label has the prefix `&` (`&a &b &c`), and a reference has the prefix `*` (`*a *b *c`).
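In deserialized form this means a labeled value and every reference to it share one in-memory object rather than being copied. A tiny Python sketch of the intended semantics (our own illustration, not the pyaxon API):

```python
# &a defines a value once; *a points back at that same object.
labels = {}
labels["a"] = ("2012-12-30", "10:00")   # &a (2012-12-30 10:00)
labels["c"] = ("2012-01-01", "12:00")   # &c (2012-01-01 12:00)
records = [
    {"prev": labels["a"], "next": labels["c"]},
    {"next": labels["a"]},               # *a resolves to the same tuple
]
print(records[0]["prev"] is records[1]["next"])  # True — shared, not copied
```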
#### The AXON data model contains a variant of the XML Infoset model in a more compact notation
Consider an illustrative example of an `XML` representation of structured data:
``` xml
<person>
<name>John Smith</name>
<age>25</age>
<address type="home">
<street>21 2nd Street</street>
<city>New York</city>
<state>NY</state>
</address>
<phone type="home">212-555-1234</phone>
</person>
```
`AXON` implements the idea of a simpler syntax for representing `XML`-structured data:
``` javascript
person {
name {"John Smith"}
age {25}
address {
type: "home"
street {"21 2nd Street"}
city {"New York"}
state {"NY"}
}
phone {type:"home" "212-555-1234"}
}
```
An `AXON` representation can be built from the `XML` format in 5 steps:
1. Replace `<tag>` with `tag {`
2. Replace `</tag>` with `}`
3. Replace `attr=value` with `attr: value`
4. Enclose the text inside elements in double quotes (`"`)
5. Remove the comma character (`,`) or replace it with a single space
The result of this transformation is structurally identical to the original `XML` document. In essence, it is a syntactically more compact form of representing an `XML` document.
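For simple, flat markup like the example above, these steps can even be sketched with naive regexes. This is an illustration of the rewrite rules, not a real XML parser; `xml_to_axon` is our own name:

```python
import re

def xml_to_axon(xml):
    """Naive sketch of the XML -> AXON rewrite steps (illustrative only)."""
    s = re.sub(r'(\w+)="([^"]*)"', r'\1: "\2"', xml)  # attr="v" -> attr: "v"
    s = re.sub(r'<(\w+)([^>]*)>', r'\1 {\2', s)       # <tag ...> -> tag { ...
    s = re.sub(r'</\w+>', '}', s)                     # </tag> -> }
    s = re.sub(r'\{([^{}<>"]+?)\}', r'{"\1"}', s)     # quote bare element text
    return s

print(xml_to_axon('<name>John Smith</name>'))  # name {"John Smith"}
```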
For comparison, here is also the `AXON` representation with complex elements formatted without {}, using uniform indentation for the sub-elements of a structure:
``` javascript
person
name {"John Smith"}
age {25}
address
type: "home"
street {"21 2nd Street"}
city {"New York"}
state {"NY"}
phone
type: "home"
"212-555-1234"
```
This representation is obtained from the previous one by removing all { and } characters, as well as unnecessary blank lines.
#### AXON extends the capabilities of XML and JSON
In `XML`, attributes can have only simple values; in `AXON`, an attribute's value can be any value (as in `JSON`). In addition, simple values are typed (*text* in `unicode`, *number*, *decimal number*, *date* and *time*, *byte array* in *base64* encoding). `AXON` can be viewed as an extension of `JSON` in the sense that objects can be named, just as `XML` elements are named.
For example:
``` javascript
person
name: "John Smith"
age: 25
burn: 1975-10-21
locations: [
address
type: "home"
street: "21 2nd Street"
city: "New York"
state: "NY"
]
contacts: [
phone
type: "home"
"212-555-1234"
email
type: "personal"
"mail@example.com"
]
```
`JSON` has one inconvenience related to representing irregular structures in which the order of the parts matters. In such structures, elements are accessed by sequential search by name rather than by "direct" access by name.
As an example, consider a structured document in `XML` format:
``` xml
<section title="Title">
<par style="normal">paragraph</par>
<enumerate style="enum">
<item>item text</item>
</enumerate>
<par style="normal">paragraph</par>
<itemize style="itemize">
<item>item text</item>
</itemize>
<par style="normal">paragraph</par>
</section>
```
This document cannot be translated into `JSON` directly, without restructuring, because the order and repetition of elements matter. One translation variant that emulates a sequence of named elements looks like this:
``` javascript
{
"tag": "section",
"@": {"title": "Title"},
"*": [
{ "tag": "par",
"@": {"style":"normal", "text":"paragraph"}},
{ "tag":"enumerate",
"@": {"style": "enumerate"},
"*": [
{ "tag":"item",
"@": {"text":"item text"}}
]
},
{ "tag": "par", "@": {"style":"normal", "text":"paragraph"}},
{ "tag":"itemize",
"*": [
{ "tag":"item", "@": {"text":"item text"}}
]
},
{ "tag": "par", "@": {"style":"normal", "text":"paragraph"}}
]
}
```
In `AXON`, such structures are translated one-to-one:
``` javascript
section
title: "Title"
par
style: "normal"
"paragraph"
enumerate
style: "enum"
item { "item text" }
par
style: "normal"
"paragraph"
itemize
style: "itemize"
item { "Item text" }
par
style: "normal"
"paragraph"
```
### AXON supports YAML-style formatting
An attractive side of `YAML` is its `wiki`-style representation format. `AXON` also supports a similar formatting style.
For example, for comparison:
* formatting without {} (`YAML` style)
``` javascript
person
name: "Alex"
age: 25
```
* formatting with {} and indentation (`C/JSON` style)
``` javascript
person {
name: "Alex"
age: 25}
```
* compact format
``` javascript
person{name:"Alex" age:25}
```
### AXON can represent a series of objects
One limitation of `JSON` and `XML` is that they represent a single root object. In contrast, `AXON` can represent a series of objects, or a series of `key`:`object` pairs, which can be loaded one at a time. For example:
* a series of objects
``` javascript
{ name: "Alex"
age: 32 }
{ name: "Michael"
age: 28 }
{ name: "Nick"
age: 19 }
```
* a series of objects with keys
``` javascript
alex: {
message: "Hello"
datetime: ^2015-07-12T12:32:35
}
michael: {
message: "How are you"
datetime: ^2015-07-12T12:32:35
}
```
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from tqdm import tqdm
%matplotlib inline
from torch.utils.data import Dataset, DataLoader
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
from torch.nn import functional as F
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
m = 1000
```
# Generate dataset
```
np.random.seed(12)
y = np.random.randint(0,10,5000)
idx= []
for i in range(10):
print(i,sum(y==i))
idx.append(y==i)
x = np.zeros((5000,2))
np.random.seed(12)
x[idx[0],:] = np.random.multivariate_normal(mean = [4,6.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[0]))
x[idx[1],:] = np.random.multivariate_normal(mean = [5.5,6],cov=[[0.01,0],[0,0.01]],size=sum(idx[1]))
x[idx[2],:] = np.random.multivariate_normal(mean = [4.5,4.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[2]))
x[idx[3],:] = np.random.multivariate_normal(mean = [3,3.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[3]))
x[idx[4],:] = np.random.multivariate_normal(mean = [2.5,5.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[4]))
x[idx[5],:] = np.random.multivariate_normal(mean = [3.5,8],cov=[[0.01,0],[0,0.01]],size=sum(idx[5]))
x[idx[6],:] = np.random.multivariate_normal(mean = [5.5,8],cov=[[0.01,0],[0,0.01]],size=sum(idx[6]))
x[idx[7],:] = np.random.multivariate_normal(mean = [7,6.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[7]))
x[idx[8],:] = np.random.multivariate_normal(mean = [6.5,4.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[8]))
x[idx[9],:] = np.random.multivariate_normal(mean = [5,3],cov=[[0.01,0],[0,0.01]],size=sum(idx[9]))
x[idx[0]][0], x[idx[5]][5]
for i in range(10):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
bg_idx = [ np.where(idx[3] == True)[0],
np.where(idx[4] == True)[0],
np.where(idx[5] == True)[0],
np.where(idx[6] == True)[0],
np.where(idx[7] == True)[0],
np.where(idx[8] == True)[0],
np.where(idx[9] == True)[0]]
bg_idx = np.concatenate(bg_idx, axis = 0)
bg_idx.shape
np.unique(bg_idx).shape
x = x - np.mean(x[bg_idx], axis = 0, keepdims = True)
np.mean(x[bg_idx], axis = 0, keepdims = True), np.mean(x, axis = 0, keepdims = True)
x = x/np.std(x[bg_idx], axis = 0, keepdims = True)
np.std(x[bg_idx], axis = 0, keepdims = True), np.std(x, axis = 0, keepdims = True)
for i in range(10):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
foreground_classes = {'class_0','class_1', 'class_2'}
background_classes = {'class_3','class_4', 'class_5', 'class_6','class_7', 'class_8', 'class_9'}
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,m)
a = []
for i in range(m):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
print(a.shape)
print(fg_class , fg_idx)
np.reshape(a,(2*m,1))
desired_num = 2000
mosaic_list_of_images =[]
mosaic_label = []
fore_idx=[]
for j in range(desired_num):
np.random.seed(j)
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,m)
a = []
for i in range(m):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
mosaic_list_of_images.append(np.reshape(a,(2*m,1)))
mosaic_label.append(fg_class)
fore_idx.append(fg_idx)
mosaic_list_of_images = np.concatenate(mosaic_list_of_images,axis=1).T
mosaic_list_of_images.shape
mosaic_list_of_images.shape, mosaic_list_of_images[0]
for j in range(m):
print(mosaic_list_of_images[0][2*j:2*j+2])
def create_avg_image_from_mosaic_dataset(mosaic_dataset,labels,foreground_index,dataset_number, m):
"""
mosaic_dataset : each data point concatenates m two-dimensional points into one vector
labels : mosaic_dataset labels
foreground_index : list of indexes at which the foreground point sits, so that we can take the weighted average
dataset_number : controls the weight given to the foreground point; if it is j then fg_ratio = j/m and bg_ratio = (m-j)/((m-1)*m)
"""
avg_image_dataset = []
cnt = 0
counter = np.zeros(m) #np.array([0,0,0,0,0,0,0,0,0])
for i in range(len(mosaic_dataset)):
img = torch.zeros([2], dtype=torch.float64)
np.random.seed(int(dataset_number*10000 + i))
give_pref = foreground_index[i] #np.random.randint(0,9)
# print("outside", give_pref,foreground_index[i])
for j in range(m):
if j == give_pref:
img = img + mosaic_dataset[i][2*j:2*j+2]*dataset_number/m #2 is data dim
else :
img = img + mosaic_dataset[i][2*j:2*j+2]*(m-dataset_number)/((m-1)*m)
if give_pref == foreground_index[i] :
# print("equal are", give_pref,foreground_index[i])
cnt += 1
counter[give_pref] += 1
else :
counter[give_pref] += 1
avg_image_dataset.append(img)
print("number of correct averaging happened for dataset "+str(dataset_number)+" is "+str(cnt))
print("the averaging are done as ", counter)
return avg_image_dataset , labels , foreground_index
avg_image_dataset_1 , labels_1, fg_index_1 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:1000], mosaic_label[0:1000], fore_idx[0:1000] , 1, m)
test_dataset , labels , fg_index = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[1000:2000], mosaic_label[1000:2000], fore_idx[1000:2000] , m, m)
avg_image_dataset_1 = torch.stack(avg_image_dataset_1, axis = 0)
# avg_image_dataset_1 = (avg - torch.mean(avg, keepdims= True, axis = 0)) / torch.std(avg, keepdims= True, axis = 0)
# print(torch.mean(avg_image_dataset_1, keepdims= True, axis = 0))
# print(torch.std(avg_image_dataset_1, keepdims= True, axis = 0))
print("=="*40)
test_dataset = torch.stack(test_dataset, axis = 0)
# test_dataset = (avg - torch.mean(avg, keepdims= True, axis = 0)) / torch.std(avg, keepdims= True, axis = 0)
# print(torch.mean(test_dataset, keepdims= True, axis = 0))
# print(torch.std(test_dataset, keepdims= True, axis = 0))
print("=="*40)
x1 = (avg_image_dataset_1).numpy()
y1 = np.array(labels_1)
plt.scatter(x1[y1==0,0], x1[y1==0,1], label='class 0')
plt.scatter(x1[y1==1,0], x1[y1==1,1], label='class 1')
plt.scatter(x1[y1==2,0], x1[y1==2,1], label='class 2')
plt.legend()
plt.title("dataset4 CIN with alpha = 1/"+str(m))
x1 = (test_dataset).numpy() / m
y1 = np.array(labels)
plt.scatter(x1[y1==0,0], x1[y1==0,1], label='class 0')
plt.scatter(x1[y1==1,0], x1[y1==1,1], label='class 1')
plt.scatter(x1[y1==2,0], x1[y1==2,1], label='class 2')
plt.legend()
plt.title("test dataset4")
test_dataset[0:10]/m
test_dataset = test_dataset/m
test_dataset[0:10]
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
#self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] #, self.fore_idx[idx]
avg_image_dataset_1[0].shape
avg_image_dataset_1[0]
batch = 200
traindata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )
trainloader_1 = DataLoader( traindata_1 , batch_size= batch ,shuffle=True)
testdata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )
testloader_1 = DataLoader( testdata_1 , batch_size= batch ,shuffle=False)
testdata_11 = MosaicDataset(test_dataset, labels )
testloader_11 = DataLoader( testdata_11 , batch_size= batch ,shuffle=False)
class Whatnet(nn.Module):
def __init__(self):
super(Whatnet,self).__init__()
self.linear1 = nn.Linear(2,3)
# self.linear2 = nn.Linear(50,10)
# self.linear3 = nn.Linear(10,3)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.zeros_(self.linear1.bias)
def forward(self,x):
# x = F.relu(self.linear1(x))
# x = F.relu(self.linear2(x))
x = (self.linear1(x))
return x
# class Whatnet(nn.Module):
# def __init__(self):
# super(Whatnet,self).__init__()
# self.linear1 = nn.Linear(2,50)
# self.linear2 = nn.Linear(50,10)
# self.linear3 = nn.Linear(10,3)
# torch.nn.init.xavier_normal_(self.linear1.weight)
# torch.nn.init.zeros_(self.linear1.bias)
# torch.nn.init.xavier_normal_(self.linear2.weight)
# torch.nn.init.zeros_(self.linear2.bias)
# torch.nn.init.xavier_normal_(self.linear3.weight)
# torch.nn.init.zeros_(self.linear3.bias)
# def forward(self,x):
# x = F.relu(self.linear1(x))
# x = F.relu(self.linear2(x))
# x = (self.linear3(x))
# return x
def calculate_loss(dataloader,model,criter):
model.eval()
r_loss = 0
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = model(inputs)
loss = criter(outputs, labels)
r_loss += loss.item()
return r_loss/(i+1)  # average over the number of batches (enumerate is 0-based)
def test_all(number, testloader,net):
correct = 0
total = 0
out = []
pred = []
with torch.no_grad():
for data in testloader:
images, labels = data
images, labels = images.to("cuda"),labels.to("cuda")
out.append(labels.cpu().numpy())
outputs= net(images)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
total += labels.size(0)
correct += (predicted == labels).sum().item()
pred = np.concatenate(pred, axis = 0)
out = np.concatenate(out, axis = 0)
print("unique out: ", np.unique(out), "unique pred: ", np.unique(pred) )
print("correct: ", correct, "total ", total)
print('Accuracy of the network on test dataset %d: %.2f %%' % (number, 100 * correct / total))
def train_all(trainloader, ds_number, testloader_list):
print("--"*40)
print("training on data set ", ds_number)
torch.manual_seed(12)
net = Whatnet().double()
net = net.to("cuda")
criterion_net = nn.CrossEntropyLoss()
optimizer_net = optim.Adam(net.parameters(), lr=0.001 ) #, momentum=0.9)
acti = []
loss_curi = []
epochs = 1500
running_loss = calculate_loss(trainloader,net,criterion_net)
loss_curi.append(running_loss)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
net.train()
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_net.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion_net(outputs, labels)
# print statistics
running_loss += loss.item()
loss.backward()
optimizer_net.step()
running_loss = calculate_loss(trainloader,net,criterion_net)
if(epoch%200 == 0):
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.05:
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
break
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in trainloader:
images, labels = data
images, labels = images.to("cuda"), labels.to("cuda")
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the training images: %.2f %%' % (100 * correct / total))
for i, j in enumerate(testloader_list):
test_all(i+1, j,net)
print("--"*40)
return loss_curi
train_loss_all=[]
testloader_list= [ testloader_1, testloader_11]
train_loss_all.append(train_all(trainloader_1, 1, testloader_list))
%matplotlib inline
for i,j in enumerate(train_loss_all):
plt.plot(j,label ="dataset "+str(i+1))
plt.xlabel("Epochs")
plt.ylabel("Training_loss")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
```
| github_jupyter |
# Waveform and spectrogram display
Select an audio file from the dropdown list to display its waveform and spectrogram.
```
import os
import parselmouth
import numpy as np
from phonlab.utils import dir2df
from bokeh_phon.utils import remote_jupyter_proxy_url_callback, default_jupyter_url
from bokeh_phon.models.audio_button import AudioButton
from bokeh.plotting import figure
from bokeh.models import BoxAnnotation, BoxSelectTool, BoxZoomTool, ColumnDataSource, \
CrosshairTool, LogColorMapper, Select, Slider, ZoomInTool, ZoomOutTool
from bokeh.io import show, output_notebook, push_notebook
from bokeh.layouts import column, gridplot
from bokeh.events import SelectionGeometry
from bokeh.palettes import Greys256
r_Greys256 = list(reversed(Greys256))
# The remote_jupyter_proxy_url function is required when running on a BinderHub instance.
# Change the default_jupyter_url value to match the hostname of your instance after it has
# started. The current value is the most frequent result when launching from mybinder.org.
# Note that default_jupyter_url must be imported from bokeh_phon.utils in order for it to be
# available to the remote_jupyter_proxy_url function.
# Change to None if running locally.
#default_jupyter_url = None
output_notebook()
params = {
'low_thresh_power': 13
}
def myapp(doc):
def load_wav_cb(attr, old, new):
if not new.endswith('.wav'):
return
snd = parselmouth.Sound(new)
if snd.n_channels > 1:
snd = snd.convert_to_mono()
playvisbtn.channels = channels
playvisbtn.visible = True
playselbtn.channels = channels
playselbtn.visible = True
sgrams[0] = snd.to_spectrogram()
spec0img.glyph.dw = sgrams[0].x_grid().max()
spec0img.glyph.dh = sgrams[0].y_grid().max()
playvisbtn.fs = snd.sampling_frequency
playvisbtn.start = snd.start_time
playvisbtn.end = snd.end_time
playselbtn.fs = snd.sampling_frequency
playselbtn.start = 0.0
playselbtn.end = 0.0
n_chan = snd.values.shape[0]
source.data = dict(
seconds=snd.ts().astype(np.float32),
ch0=snd.values[0,:].astype(np.float32),
)
ch0.visible = True
spec0cmap.low = _low_thresh()
specsource.data = dict(
sgram0=[sgrams[0].values.astype(np.float32)]
)
spec0.visible = True
def x_range_cb(attr, old, new):
if attr == 'start':
playvisbtn.start = new
elif attr == 'end':
playvisbtn.end = new
def selection_cb(e):
'''Handle data range selection event.'''
playselbtn.start = e.geometry['x0']
playselbtn.end = e.geometry['x1']
selbox.left = e.geometry['x0']
selbox.right = e.geometry['x1']
selbox.visible = True
def low_thresh_cb(attr, old, new):
params['low_thresh_power'] = new
spec0cmap.low = _low_thresh()
def _low_thresh():
return sgrams[0].values.min() \
+ sgrams[0].values.std()**params['low_thresh_power']
datadir = '../resource'
fdf = dir2df(datadir, fnpat=r'.*\.wav$')
fdf['fpath'] = [
os.path.normpath(
os.path.join(datadir, relname)
) for relname in fdf.relpath.str.cat(fdf.fname, sep='/')
]
options = [('', 'Choose an audio file to display')]
options.extend(
list(fdf.loc[:,['fpath', 'fname']].itertuples(index=False, name=None))
)
fselect = Select(options=options, value='')
fselect.on_change('value', load_wav_cb)
source = ColumnDataSource(data=dict(seconds=[], ch0=[]))
channels = ['ch0']
playvisbtn = AudioButton(
label='Play visible signal', source=source, channels=channels,
visible=False
)
playselbtn = AudioButton(
label='Play selected signal', source=source, channels=channels,
visible=False
)
# Instantiate and share specific select/zoom tools so that
# highlighting is synchronized on all plots.
boxsel = BoxSelectTool(dimensions='width')
boxzoom = BoxZoomTool(dimensions='width')
zoomin = ZoomInTool(dimensions='width')
zoomout = ZoomOutTool(dimensions='width')
crosshair = CrosshairTool(dimensions='height')
shared_tools = [
'xpan', boxzoom, boxsel, crosshair, 'undo', 'redo',
zoomin, zoomout, 'save', 'reset'
]
figargs = dict(
tools=shared_tools,
)
ch0 = figure(name='ch0', tooltips=[("time", "$x{0.0000}")], **figargs)
ch0.line(x='seconds', y='ch0', source=source, nonselection_line_alpha=0.6)
# Link pan, zoom events for plots with x_range.
ch0.x_range.on_change('start', x_range_cb)
ch0.x_range.on_change('end', x_range_cb)
ch0.on_event(SelectionGeometry, selection_cb)
low_thresh = 0.0
sgrams = [np.ones((1, 1))]
specsource = ColumnDataSource(data=dict(sgram0=[sgrams[0]]))
spec0 = figure(
name='spec0',
x_range=ch0.x_range, # Keep times synchronized
tooltips=[("time", "$x{0.0000}"), ("freq", "$y{0.0000}"), ("value", "@sgram0{0.000000}")],
**figargs
)
spec0.x_range.range_padding = spec0.y_range.range_padding = 0
spec0cmap = LogColorMapper(palette=r_Greys256)
low_thresh_slider = Slider(
start=5.0, end=25.0, step=0.25, value=params['low_thresh_power'], title=None
)
spec0img = spec0.image(
image='sgram0',
x=0, y=0,
color_mapper=spec0cmap,
level='image',
source=specsource
)
spec0.grid.grid_line_width = 0.0
low_thresh_slider.on_change('value', low_thresh_cb)
selbox = BoxAnnotation(
name='selbox',
left=0.0, right=0.0,
fill_color='green', fill_alpha=0.1,
line_color='green', line_width=1.5, line_dash='dashed',
visible=False
)
ch0.add_layout(selbox)
spec0.add_layout(selbox)
spec0.on_event(SelectionGeometry, selection_cb)
grid = gridplot(
[ch0, spec0],
ncols=1,
plot_height=200,
toolbar_location='left',
toolbar_options={'logo': None},
merge_tools=True
)
mainLayout = column(
fselect, playvisbtn, playselbtn, grid, low_thresh_slider,
name='mainLayout'
)
doc.add_root(mainLayout)
return doc
# The notebook_url parameter is required when running in a BinderHub instance.
# If running a local notebook, omit that parameter.
if default_jupyter_url is None:
show(myapp) # For running a local notebook
else:
show(myapp, notebook_url=remote_jupyter_proxy_url_callback)
```
# Grouping your data
```
import warnings
warnings.simplefilter('ignore', FutureWarning)
import matplotlib
matplotlib.rcParams['axes.grid'] = True # show gridlines by default
%matplotlib inline
import pandas as pd
```
In last week’s modules, you saw how to merge two datasets containing a common column to create a
single, combined dataset. Combining datasets allows us to make comparisons across
datasets, as you discovered when looking for correlations between GDP and life
expectancy.
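As a reminder of what that looks like, here is a minimal sketch of such a merge; the tables, column names, and numbers below are invented purely for illustration.

```python
import pandas as pd

# Two small illustrative tables sharing a 'country' column (invented values).
gdp_tbl = pd.DataFrame({'country': ['GB', 'CN'], 'gdp': [2.7, 9.6]})
life_tbl = pd.DataFrame({'country': ['GB', 'CN'], 'life_exp': [81.0, 75.8]})

# Merging on the common column produces one combined dataset.
combined = pd.merge(gdp_tbl, life_tbl, on='country')
print(combined)
```

By default `pd.merge` keeps only rows whose key appears in both tables (an inner join); the `how` parameter selects other join types.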
In this week’s modules, you’ll learn how to go the other way, separating out distinct ‘subsets’ or groups
of data, before summarising them individually.
As well as splitting out different groups of data, row and column values can be rearranged
to reshape a dataset and allow the creation of a wide range of pivot table style reports
from a single data table.
In this week’s tasks, you’ll learn how a single line of code can be used to generate a
wide variety of pivot table style reports of your own.
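As a taste of what that looks like, the sketch below reshapes a small long-format table into a pivot table style report in a single line; the table and its values are made up for illustration.

```python
import pandas as pd

# A small long-format table (invented values): one row per country per year.
long_df = pd.DataFrame({
    'country': ['GB', 'GB', 'CN', 'CN'],
    'year': [2012, 2013, 2012, 2013],
    'gdp': [2.6, 2.7, 8.5, 9.6],
})

# One line turns it into a pivot-table-style report:
# countries as rows, years as columns.
report = long_df.pivot_table(index='country', columns='year', values='gdp')
print(report)
```

When several rows fall into the same cell, `pivot_table` aggregates them (the mean by default); an `aggfunc` argument selects a different summary.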
One of the ways you were shown for loading World Bank data into the notebook last week
was to use the **download()** function.
One way to find out for yourself what sorts of argument a function expects is to ask it.
Running a code cell containing a question mark (?) followed by a function name should
pop up a help area at the bottom of the notebook window. (Close it using the x in the top
right-hand corner of the panel.)
```
if pd.__version__.startswith('0.23'):
# this solves an incompatibility between pandas 0.23 and datareader 0.6
# taken from https://stackoverflow.com/questions/50394873/
pd.core.common.is_list_like = pd.api.types.is_list_like
from pandas_datareader.wb import download
?download
```
The function documentation tells you that you can enter a list of one or more country
names using standard country codes as well as a date range. You can also calculate a
date range from a single date to show the **N** years of data leading up to a particular year.
```
YEAR = 2013
GDP_INDICATOR = 'NY.GDP.MKTP.CD'
gdp = download(indicator=GDP_INDICATOR, country=['GB','CN'],
start=YEAR-5, end=YEAR)
gdp = gdp.reset_index()
gdp
```
Although many datasets that you are likely to work with are published in the form of a
single data table, such as a single CSV file or spreadsheet worksheet, it is often possible
to regard the dataset as being made up from several distinct subsets of data.
In the above example, you will probably notice that each country name appears in several
rows, as does each year. This suggests that we can make different sorts of comparisons
between different groupings of data using just this dataset. For example, compare the
total GDP of each country calculated over the six years 2008 to 2013 using just a single
line of code:
```
gdp.groupby('country')['NY.GDP.MKTP.CD'].aggregate(sum)
```
Essentially what this does is to say ‘for each country, find the total GDP’.
The total combined GDP for those two countries in each year could be found by making
just one slight tweak to our code (can you see below where I made the change?):
```
gdp.groupby('year')['NY.GDP.MKTP.CD'].aggregate(sum)
```
That second calculation probably doesn’t make much sense in this particular case, but
what if there was another column saying which region of the world each country was in?
Then, by taking the data for all the countries in the world, the total GDP could be found for
each region by grouping on both the year and the region.
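A sketch of that idea, using an invented `region` column (all figures made up for illustration):

```python
import pandas as pd

# Invented GDP figures (trillions of US$) with a region column added.
gdp_df = pd.DataFrame({
    'country': ['GB', 'FR', 'CN', 'JP'],
    'region': ['Europe', 'Europe', 'Asia', 'Asia'],
    'year': [2013, 2013, 2013, 2013],
    'gdp': [2.7, 2.8, 9.6, 4.9],
})

# Grouping on both columns gives one total per (region, year) pair.
totals = gdp_df.groupby(['region', 'year'])['gdp'].sum()
print(totals)
```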
Next, you will consider ways of grouping data.
## Ways of grouping data
Think back to the weather dataset you used in an earlier week: how might you group that data
into several distinct groups? What sorts of comparisons could you make by grouping just
the elements of that dataset? Or how might you group and compare the GDP data?
One thing the newspapers love to report is weather ‘records’, such as the ‘hottest June
ever’ or the wettest location in a particular year as measured by total annual rainfall, or
highest average monthly rainfall. How easy is it to find that information from the data?
Or with the GDP data, if countries were assigned to economic groupings such as the
European Union, or regional groupings such as Africa, or South America, how would you
generate information such as lowest GDP in the EU or highest GDP in South America?
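Questions like these reduce to a groupby followed by an aggregate such as `min` or `idxmax`. A sketch with invented groupings and figures:

```python
import pandas as pd

# Invented GDP figures with an economic/regional grouping column.
gdp_df = pd.DataFrame({
    'country': ['DE', 'FR', 'BR', 'AR'],
    'grouping': ['EU', 'EU', 'South America', 'South America'],
    'gdp': [3.7, 2.8, 2.4, 0.6],
})

# Lowest GDP in each grouping...
print(gdp_df.groupby('grouping')['gdp'].min())

# ...and the row holding the highest GDP in each grouping.
top_idx = gdp_df.groupby('grouping')['gdp'].idxmax()
print(gdp_df.loc[top_idx, ['grouping', 'country', 'gdp']])
```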
You will learn how to split data into groups based on particular features of the
data, and then generate information about each separate group, across all of the groups,
at the same time.
**Activity: Grouping data**
Based on the data you have seen so far, or some other datasets you may be aware of,
what other ways of grouping data can you think of, and why might grouping data that
way be useful?
## Data that describes the world of trade
Let’s look at what sorts of things different
countries actually export to the UK.
For example, it might surprise you that India was the world’s largest exporter by value of
unset diamonds in 2014 (24 billion US dollars worth), or that Germany was the biggest
importer of chocolate (over $2.5 billion worth) in that same year.
National governments all tend to publish their own trade figures, but the UN also collects
data from across the world. In particular, the UN’s global trade database, Comtrade,
contains data about import and export trade flows between countries for a wide range of
goods and services.
So if you’ve ever wondered where your country imports most of its T-shirts from, or
exports most of its municipal waste to, **Comtrade** is likely to have the data.
In the next section, you will find out about the Comtrade data.
## Getting Comtrade data into your notebook
In this exercise, you will practice loading data from Comtrade into a pandas dataframe and getting it into a form where you can start to work with it.
The following steps and code are an example. Your task for this exercise is stated at the end, after the example.
The data is obtained from the [United Nations Comtrade](http://comtrade.un.org/data/) website, by selecting the following configuration:
- Type of Product: goods
- Frequency: monthly
- Periods: all of 2014
- Reporter: United Kingdom
- Partners: all
- Flows: imports and exports
- HS (as reported) commodity codes: 0401 (Milk and cream, neither concentrated nor sweetened) and 0402 (Milk and cream, concentrated or sweetened)
Clicking on 'Preview' results in a message that the data exceeds 500 rows. The data was downloaded using the *Download CSV* button and the downloaded file renamed appropriately.
```
LOCATION='comtrade_milk_uk_monthly_14.csv'
```
A URL for downloading all the data as a CSV file can also be obtained via "View API Link".
It must be modified so that it returns up to 5000 records (set `max=5000`) in the CSV format (`&fmt=csv`).
```
# LOCATION = 'http://comtrade.un.org/api/get?max=5000&type=C&freq=M&px=HS&ps=2014&r=826&p=all&rg=1%2C2&cc=0401%2C0402&fmt=csv'
```
Load the data in from the specified location, ensuring that the various codes are read as strings. Preview the first few rows of the dataset.
```
milk = pd.read_csv(LOCATION, dtype={'Commodity Code':str, 'Reporter Code':str})
milk.head(3)
```
Limit the columns to make the dataframe easier to work with by selecting just a subset of them.
```
COLUMNS = ['Year', 'Period','Trade Flow','Reporter', 'Partner', 'Commodity','Commodity Code','Trade Value (US$)']
milk = milk[COLUMNS]
```
Derive two new dataframes that separate out the 'World' partner data and the data for individual partner countries.
```
milk_world = milk[milk['Partner'] == 'World']
milk_countries = milk[milk['Partner'] != 'World']
```
You may wish to store a local copy as a CSV file, for example:
```
milk_countries.to_csv('countrymilk.csv', index=False)
```
To load the data back in:
```
load_test = pd.read_csv('countrymilk.csv', dtype={'Commodity Code':str, 'Reporter Code':str})
load_test.head(2)
```
If you are on a Windows computer, data files may sometimes be saved using a file encoding (*Latin-1*). Pandas may not recognise this by default, in which case you will see a `UnicodeDecodeError`.
In such cases, opening files in `read_excel()` or `read_csv()` using the parameter `encoding="ISO-8859-1"` or `encoding = "Latin-1"` should fix the problem. For example, edit the previous command to read:
`load_test = pd.read_csv('countrymilk.csv', dtype={'Commodity Code':str}, encoding="ISO-8859-1")`
### Subsetting Your Data
For large or heterogeneous datasets, it is often convenient to create subsets of the data. To further separate out the imports:
```
milk_imports = milk[milk['Trade Flow'] == 'Imports']
milk_countries_imports = milk_countries[milk_countries['Trade Flow'] == 'Imports']
milk_world_imports=milk_world[milk_world['Trade Flow'] == 'Imports']
```
### Sorting the data
Having loaded in the data, find the most valuable partners in terms of import trade flow during a particular month by sorting the data by *decreasing* trade value and then selecting the top few rows.
```
milkImportsInJanuary2014 = milk_countries_imports[milk_countries_imports['Period'] == 201401]
milkImportsInJanuary2014.sort_values('Trade Value (US$)',ascending=False).head(10)
```
### Task
To complete these tasks you could copy this notebook and amend the code or create a new notebook to do the analysis for your chosen data.
Using the [Comtrade Data website](http://comtrade.un.org/data/), identify a dataset that describes the import and export trade flows for a particular service or form of goods between your country (as reporter) and all ('All') the other countries in the world. Get the monthly data for all months in 2014.
Download the data as a CSV file and add the file to the same folder as the one containing this notebook. Load the data in from the file into a pandas dataframe. Create an easier to work with dataframe that excludes data associated with the 'World' partner. Sort this data to see which countries are the biggest partners in terms of import and export trade flow.
```
from alpaca import Telescope, Camera, FilterWheel
import ciboulette.base.ciboulette as Cbl
import ciboulette.sector.sector as Sct
import ciboulette.utils.ephemcc as Eph
import ciboulette.utils.exposure as Exp
import ciboulette.utils.planning as Pln
```
#### Initialization of objects
```
cbl = Cbl.Ciboulette()
ephcc = Eph.Ephemcc()
sct = Sct.Sector()
exp = Exp.Exposure()
planning = Pln.Planning('1Yc-QxFr9veMeGjqvedRMrcEDL2GRyTS_','planning.csv')
planningtable = planning.get()
```
#### Ciboulette tests
- Table test
- Filter test
```
table_cbl = cbl.ciboulettetable()
cbl.server = '192.168.1.18:11111'
cbl.server
print(table_cbl)
# Initialization FilterWheel
# Device 2 for simul
filterwheel = FilterWheel(cbl.server, 2)
filterwheel.position(0)
planningtable.pprint()
plan = planningtable[0]
cbl.setfilteralpaca(filterwheel,planning.getfilter(plan))
planning.getRA(plan),planning.getDEC(plan),planning.getfilter(plan),planning.getobservationID(plan),planning.getexptime(plan)
from astropy.table import Table
plan = Table()
plan['target_names'] = ['M1']
plan['s_ra'] = ['5.24']
plan['s_dec'] = ['42.64']
plan['t_exptime'] = ['300']
plan['obs_id'] = ['251']
plan['binning'] = ['1']
plan['filters'] = ['SA200']
plan['dataproduct_type'] = ['light']
plan['obs_title'] = ['none']
planning.getRA(plan)[0],planning.getDEC(plan)[0],planning.getfilter(plan)[0],planning.getobservationID(plan)[0]
```
#### Ephemcc class tests
The Alpaca class is used to test the movement of the telescope
```
cbl.site_lat = 40.235
ephcc.observer = cbl.ephemccgetobserver()
ephcc.observer
ephcc.setep('2021-02-11T22:00:00')
ephcc.setndb(10)
print(ephcc.get())
from astropy import units as u
from astropy.coordinates import SkyCoord, Angle
from astropy.io.votable import parse_single_table
table = parse_single_table(ephcc.filename).to_table()
table
c = SkyCoord(table['ra'], table['dec'], unit='deg', frame='icrs')
c.ra.degree*15,c.dec.degree
# Initialisation Telescope
cbl.device = 1
telescope = Telescope(cbl.server, cbl.device)
telescope.sitelatitude(cbl.site_lat)
telescope.sitelongitude(cbl.site_long)
telescope.unpark()
telescope.tracking(True)
RA = c.ra.deg[0]
DEC = c.dec.deg[0]
RA = 4.00
DEC = 36.22
# Slew the telescope to the coordinates
telescope.slewtocoordinates(RA,DEC)
```
#### Check software tests
```
sectortable = sct.readarchives(cbl.archive_table)
cbl.ra = 5.58
cbl.dec = -5.36
cbl.projections(sectortable)
cbl.starmap()
```
#### Planning tests
Reads a Google Drive file and transforms it into a planning table
```
planningtable[0:3]
plan = planningtable[2]
exp.setexptime(planning.getexptime(plan))
exp.setnumber(planning.getobservationID(plan))
exp.getexptime(),exp.getnumber()
```
#### Exposure class tests
```
exp.todaytonumber()
exp.getnumber()
exp.setexptime(200)
exp.getexptime()
```
#### INDICLIENT tests
```
from ciboulette.indiclient.camera import ASICam120Mini
indi_server = '192.168.1.30'
indi_port = 7624
ccd = ASICam120Mini(indi_server,indi_port)
ccd.connect()
obs = ccd.observer
print(obs)
info = ccd.ccd_info
print(info)
hdul = ccd.expose(exptime=20.0)
hdul.info()
hdul[0].data
hdul[0].header
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15,7))
ax = fig.add_subplot(111)
fig.add_axes(ax)
ax.grid(b = False)
plt.imshow(hdul[0].data, origin='lower', cmap='gray',vmin = 16, vmax = 250)
plt.show()
ccd.quit()
```
#### FILTERWheel test
```
from ciboulette.indiclient.filterwheel import FILTERWheelSimulator
indi_server = '192.168.1.30'
indi_port = 7624
filterwheel = FILTERWheelSimulator(indi_server,indi_port)
filterwheel.connect()
filterwheel.filter_name
filterwheel.connected
filterwheel.filter
filterwheel.filters
filterwheel.filter = 4
filterwheel.filter
filterwheel.filtername = 'OIII'
filterwheel.filter
filterwheel.filtername = 'OII'
filterwheel.filter
from ciboulette.indiclient.filterwheel import FILTERWheelATIK
indi_server = '192.168.1.30'
indi_port = 7624
filterwheel = FILTERWheelATIK(indi_server,indi_port)
filterwheel.connect()
filterwheel.filter
filterwheel.filters
filterwheel.filternames = ['L', 'CLS', 'Red', 'G', 'PLR', '328-742nm', 'OIII', 'Visible', 'SA200']
filterwheel.filters
filterwheel.quit()
```
#### ATIK CCD test
```
from ciboulette.indiclient.camera import ATIKCam383L
indi_server = '192.168.1.30'
indi_port = 7624
ATIKccd = ATIKCam383L(indi_server,indi_port)
ATIKccd.connect()
ATIKccd.observer = 'CAM2'
obs = ATIKccd.observer
print(obs)
ATIKccd.updir = '/home/ubuntu/lab/dataset'
ATIKccd.prefix = obs + '_' + 'XXX'
print(ATIKccd.prefix),print(ATIKccd.updir)
info = ATIKccd.ccd_info
print(info)
t = ATIKccd.temperature
print(t)
ATIKccd.local
ATIKccd.client
ATIKccd.both
hdul = ATIKccd.expose(exptime=20.0)
hdul[0].data
hdul[0].header
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(15,7))
ax = fig.add_subplot(111)
fig.add_axes(ax)
ax.grid(b = False)
plt.imshow(hdul[0].data, origin='lower', cmap='gray',vmin = 200, vmax = 1000)
plt.show()
ATIKccd.quit()
```
```
%load_ext rpy2.ipython
%matplotlib inline
from fbprophet import Prophet
import pandas as pd
import logging
logging.getLogger('fbprophet').setLevel(logging.ERROR)
import warnings
warnings.filterwarnings("ignore")
%%R
library(prophet)
```
### Forecasting Growth
By default, Prophet uses a linear model for its forecast. When forecasting growth, there is usually some maximum achievable point: total market size, total population size, etc. This is called the carrying capacity, and the forecast should saturate at this point.
Prophet allows you to make forecasts using a [logistic growth](https://en.wikipedia.org/wiki/Logistic_function) trend model, with a specified carrying capacity. We illustrate this with the log number of page visits to the [R (programming language)](https://en.wikipedia.org/wiki/R_%28programming_language%29) page on Wikipedia:
```
%%R
df <- read.csv('../examples/example_wp_log_R.csv')
df = pd.read_csv('../examples/example_wp_log_R.csv')
```
We must specify the carrying capacity in a column `cap`. Here we will assume a particular value, but this would usually be set using data or expertise about the market size.
```
%%R
df$cap <- 8.5
df['cap'] = 8.5
```
The important things to note are that `cap` must be specified for every row in the dataframe, and that it does not have to be constant. If the market size is growing, then `cap` can be an increasing sequence.
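For illustration only (a real capacity would be set from data or market expertise, and the dates below are invented), an increasing `cap` column could be built like this:

```python
import numpy as np
import pandas as pd

# A toy history of daily observations; real data would come from the CSV above.
toy_df = pd.DataFrame({'ds': pd.date_range('2015-01-01', periods=100)})

# An increasing carrying capacity: one value per row, ramping from 8.5 to 10.
toy_df['cap'] = np.linspace(8.5, 10.0, len(toy_df))
print(toy_df['cap'].iloc[0], toy_df['cap'].iloc[-1])
```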
We then fit the model as before, except pass in an additional argument to specify logistic growth:
```
%%R
m <- prophet(df, growth = 'logistic')
m = Prophet(growth='logistic')
m.fit(df)
```
We make a dataframe for future predictions as before, except we must also specify the capacity in the future. Here we keep capacity constant at the same value as in the history, and forecast 5 years into the future:
```
%%R -w 10 -h 6 -u in
future <- make_future_dataframe(m, periods = 1826)
future$cap <- 8.5
fcst <- predict(m, future)
plot(m, fcst)
future = m.make_future_dataframe(periods=1826)
future['cap'] = 8.5
fcst = m.predict(future)
fig = m.plot(fcst)
```
The logistic function has an implicit minimum of 0, and will saturate at 0 the same way that it saturates at the capacity. It is possible to also specify a different saturating minimum.
### Saturating Minimum
The logistic growth model can also handle a saturating minimum, which is specified with a column `floor` in the same way as the `cap` column specifies the maximum:
```
%%R -w 10 -h 6 -u in
df$y <- 10 - df$y
df$cap <- 6
df$floor <- 1.5
future$cap <- 6
future$floor <- 1.5
m <- prophet(df, growth = 'logistic')
fcst <- predict(m, future)
plot(m, fcst)
df['y'] = 10 - df['y']
df['cap'] = 6
df['floor'] = 1.5
future['cap'] = 6
future['floor'] = 1.5
m = Prophet(growth='logistic')
m.fit(df)
fcst = m.predict(future)
fig = m.plot(fcst)
```
To use a logistic growth trend with a saturating minimum, a maximum capacity must also be specified.
```
import os
import lmdb
import caffe
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from snntoolbox.io_utils.common import to_categorical
path_to_dataset = '/home/rbodo/.snntoolbox/Datasets/roshambo'
lmdb_env = lmdb.open(path_to_dataset)
lmdb_txn = lmdb_env.begin()
lmdb_cursor = lmdb_txn.cursor()
x_test = []
y_test = []
for key, value in lmdb_cursor:
datum = caffe.proto.caffe_pb2.Datum()
datum.ParseFromString(value)
x_test.append(caffe.io.datum_to_array(datum) / 255.)
y_test.append(int(datum.label))
# Separate background samples
x_test_no_background = [x for i, x in enumerate(x_test) if y_test[i] != 3]
y_test_no_background = [y for y in y_test if y != 3]
x_test_background = [x for i, x in enumerate(x_test) if y_test[i] == 3]
y_test_background = [y for y in y_test if y == 3]
len(y_test_no_background)
len(y_test_background)
len(y_test)
# np.savez_compressed(os.path.join(path_to_dataset, 'frames_background', 'x_norm'), np.array(x_test_background, dtype='float32'))
np.savez_compressed(os.path.join(path_to_dataset, 'frames_background', 'x_test'), np.array(x_test_background, dtype='float32'))
np.savez_compressed(os.path.join(path_to_dataset, 'frames_background', 'y_test'), np.array(to_categorical(y_test_background, 4), dtype='float32'))
# np.savez_compressed(os.path.join(path_to_dataset, 'frames_no_background', 'x_norm'), np.array(x_test_no_background, dtype='float32'))
np.savez_compressed(os.path.join(path_to_dataset, 'frames_no_background', 'x_test'), np.array(x_test_no_background, dtype='float32'))
np.savez_compressed(os.path.join(path_to_dataset, 'frames_no_background', 'y_test'), np.array(to_categorical(y_test_no_background, 4), dtype='float32'))
# np.savez_compressed(os.path.join(path_to_dataset, 'frames_testset', 'x_norm'), np.array(x_test, dtype='float32'))
np.savez_compressed(os.path.join(path_to_dataset, 'frames_testset', 'x_test'), np.array(x_test, dtype='float32'))
np.savez_compressed(os.path.join(path_to_dataset, 'frames_testset', 'y_test'), np.array(to_categorical(y_test, 4), dtype='float32'))
plt.plot(y_test, '.')
class_label_idx = {'0': 'paper', '1': 'scissors', '2': 'rock', '3': 'background'}
print(class_label_idx)
class_sizes = [len(np.nonzero(np.array(y_test)==i)[0]) for i in [0, 2, 3, 1]]
print('Class boundaries: {}'.format(np.cumsum(class_sizes)))
i = 9422
plt.imshow(x_test[i][0])
print(y_test[i])
path = '/home/rbodo/.snntoolbox/data/roshambo'
prototxt = os.path.join(path, 'NullHop.prototxt')
caffemodel = os.path.join(path, 'NullHop.caffemodel')
model = caffe.Net(prototxt, 1, weights=caffemodel)
# To be executed in the terminal running python2
import sys
sys.path.remove('/mnt/2646BAF446BAC3B9/Repositories/caffe/python')
sys.path.append('/mnt/2646BAF446BAC3B9/Repositories/caffe_lp/python')
import caffe
from caffe.quantization.net_descriptor import net_prototxt
net = net_prototxt()
caffe_root = '/mnt/2646BAF446BAC3B9/Repositories/caffe_lp/'
weight_dir = '/home/rbodo/Downloads/'#'/home/rbodo/.snntoolbox/data/roshambo/' # Location of caffemodel
save_dir = '/home/rbodo/.snntoolbox/data/roshambo/standard_caffe/'
model_dir = 'examples/low_precision/' # Put prototxt file there, and append '_deploy' to filename.
net_descr = net.extract('NullHop', caffe_root=caffe_root, model_dir=model_dir, weight_dir=weight_dir)
net.create('NullHop', net_descr, lp=False, deploy=True, caffe_root=caffe_root, model_dir=model_dir, save_dir=save_dir)
# In output file, remove dropout layer (change 'bottom' parameter of following layer) and adapt input layer
net1 = caffe.Net(caffe_root+model_dir+'NullHop_deploy.prototxt', 1, weights=weight_dir+'NullHop/NullHop.caffemodel')
net2 = caffe.Net(save_dir+'NullHop_deploy.prototxt', 1)
for k1, k2 in zip(net1.params.keys(), net2.params.keys()):
net2.params[k2][0] = net1.params[k1][1]
net2.params[k2][1] = net1.params[k1][3]
net2.save(save_dir+'NullHop.caffemodel')
# Generate and save Poisson input
num_to_test = 15876
batch_size = 50
num_batches = int(np.floor(num_to_test / batch_size))
input_rate = 1000
rescale_fac = 1000 / input_rate
duration = 150
num_poisson_events_per_sample = -1
input_b_l_t = np.empty((batch_size, 1, 64, 64, duration), bool)  # np.bool is deprecated; use builtin bool
for batch_idx in range(num_batches):
x_b = np.array(x_test[batch_size*batch_idx: batch_size*(batch_idx+1)])
input_spikecount = 0
for sim_step in range(duration):
if input_spikecount < num_poisson_events_per_sample \
or num_poisson_events_per_sample < 0:
spike_snapshot = np.random.random_sample(x_b.shape) * rescale_fac
inp = (spike_snapshot <= np.abs(x_b)).astype('float32')
input_spikecount += np.count_nonzero(inp) / batch_size
else:
inp = np.zeros(x_b.shape)
input_b_l_t[Ellipsis, sim_step] = inp
np.savez_compressed('/home/rbodo/Downloads/' + str(batch_idx), input_b_l_t=input_b_l_t)
```
```
import json
import re  # used by extract_id below
import pandas as pd
import numpy as np
import qgrid
from _vars import *
import os
import sys
import zipfile
zip_filepath="json_archive.zip" #input parameter
target_dir="extracted_files/" #input parameter
files_to_extract = ['40171448_final.json', '10171448_final.json', '14171448_final.json'] #WRITE HERE FILENAMES
def unzip(zip_filepath, target_dir, files_to_extract):
"""
Unzips selected files from a zip archive into the given destination (the folder is
created if it does not exist). Only the filenames listed in files_to_extract are extracted.
:param zip_filepath: path to the zip archive to extract from.
:param target_dir: destination folder.
:param files_to_extract: list of filenames to extract
:return: nothing
"""
if not os.path.exists(target_dir):
os.makedirs(target_dir)
try:
zfile = zipfile.ZipFile(zip_filepath, 'r')
for filename in files_to_extract:
print('Extracting {f} to {d}'.format(f=filename, d=target_dir))
zfile.extract(filename, target_dir)
except zipfile.BadZipfile:
print("Cannot extract {f}: Not a valid zipfile (BadZipfile Exception)".format(f=zip_filepath))
unzip(zip_filepath, target_dir, files_to_extract)
'''Reading a file and putting the data into dataframe'''
with open('1206_final.json') as f:
data = json.load(f)
df = pd.DataFrame(data)
'''Extracting the identifiers of references and their types'''
def extract_id(token_string):
for _id in dict_of_ids.keys():
token_id = re.search(dict_of_ids[_id] , token_string)
if token_id is not None:
token_id = token_id.group()
ref = re.compile("([a-zA-Z=:|]*)([0-9].+)").match(token_id).groups()
return (_id, ref[1])
df['ref_ids'] = np.empty((len(df), 0)).tolist()
df['ref_id_ins'] = None
df['ref_ids_type'] = np.empty((len(df), 0)).tolist()
for i in range(len(df)):
ref_list = []
for seq in df['change_sequence'][i]:
ref = extract_id("".join(seq['tokens']))
        if ref is not None and df.loc[i, 'ref_id_ins'] is None:
            df.loc[i, 'ref_id_ins'] = seq['time']
        if ref is not None:
            ref_list.append((ref[0].replace("=", ""), ref[1]))
ref_list = set(ref_list)
df.at[i, 'ref_ids'] = [x[1] for x in ref_list]
df.at[i, 'ref_ids_type'] = [x[0] for x in ref_list]
```
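`extract_id` depends on `dict_of_ids` (imported from `_vars`, not shown here). A self-contained sketch with a hypothetical pattern dictionary illustrates how it splits a matched token into an id type and its numeric part:

```python
import re

# Hypothetical patterns -- the real ones live in _vars.dict_of_ids
dict_of_ids = {
    'pmid': r'pmid=[0-9]+',
    'isbn': r'isbn:[0-9]+',
}

def extract_id(token_string):
    for _id in dict_of_ids:
        token_id = re.search(dict_of_ids[_id], token_string)
        if token_id is not None:
            token_id = token_id.group()
            # Split e.g. "pmid=12345" into a label part and a numeric part
            ref = re.compile(r"([a-zA-Z=:|]*)([0-9].+)").match(token_id).groups()
            return (_id, ref[1])

print(extract_id("see pmid=12345 for details"))  # ('pmid', '12345')
print(extract_id("no reference here"))           # None
```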
In this example we extract, for each reference id, the date of reference insertion, the date of id insertion, and the date of final reference deletion:
```
def getting_data(df):
df_upt = pd.DataFrame(df[['ref_ids','ref_ids_type', 'ref_id_ins']])
df_upt['ins_time'] = df['first_rev_time']
df_upt['del_time'] = 'None'
for i in df_upt.index:
if df['deleted'][i]:
            df_upt.loc[i, 'del_time'] = df['del_time'][i][-1]
return df_upt
df_upt = getting_data(df)
qgrid.show_grid(getting_data(df))
```
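Using the `.loc` indexer avoids pandas' chained-assignment pitfall when filling `del_time`. A toy frame (made-up columns mirroring `df`) shows the extraction end to end:

```python
import pandas as pd

# Toy frame mimicking the relevant columns of df: one row per page
df = pd.DataFrame({
    'deleted':  [True, False, True],
    'del_time': [['2019-01-01', '2019-06-01'], [], ['2020-03-15']],
    'first_rev_time': ['2018-01-01', '2018-02-01', '2018-03-01'],
})

df_upt = pd.DataFrame({'ins_time': df['first_rev_time']})
df_upt['del_time'] = 'None'
for i in df_upt.index:
    if df['deleted'][i]:
        # .loc writes in place and avoids the chained-assignment warning
        df_upt.loc[i, 'del_time'] = df['del_time'][i][-1]

print(df_upt['del_time'].tolist())  # ['2019-06-01', 'None', '2020-03-15']
```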
| github_jupyter |
```
import os
import sys
module_path = os.path.abspath(os.path.join('../../'))
print(module_path)
if module_path not in sys.path:
sys.path.append(module_path)
from pydub import AudioSegment
import soundfile as sf
from params import EXCERPT_LENGTH,INPUT_DIR_PARENT,OUTPUT_DIR
# sys.path.insert(0, './models/audioset')
# from vggish_params import EXAMPLE_HOP_SECONDS
from pre_process_func import cal_sample_size,iterate_for_waveform_to_examples
import os
import math
from pathlib import Path
import numpy as np
import sys
sys.path.insert(0, './models/audioset')
from models.audioset import vggish_slim
from models.audioset import vggish_params
from models.audioset import vggish_input
from models.audioset import vggish_postprocess
src="/tank/data/nna/real/11A/"
file="/tank/data/nna/real/11A/S4A10276_20190514_153000.flac"
# flac_file = AudioSegment.from_file(file, "flac")
# ffmpeg -i /tank/data/nna/real/11A/S4A10276_20190514_153000.flac -map 0 -c copy -f segment -segment_time "02:00:00" ./output_%03d.flac
flacs=os.listdir("/tank/data/nna/real/11A")
wav_data, sr = sf.read(Path(src+flacs[0]),dtype='int16')
import soundfile as sf
wav_data, sr = sf.read(Path("/scratch/enis/data/nna/real/39B/S4A10262_20190611_101602_segments/output000.flac"),dtype='int16')
sf.__version__
for flac in flacs[1:2]:
wav_data, sr = sf.read("/scratch/enis/data/nna/real/27A/S4A10251_20190504_000000_segments/output000.flac",dtype='int16')
sample_size,offset,remainder_wav_data,lower_limit=cal_sample_size(wav_data,sr)
print(sample_size,offset,remainder_wav_data,lower_limit)
sound=iterate_for_waveform_to_examples2(wav_data,sr)
print(sound.shape)
len(wav_data)/offset
def iterate_for_waveform_to_examples2(wav_data,sr):
"""Wrapper for waveform_to_examples from models/audioset/vggish_input.py
Iterate over data with 10 seconds batches, so waveform_to_examples produces
stable results (equal size)
read **(16/06/2019)** at Project_logs.md for explanations.
Args:
wav_data (numpy.array): audio data in wav format
sr (int): sampling rate of the audio
Returns:
See waveform_to_examples.
"""
sample_size,offset,remainder_wav_data,lower_limit=cal_sample_size(wav_data,sr)
# in this loop wav_data jumps offset elements and sound jumps EXCERPT_LENGTH*2
# because offset number of raw data turns into EXCERPT_LENGTH*2 pre-processed
sound=np.zeros((sample_size,96,64),dtype=np.float32)
count=0
print(len(wav_data))
for i in range(0,len(wav_data),offset):
#this is when wav_data%offset!=0
# numpy indexing handles bigger indexes
# i+offset>len(wav_data) means that we are on the last loop
# then if there is enough remaind data, process it otherwise not
if i+offset>len(wav_data) and remainder_wav_data<lower_limit:
continue
# left data is smaller than 22712, we cannot pre-process
# if smaller than 42998, will be 0 anyway
a_sound= vggish_input.waveform_to_examples(wav_data[i:i+(offset)], sr)
sound[count:(count+a_sound.shape[0]),:,:]=a_sound[:,:,:]
count+=a_sound.shape[0]
return sound
print(len(data),data.shape,sound.shape)
# vggfile="/scratch/enis/data/nna/NUI_DATA/01 Itkillik/August 2016/ITKILLIK1_20160727_135107_vgg/ITKILLIK1_20160727_135107_rawembeddings049.npy"
# vggnp=np.load(vggfile)
# vggfile="/scratch/enis/data/nna/NUI_DATA/01 Itkillik/August 2016/ITKILLIK1_20160727_135107_vgg/ITKILLIK1_20160727_135107_rawembeddings048.npy"
# vggnp8=np.load(vggfile)
# vggnp[-1]
# vggnp8[-1]
# vggfile="/scratch/enis/data/nna/NUI_DATA/01 Itkillik/August 2016/ITKILLIK1_20160731_171332_vgg/ITKILLIK1_20160731_171332_rawembeddings048.npy"
# vggnp8_2=np.load(vggfile)
# vggnp8_2[-1]
# name="ITKILLIK1_20160825_132756"
# vggfile="/scratch/enis/data/nna/NUI_DATA/01 Itkillik/August 2016/{}_vgg/{}_rawembeddings049.npy".format(name,name)
# vggnp9_2=np.load(vggfile)
# vggnp9_2[-1]
# vggnp9_2[-1]
wav_data=data[:]
sr=samplerate
vggish_params.EXAMPLE_HOP_SECONDS=0.96
initial=-1
# 22712 0 starts ?? 22712
# 42998 1 starts ?? 42998
# 63284 2 starts ?? 85334
# 3 # ?? 127734
# 63284-42998 = 20286 ! ?? 42336
# 297014
# 8 starts 339350
# 9 starts 381686
# 10 starts 424022
# 11 starts 466358
# 22712+(seconds*20286)=len_wav_data
for index in range(1,11):
for i in range(int((sr*0.96*index)+(719)),(EXCERPT_LENGTH+1)*sr,1):
# print(i)
a_sound= vggish_input.waveform_to_examples(wav_data[0:i], sr)
# print(i," ",a_sound.shape)
if a_sound.shape[0]==index:
print("shape is",a_sound.shape[0],"starting from ",i)
initial=a_sound.shape[0]
break
if i%1==0:
print(i)
# break
# for 0.96
# shape is 1 starting from 42998
# shape is 2 starting from 85334
# shape is 3 starting from 127670
# shape is 4 starting from 170006
# shape is 5 starting from 212342
# shape is 6 starting from 254678
# shape is 7 starting from 297014
# shape is 8 starting from 339350
# shape is 9 starting from 381686
# shape is 10 starting from 424022
# shape is 11 starting from 466358
# for flac with 48000 sampling rate
# shape is 1 starting from 46800
# shape is 2 starting from 92880
import time
import resampy
s=time.time()
if len(data.shape) > 1:
datas1 = np.mean(data[:213024000], axis=1)
datas = resampy.resample(datas1, 41000, 16000)
e=time.time()
t1=(e-s)
s=time.time()
if len(data.shape) > 1:
datas1 = np.mean(data[:4100000], axis=1)
datas2 = resampy.resample(datas1, 41000, 16000)
e=time.time()
t2=(e-s)
print((213024000/4100000)*t2/t1)
from params import LOGS_FILE
import pre_process_func
from pathlib import Path
from subprocess import Popen, PIPE
import numpy as np
TEST_DIR=Path("tests/")
EXAMPLE_MODELS_DIR=TEST_DIR / "example_models/"
EXAMPLE_OUTPUT_DIR= TEST_DIR / "example_outputs"
OUTPUT_DIR=Path("aggregates")
root_dir="/tank/data/nna/real/"
# input_path_list=[root_dir+"tests/data/3hours30min.mp3",]
input_path_list=[root_dir+"39B/S4A10262_20190611_101602.flac",]
pre_process_func.parallel_pre_process(input_path_list,
output_dir="./tests/data/output",
input_dir_parent=root_dir,
cpu_count=2,
segment_len="02:00:00",
logs_file_path=LOGS_FILE)
```
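The core batching pattern in `iterate_for_waveform_to_examples2` — jump through the data in `offset`-sized steps and keep the trailing remainder only when it is long enough — can be isolated like this (the threshold values are illustrative, not the real `cal_sample_size` output):

```python
import numpy as np

def window_with_remainder(data, offset, lower_limit):
    """Collect offset-sized chunks; keep the final short chunk only if it
    has at least lower_limit samples (mirrors the loop above)."""
    chunks = []
    for i in range(0, len(data), offset):
        chunk = data[i:i + offset]
        if len(chunk) < offset and len(chunk) < lower_limit:
            continue  # trailing remainder too short to process
        chunks.append(chunk)
    return chunks

print([len(c) for c in window_with_remainder(np.arange(25), 10, 3)])  # [10, 10, 5]
print([len(c) for c in window_with_remainder(np.arange(22), 10, 3)])  # [10, 10]
```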
| github_jupyter |
# Assess and Monitor QCs, Internal Standards, and Common Metabolites
## This notebook will guide you to
* ## Identify your files
* ## Specify the LC/MS method used
* ## Specify the text strings used to differentiate blanks, QCs, and experimental injections
* ## Populate the run log with the pass/fail outcome for each run
## Run each block below. Each will indicate "ok" when completed. Clearing all output prior to starting makes it easier to tell when cells have completed.
# 1. Import required packages
```
import sys
# sys.path.insert(0,'/global/homes/b/bpb/metatlas/' )
sys.path.insert(0,'/global/project/projectdirs/metatlas/anaconda/lib/python2.7/site-packages' )
import pandas as pd
import warnings
warnings.filterwarnings('ignore')
from metatlas import ms_monitor_util as mtools
%matplotlib inline
```
# 2. Select your experiment.
```
num_days = raw_input('How many days back to search: ')
experiment = mtools.get_recent_experiments(num_days = int(num_days))
```
# 3. Get files for that experiment.
```
files = mtools.get_files_for_experiment(experiment.value)
print len(files)
```
# 4. Get strings used in your file naming and the method you used.
```
qc_str,blank_str,neg_str,pos_str = mtools.get_blank_qc_pos_neg_string()
method = mtools.get_method_dropdown()
```
# 5. Get Data from Reference
### You can also view the source of these references [here](https://docs.google.com/a/lbl.gov/spreadsheets/d/1SCvTvVloqkrsvT5uoCLP4gGaFO_BolptkiT3uAk_exM/edit?usp=sharing "Title").
## 5a. Get the Data as a Dataframe to Explore on Your Own
```
df = mtools.get_ms_monitor_reference_data()
df.head()
```
## 5b. Get the data for proceeding onto steps 6 and 7
```
mtools = reload(mtools)
reference_data = mtools.filter_istd_qc_by_method(method.value)
print "ok"
```
# 6. Check that you have entered everything correctly by running the next cell
```
print "Method = ",method.value
print "Experiment = ",experiment.value
print len(files), " files queued for assessment"
print "filter strings are: ", qc_str.value, blank_str.value, pos_str.value, neg_str.value
print "parameters: ",reference_data['parameters']
##################################################################
##################################################################
##### YOU SHOULD NEVER HAVE TO UNCOMMENT AND RUN THIS BLOCK ######
# reference_data['parameters']['mz_ppm_tolerance'], reference_data['parameters']['rt_minutes_tolerance'] = mtools.get_rt_mz_tolerance_from_user()
##### YOU SHOULD NEVER HAVE TO UNCOMMENT AND RUN THIS BLOCK ######
##################################################################
##################################################################
```
# 7. Get values and make plots and tables. This saves an xls and figures to your current folder.
```
mtools = reload(mtools)
df = mtools.construct_result_table_for_files(files,qc_str,blank_str,neg_str,pos_str,method,reference_data,experiment)
```
# 8. Assert for each file if it has passed or failed QC
```
#TODO
```
| github_jupyter |
## Pipeline execution sequence
## <font color='blue'>Streaming Twitter data with MongoDB, Pandas and Scikit-Learn</font>
## Preparing the Twitter connection
```
# install the tweepy package
!pip install tweepy
# import the tweepy, datetime and json modules
# StreamListener: a listener that waits for incoming tweets
# OAuthHandler: handles authentication with Twitter
# Stream: continuous data stream
from tweepy.streaming import StreamListener
from tweepy import OAuthHandler
from tweepy import Stream
from datetime import datetime
import json
# define the keys
# consumer key
consumer_key = 'gvnrEENC1XyYoLeKdDieBSkVQ'
# consumer secret key
consumer_secret = 'QY1zP0JuPpEofQJRFSlkEXYhIERP6dhK5kZqfpi9rkLf4lnNIg'
# access token
acess_token = '1231963153845628929-53nTZACvlRPHc1qHD8XuUczzridaEZ'
# access token secret
acess_token_secret = 'KDIrbgbkQyDaAvoBKzThA1H5g9gcLL4uA6JxGnnkeJsXg'
# create the authentication handler
auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(acess_token,acess_token_secret)
# create a class to capture streaming data from Twitter
# the MyListerner class inherits from StreamListener
class MyListerner(StreamListener):
def on_data(self, dados):
        '''
        Collect the incoming data, parse it as JSON,
        pick the tweet columns of interest,
        and store the record in the database.
        '''
tweet = json.loads(dados)
created_at = tweet['created_at']
id_str = tweet['id_str']
text = tweet['text']
obj = {'created_at': created_at, 'id_str': id_str, 'text': text,}
tweetind = col.insert_one(obj).inserted_id
print(obj)
return True
# create an instance of MyListerner
mylisterner = MyListerner()
# create a Stream instance
mystream = Stream(auth, listener = mylisterner)
```
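`on_data` receives each tweet as a raw JSON string. Its field extraction can be exercised without a live stream or a MongoDB connection by feeding it a canned payload (a minimal sketch; the values are made up):

```python
import json

# Canned payload standing in for one raw tweet from the stream
raw = json.dumps({
    'created_at': 'Mon Feb 24 12:00:00 +0000 2020',
    'id_str': '1234567890',
    'text': 'Big Data with Python',
})

# Same field extraction as MyListerner.on_data, minus the DB insert
tweet = json.loads(raw)
obj = {k: tweet[k] for k in ('created_at', 'id_str', 'text')}
print(obj['text'])  # Big Data with Python
```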
## Preparing the MongoDB connection
```
# import MongoClient from pymongo
from pymongo import MongoClient
# create the MongoDB connection
client = MongoClient('localhost', 27017)
# create the twetterdb database
db = client.twetterdb
# create the col collection
col = db.twetts
# create a list of keywords to search for in the tweets
keywords = ['Big Data', 'Python', 'Data Mining', 'Data Science']
```
## Collecting tweets
```
# start the filter and write matching tweets to MongoDB
mystream.filter(track = keywords)
# disconnect
mystream.disconnect()
# inspect one document in the collection
col.find_one()
```
## Data analysis with Pandas and Scikit-Learn
```
# build a dataset from the tweets collected and stored in MongoDB
# using col.find()
dataset = [{'created_at': item['created_at'], 'text': item['text'],} for item in col.find()]
# import pandas to work with the dataset in Python
# conventionally aliased as pd
import pandas as pd
# create a dataframe from the dataset
df = pd.DataFrame(dataset)
# print the dataframe
df
# import the scikit-learn module
# widely used for natural language processing
from sklearn.feature_extraction.text import CountVectorizer
# use CountVectorizer to build a document-term matrix:
# cv.fit_transform(df.text) counts how many times each word
# appears in the text column of the dataframe
cv = CountVectorizer()
count_matrix = cv.fit_transform(df.text)
# count the occurrences of the most frequent words in the dataset
# get_feature_names() returns the vocabulary
# (renamed get_feature_names_out() in scikit-learn >= 1.2)
# tolist() converts to a list; the top 50 are shown with word_count[:50]
word_count = pd.DataFrame(cv.get_feature_names(), columns=['word'])
word_count['count'] = count_matrix.sum(axis=0).tolist()[0]
word_count = word_count.sort_values('count', ascending=False).reset_index(drop=True)
word_count[:50]
```
## End
| github_jupyter |
```
# Useful for debugging
%load_ext autoreload
%autoreload 2
# Nicer plotting
import matplotlib
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
matplotlib.rcParams['figure.figsize'] = (8,4)
```
# Movie example using write_beam
Here we insert write_beam elements into an existing lattice, run, save the beams to an h5 file, and plot using openPMD-beamphysics tools.
```
from impact import Impact
from distgen import Generator
import numpy as np
import matplotlib.pyplot as plt
import os
IMPACT_IN = 'templates/apex_gun/ImpactT.in'
DISTGEN_IN = 'templates/apex_gun/distgen.yaml'
os.path.exists(IMPACT_IN)
G = Generator(DISTGEN_IN)
G['n_particle'] = 10000
G.run()
P0 = G.particles
P0.plot('x', 'y')
# Make Impact object
I = Impact(IMPACT_IN, initial_particles = P0, verbose=True)
# Change some things
I.header['Nx'] = 32
I.header['Ny'] = 32
I.header['Nz'] = 32
I.header['Dt'] = 1e-13
I.total_charge = P0['charge']
# Change stop location
I.stop = 0.1
# Make new write_beam elements and add them to the lattice.
from impact.lattice import new_write_beam
# Make a list of s
for s in np.linspace(0.001, 0.1, 98):
ele = new_write_beam(s=s, ref_eles=I.lattice) # ref_eles will ensure that there are no naming conflicts
I.add_ele(ele)
I.timeout = 1000
I.run()
len(I.particles)
```
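`bin_particles` is a thin wrapper around `np.histogram2d` with particle weights. A standalone check on synthetic coordinates (not Impact output) confirms the weighted histogram conserves total charge:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
x = rng.normal(size=n)          # synthetic transverse coordinates
y = rng.normal(size=n)
weight = np.full(n, 1e-12)      # equal macro-particle charge, in C

H, xedges, yedges = np.histogram2d(x, y, weights=weight, bins=40)

# Each bin sums the weights of the particles it contains,
# so the histogram total equals the total charge
print(np.isclose(H.sum(), weight.sum()))  # True
print(H.shape)                            # (40, 40)
```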
# Plot
```
from bokeh.plotting import show, figure, output_notebook
from bokeh.layouts import column, row
from bokeh.models import ColumnDataSource
from bokeh import palettes, colors
pal = palettes.Viridis[256]
white=colors.named.white
pal = list(pal)
pal[0] = white # replace 0 with white
pal = tuple(pal)
output_notebook(verbose=False, hide_banner=True)
import os
# Prepare histogram function
PL = I.particles
ilist = []
for k in PL:
if k.startswith('write_beam_'):
ilist.append(int(k.strip('write_beam_')))
def bin_particles(i, key1='x', key2='y', bins=40):
P = I.particles[f'write_beam_{i}']
return np.histogram2d(P[key1], P[key2], weights=P.weight, bins=bins)
bin_particles(100)
# Prepare a datasource for Bokeh
def bin_bunch_datasource_h5(i, key1, key2, bins=20, nice=True, liveOnly=True, liveStatus=1):
H, xedges, yedges = bin_particles(i, key1, key2, bins=bins)
xmin = min(xedges)
xmax = max(xedges)
ymin = min(yedges)
ymax = max(yedges)
#if nice:
# f1 = nice_phase_space_factor[component1]
# f2 = nice_phase_space_factor[component2]
# xlabel = nice_phase_space_label[component1]
# ylabel = nice_phase_space_label[component2]
# xmin *= f1
# xmax *= f1
# ymin *= f2
# ymax *= f2
#else:
# xlabel = component1
# ylabel = component2
# Form datasource
dat = {'image':[H.transpose()], 'xmin':[xmin], 'ymin':[ymin], 'dw':[xmax-xmin], 'dh':[ymax-ymin]}
dat['xmax'] = [xmax]
dat['ymax'] = [ymax]
ds = ColumnDataSource(data=dat)
return ds
ds = bin_bunch_datasource_h5(100, 'x', 'y')
plot = figure(#x_range=[xmin,xmax], y_range=[ymin,ymax],
# x_axis_label = xlabel, y_axis_label = ylabel,
plot_width=500, plot_height=500)
plot.image(image='image', x='xmin', y='ymin', dw='dw', dh='dh', source=ds,palette=pal)
show(plot)
```
# Interactive
```
from bokeh.models.widgets import Slider
from bokeh import palettes, colors
# interactive
def myapp2(doc):
bunches = ilist
doc.bunchi = bunches[0]
doc.component1 = 'z'
doc.component2 = 'x'
doc.xlabel = doc.component1
doc.ylabel = doc.component2
doc.bins = 100
#doc.range = FULLRANGE
ds = bin_bunch_datasource_h5(doc.bunchi, doc.component1, doc.component2,bins=doc.bins)
def refresh():
ds.data = dict(bin_bunch_datasource_h5(doc.bunchi, doc.component1, doc.component2,bins=doc.bins).data )
# Default plot
plot = figure(title='',
x_axis_label = doc.xlabel, y_axis_label = doc.ylabel,
plot_width=500, plot_height=500)
plot.image(image='image', x='xmin', y='ymin', dw='dw', dh='dh', source=ds, palette=pal)
def slider_handler(attr, old, new):
doc.bunchi = bunches[new]
refresh()
slider = Slider(start=0, end=len(bunches)-1, value=0, step=1, title='x')
slider.on_change('value', slider_handler)
# Add plot to end
doc.add_root(column(slider, plot))
show(myapp2)# , notebook_url=remote_jupyter_proxy_url)
# If there are multiple
import os
os.environ['BOKEH_ALLOW_WS_ORIGIN'] = 'localhost:8888'
%%time
I.archive()
```
| github_jupyter |
```
test_index = 0
from load_data import *
# load_data()
from load_data import *
X_train,X_test,y_train,y_test = load_data()
len(X_train),len(y_train)
len(X_test),len(y_test)
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
class Test_Model(nn.Module):
def __init__(self) -> None:
super().__init__()
self.c1 = nn.Conv2d(1,64,5)
self.c2 = nn.Conv2d(64,128,5)
self.c3 = nn.Conv2d(128,256,5)
self.fc4 = nn.Linear(256*10*10,256)
self.fc6 = nn.Linear(256,128)
self.fc5 = nn.Linear(128,4)
def forward(self,X):
preds = F.max_pool2d(F.relu(self.c1(X)),(2,2))
preds = F.max_pool2d(F.relu(self.c2(preds)),(2,2))
preds = F.max_pool2d(F.relu(self.c3(preds)),(2,2))
# print(preds.shape)
preds = preds.view(-1,256*10*10)
preds = F.relu(self.fc4(preds))
preds = F.relu(self.fc6(preds))
preds = self.fc5(preds)
return preds
device = torch.device('cuda')
BATCH_SIZE = 32
IMG_SIZE = 112
model = Test_Model().to(device)
optimizer = optim.SGD(model.parameters(),lr=0.1)
criterion = nn.CrossEntropyLoss()
EPOCHS = 125
from tqdm import tqdm
PROJECT_NAME = 'Weather-Clf'
import wandb
# test_index += 1
# wandb.init(project=PROJECT_NAME,name=f'test')
# for _ in tqdm(range(EPOCHS)):
# for i in range(0,len(X_train),BATCH_SIZE):
# X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device)
# y_batch = y_train[i:i+BATCH_SIZE].to(device)
# model.to(device)
# preds = model(X_batch.float())
# preds.to(device)
# loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long))
# optimizer.zero_grad()
# loss.backward()
# optimizer.step()
# wandb.log({'loss':loss.item()})
# wandb.finish()
# for index in range(10):
# print(torch.argmax(preds[index]))
# print(y_batch[index])
# print('\n')
class Test_Model(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1,16,5)
self.conv2 = nn.Conv2d(16,32,5)
self.conv3 = nn.Conv2d(32,64,5)
self.fc1 = nn.Linear(64*10*10,16)
self.fc2 = nn.Linear(16,32)
self.fc3 = nn.Linear(32,64)
self.fc4 = nn.Linear(64,32)
self.fc5 = nn.Linear(32,6)
def forward(self,X):
preds = F.max_pool2d(F.relu(self.conv1(X)),(2,2))
preds = F.max_pool2d(F.relu(self.conv2(preds)),(2,2))
preds = F.max_pool2d(F.relu(self.conv3(preds)),(2,2))
# print(preds.shape)
preds = preds.view(-1,64*10*10)
preds = F.relu(self.fc1(preds))
preds = F.relu(self.fc2(preds))
preds = F.relu(self.fc3(preds))
preds = F.relu(self.fc4(preds))
        preds = self.fc5(preds)  # final layer outputs logits; CrossEntropyLoss applies log-softmax itself
return preds
model = Test_Model().to(device)
optimizer = optim.SGD(model.parameters(),lr=0.1)
criterion = nn.CrossEntropyLoss()
# test_index += 1
# wandb.init(project=PROJECT_NAME,name=f'test-{test_index}')
# for _ in tqdm(range(EPOCHS)):
# for i in range(0,len(X_train),BATCH_SIZE):
# X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device)
# y_batch = y_train[i:i+BATCH_SIZE].to(device)
# model.to(device)
# preds = model(X_batch.float())
# preds.to(device)
# loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long))
# optimizer.zero_grad()
# loss.backward()
# optimizer.step()
# wandb.log({'loss':loss.item()})
# wandb.finish()
class Test_Model(nn.Module):
def __init__(self,conv1_output=16,conv2_output=32,conv3_output=64,fc1_output=16,fc2_output=32,fc3_output=64,activation=F.relu):
super().__init__()
self.conv3_output = conv3_output
self.conv1 = nn.Conv2d(1,conv1_output,5)
self.conv2 = nn.Conv2d(conv1_output,conv2_output,5)
self.conv3 = nn.Conv2d(conv2_output,conv3_output,5)
self.fc1 = nn.Linear(conv3_output*10*10,fc1_output)
self.fc2 = nn.Linear(fc1_output,fc2_output)
self.fc3 = nn.Linear(fc2_output,fc3_output)
self.fc4 = nn.Linear(fc3_output,fc2_output)
self.fc5 = nn.Linear(fc2_output,6)
self.activation = activation
def forward(self,X):
preds = F.max_pool2d(self.activation(self.conv1(X)),(2,2))
preds = F.max_pool2d(self.activation(self.conv2(preds)),(2,2))
preds = F.max_pool2d(self.activation(self.conv3(preds)),(2,2))
# print(preds.shape)
preds = preds.view(-1,self.conv3_output*10*10)
preds = self.activation(self.fc1(preds))
preds = self.activation(self.fc2(preds))
preds = self.activation(self.fc3(preds))
preds = self.activation(self.fc4(preds))
        preds = self.fc5(preds)  # final layer outputs logits; CrossEntropyLoss applies log-softmax itself
return preds
# conv1_output = 32
# conv2_output = 8
# conv3_output = 64
# fc1_output = 512
# fc2_output = 512
# fc3_output = 256
# activation
# optimizer
# loss
# lr
# num of epochs
def get_loss(criterion,y,model,X):
model.to('cpu')
preds = model(X.view(-1,1,112,112).to('cpu').float())
preds.to('cpu')
loss = criterion(preds,torch.tensor(y,dtype=torch.long).to('cpu'))
loss.backward()
return loss.item()
def test(net,X,y):
device = 'cpu'
net.to(device)
correct = 0
total = 0
net.eval()
with torch.no_grad():
for i in range(len(X)):
real_class = torch.argmax(y[i]).to(device)
net_out = net(X[i].view(-1,1,112,112).to(device).float())
net_out = net_out[0]
            predicted_class = torch.argmax(net_out)
            if predicted_class == real_class:
correct += 1
total += 1
net.train()
net.to('cuda')
return round(correct/total,3)
EPOCHS = 3
activations = [nn.ELU(),nn.LeakyReLU(),nn.PReLU(),nn.ReLU(),nn.ReLU6(),nn.RReLU(),nn.SELU(),nn.CELU(),nn.GELU(),nn.SiLU(),nn.Tanh()]
for activation in activations:
wandb.init(project=PROJECT_NAME,name=f'activation-{activation}')
model = Test_Model(conv1_output=32,conv2_output=8,conv3_output=64,fc1_output=512,fc3_output=256,fc2_output=512,activation=activation).to(device)
optimizer = optim.SGD(model.parameters(),lr=0.1)
criterion = nn.CrossEntropyLoss()
for _ in tqdm(range(EPOCHS)):
for i in range(0,len(X_train),BATCH_SIZE):
X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device)
y_batch = y_train[i:i+BATCH_SIZE].to(device)
model.to(device)
preds = model(X_batch.float())
preds.to(device)
loss = criterion(preds,torch.tensor(y_batch,dtype=torch.long))
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':get_loss(criterion,y_train,model,X_train),'accuracy':test(model,X_train,y_train),'val_accuracy':test(model,X_test,y_test),'val_loss':get_loss(criterion,y_test,model,X_test)})
for index in range(10):
print(torch.argmax(preds[index]))
print(y_batch[index])
print('\n')
wandb.finish()
```
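The hard-coded `64*10*10` (and `256*10*10`) flatten sizes follow from three conv(kernel 5, no padding) + max-pool(2) stages applied to a 112×112 input: 112→108→54, 54→50→25, 25→21→10. The arithmetic can be checked without torch or the dataset:

```python
def conv_pool_out(size, kernel=5, pool=2):
    # Conv2d with no padding shrinks by (kernel - 1); max-pool floor-divides
    return (size - (kernel - 1)) // pool

size = 112
for _ in range(3):
    size = conv_pool_out(size)
print(size)  # 10 -> flattened feature size is out_channels * 10 * 10
```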
| github_jupyter |
```
import numpy as np
np.random.seed(1)
from numpy.linalg import cholesky as llt
import matplotlib.pyplot as plt
plt.rcParams.update({
"text.usetex": True,
"font.family": "sans-serif",
"font.sans-serif": ["Helvetica Neue"],
"font.size": 28,
})
def forward_substitution(L, b):
n = L.shape[0]
x = np.zeros(n)
for i in range(n):
x[i] = (b[i] - L[i,:i] @ x[:i])/L[i, i]
return x
def backward_substitution(U, b):
n = U.shape[0]
x = np.zeros(n)
for i in reversed(range(n)):
x[i] = (b[i] - U[i,i+1:] @ x[i+1:])/U[i, i]
return x
def lstsq(A, b):
M = A.T.dot(A)
q = A.T.dot(b)
L = llt(M)
x = forward_substitution(L, q)
x = backward_substitution(L.T, x)
return x, L
def multi_obj_lstsq(As, bs, lambdas):
k = len(lambdas)
A_tilde = np.vstack([np.sqrt(lambdas[i]) * As[i] for i in range(k)])
# stack vectors b horizontally (numpy vectors are by default horizontal)
b_tilde = np.hstack([np.sqrt(lambdas[i]) * bs[i] for i in range(k)])
return lstsq(A_tilde, b_tilde)[0]
m, n = 10, 5
A1 = np.random.randn(m, n)
b1 = np.random.randn(m)
A2 = np.random.randn(m, n)
b2 = np.random.randn(m)
N = 200 # Number of lambda points
lambdas = np.logspace(-4, 4, N)
x = np.zeros((n, N))
J1 = np.zeros(N)
J2 = np.zeros(N)
for k in range(N):
x[:, k] = multi_obj_lstsq([A1, A2], [b1, b2], [1, lambdas[k]])
J1[k] = np.sum(np.square(A1 @ x[:, k] - b1))
J2[k] = np.sum(np.square(A2 @ x[:, k] - b2))
lambda_points = [0.1, 1, 10]
x_points = np.zeros((n, len(lambda_points)))
J1_points = np.zeros(len(lambda_points))
J2_points = np.zeros(len(lambda_points))
for k in range(len(lambda_points)):
x_points[:, k] = multi_obj_lstsq([A1, A2], [b1, b2], [1, lambda_points[k]])
J1_points[k] = np.sum(np.square(A1 @ x_points[:, k] - b1))
J2_points[k] = np.sum(np.square(A2 @ x_points[:, k] - b2))
fig, ax = plt.subplots(1,3, figsize=(30, 9))
# plot solution versus lambda
ax[0].plot(lambdas, x.T)
ax[0].set_xscale('log')
ax[0].set_xlabel(r'$\lambda$')
ax[0].set_xlim([1e-4, 1e+4])
ax[0].legend([r'$x^\star_{%d}(\lambda)$' % d for d in range(1, n+1)])
# plot two objectives versus lambda
ax[1].plot(lambdas, J1)
ax[1].plot(lambdas, J2)
ax[1].set_xscale('log')
ax[1].set_xlabel(r'$\lambda$')
ax[1].set_xlim([1e-4, 1e+4])
ax[1].legend([r'$J_1$', r'$J_2$'])
ax[1].grid()
# plot two objectives versus lambda
ax[2].plot(J1, J2)
ax[2].set_xlabel(r'$J_1(\lambda)$')
ax[2].set_ylabel(r'$J_2(\lambda)$')
ax[2].scatter(J1_points, J2_points)
for i, lam in enumerate(lambda_points):
ax[2].annotate(r'$\lambda=%.1f$' % lam,
(J1_points[i], J2_points[i]),
textcoords="offset points", # how to position the text
xytext=(10,10), # distance from text to points (x,y)
)
ax[2].grid()
plt.savefig('small_example.pdf')
```
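The triangular solvers can be verified against NumPy's least-squares solver on a random overdetermined system; this sketch repeats the Cholesky-based normal-equations path from `lstsq` above:

```python
import numpy as np
from numpy.linalg import cholesky as llt

def forward_substitution(L, b):
    n = L.shape[0]
    x = np.zeros(n)
    for i in range(n):
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

def backward_substitution(U, b):
    n = U.shape[0]
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = (b[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 5))
b = rng.normal(size=8)

# Normal equations (A^T A) x = A^T b, solved via the Cholesky factor L L^T
M, q = A.T @ A, A.T @ b
L = llt(M)
x = backward_substitution(L.T, forward_substitution(L, q))

print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
```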
| github_jupyter |
foo.039 Crop Total Nutrient Consumption
http://www.earthstat.org/data-download/
file type: geotiff
```
# Libraries for downloading data from remote server (may be ftp)
import requests
from urllib.request import urlopen
from contextlib import closing
import shutil
# Library for uploading/downloading data to/from S3
import boto3
# Libraries for handling data
import rasterio as rio
import numpy as np
# from netCDF4 import Dataset
# import pandas as pd
# import scipy
# Libraries for various helper functions
# from datetime import datetime
import os
import threading
import sys
from glob import glob
```
s3
```
s3_upload = boto3.client("s3")
s3_download = boto3.resource("s3")
s3_bucket = "wri-public-data"
s3_folder = "resourcewatch/raster/foo_039_Crop_Total_Nutrient_Consumption/"
s3_file1 = "foo_039_Crop_Total_Nutrient_Consumption_Nitrogen.tif"
s3_file2 = "foo_039_Crop_Total_Nutrient_Consumption_Phosphorus.tif"
s3_file3 = "foo_039_Crop_Total_Nutrient_Consumption_Potassium.tif"
s3_key_orig1 = s3_folder + s3_file1
s3_key_edit1 = s3_key_orig1[0:-4] + "_edit.tif"
s3_key_orig2 = s3_folder + s3_file2
s3_key_edit2 = s3_key_orig2[0:-4] + "_edit.tif"
s3_key_orig3 = s3_folder + s3_file3
s3_key_edit3 = s3_key_orig3[0:-4] + "_edit.tif"
class ProgressPercentage(object):
def __init__(self, filename):
self._filename = filename
self._size = float(os.path.getsize(filename))
self._seen_so_far = 0
self._lock = threading.Lock()
def __call__(self, bytes_amount):
# To simplify we'll assume this is hooked up
# to a single filename.
with self._lock:
self._seen_so_far += bytes_amount
percentage = (self._seen_so_far / self._size) * 100
sys.stdout.write("\r%s %s / %s (%.2f%%)"%(
self._filename, self._seen_so_far, self._size,
percentage))
sys.stdout.flush()
```
Define local file locations
```
local_folder = "/Users/Max81007/Desktop/Python/Resource_Watch/Raster/foo.039/FertilizerConsumption_Geotiff/"
file_name1 = "NitrogenFertilizer_TotalConsumption.tif"
file_name2 = "PhosphorusFertilizer_TotalConsumption.tif"
file_name3 = "PotassiumFertilizer_TotalConsumption.tif"
local_orig1 = local_folder + file_name1
local_orig2 = local_folder + file_name2
local_orig3 = local_folder + file_name3
orig_extension_length = 4 #4 for each char in .tif
local_edit1 = local_orig1[:-orig_extension_length] + "edit.tif"
local_edit2 = local_orig2[:-orig_extension_length] + "edit.tif"
local_edit3 = local_orig3[:-orig_extension_length] + "edit.tif"
```
Use rasterio to reproject and compress
```
files = [local_orig1, local_orig2, local_orig3]
for file in files:
with rio.open(file, 'r') as src:
profile = src.profile
print(profile)
# Note - this is the core of Vizz's netcdf2tif function
def convert_asc_to_tif(orig_name, edit_name):
with rio.open(orig_name, 'r') as src:
# This assumes data is readable by rasterio
# May need to open instead with netcdf4.Dataset, for example
data = src.read()[0]
rows = data.shape[0]
columns = data.shape[1]
print(rows)
print(columns)
# Latitude bounds
south_lat = -90
north_lat = 90
# Longitude bounds
west_lon = -180
east_lon = 180
transform = rio.transform.from_bounds(west_lon, south_lat, east_lon, north_lat, columns, rows)
# Profile
no_data_val = None
target_projection = 'EPSG:4326'
target_data_type = np.float64
profile = {
'driver':'GTiff',
'height':rows,
'width':columns,
'count':1,
'dtype':target_data_type,
'crs':target_projection,
'transform':transform,
'compress':'lzw',
'nodata': no_data_val
}
with rio.open(edit_name, "w", **profile) as dst:
dst.write(data.astype(profile["dtype"]), 1)
convert_asc_to_tif(local_orig1, local_edit1)
convert_asc_to_tif(local_orig2, local_edit2)
convert_asc_to_tif(local_orig3, local_edit3)
```
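`rio.transform.from_bounds` builds the affine transform mapping pixel indices to geographic coordinates; for a global grid the scale terms are just extent divided by pixel count. The check below is plain arithmetic (the 4320×2160 grid is illustrative, not necessarily the EarthStat resolution):

```python
# Hand-computed affine terms for a global lat/lon grid.
# An affine transform (a, b, c, d, e, f) maps (col, row) to
#   x = a*col + b*row + c,   y = d*col + e*row + f
west, south, east, north = -180, -90, 180, 90
columns, rows = 4320, 2160   # illustrative global grid

pixel_width = (east - west) / columns    # degrees per pixel in x
pixel_height = (north - south) / rows    # degrees per pixel in y

# rio.transform.from_bounds(west, south, east, north, columns, rows)
# returns Affine(pixel_width, 0, west, 0, -pixel_height, north):
# y decreases as the row index grows, hence the negative e term
print(pixel_width, pixel_height)
```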
Upload orig and edit files to s3
```
# Original
s3_upload.upload_file(local_orig1, s3_bucket, s3_key_orig1,
Callback=ProgressPercentage(local_orig1))
s3_upload.upload_file(local_orig2, s3_bucket, s3_key_orig2,
Callback=ProgressPercentage(local_orig2))
s3_upload.upload_file(local_orig3, s3_bucket, s3_key_orig3,
Callback=ProgressPercentage(local_orig3))
# Edit
s3_upload.upload_file(local_edit1, s3_bucket, s3_key_edit1,
Callback=ProgressPercentage(local_edit1))
s3_upload.upload_file(local_edit2, s3_bucket, s3_key_edit2,
Callback=ProgressPercentage(local_edit2))
s3_upload.upload_file(local_edit3, s3_bucket, s3_key_edit3,
Callback=ProgressPercentage(local_edit3))
```
Define local file locations for merged files
```
local_tmp_folder = "/Users/Max81007/Desktop/Python/Resource_Watch/Raster/foo.039/FertilizerConsumption_Geotiff/"
tmp1 = local_tmp_folder + "NitrogenFertilizer_TotalConsumptionedit.tif"
tmp2 = local_tmp_folder + "PhosphorusFertilizer_TotalConsumptionedit.tif"
tmp3 = local_tmp_folder + "PotassiumFertilizer_TotalConsumptionedit.tif"
merge_files = [tmp1, tmp2, tmp3]
tmp_merge = local_tmp_folder + "Foo_039_Fertilizer_TotalConsumption_Merge.tif"
# S3 storage
s3_bucket = "wri-public-data"
s3_folder = "resourcewatch/raster/foo_039_Crop_Total_Nutrient_Consumption/"
s3_file1 = s3_folder + "NitrogenFertilizer_TotalConsumption.tif"
s3_file2 = s3_folder + "PhosphorusFertilizer_TotalConsumption.tif"
s3_file3 = s3_folder + "PotassiumFertilizer_TotalConsumption.tif"
# Make sure these match the order of the merge_files above
s3_files_to_merge = [s3_file1, s3_file2, s3_file3]
band_ids = ["Nitrogen","Phosphorus","Potassium" ]
s3_key_merge = s3_folder + "Foo_039_Fertilizer_TotalConsumption_Merge.tif"
# S3 services
s3_download = boto3.resource("s3")
s3_upload = boto3.client("s3")
# Helper function to view upload progress
class ProgressPercentage(object):
def __init__(self, filename):
self._filename = filename
self._size = float(os.path.getsize(filename))
self._seen_so_far = 0
self._lock = threading.Lock()
def __call__(self, bytes_amount):
# To simplify we'll assume this is hooked up
# to a single filename.
with self._lock:
self._seen_so_far += bytes_amount
percentage = (self._seen_so_far / self._size) * 100
sys.stdout.write(
"\r%s %s / %s (%.2f%%)" % (
self._filename, self._seen_so_far, self._size,
percentage))
sys.stdout.flush()
with rio.open(merge_files[0]) as src:
kwargs = src.profile
kwargs.update(
count=len(merge_files)
)
with rio.open(tmp_merge, 'w', **kwargs) as dst:
for idx, file in enumerate(merge_files):
print(idx)
with rio.open(file) as src:
band = idx+1
windows = src.block_windows()
for win_id, window in windows:
src_data = src.read(1, window=window)
dst.write_band(band, src_data, window=window)
with rio.open(tmp_merge) as src:
num_bands = src.profile["count"] + 1
data = {}
for i in range(1, num_bands):
data[i] = src.read(i)
files = [tmp_merge]
for file in files:
with rio.open(file, 'r') as src:
profile = src.profile
print(profile)
s3_upload.upload_file(tmp_merge, s3_bucket, s3_key_merge,
Callback=ProgressPercentage(tmp_merge))
os.environ["Zs3_key"] = "s3://wri-public-data/" + s3_key_merge
os.environ["Zs3_key_inspect"] = "wri-public-data/" + s3_key_merge
os.environ["Zgs_key"] = "gs://resource-watch-public/" + s3_key_merge
!echo %Zs3_key_inspect%
!aws s3 ls %Zs3_key_inspect%
!gsutil cp %Zs3_key% %Zgs_key%
os.environ["asset_id"] = "users/resourcewatch/foo_039_crop_total_nutrient_consumption"
!earthengine upload image --asset_id=%asset_id% %Zgs_key%
os.environ["band_names"] = str(band_ids)
!earthengine asset set -p band_names="%band_names%" %asset_id%
```
| github_jupyter |
# Bring your own components (BYOC)
Starting in V4, Clara Train is based on MONAI.
From the MONAI website:
"The MONAI framework is the open-source foundation being created by Project MONAI.
MONAI is a freely available, community-supported,
PyTorch-based framework for deep learning in healthcare imaging.
It provides domain-optimized foundational capabilities for developing healthcare imaging training workflows in a native PyTorch paradigm."
<br><img src="screenShots/MONAI.png" alt="Drawing" style="height: 200px;width: 400px"/><br>
Clara Train SDK is modular and flexible enough to allow researchers to bring their own components including:
1. [Transformations](https://docs.monai.io/en/latest/transforms.html#)
2. [Loss functions](https://docs.monai.io/en/latest/losses.html)
3. [Model Architecture](https://docs.monai.io/en/latest/networks.html)
4. [Loaders](https://docs.nvidia.com/clara/tlt-mi/clara-train-sdk-v3.0/nvmidl/byom.html#bring-your-own-data-loader)
5. [Metrics](https://docs.monai.io/en/latest/metrics.html)
By the end of this notebook you should be able to bring your own components mentioned above.
## Prerequisites
- Familiar with Clara train main concepts. See [Getting Started Notebook](../GettingStarted/GettingStarted.ipynb)
- NVIDIA GPU with 8 GB of memory
## Resources
You could watch the free GTC 2021 talks covering Clara Train SDK
- [Clara Train 4.0 - 101 Getting Started [SE2688]](https://gtc21.event.nvidia.com/media/Clara%20Train%204.0%20-%20101%20Getting%20Started%20%5BSE2688%5D/1_0qgfrql2)
- [Clara Train 4.0 - 201 Federated Learning [SE3208]](https://gtc21.event.nvidia.com/media/Clara%20Train%204.0%20-%20201%20Federated%20Learning%20%5BSE3208%5D/1_m48t6b3y)
- [What’s New in Clara Train 4.0 [D3114]](https://gtc21.event.nvidia.com/media/What%E2%80%99s%20New%20in%20Clara%20Train%204.0%20%5BD3114%5D/1_umvjidt2)
- [Take Medical AI from Concept to Production using Clara Imaging [S32482]](https://gtc21.event.nvidia.com/media/Take%20Medical%20AI%20from%20Concept%20to%20Production%20using%20Clara%20Imaging%20%20%5BS32482%5D/1_6bvnvyg7)
- [Federated Learning for Medical AI [S32530]](https://gtc21.event.nvidia.com/media/Federated%20Learning%20for%20Medical%20AI%20%5BS32530%5D/1_z26u15uk)
- [Get Started Now on Medical Imaging AI with Clara Train on Google Cloud Platform [S32518]](https://gtc21.event.nvidia.com/media/Get%20Started%20Now%20on%20Medical%20Imaging%20AI%20with%20Clara%20Train%20on%20Google%20Cloud%20Platform%20%5BS32518%5D/1_2yjdekmi)
- [Automate 3D Medical Imaging Segmentation with AutoML and Neural Architecture Search [S32083]](https://gtc21.event.nvidia.com/media/Automate%203D%20Medical%20Imaging%20Segmentation%20with%20AutoML%20and%20Neural%20Architecture%20Search%20%5BS32083%5D/1_r5swh2jn)
- [A Platform for Rapid Development and Clinical Translation of ML Models for Applications in Radiology at UCSF [S31619]](https://gtc21.event.nvidia.com/media/A%20Platform%20for%20Rapid%20Development%20and%20Clinical%20Translation%20of%20ML%20Models%20for%20Applications%20in%20Radiology%20at%20UCSF%20%5BS31619%5D/1_oz8qop5a)
## Dataset
This notebook uses a sample dataset (i.e., a single image from the spleen dataset) provided in the package to train a small neural network for a few epochs.
This single file is duplicated 32 times for the training set and 9 times for the validation set to mimic the full spleen dataset.
# Let's get started
It is helpful to check that an NVIDIA GPU is available in the Docker container by running the cell below
```
# following command should show all gpus available
!nvidia-smi
```
## 1.1 General Concept
You can easily bring your own components into Clara Train SDK by writing your own Python code, then pointing to it in `config.json` with a `path` tag instead of the `name` tag.
Throughout this notebook, we have placed all of the examples from our documentation into the [BYOC](BYOC) folder.
Normal | BYOC
--- | ---
{<br>"name": "CropFixedSizeRandomCenter", <br> "args": {"fields": "image"}<br> } | { <br> "path": "myTransformation.MyAddRandomConstant", <br> "args": {"fields": "image"}<br> }
We modified the [set_env.sh](commands/set_env.sh) to include the path.
Let us run the cells below, which define some helper functions we will be using, and see where we added the BYOC folder to the PYTHONPATH
```
MMAR_ROOT="/claraDevDay/GettingStarted/"
print ("setting MMAR_ROOT=",MMAR_ROOT)
%ls $MMAR_ROOT
!chmod 777 $MMAR_ROOT/commands/*
def printFile(filePath,lnSt,lnEnd):
print ("showing ",str(lnEnd-lnSt)," lines from file ",filePath, "starting at line",str(lnSt))
!< $filePath head -n "$lnEnd" | tail -n +"$lnSt"
```
## 1.2 Add BYOC folder to PYTHONPATH
It is important to add the folder containing your code to the PYTHONPATH variable.
The easiest way to do this is to add it to the `set_env.sh` file since it is called from all the train commands.
Let's take a look at this `set_env.sh` file
```
printFile(MMAR_ROOT+"/commands/set_env.sh",0,20)
```
## 2.1 BYO Transformation: Adding random noise to image pixels
Now let's write a full transformation, `MyRandAdditiveNoised`, from scratch. For this you need to:
1. Implement `Randomizable` and `MapTransform`
2. Define the `__call__` function.
Also define the `set_random_state` and `randomize` functions required by `Randomizable`.
See how we did this by running the cell below
```
printFile(MMAR_ROOT+"/custom/myTransformation.py",16,30)
```
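For reference, here is a self-contained sketch of what such a transform looks like. Real code would subclass `monai.transforms.Randomizable` and `MapTransform`; those base classes are mimicked here with plain NumPy so the cell runs even without MONAI installed, and all names and parameters are illustrative:

```python
import numpy as np

# Illustrative sketch only: MONAI's Randomizable/MapTransform interfaces are
# mimicked with plain NumPy so this cell runs without MONAI installed.
class MyRandAdditiveNoised:
    """Dictionary-based transform: add uniform random noise to the given keys."""

    def __init__(self, keys, prob=0.5, max_noise=0.1, seed=None):
        self.keys = keys
        self.prob = prob
        self.max_noise = max_noise
        self.R = np.random.RandomState(seed)  # MONAI exposes the RNG as self.R as well
        self._do_transform = False
        self._noise = 0.0

    def set_random_state(self, seed):
        self.R = np.random.RandomState(seed)
        return self

    def randomize(self, data=None):
        self._do_transform = self.R.rand() < self.prob
        self._noise = self.R.uniform(0.0, self.max_noise)

    def __call__(self, data):
        d = dict(data)
        self.randomize()
        if self._do_transform:
            for key in self.keys:
                d[key] = d[key] + self._noise
        return d

t = MyRandAdditiveNoised(keys=["image"], prob=1.0, max_noise=0.2, seed=0)
out = t({"image": np.zeros((2, 2))})
print(out["image"])
```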
Now, to run this, we need to modify the train config by setting the `path`
to our newly created transformation `myTransformation.MyRandAdditiveNoised`.
We would also like to debug the output, so we add the `SaveImageD` transform.
This transform pauses the training and saves batches to `output_dir` for us to inspect.
```
configFile=MMAR_ROOT+"/custom/trn_BYOC_transform.json"
printFile(configFile,0,50)
```
## 2.2 Run and see Debugging Data
So let us now run training and see the results
```
! $MMAR_ROOT/commands/train_W_Config.sh trn_BYOC_transform.json
```
Now let us see the sample images in the debug folder
```
! ls -la /claraDevDay/Data/_tmpDebugPatches/
! ls -la /claraDevDay/Data/_tmpDebugPatches/spleen_8
```
## 3. BYO Network Architecture and Loss
Clara Train SDK also allows you to write your own network architecture as well as your loss function.
In this example we have a shallow Unet architecture defined in [myNetworkArch.py](BYOC/myNetworkArch.py)
as well as our own dice loss defined in [myLoss.py](BYOC/myLoss.py).
Normal | BYOC
--- | ---
"loss": {<br> "name": "DiceLoss",<br> "args":{ ... } <br>}, | "loss": {<br> **"path"**: "myLoss.MyDiceLoss",<br> "args": {... }<br>} |
"model": {<br> "name": "UNet",<br>"args": { ... }<br>}, | "model": {<br>**"path"**: "myNetworkArch.MyBasicUNet",<br>"args": { ... }<br>},
Let us see how it is defined
```
printFile(MMAR_ROOT+"/custom/myNetworkArch.py",0,30)
printFile(MMAR_ROOT+"/custom/myLoss.py",0,30)
```
Let us examine the config file [trn_BYOC_Arch_loss.json](config/trn_BYOC_Arch_loss.json)
```
configFile=MMAR_ROOT+"/config/trn_BYOC_Arch_loss.json"
printFile(configFile,11,18)
printFile(configFile,32,43)
```
Now let us train our network
```
! $MMAR_ROOT/commands/train_W_Config.sh trn_BYOC_Arch_loss.json
```
# 4. Exercise
### 4.1. BYO Data Loader
For this example we will see how to use a custom loader, specifically one that loads a NumPy file.
To do this, we first load our `nii.gz` file and save it as a `.npy` file.
```
import nibabel as nib
import numpy as np
dataRoot="/claraDevDay/Data/sampleData/"
for imgLb in ["imagesTr","labelsTr"]:
filename= dataRoot+imgLb+"/spleen_8.nii.gz"
img = nib.load(filename)
data = img.get_fdata()
np.save(filename[:-7]+".npy",data)
!ls -ls $dataRoot/imagesTr
!ls -ls $dataRoot/labelsTr
```
Now you should:
1. Modify the environment file to point to [datasetNp.json](../Data/sampleData/datasetNp.json)
2. Write a NumPy data loader similar to MONAI's [NumpyReader](https://docs.monai.io/en/latest/_modules/monai/data/image_reader.html#NumpyReader)
3. Change the data loader transformation to point to your new loader.
### 4.2. Modify custom loss
Modify the custom loss file to be a weighted dice loss per label.
Some Tips:
1. You can add the code below to the `__init__` function of [myLoss.py](custom/myLoss.py)
```
# uncomment lines below to enable label weights
self.label_weights=label_weights
if self.label_weights is not None:
self.label_weights=[x / sum(self.label_weights) for x in self.label_weights]
print ("====== AEH applying label weights {} refactored as {}".format(label_weights,self.label_weights))
```
2. Similarly, uncomment the lines in the `forward` function to multiply the given weights with the loss
```
if self.label_weights is not None: # apply weights to the labels
bs=intersection.shape[0]
w = torch.tensor(self.label_weights, dtype=torch.float32,device=torch.device('cuda:0'))
w= w.repeat(bs, 1) # change size to [BS, num_classes]
intersection = w* intersection
```
3. You need to pass the weights by adding `label_weights` in the args of your loss in the training config
| github_jupyter |
# Seminar 6: Introduction to Simple ML Models
You will also need the following libraries. Uncomment the code below to install them.
```
# !pip install -U scikit-learn
# !pip install pandas
```
# Metrics
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from IPython.display import HTML
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, mean_squared_error
from sklearn.datasets import load_digits
from sklearn.linear_model import LinearRegression
import warnings
warnings.simplefilter('ignore')
plt.style.use('seaborn')
%matplotlib inline
```
Define the true and "predicted" labels so that we can examine prediction quality.
```
y_pred = [0, 1, 1, 0, 0, 1, 0, 3]
y_true = [0, 1, 2, 0, 1, 2, 3, 4]
```
## Accuracy
```
accuracy_score(y_true, y_pred)
```
## Precision
```
precision_score(y_true, y_pred, average=None)
```
## Recall
```
recall_score(y_true, y_pred, average=None)
```
## F1_score
```
f1_score(y_true, y_pred, average=None)
```
# KNN
## Load the data
```
data = load_digits()
print(data['DESCR'])
img = data.data[56].reshape(8, 8)
print(data.target[56])
plt.imshow(img)
plt.show()
X, y = data.data, data.target
print('The dataset has {} objects and {} features'.format(X.shape[0], X.shape[1]))
```
### Let's look at the objects:
```
i = np.random.randint(0, X.shape[0])
print('Class name: {}'.format(y[i]))
print(X[i].reshape(8,8))
X[i]
plt.imshow(X[i].reshape(8,8), cmap='gray_r')
plt.show()
```
Let's look at the class balance:
```
counts = np.unique(y, return_counts=True)
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.bar(counts[0], counts[1])
plt.show()
```
Split the dataset into two parts: training and test
```
X_train, X_test, y_train, y_test = train_test_split(X, y,
train_size=0.5,
test_size=0.5,
shuffle=True,
random_state=18)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
```
## k-Nearest Neighbors
Define the classifier:
```
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
knn_predictons = knn.predict(X_test)
preds = pd.DataFrame(y_test, columns=['True'])
preds['knn_pred'] = knn_predictons
preds.head(200).T
```
Let's look at the accuracy:
```
accuracy_score(y_test, knn_predictons)
```
## Searching for optimal parameters
```
from sklearn.model_selection import GridSearchCV
n = np.linspace(1, 21, 21, dtype=int)
n
kNN_cv = KNeighborsClassifier(n_neighbors=5)
params = {
'metric':['minkowski', 'manhattan'],
'n_neighbors': n,
}
gcv = GridSearchCV(kNN_cv, param_grid=params, cv=5, scoring='accuracy')
gcv.fit(X_train, y_train)
gcv.get_params()
def print_cv_results(a, len_gs, params, param_r, param_sep):
d = len(params['param_grid'][param_sep])
ar = np.array(a).reshape(d, len_gs).T
df = pd.DataFrame(ar)
pen_par = params['param_grid'][param_sep]
c_par = params['param_grid'][param_r]
if type(c_par) != list:
c_par = c_par.tolist()
columns_mapper = dict(zip(range(0, len(pen_par)), pen_par))
row_mapper = dict(zip(range(0, len(c_par)), c_par))
df.rename(columns=columns_mapper, index=row_mapper, inplace=True)
plot = df.plot(title='Mean accuracy rating', grid=True)
plot.set_xlabel(param_r, fontsize=13)
plot.set_ylabel('acc', rotation=0, fontsize=13, labelpad=15)
plt.show()
gcv.get_params()
print_cv_results(gcv.cv_results_['mean_test_score'],
21, gcv.get_params(), 'n_neighbors','metric')
gcv.best_params_
print('Best score: %.4f' % gcv.best_score_)
print('with metric %(metric)s and %(n_neighbors)s neighbors' % gcv.best_params_)
```
### What do we get on the test set?
```
accuracy_score(y_test, gcv.predict(X_test))
gcv_preds = pd.DataFrame(gcv.predict(X_test), columns=['kNN'])
gcv_preds['True'] = y_test
gcv_preds
```
Let's look at the digits that our classifier "confuses"
```
gcv_preds[gcv_preds['True'] != gcv_preds['kNN']]
```
# Linear Models
## Problem statement

Where the linear model is: $$ \hat{y} = f(x) = \theta_0 + \theta_1 x_1 + \dots + \theta_n x_n = \theta^T X$$
Let's generate synthetic data based on the function:
$$f(x) = 4x+5$$
```
def lin_function(x):
return 4 * x + 5
x_true = np.array([-2, 2])
y_true = lin_function(x_true)
plt.plot(x_true, y_true, linewidth=1)
plt.show()
n = 100
x = np.random.rand(n, 1) * 4 - 2
e = np.random.rand(n, 1) * 4 - 2
y = lin_function(x) + e
plt.scatter(x, y, color='g')
plt.plot(x_true, y_true, linewidth=1)
plt.show()
```
## Metrics
Mean Absolute Error:
$$MAE = \frac1N \sum_{i = 1}^N|f(x_i) - y_i| = \frac1N \sum_{i = 1}^N|\hat y_i - y_i| = \frac1N \| \hat Y - Y\|_1$$
Mean Squared Error:
$$MSE = \frac1N \sum_{i = 1}^N(f(x_i) - y_i)^2 = \frac1N \sum_{i = 1}^N(\hat y_i - y_i)^2 = \frac1N \|\hat Y - Y\|_2^2$$
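A quick numerical check of both formulas on made-up values:

```python
import numpy as np

y_hat = np.array([2.0, 3.0, 5.0])   # predictions (illustrative)
y = np.array([1.0, 3.0, 7.0])       # targets

mae = np.mean(np.abs(y_hat - y))    # (1 + 0 + 2) / 3 = 1.0
mse = np.mean((y_hat - y) ** 2)     # (1 + 0 + 4) / 3 ≈ 1.667
print(mae, mse)
```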
## Analytical minimization of MSE
$$MSE \to \min $$
$$MSE = \frac1N \sum_{i = 1}^N(\hat y_i - y_i)^2 = \frac1N \sum_{i = 1}^N(\theta^T x_i - y_i)^2 = \frac1N \|X \theta - Y\|_2^2 = \frac1N (X\theta - Y)^T(X\theta - Y) $$
$$ \frac{d}{d\theta}\left[\frac1N (X\theta - Y)^T(X\theta - Y)\right] = \frac1N \frac{d}{d\theta}\left[Y^TY - 2Y^TX\theta+\theta^TX^TX\theta\right] $$
$$\hat \theta = \bigl(X^T \cdot X \bigr)^{-1} \cdot X^T \cdot Y $$
```
x_matrix = np.hstack([np.ones((n, 1)), x])
%%time
# find the analytical solution
theta_matrix = np.linalg.inv(x_matrix.T.dot(x_matrix)).dot(x_matrix.T).dot(y)
```
Note the execution time
```
theta_matrix.T[0].tolist()
print("Intercept: {[0][0]:.7}".format(theta_matrix.T))
print("Coefficient: {[0][1]:.7}".format(theta_matrix.T))
%%time
lr = LinearRegression()
lr.fit(x,y);
print("Intercept: {:.7}".format(lr.intercept_[0]))
print("Coefficient: {:.7}".format(lr.coef_[0][0]))
plt.scatter(x, y, color='g')
plt.scatter(x, lr.predict(x), color='r')
plt.plot(x_true, y_true, linewidth=1)
plt.show()
```
## Gradient Descent
$$\theta^{(t+1)} = \theta^{(t)} - lr\cdot \nabla MSE(\theta^{(t)}),$$
where $lr$ is the gradient descent step size (learning rate).
$$\nabla MSE(\theta)= \frac{2}{N} X^T \cdot \bigl(X \cdot \theta - Y \bigr) $$
```
def animate_solutions(iter_solutions):
fig, ax = plt.subplots(figsize=(6.4 * 1, 4.8 * 1))
def update(idx):
_theta = iter_solutions[idx]
ax.clear()
ax.scatter(x, y, color='g', label='Sample')
ax.plot(x_true, y_true, linewidth=1, label='True relationship')
ax.plot(x_true, x_true * _theta[1] + _theta[0], linewidth=1, color='r', label='Predicted relationship')
ax.legend(loc='upper left', fontsize=13)
fps = 3
ani = animation.FuncAnimation(fig, update, len(iter_solutions), interval=100 / fps)
return ani
%%time
lr = 0.1 # learning rate
n_iterations = 150 # number of iterations
theta = np.random.randn(2,1) # initial guess
iter_solutions = [theta]
for iteration in range(n_iterations):
gradients = 2 / n * x_matrix.T @ (x_matrix @ theta - y)
theta = theta - lr * gradients
iter_solutions.append(theta)
# plot the results of the numerical solution
plt.figure(figsize=(6.4 * 1, 4.8 * 1))
plt.scatter(x, y, color='g', label='Sample')
plt.plot(x_true, y_true, linewidth=1, label='True relationship')
plt.plot(x_true, x_true * theta[1] + theta[0], linewidth=1, color='r', label='Predicted relationship')
plt.legend(loc='upper left', fontsize=13)
plt.show()
ani = animate_solutions(iter_solutions)
HTML(ani.to_html5_video())
```
| github_jupyter |
```
HAM = 0
SPAM = 1
datadir = 'data/section 7'
sources = [
('beck-s.tar.gz', HAM),
('farmer-d.tar.gz', HAM),
('kaminski-v.tar.gz', HAM),
('kitchen-l.tar.gz', HAM),
('lokay-m.tar.gz', HAM),
('williams-w3.tar.gz', HAM),
('BG.tar.gz', SPAM),
('GP.tar.gz', SPAM),
('SH.tar.gz', SPAM)
]
def extract_tar(datafile, extractdir):
try:
import tarfile
except ImportError:
raise ImportError("You do not have tarfile installed. "
"Try unzipping the file outside of Python.")
tar = tarfile.open(datafile)
tar.extractall(path=extractdir)
tar.close()
print("%s successfully extracted to %s" % (datafile, extractdir))
for source, _ in sources:
datafile = '%s/%s' % (datadir, source)
extract_tar(datafile, datadir)
import os
def read_single_file(filename):
past_header, lines = False, []
if os.path.isfile(filename):
f = open(filename, encoding="latin-1")
for line in f:
if past_header:
lines.append(line)
elif line == '\n':
past_header = True
f.close()
content = '\n'.join(lines)
return filename, content
def read_files(path):
for root, dirnames, filenames in os.walk(path):
for filename in filenames:
filepath = os.path.join(root, filename)
yield read_single_file(filepath)
import pandas as pd
pd.DataFrame({
'model': ['Normal Bayes', 'Multinomial Bayes', 'Bernoulli Bayes'],
'class': [
'cv2.ml.NormalBayesClassifier_create()',
'sklearn.naive_bayes.MultinomialNB()',
'sklearn.naive_bayes.BernoulliNB()'
]
})
def build_data_frame(extractdir, classification):
rows = []
index = []
for file_name, text in read_files(extractdir):
rows.append({'text': text, 'class': classification})
index.append(file_name)
data_frame = pd.DataFrame(rows, index=index)
return data_frame
data = pd.DataFrame({'text': [], 'class': []})
for source, classification in sources:
extractdir = '%s/%s' % (datadir, source[:-7])
data = data.append(build_data_frame(extractdir, classification))
from sklearn import feature_extraction
counts = feature_extraction.text.CountVectorizer()
X = counts.fit_transform(data['text'].values)
X.shape
X
y = data['class'].values
from sklearn import model_selection as ms
X_train, X_test, y_train, y_test = ms.train_test_split(
X, y, test_size=0.2, random_state=42
)
import cv2
model_norm = cv2.ml.NormalBayesClassifier_create()
import numpy as np
X_train_small = X_train[:1000, :300].toarray().astype(np.float32)
y_train_small = y_train[:1000]
from sklearn import model_selection as ms
X_train, X_test, y_train, y_test = ms.train_test_split(
X, y, test_size=0.2, random_state=42
)
from sklearn import naive_bayes
model_naive = naive_bayes.MultinomialNB()
model_naive.fit(X_train, y_train)
model_naive.score(X_train, y_train)
model_naive.score(X_test, y_test)
counts = feature_extraction.text.CountVectorizer(
ngram_range=(1, 2)
)
X = counts.fit_transform(data['text'].values)
from sklearn import model_selection
X_train, X_test, y_train, y_test = model_selection.train_test_split(
X, y, test_size=0.2, random_state=42
)
model_naive = naive_bayes.MultinomialNB()
model_naive.fit(X_train, y_train)
model_naive.score(X_test, y_test)
tfidf = feature_extraction.text.TfidfTransformer()
X_new = tfidf.fit_transform(X)
X_train, X_test, y_train, y_test = ms.train_test_split(
X_new, y, test_size=0.2, random_state=42
)
model_naive = naive_bayes.MultinomialNB()
model_naive.fit(X_train, y_train)
model_naive.score(X_test, y_test)
from sklearn import metrics
metrics.confusion_matrix(y_test, model_naive.predict(X_test))
```
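The diagonal of that confusion matrix holds the correctly classified counts, so overall accuracy can be read off directly. A made-up 2x2 matrix for illustration:

```python
import numpy as np

cm = np.array([[50, 3],   # hypothetical counts: rows = true class (ham, spam),
               [2, 45]])  # columns = predicted class
accuracy = np.trace(cm) / cm.sum()  # correct predictions / all predictions
print(accuracy)  # 0.95
```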
| github_jupyter |
# Activity: SPAM Classification
Can we classify an email as spam with trees and/or ensembles?
We will use the [UCI Spam database](https://archive.ics.uci.edu/ml/datasets/Spambase)
Answer the questions and complete the activities in each of the blocks
Submit by email to phuijse@inf.uach.cl by Friday the 13th, 11:20 AM
Work in groups of two: each group submits one completed notebook
```
# Download the database with wget; if you use Windows, use the link above
!wget -c https://archive.ics.uci.edu/ml/machine-learning-databases/spambase/spambase.data
!head -n 5 spambase.data
```
Answer:
- How many attributes does the database have? Describe them very briefly
- Show a histogram of the labels. How many examples are there of each class? Is the database balanced?
- Are there missing or invalid values?
```
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
data = np.genfromtxt('spambase.data', delimiter=',')
X, Y = data[:, :-1], data[:, -1]
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, train_size=0.75, stratify=Y)
```
Use the training set to train and tune the parameters of a
1. decision tree
1. random forest ensemble
1. *gradient boosting* ensemble
You can use `GridSearchCV` to find the best estimators
For this particular case, and for each estimator, answer
- Which function/criterion works best? `criterion`
- Which tree depth works best? `max_depth`
- Is it worthwhile to weight the classes? `class_weight`
- For the ensembles
    - Is it advisable to use a random subset of features? `max_features`
    - What is the best number of weak classifiers? `n_estimators`
Compare the best models of each type on the test set using appropriate classification metrics
Analyze and comment on your results
```
from sklearn import tree
from sklearn.model_selection import GridSearchCV
tree.DecisionTreeClassifier?
params = {'criterion':('entropy', 'gini'),
'max_depth':[2, 5, 10, 20, 35, 50],
'class_weight': (None, 'balanced', {0:0.3, 1:0.7})}
np.random.seed(0) # reproducibility
model = tree.DecisionTreeClassifier()
clf_dt = GridSearchCV(model, params, cv=5)
clf_dt.fit(X_train, Y_train)
display(clf_dt.best_estimator_)
from sklearn.metrics import precision_recall_curve
fig, ax = plt.subplots(1, figsize=(5, 4), tight_layout=True)
ax.set_xlabel('Recall/TPR')
ax.set_ylabel('Precision')
Y_pred = clf_dt.best_estimator_.predict_proba(X_test)[:, 1]
precision, recall, th = precision_recall_curve(Y_test, Y_pred)
ax.plot(recall, precision, label="Decision Tree", linewidth=1)
plt.legend(loc=3);
!rm spambase.data
```
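The same `GridSearchCV` pattern extends to the ensemble models requested above. A sketch on a small synthetic dataset (the parameter grid is illustrative, not tuned for spambase):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the spambase training split above
X_train, Y_train = make_classification(n_samples=300, n_features=20, random_state=0)

params = {'n_estimators': [10, 50],
          'max_features': ('sqrt', None),
          'class_weight': (None, 'balanced')}
clf_rf = GridSearchCV(RandomForestClassifier(random_state=0), params, cv=3)
clf_rf.fit(X_train, Y_train)
print(clf_rf.best_params_)
```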
| github_jupyter |
# Amortized Neural Variational Inference for a toy probabilistic model
Consider a certain number of sensors placed at known locations, $\mathbf{s}_1,\mathbf{s}_2,\ldots,\mathbf{s}_L$. There is a target at an unknown position $\mathbf{z}\in\mathbb{R}^2$ that is emitting a certain signal that is received at the $i$-th sensor with a signal strength distributed as follows:
\begin{align}
x_i \sim \mathcal{N}\Big(- A \log\left(||\mathbf{s}_i-\mathbf{z} ||^2\right), \sigma^2\Big),
\end{align}
where $A$ is a constant related to how fast signal strength degrades with distance. We assume a Gaussian prior for the unknown position $\mathcal{N}(\mathbf{0},\mathbf{I})$. Given a set of $N$ i.i.d. samples for each sensor, $\mathbf{X}\in\mathbb{R}^{L\times N}$, we will use Amortized Neural Variational Inference to find a Gaussian approximation to
\begin{align}
p(\mathbf{z}|\mathbf{X}) \propto p(\mathbf{X}|\mathbf{z}) p(\mathbf{z})
\end{align}
Our approximation to $p(\mathbf{z}|\mathbf{X})$ is of the form
\begin{align}
p(\mathbf{z}|\mathbf{X}) \approx q(\mathbf{z}|\mathbf{X})=\mathcal{N}\Big(\mu(\mathbf{X}),\Sigma(\mathbf{X})\Big),
\end{align}
where
- $\mu(\mathbf{X})$ --> Given by a Neural Network with parameter vector $\theta$ and input $\mathbf{X}$
- $\Sigma(\mathbf{X})$ --> Diagonal covariance matrix, where the log of the main diagonal is constructed by a Neural Network with parameter vector $\gamma$ and input $\mathbf{X}$
## ELBO lower-bound to $p(\mathbf{X})$
We will optimize $q(\mathbf{z}|\mathbf{X})$ w.r.t. $\theta,\gamma$ by optimizing the Evidence-Lower-Bound (ELBO):
\begin{align}
p(\mathbf{X}) &= \int p(\mathbf{X}|\mathbf{z}) p(\mathbf{z}) d\mathbf{z}\\
&\geq \int q(\mathbf{z}|\mathbf{X}) \log \left(\frac{p(\mathbf{X},\mathbf{z})}{q(\mathbf{z}|\mathbf{X})}\right)d\mathbf{z}\\
& = \mathbb{E}_{q}\left[\log p(\mathbf{X}|\mathbf{z})\right] - D_{KL}\left(q(\mathbf{z}|\mathbf{X})||p(\mathbf{z})\right)\triangleq \mathcal{L}(\mathbf{X},\theta,\gamma),
\end{align}
where $D_{KL}\left(q(\mathbf{z}|\mathbf{X})||p(\mathbf{z})\right)$ is known in closed form since it is the KL divergence between two Gaussian pdfs:
\begin{align}
D_{KL}(q(\mathbf{z}|\mathbf{X})||p(\mathbf{z})) = \frac{1}{2} \left[\text{tr}\left(\Sigma(\mathbf{X})\right)+\left(\mu(\mathbf{X})^T\mu(\mathbf{X})\right)-2-\log\det \left(\Sigma(\mathbf{X})\right) \right]
\end{align}
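A numerical sanity check of this closed form (here $\mathbf{z}\in\mathbb{R}^2$, so the constant term is $-2$) against a Monte Carlo estimate, using arbitrary illustrative values:

```python
import numpy as np

mu = np.array([0.3, -1.2])   # mu(X), illustrative
var = np.array([0.5, 2.0])   # diagonal of Sigma(X), illustrative

# Closed form: 0.5 * [tr(Sigma) + mu^T mu - 2 - log det(Sigma)]
kl_closed = 0.5 * (var.sum() + mu @ mu - 2 - np.log(var).sum())

# Direct Monte Carlo estimate of E_q[log q(z|X) - log p(z)]
rng = np.random.default_rng(0)
z = mu + np.sqrt(var) * rng.standard_normal((200000, 2))
log_q = -0.5 * (((z - mu) ** 2 / var).sum(1) + np.log(2 * np.pi * var).sum())
log_p = -0.5 * ((z ** 2).sum(1) + 2 * np.log(2 * np.pi))
kl_mc = (log_q - log_p).mean()
print(kl_closed, kl_mc)
```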
## SGD optimization
- Sample $\mathbf{\epsilon}\sim \mathcal{N}(\mathbf{0},\mathbf{I})$
- Sample from $q(\mathbf{z}|\mathbf{X})$:
\begin{align}
\mathbf{z}^0 = \mu(\mathbf{X}) + \sqrt{\text{diag}(\Sigma(\mathbf{X}))} \circ \mathbf{\epsilon}
\end{align}
- Compute gradients of
\begin{align}
\hat{\mathcal{L}}(\mathbf{X},\theta,\gamma) =\log p(\mathbf{X}|\mathbf{z}^0) - D_{KL}\left(q(\mathbf{z}|\mathbf{X})||p(\mathbf{z})\right)
\end{align}
w.r.t. $\theta,\gamma$
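The sampling step above is the standard reparameterization trick. In isolation, with made-up network outputs:

```python
import numpy as np

rng = np.random.default_rng(1)
mu = np.array([1.0, -0.5])       # mu(X), illustrative output of the mean network
log_var = np.array([0.2, -1.0])  # log of Sigma(X)'s diagonal, from the variance network
eps = rng.standard_normal(2)     # eps ~ N(0, I)
z0 = mu + np.sqrt(np.exp(log_var)) * eps  # sample from q(z|X), differentiable in mu, log_var
print(z0)
```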
```
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
%matplotlib inline
# use seaborn plotting defaults
import seaborn as sns; sns.set()
```
### Probabilistic model definition and generating samples
```
############## Elements of the true probabilistic model ####################
loc_info = {}
loc_info['S'] = 3 # Number of sensors
loc_info['pos_s'] = np.array([[0.5,1], [3.5,1], [2,3]]) #Position of sensors
#loc_info['target'] = np.random.uniform(-3,3,[2,]) #(Unknown target position)
loc_info['target'] = np.array([-1,2]) #(Unknown target position)
loc_info['var_s'] = 5.*np.ones(loc_info['S']).reshape([loc_info['S'],1]) #Variance of sensors
loc_info['A'] = np.ones(loc_info['S'],dtype=np.float32) * 10.0 #Attenuation mean factor per sensor
loc_info['N'] = 5 # Number of measurements per sensor
def sample_X(S,M,z,pos_s,A,var_s):
means = -1*A*np.log(np.sum((pos_s-z)**2,1))
X = means.reshape([S,1]) + np.random.randn(S,M) * np.sqrt(var_s)
return X
# Sampling from model for the right target
X = sample_X(loc_info['S'],loc_info['N'], loc_info['target'],loc_info['pos_s'],loc_info['A'],loc_info['var_s'])
plt.plot(loc_info['pos_s'][:,0],loc_info['pos_s'][:,1],'b>',label='Sensors',ms=15)
plt.plot(loc_info['target'][0],loc_info['target'][1],'ro',label='Target',ms=15)
plt.legend()
```
### TensorFlow Computation Graph and Loss Function
```
z_dim = 2 #Latent Space
model_name = 'model1' #In 'model1.py' we define the variational family
learning_rate = 1e-2
num_samples_avg = 1 #Number of samples to approximate the expectation in the ELBO
num_samples = 10 #Number of samples from the posterior (for testing)
num_it = int(1e4) #SGD iterations
period_plot = int(1000) #Show results every period_plot iterations
dims = X.shape #X.shape
sess_VAE = tf.Graph()
with sess_VAE.as_default():
print('[*] Importing model: ' + model_name)
model = __import__(model_name)
print('[*] Defining placeholders')
inputX = tf.placeholder(tf.float32, shape=dims, name='x-input')
print('[*] Defining the encoder')
log_var, mean, samples_z, KL = model.encoder(inputX,dims,z_dim,num_samples)
print('[*] Defining the log-likelihood')
loglik = model.decoder(loc_info,inputX,samples_z,num_samples_avg)
loss = -(loglik-KL)
optim = tf.train.AdamOptimizer(learning_rate).minimize(loss)
# Output dictionary -> Useful if computation graph is defined in a separate .py file
tf_nodes = {}
tf_nodes['X'] = inputX
tf_nodes['mean'] = mean
tf_nodes['logvar'] = log_var
tf_nodes['KL'] = KL
tf_nodes['loglik'] = loglik
tf_nodes['optim'] = optim
tf_nodes['samples'] = samples_z
```
## SGD optimization
```
############ SGD Inference #####################################
mean_list = []
with tf.Session(graph=sess_VAE) as session:
# Add ops to save and restore all the variables.
saver = tf.train.Saver()
tf.global_variables_initializer().run()
print('Training the VAE ...')
for it in range(num_it):
feedDict = {tf_nodes['X'] : X}
_= session.run([tf_nodes['optim'],tf_nodes['loglik'],tf_nodes['KL']],feedDict)
if(it % period_plot ==0):
mean, logvar, loglik, KL = session.run([tf_nodes['mean'], tf_nodes['logvar'], tf_nodes['loglik'], tf_nodes['KL']], feedDict)
print("It = %d, loglik = %.5f, KL = %.5f" %(it,loglik,KL))
mean_list.append(mean)
samples = session.run(tf_nodes['samples'],feedDict)
#Samples from q(z|x)
m_evol = np.vstack(mean_list)
nsamples = 50
samples = mean + np.sqrt(np.exp(logvar)) * np.random.randn(nsamples,2)
plt.plot(loc_info['pos_s'][:,0],loc_info['pos_s'][:,1],'b>',label='Sensors',ms=15)
plt.plot(loc_info['target'][0],loc_info['target'][1],'ro',label='Target',ms=15)
plt.plot(m_evol[:,0],m_evol[:,1],'g>',label='Post Mean')
plt.scatter(samples[:,0],samples[:,1],label='Post Samples')
plt.rcParams["figure.figsize"] = [8,8]
plt.legend()
```
# Assemble Perturb-seq BMDC data
```
import scanpy as sc
import pandas as pd
import scipy.io as io
data_path = '/data_volume/memento/bmdc/'
```
### Process time 0
```
genes = pd.read_csv(
data_path + 'raw0/GSM2396857_dc_0hr_genenames.csv', index_col=0)
var_df = pd.DataFrame(index=genes['0'].str.split('_').str[1])
var_df['gene_id'] = genes['0'].str.split('_').str[0].tolist()
cells = pd.read_csv(
data_path + 'raw0/GSM2396857_dc_0hr_cellnames.csv', index_col=0)
obs_df = pd.DataFrame(index=cells['0'])
obs_df['cell'] = cells['0'].tolist()
mapping = pd.read_csv(data_path + 'raw0/GSM2396857_dc_0hr_cbc_gbc_dict.csv', header=None)
mapping['cell'] = mapping[1].str.split(', ')
mapping = mapping.explode(column='cell').rename(columns={0:'guide'})[['cell', 'guide']]
guides = mapping['guide'].drop_duplicates().tolist()
print(obs_df.shape)
obs_df = obs_df.merge(mapping, on='cell', how='left').astype(str)
obs_df = pd.DataFrame(obs_df.groupby('cell').guide.apply(list))
obs_df['guide_string'] = obs_df['guide'].apply(lambda x: '-'.join(x))
print(obs_df.shape)
# for g in guides:
# obs_df[g] = obs_df['guide'].str.contains(g)
X = io.mmread(data_path + 'raw0/GSM2396857_dc_0hr.mtx.txt').tocsr()
adata0 = sc.AnnData(X=X.T, obs=obs_df, var=var_df)
adata0.write(data_path + 'raw0/tp0.h5ad')
```
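The cbc→gbc mapping manipulation above (split the comma-separated cell list, `explode` to one row per cell–guide pair, then collapse back to one guide list per cell) can be sketched on toy data; the barcodes and guide names here are made up for illustration:

```python
import pandas as pd

# Toy stand-in for the cbc_gbc_dict CSV: column 0 is the guide,
# column 1 a comma-separated list of cell barcodes carrying it.
mapping = pd.DataFrame({0: ['guideA', 'guideB'],
                        1: ['cell1, cell2', 'cell2, cell3']})

# One row per (cell, guide) pair.
mapping['cell'] = mapping[1].str.split(', ')
mapping = mapping.explode(column='cell').rename(columns={0: 'guide'})[['cell', 'guide']]

# Collapse back: one list of guides per cell.
per_cell = mapping.groupby('cell')['guide'].apply(list)
print(per_cell['cell2'])  # ['guideA', 'guideB']
```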
### Process time 3
```
genes = pd.read_csv(
data_path + 'raw3/GSM2396856_dc_3hr_genenames.csv', index_col=0)
var_df = pd.DataFrame(index=genes['0'].str.split('_').str[1])
var_df['gene_id'] = genes['0'].str.split('_').str[0].tolist()
cells = pd.read_csv(
data_path + 'raw3/GSM2396856_dc_3hr_cellnames.csv', index_col=0)
obs_df = pd.DataFrame(index=cells['0'])
obs_df['cell'] = cells['0'].tolist()
mapping = pd.read_csv(data_path + 'raw3/GSM2396856_dc_3hr_cbc_gbc_dict_lenient.csv', header=None)
mapping['cell'] = mapping[1].str.split(', ')
mapping = mapping.explode(column='cell').rename(columns={0:'guide'})[['cell', 'guide']]
guides = mapping['guide'].drop_duplicates().tolist()
print(obs_df.shape)
obs_df = obs_df.merge(mapping, on='cell', how='left').astype(str)
obs_df = pd.DataFrame(obs_df.groupby('cell').guide.apply(list))
obs_df['guide_string'] = obs_df['guide'].apply(lambda x: '-'.join(x))
print(obs_df.shape)
# for g in guides:
# obs_df[g] = obs_df['guide'].str.contains(g)
X = io.mmread(data_path + 'raw3/GSM2396856_dc_3hr.mtx.txt').tocsr()
adata3 = sc.AnnData(X=X.T, obs=obs_df, var=var_df)
adata3.write(data_path + 'raw3/tp3.h5ad')
```
### Combine
```
adata0.obs['tp'] = '0hr'
adata3.obs['tp'] = '3hr'
adata0.var_names_make_unique()
adata3.var_names_make_unique()
overlap_genes = adata0.var[[]].copy().join(adata3.var[[]].copy(), how='inner').index.tolist()
adata0 = adata0[:, overlap_genes]
adata3 = adata3[:, overlap_genes]
adata_combined = adata0.concatenate(adata3)
adata_combined.write(data_path + 'h5ad/bmdc.h5ad')
```
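The gene-overlap step above is an inner join on the `var` indices; the same pattern on plain pandas frames with toy gene names:

```python
import pandas as pd

var0 = pd.DataFrame(index=['Actb', 'Gapdh', 'Il6'])
var3 = pd.DataFrame(index=['Gapdh', 'Il6', 'Tnf'])

# Joining two zero-column frames keeps only the shared index entries,
# in the order of the left index — the same trick as adata0.var[[]].join(...).
overlap = var0[[]].join(var3[[]], how='inner').index.tolist()
print(overlap)  # ['Gapdh', 'Il6']
```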
### Preprocess BMDC dataset
```
adata = sc.read(data_path + 'h5ad/bmdc.h5ad')
adata = adata[adata.obs['guide_string']!='nan', :].copy()
adata.obs['WT'] = adata.obs['guide_string'].str.contains('NTC') & (adata.obs.guide_string.apply(lambda x: len(x.split('-'))) == 1)
adata.obs['KO'] = (~adata.obs['guide_string'].str.contains('NTC')).astype(int)
adata.obs['n_counts'] = adata.X.sum(axis=1)
# guides = adata.obs.guides.drop_duplicates().tolist()
# guides = [g for g in guides if ('INTER' not in g and 'nan' not in g)]
# ko_genes = adata.obs.query('KO == 1')['KO_GENE'].drop_duplicates().tolist()
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)
adata.var['mt'] = adata.var_names.str.startswith('mt-') # annotate the group of mitochondrial genes as 'mt'
sc.pp.calculate_qc_metrics(adata, qc_vars=['mt'], percent_top=None, log1p=False, inplace=True)
sc.pl.scatter(adata, x='total_counts', y='pct_counts_mt')
sc.pl.scatter(adata, x='total_counts', y='n_genes_by_counts')
adata = adata[adata.obs.pct_counts_mt < 2, :]
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, min_mean=0.0125, max_mean=3, min_disp=0.5)
adata.raw = adata
adata = adata[:, adata.var.highly_variable]
sc.pp.regress_out(adata, ['total_counts', 'pct_counts_mt'])
sc.pp.scale(adata, max_value=10)
sc.tl.pca(adata, svd_solver='arpack')
sc.pp.neighbors(adata, n_neighbors=10, n_pcs=40)
sc.tl.umap(adata)
sc.tl.leiden(adata, resolution=0.05)
sc.pl.umap(adata, color=['leiden'])
# Reconstruct the raw but filtered AnnData object
adata_raw = sc.read(data_path + 'h5ad/bmdc.h5ad')
adata_raw = adata_raw[adata.obs.index, :]
adata_raw.obs = adata.obs.copy()
adata_raw.write(data_path + 'h5ad/filtered-bmdc.h5ad')
```
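The `WT`/`KO` flags above combine two string tests on `guide_string`: whether the cell carries a non-targeting `NTC` guide, and whether it carries exactly one guide. A sketch on toy guide strings (the gene names are made up):

```python
import pandas as pd

guide_string = pd.Series(['NTC', 'NTC-Rela', 'Rela', 'Stat1-Stat2'])

is_single = guide_string.apply(lambda x: len(x.split('-')) == 1)
wt = guide_string.str.contains('NTC') & is_single  # exactly one guide, and it is NTC
ko = ~guide_string.str.contains('NTC')             # no NTC guide at all

print(wt.tolist())  # [True, False, False, False]
print(ko.tolist())  # [False, False, True, True]
```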
# Basics
`reciprocalspaceship` provides methods for reading and writing MTZ files, and can be easily used to join reflection data by Miller indices. We will demonstrate these uses by loading diffraction data of tetragonal hen egg-white lysozyme (HEWL).
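Joining by Miller indices works like a pandas index join, since a `DataSet` keeps (H, K, L) in its index. A toy sketch of the idea with plain pandas; the reflections and column names here are made up for illustration, not taken from the HEWL data:

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples([(1, 0, 0), (1, 1, 0), (2, 1, 1)],
                                names=['H', 'K', 'L'])
intensities = pd.DataFrame({'I': [100.0, 80.0, 55.0]}, index=idx)

idx2 = pd.MultiIndex.from_tuples([(1, 1, 0), (2, 1, 1), (3, 0, 1)],
                                 names=['H', 'K', 'L'])
phases = pd.DataFrame({'PHI': [45.0, 90.0, 120.0]}, index=idx2)

# An inner join by Miller index keeps reflections present in both tables.
joined = intensities.join(phases, how='inner')
print(len(joined))  # 2
```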
```
import reciprocalspaceship as rs
print(rs.__version__)
```
This diffraction data was collected at the Sector 24-ID-C beamline at [NE-CAT](https://lilith.nec.aps.anl.gov/) at APS. Diffraction images were collected at ambient room temperature (295K), and low energy (6550 eV) in order to collect native sulfur anomalous diffraction for experimental phasing. The diffraction images were processed in [DIALS](https://dials.github.io/) for indexing, geometry refinement, and spot integration, and scaling and merging was done in [AIMLESS](http://www.ccp4.ac.uk/html/aimless.html). This data reduction yielded an MTZ file that is included in the `data/` subdirectory. Here, we will load the MTZ file and inspect its contents.
---
### Loading reflection data
Reflection tables can be loaded using the top-level function, `rs.read_mtz()`. This returns a `DataSet` object, which is analogous to a `pandas.DataFrame`.
```
refltable = rs.read_mtz("data/HEWL_SSAD_24IDC.mtz")
type(refltable).__name__
```
This reflection table was produced directly from `AIMLESS`, and contains several different data columns:
```
refltable.head()
print(f"Number of reflections: {len(refltable)}")
```
Internally, each of these data columns is stored using a custom `dtype` that was added to the conventional `pandas` and `numpy` datatypes. This enables `DataSet` reflection tables to be written back to MTZ files. There is a `dtype` for each of the possible datatypes listed in the [MTZ file specification](http://www.ccp4.ac.uk/html/f2mtz.html#CTYPOUT).
```
refltable.dtypes
```
Additional crystallographic metadata is read from the MTZ file and can be stored as attributes of the `DataSet`. These include the crystallographic spacegroup and unit cell parameters, which are stored as `gemmi.SpaceGroup` and `gemmi.UnitCell` objects.
```
refltable.spacegroup
refltable.cell
```
---
### Plotting reflection data
For illustrative purposes, let's plot the $I(+)$ data against the $I(-)$ data.
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.figure(figsize=(6, 6))
plt.plot(refltable['I(+)'].to_numpy(), refltable['I(-)'].to_numpy(), 'k.', alpha=0.1)
plt.xlabel("I(+)")
plt.ylabel("I(-)")
plt.show()
```
In the [next example](2_mergingstats.ipynb), we will investigate this anomalous signal in more detail.
---
### Writing Reflection Data
It is also possible to write out MTZ files using `DataSet.write_mtz()`. This functionality depends on the correct setting of each column's `dtype`.
```
refltable.write_mtz("data/HEWL_SSAD_24IDC.mtz")
```
# Caffe2 Basic Concepts - Operators & Nets
In this tutorial we will go through a set of Caffe2 basics: the basic concepts including how operators and nets are being written.
First, let's import caffe2. `core` and `workspace` are usually the two that you need most. If you want to manipulate protocol buffers generated by caffe2, you probably also want to import `caffe2_pb2` from `caffe2.proto`.
```
# We'll also import a few standard python libraries
from matplotlib import pyplot
import numpy as np
import time
# These are the droids you are looking for.
from caffe2.python import core, workspace
from caffe2.proto import caffe2_pb2
# Let's show all plots inline.
%matplotlib inline
```
You might see a warning saying that caffe2 does not have GPU support. That means you are running a CPU-only build. Don't be alarmed - anything CPU is still runnable without problem.
## Workspaces
Let's cover workspaces first, where all the data reside.
If you are familiar with Matlab, workspace consists of blobs you create and store in memory. For now, consider a blob to be a N-dimensional Tensor similar to numpy's ndarray, but is contiguous. Down the road, we will show you that a blob is actually a typed pointer that can store any type of C++ objects, but Tensor is the most common type stored in a blob. Let's show what the interface looks like.
`Blobs()` prints out all existing blobs in the workspace.
`HasBlob()` queries if a blob exists in the workspace. For now, we don't have anything yet.
```
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
print("Workspace has blob 'X'? {}".format(workspace.HasBlob("X")))
```
We can feed blobs into the workspace using `FeedBlob()`.
```
X = np.random.randn(2, 3).astype(np.float32)
print("Generated X from numpy:\n{}".format(X))
workspace.FeedBlob("X", X)
```
Now, let's take a look what blobs there are in the workspace.
```
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
print("Workspace has blob 'X'? {}".format(workspace.HasBlob("X")))
print("Fetched X:\n{}".format(workspace.FetchBlob("X")))
```
Let's verify that the arrays are equal.
```
np.testing.assert_array_equal(X, workspace.FetchBlob("X"))
```
Also, if you are trying to access a blob that does not exist, an error will be thrown:
```
try:
workspace.FetchBlob("invincible_pink_unicorn")
except RuntimeError as err:
print(err)
```
One thing that you might not use immediately: you can have multiple workspaces in Python using different names, and switch between them. Blobs in different workspaces are separate from each other. You can query the current workspace using `CurrentWorkspace`. Let's try switching the workspace by name (gutentag) and creating a new one if it doesn't exist.
```
print("Current workspace: {}".format(workspace.CurrentWorkspace()))
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
# Switch the workspace. The second argument "True" means creating
# the workspace if it is missing.
workspace.SwitchWorkspace("gutentag", True)
# Let's print the current workspace. Note that there is nothing in the
# workspace yet.
print("Current workspace: {}".format(workspace.CurrentWorkspace()))
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
```
Let's switch back to the default workspace.
```
workspace.SwitchWorkspace("default")
print("Current workspace: {}".format(workspace.CurrentWorkspace()))
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
```
Finally, `ResetWorkspace()` clears anything that is in the current workspace.
```
workspace.ResetWorkspace()
```
## Operators
Operators in Caffe2 are kind of like functions. From the C++ side, they all derive from a common interface, and are registered by type, so that we can call different operators during runtime. The interface of operators is defined in `caffe2/proto/caffe2.proto`. Basically, it takes in a bunch of inputs, and produces a bunch of outputs.
Remember, when we say "create an operator" in Caffe2 Python, nothing gets run yet. All it does is create the protocol buffer that specifies what the operator should be. At a later time it will be sent to the C++ backend for execution. If you are not familiar with protobuf, it is a json-like serialization tool for structured data. Find more about protocol buffers [here](https://developers.google.com/protocol-buffers/).
Let's see an actual example.
```
# Create an operator.
op = core.CreateOperator(
"Relu", # The type of operator that we want to run
["X"], # A list of input blobs by their names
["Y"], # A list of output blobs by their names
)
# and we are done!
```
As we mentioned, the created op is actually a protobuf object. Let's show the content.
```
print("Type of the created op is: {}".format(type(op)))
print("Content:\n")
print(str(op))
```
OK, let's run the operator. We first feed in the input X to the workspace.
Then the simplest way to run an operator is to do `workspace.RunOperatorOnce(operator)`
```
workspace.FeedBlob("X", np.random.randn(2, 3).astype(np.float32))
workspace.RunOperatorOnce(op)
```
After execution, let's see if the operator is doing the right thing, which is our neural network's activation function ([Relu](https://en.wikipedia.org/wiki/Rectifier_(neural_networks))) in this case.
```
print("Current blobs in the workspace: {}\n".format(workspace.Blobs()))
print("X:\n{}\n".format(workspace.FetchBlob("X")))
print("Y:\n{}\n".format(workspace.FetchBlob("Y")))
print("Expected:\n{}\n".format(np.maximum(workspace.FetchBlob("X"), 0)))
```
If the Expected output matches the Y output in this example, the operator is working correctly.
Operators also take optional arguments if needed. They are specified as key-value pairs. Let's take a look at one simple example, which takes a tensor and fills it with Gaussian random variables.
```
op = core.CreateOperator(
"GaussianFill",
[], # GaussianFill does not need any parameters.
["Z"],
shape=[100, 100], # shape argument as a list of ints.
mean=1.0, # mean as a single float
std=1.0, # std as a single float
)
print("Content of op:\n")
print(str(op))
```
Let's run it and see if things are as intended.
```
workspace.RunOperatorOnce(op)
temp = workspace.FetchBlob("Z")
pyplot.hist(temp.flatten(), bins=50)
pyplot.title("Distribution of Z")
```
If you see a bell shaped curve then it worked!
## Nets
Nets are essentially computation graphs. We keep the name `Net` for backward consistency (and also to pay tribute to neural nets). A Net is composed of multiple operators just like a program written as a sequence of commands. Let's take a look.
When we talk about nets, we will also talk about BlobReference, which is an object that wraps around a string so we can do easy chaining of operators.
Let's create a network that is essentially the equivalent of the following python math:
```
X = np.random.randn(2, 3)
W = np.random.randn(5, 3)
b = np.ones(5)
Y = X * W^T + b
```
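The pseudo-math above is not directly runnable (`W^T` is not Python syntax); a plain-numpy version, independent of Caffe2, shows the shapes the FC operator will produce:

```python
import numpy as np

X = np.random.randn(2, 3)
W = np.random.randn(5, 3)
b = np.ones(5)

# FC computes Y = X W^T + b: (2, 3) @ (3, 5) + (5,) -> (2, 5)
Y = X @ W.T + b
print(Y.shape)  # (2, 5)
```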
We'll show the progress step by step. Caffe2's `core.Net` is a wrapper class around a NetDef protocol buffer.
When creating a network, its underlying protocol buffer is essentially empty other than the network name. Let's create the net and then show the proto content.
```
net = core.Net("my_first_net")
print("Current network proto:\n\n{}".format(net.Proto()))
```
Let's create a blob called X, and use GaussianFill to fill it with some random data.
```
X = net.GaussianFill([], ["X"], mean=0.0, std=1.0, shape=[2, 3], run_once=0)
print("New network proto:\n\n{}".format(net.Proto()))
```
You might have observed a few differences from the earlier `core.CreateOperator` call. Basically, when we have a net, you can directly create an operator *and* add it to the net at the same time using Python tricks: essentially, if you call `net.SomeOp` where SomeOp is a registered type string of an operator, this gets translated to
```
op = core.CreateOperator("SomeOp", ...)
net.Proto().op.append(op)
```
Also, you might be wondering what X is. X is a `BlobReference` which basically records two things:
- what its name is. You can access the name by str(X)
- which net it gets created from. It is recorded by an internal variable `_from_net`, but most likely you won't need that.
Let's verify it. Also, remember, we are not actually running anything yet, so X contains nothing but a symbol. Don't expect to get any numerical values out of it right now :)
```
print("Type of X is: {}".format(type(X)))
print("The blob name is: {}".format(str(X)))
```
Let's continue to create W and b.
```
W = net.GaussianFill([], ["W"], mean=0.0, std=1.0, shape=[5, 3], run_once=0)
b = net.ConstantFill([], ["b"], shape=[5,], value=1.0, run_once=0)
```
Now, a simple piece of syntactic sugar: since BlobReference objects know which net they were generated from, in addition to creating operators from the net, you can also create operators from BlobReferences. Let's create the FC operator this way.
```
Y = X.FC([W, b], ["Y"])
```
Under the hood, `X.FC(...)` simply delegates to `net.FC` by inserting `X` as the first input of the corresponding operator, so what we did above is equivalent to
```
Y = net.FC([X, W, b], ["Y"])
```
Let's take a look at the current network.
```
print("Current network proto:\n\n{}".format(net.Proto()))
```
Too verbose huh? Let's try to visualize it as a graph. Caffe2 ships with a very minimal graph visualization tool for this purpose. Let's show that in ipython.
```
from caffe2.python import net_drawer
from IPython import display
graph = net_drawer.GetPydotGraph(net, rankdir="LR")
display.Image(graph.create_png(), width=800)
```
So we have defined a `Net`, but nothing gets executed yet. Remember that the net above is essentially a protobuf that holds the definition of the network. When we actually want to run the network, what happens under the hood is:
- Instantiate a C++ net object from the protobuf;
- Call the instantiated net's Run() function.
Before we do anything, we should clear any earlier workspace variables with `ResetWorkspace()`.
Then there are two ways to run a net from Python. We will do the first option in the example below.
1. Using `workspace.RunNetOnce()`, which instantiates, runs and immediately destructs the network.
2. A little bit more complex and involves two steps:
(a) call `workspace.CreateNet()` to create the C++ net object owned by the workspace, and
(b) use `workspace.RunNet()` by passing the name of the network to it.
```
workspace.ResetWorkspace()
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
workspace.RunNetOnce(net)
print("Blobs in the workspace after execution: {}".format(workspace.Blobs()))
# Let's dump the contents of the blobs
for name in workspace.Blobs():
print("{}:\n{}".format(name, workspace.FetchBlob(name)))
```
Now let's try the second way to create and run the net. First clear the variables with `ResetWorkspace()`, then create the C++ net object owned by the workspace with `CreateNet(net_object)`, and finally run the net by name with `RunNet(net_name)`.
```
workspace.ResetWorkspace()
print("Current blobs in the workspace: {}".format(workspace.Blobs()))
workspace.CreateNet(net)
workspace.RunNet(net.Proto().name)
print("Blobs in the workspace after execution: {}".format(workspace.Blobs()))
for name in workspace.Blobs():
print("{}:\n{}".format(name, workspace.FetchBlob(name)))
```
There are a few differences between `RunNetOnce` and `RunNet`, but probably the main difference is the computation time overhead. Since `RunNetOnce` involves serializing the protobuf to pass between Python and C++ and instantiating the network, it may take longer to run. Let's see in this case what the overhead is.
```
# It seems that %timeit magic does not work well with
# C++ extensions so we'll basically do for loops
start = time.time()
for i in range(1000):
workspace.RunNetOnce(net)
end = time.time()
print('Run time per RunNetOnce: {}'.format((end - start) / 1000))
start = time.time()
for i in range(1000):
workspace.RunNet(net.Proto().name)
end = time.time()
print('Run time per RunNet: {}'.format((end - start) / 1000))
```
OK, so above are a few key components if you would like to use Caffe2 from the python side. We are going to add more to the tutorial as we find more needs. For now, kindly check out the rest of the tutorials!
<h1 align="center">Generalized Effective Medium Theory of Induced Polarization: Spherical Inclusions</h1>
<div align="right">By David A. Miranda, PhD<br>2021</div>
<h2>1. Import the libraries</h2>
```
import numpy as np
import matplotlib.pyplot as plt
```
# 2. Theoretical details
The Generalized Effective Medium Theory of Induced Polarization, GEMTIP, was formulated by Professor [Zhdanov in 2008](http://www.cemi.utah.edu/PDF_70_10/2008b.pdf). In this theory, the electrical properties of a heterogeneous medium are modeled by an effective homogeneous one, much as the equivalent circuit is obtained for a complex electrical circuit made up of many elements.
For the case of spherical inclusions, the effective electrical conductivity $\sigma_e$ is given by:
$$\sigma_e = \sigma_0 \left\{ 1 + \sum_{l=1}^N f_l M_l \left[ 1 - \frac{1}{1 + (j\omega\tau_l)^{c_l}} \right] \right\}$$
where $\sigma_0 = 1/\rho_0$ is the electrical conductivity of the host medium; $f_l$, the volume fraction occupied by the $l$-th inclusions; $M_l = 3 \frac{\rho_0 - \rho_l}{2\rho_l + \rho_0}$, the polarizability of the $l$-th inclusions; $\rho_l = 1/\sigma_l$, the electrical resistivity of the $l$-th inclusions; $\omega$, the angular frequency of the external perturbation; $\tau_l$, the relaxation time of the $l$-th inclusions; and $c_l$, the relaxation parameter of the $l$-th inclusions.
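As a quick numeric sanity check of the polarizability formula above, using $\rho_0 = 20\ \Omega\text{m}$ and $\rho_l = 5\ \Omega\text{m}$ (the values of the first inclusion in the example of Section 4):

```python
rho_0, rho_l = 20.0, 5.0  # host and inclusion resistivities (ohm·m)

# Polarizability of spherical inclusions: M_l = 3 (rho_0 - rho_l) / (2 rho_l + rho_0)
M_l = 3 * (rho_0 - rho_l) / (2 * rho_l + rho_0)
print(M_l)  # 1.5
```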
# 3. The model
The following function implements the GEMTIP model to describe the effective electrical conductivity of a medium with spherical inclusions.
```
def gemtip_sh(params, w):
params_keys = ['rho_0','rho_l', 'f_l', 'tau_l', 'c_l']
sigma_e = np.array([])
# Parameter validation #
if type(params) != type({}):
print('Error! The params must be a dictionary.')
return sigma_e
this_keys = params.keys()
for key in params_keys:
if not key in this_keys:
print('Error! The parameter %s information was omitted in params.' % key)
return sigma_e
N = [ len(params[key]) for key in params_keys[1:] ]
if np.std(N) != 0:
print('Error in the number of parameters *_l, please review.')
return sigma_e
rho_0 = params['rho_0']
rho_l = np.array(params['rho_l'])
f_l = np.array(params['f_l'])
tau_l = np.array(params['tau_l'])
c_l = np.array(params['c_l'])
M_l = 3 * (rho_0 - rho_l) / (2 * rho_l + rho_0)
if np.sum(f_l) >= 1:
print('Error! The sum of all f_l must be less than one.')
return sigma_e
#############################
w = w.reshape(len(w), 1)
sum_elements = 0
for fl, Ml, Tl, cl in zip(f_l, M_l, tau_l, c_l):
sum_elements += fl*Ml * (1 - 1 / ( 1 + (1j * w * Tl) ** cl ))
sigma_e = (1 + sum_elements)/rho_0
return sigma_e
```
# 4. Example
```
f = np.logspace(-3, 7, 2000)
w = 2 * np.pi * f
params = {
'rho_0' : 20,
'rho_l' : [
5,
2,
1,
],
'f_l' : [
0.1,
0.2,
0.3,
],
'tau_l' : [
100,
1e-2,
1e-5,
],
'c_l' : [
0.8,
0.7,
0.9,
],
}
sigma_e = gemtip_sh(params, w)
rho_e = 1 / sigma_e
fig, ax1 = plt.subplots(dpi = 120)
color = 'k'
ax1.set_xlabel('Frecuencia [Hz]')
ax1.set_ylabel(r'$real\{\rho_e\}$ $[\Omega m]$', color=color)
ax1.semilogx(f, rho_e.real, color=color)
ax1.tick_params(axis='y', labelcolor=color)
ax2 = ax1.twinx()
color = 'tab:red'
ax2.set_ylabel(r'$-imag\{\rho_e\}$ $[\Omega m]$', color=color) # we already handled the x-label with ax1
ax2.semilogx(f, -rho_e.imag, color=color)
ax2.tick_params(axis='y', labelcolor=color)
fig.tight_layout()
```
# 5. Self-check questions
Answer the following self-check questions:
+ What are the input and output data of the gemtip_sh function?
+ How is the electrical conductivity computed by the gemtip_sh function?
+ The gemtip_sh function implements an error-detection algorithm (parameter validation); explain each of the errors the algorithm detects and give an example for each possible error.
+ How are the peaks of the imaginary part of the effective resistivity $\rho_e$ related to the relaxation times $\tau_l$?
+ Reproduce the spectra shown in the [Zhdanov 2008](http://www.cemi.utah.edu/PDF_70_10/2008b.pdf) article for spherical inclusions.
+ What differences did you find in the definition of the parameters when reproducing the spectra from the [Zhdanov 2008](http://www.cemi.utah.edu/PDF_70_10/2008b.pdf) article?
End
```
# Erasmus+ ICCT project (2018-1-SI01-KA203-047081)
# Toggle cell visibility
from IPython.display import HTML
tag = HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide()
} else {
$('div.input').show()
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
Toggle cell visibility <a href="javascript:code_toggle()">here</a>.''')
display(tag)
# Hide the code completely
# from IPython.display import HTML
# tag = HTML('''<style>
# div.input {
# display:none;
# }
# </style>''')
# display(tag)
%matplotlib inline
import control
import numpy
import sympy as sym
from IPython.display import display, Markdown
import ipywidgets as widgets
import matplotlib.pyplot as plt
#print a matrix latex-like
def bmatrix(a):
"""Returns a LaTeX bmatrix - by Damir Arbula (ICCT project)
:a: numpy array
:returns: LaTeX bmatrix as a string
"""
if len(a.shape) > 2:
raise ValueError('bmatrix can at most display two dimensions')
lines = str(a).replace('[', '').replace(']', '').splitlines()
rv = [r'\begin{bmatrix}']
rv += [' ' + ' & '.join(l.split()) + r'\\' for l in lines]
rv += [r'\end{bmatrix}']
return '\n'.join(rv)
# Display formatted matrix:
def vmatrix(a):
if len(a.shape) > 2:
raise ValueError('bmatrix can at most display two dimensions')
lines = str(a).replace('[', '').replace(']', '').splitlines()
rv = [r'\begin{vmatrix}']
rv += [' ' + ' & '.join(l.split()) + r'\\' for l in lines]
rv += [r'\end{vmatrix}']
return '\n'.join(rv)
#matrixWidget is a matrix looking widget built with a VBox of HBox(es) that returns a numPy array as value !
class matrixWidget(widgets.VBox):
def updateM(self,change):
for irow in range(0,self.n):
for icol in range(0,self.m):
self.M_[irow,icol] = self.children[irow].children[icol].value
#print(self.M_[irow,icol])
self.value = self.M_
def dummychangecallback(self,change):
pass
def __init__(self,n,m):
self.n = n
self.m = m
self.M_ = numpy.matrix(numpy.zeros((self.n,self.m)))
self.value = self.M_
widgets.VBox.__init__(self,
children = [
widgets.HBox(children =
[widgets.FloatText(value=0.0, layout=widgets.Layout(width='90px')) for i in range(m)]
)
for j in range(n)
])
#fill in widgets and tell interact to call updateM each time a children changes value
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].value = self.M_[irow,icol]
self.children[irow].children[icol].observe(self.updateM, names='value')
#value = Unicode('example@example.com', help="The email value.").tag(sync=True)
self.observe(self.updateM, names='value', type= 'All')
def setM(self, newM):
#disable callbacks, change values, and reenable
self.unobserve(self.updateM, names='value', type= 'All')
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].unobserve(self.updateM, names='value')
self.M_ = newM
self.value = self.M_
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].value = self.M_[irow,icol]
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].observe(self.updateM, names='value')
self.observe(self.updateM, names='value', type= 'All')
#self.children[irow].children[icol].observe(self.updateM, names='value')
#overlaod class for state space systems that DO NOT remove "useless" states (what "professor" of automatic control would do this?)
class sss(control.StateSpace):
def __init__(self,*args):
#call base class init constructor
control.StateSpace.__init__(self,*args)
#disable function below in base class
def _remove_useless_states(self):
pass
```
## Lateral position control of a quadrotor
<img src="Images\EX38-QuadrotorLat.PNG" alt="drawing" width="300x300">
The lateral velocity $u$ of a quadrotor can be controlled by tilting the vehicle through the roll angle $\theta$. Assume the vehicle is equipped with an autopilot that drives the roll angle $\theta$ to a commanded value $\theta_r$, so that the closed-loop roll dynamics can be modeled as the response of a second-order system with poles at $-20\pm25i$ and unit gain. When the vehicle is tilted by an angle $\theta$, the propellers generate a lateral force approximately equal to $F_l = F\sin{\theta} \approx mg\theta$, where $m=1800$ g is the vehicle mass and $g = 9.81$ m/$\text{s}^2$ is the gravitational acceleration. Wind acts on the vehicle as a lateral drag force $D = -cu = -0.8u$. The roll angle must remain within $\pm20$ degrees during the whole operation. The vehicle position is measured with GPS.
We want to design a regulator for the system whose input is $\theta_r$ and whose output is $s$, fulfilling the following requirements:
- settling time below 7 s;
- no overshoot;
- zero or minimal steady-state error in the response to a step in the position reference.
The lateral dynamics can be described by the following equations:
$$
m\dot{u} = -cu + mg\theta
$$
and
$$
\dot s = u
$$
A possible description of the roll dynamics is:
\begin{cases}
\dot{z} = \begin{bmatrix} 0 & 1 \\ -\omega_n^2 & -2\xi\omega_n \end{bmatrix}z + \begin{bmatrix} 0 \\ 1 \end{bmatrix}\theta_r \\
\theta = \begin{bmatrix} \omega_n^2 & 0 \end{bmatrix}z,
\end{cases}
where $z = \begin{bmatrix} z_1 & z_2 \end{bmatrix}^T$, $\omega_n=\sqrt{(-20)^2+(25)^2} \approx 32$ and $\xi = \frac{20}{\omega_n} \approx 0.62$.
Combining the two equations above and defining the state vector $x = \begin{bmatrix} x_1 & x_2 & x_3 & x_4 \end{bmatrix}^T= \begin{bmatrix} s & u & z_1 & z_2 \end{bmatrix}^T$, we can write:
\begin{cases}
\dot{x} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & -c/m & g\omega_n^2 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -\omega_n^2 & -2\xi\omega_n \end{bmatrix}x + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}\theta_r \\
y = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix}x
\end{cases}
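A quick numpy check, outside the interactive widgets, that the roll-dynamics block of the state matrix above really has its poles at $-20\pm25i$:

```python
import numpy as np

wn = np.sqrt(20**2 + 25**2)  # natural frequency, approx. 32
xi = 20 / wn                 # damping ratio, approx. 0.62

# Lower-right 2x2 block of the state matrix (roll dynamics)
Az = np.array([[0.0, 1.0],
               [-wn**2, -2 * xi * wn]])
print(np.sort_complex(np.linalg.eigvals(Az)))  # approx. [-20.-25.j, -20.+25.j]
```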
### Regulator design
#### Control law design
To obtain a practically zero steady-state error, we add a feedforward gain equal to the inverse of the closed-loop DC gain. The poles of the roll dynamics are fast; in order to keep $\theta$ bounded during the transient, we place the closed-loop poles so that they are 13 times smaller than the open-loop poles: ($-1.54\pm1.92i$). For the remaining poles we do the following: we slightly speed up the convergence of the pole $-c/m$ (=> $-0.7$), and place the last pole at $-0.7$.
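The pole placement described above can be sketched with plain numpy via Ackermann's formula for a single-input system; the desired poles are the ones stated in the text, and the numeric parameters $m = 1.8$ kg, $c = 0.8$ follow the problem statement (this is an illustrative sketch, not the widget code below, which uses the `control` library):

```python
import numpy as np

m, c, g = 1.8, 0.8, 9.81     # mass (kg), drag coefficient, gravity (m/s^2)
wn = np.sqrt(20**2 + 25**2)  # roll-dynamics natural frequency
xi = 20 / wn                 # roll-dynamics damping ratio

A = np.array([[0, 1, 0, 0],
              [0, -c / m, g * wn**2, 0],
              [0, 0, 0, 1],
              [0, 0, -wn**2, -2 * xi * wn]])
B = np.array([[0.0], [0.0], [0.0], [1.0]])

# Desired closed-loop poles, as stated in the text
poles = [-0.7, -0.7, -1.54 + 1.92j, -1.54 - 1.92j]

def ackermann(A, B, poles):
    """State-feedback gain K for a single-input system via Ackermann's formula."""
    n = A.shape[0]
    # Controllability matrix [B, AB, A^2 B, ...]
    ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    coeffs = np.poly(poles).real  # desired characteristic polynomial coefficients
    phiA = sum(coeffs[i] * np.linalg.matrix_power(A, n - i) for i in range(n + 1))
    e = np.zeros((1, n)); e[0, -1] = 1.0
    return e @ np.linalg.solve(ctrb, phiA)

K = ackermann(A, B, poles)
print(np.sort_complex(np.linalg.eigvals(A - B @ K)))
```

The eigenvalues of $A - BK$ land (up to numerical error) on the desired poles.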
#### Observer design
We design a full-state observer for the augmented system with poles close to $-30$, which ensures reasonably fast convergence.
### How to use this interactive example?
- Test the system in the presence of errors in the initial state estimates and, if needed, place the poles so that the system meets the given requirements.
```
# Preparatory cell
X0 = numpy.matrix('0.0; 0.0; 0.0; 0.0')
K = numpy.matrix([0,0,0,0])
L = numpy.matrix([[0],[0],[0],[0]])
X0w = matrixWidget(4,1)
X0w.setM(X0)
Kw = matrixWidget(1,4)
Kw.setM(K)
Lw = matrixWidget(4,1)
Lw.setM(L)
eig1c = matrixWidget(1,1)
eig2c = matrixWidget(2,1)
eig3c = matrixWidget(1,1)
eig4c = matrixWidget(2,1)
eig1c.setM(numpy.matrix([-0.7]))
eig2c.setM(numpy.matrix([[-1.54],[-1.92]]))
eig3c.setM(numpy.matrix([-0.7]))
eig4c.setM(numpy.matrix([[-1.],[-1.]]))
eig1o = matrixWidget(1,1)
eig2o = matrixWidget(2,1)
eig3o = matrixWidget(1,1)
eig4o = matrixWidget(2,1)
eig1o.setM(numpy.matrix([-30.]))
eig2o.setM(numpy.matrix([[-30.],[0.]]))
eig3o.setM(numpy.matrix([-30.1]))
eig4o.setM(numpy.matrix([[-30.2],[0.]]))
# Misc
#create dummy widget
DW = widgets.FloatText(layout=widgets.Layout(width='0px', height='0px'))
#create button widget
START = widgets.Button(
description='Test',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Test',
icon='check'
)
def on_start_button_clicked(b):
#This is a workaround to have interactive_output call the callback:
# force the value of the dummy widget to change
if DW.value> 0 :
DW.value = -1
else:
DW.value = 1
pass
START.on_click(on_start_button_clicked)
# Define type of method
selm = widgets.Dropdown(
options= ['Nastavi K in L', 'Nastavi lastne vrednosti'],
value= 'Nastavi lastne vrednosti',
description='',
disabled=False
)
# Define the number of complex eigenvalues
selec = widgets.Dropdown(
options= ['brez kompleksnih lastnih vrednosti', 'dve kompleksni lastni vrednosti', 'štiri kompleksne lastne vrednosti'],
value= 'dve kompleksni lastni vrednosti',
description='Controller complex eigenvalues:',
disabled=False
)
seleo = widgets.Dropdown(
options= ['brez kompleksnih lastnih vrednosti', 'dve kompleksni lastni vrednosti'],
value= 'brez kompleksnih lastnih vrednosti',
description='Observer complex eigenvalues:',
disabled=False
)
#define type of ipout
selu = widgets.Dropdown(
options=['impulzna funkcija', 'koračna funkcija', 'sinusoidna funkcija', 'kvadratni val'],
value='koračna funkcija',
description='Input:',
style = {'description_width': 'initial'},
disabled=False
)
# Define the values of the input
u = widgets.FloatSlider(
value=10,
min=0,
max=20,
step=1,
description='',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
period = widgets.FloatSlider(
value=0.5,
min=0.001,
max=10,
step=0.001,
description='Period: ',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.3f',
)
simTime = widgets.FloatText(
value=10,
description='',
disabled=False
)
# Support functions
def eigen_choice(selec,seleo):
if selec == 'brez kompleksnih lastnih vrednosti':
eig1c.children[0].children[0].disabled = False
eig2c.children[1].children[0].disabled = True
eig3c.children[0].children[0].disabled = False
eig4c.children[0].children[0].disabled = False
eig4c.children[1].children[0].disabled = True
eigc = 0
if seleo == 'brez kompleksnih lastnih vrednosti':
eig1o.children[0].children[0].disabled = False
eig2o.children[1].children[0].disabled = True
eig3o.children[0].children[0].disabled = False
eig4o.children[0].children[0].disabled = False
eig4o.children[1].children[0].disabled = True
eigo = 0
if selec == 'dve kompleksni lastni vrednosti':
eig1c.children[0].children[0].disabled = False
eig2c.children[1].children[0].disabled = False
eig3c.children[0].children[0].disabled = False
eig4c.children[0].children[0].disabled = True
eig4c.children[1].children[0].disabled = True
eigc = 2
if seleo == 'dve kompleksni lastni vrednosti':
eig1o.children[0].children[0].disabled = False
eig2o.children[1].children[0].disabled = False
eig3o.children[0].children[0].disabled = False
eig4o.children[0].children[0].disabled = True
eig4o.children[1].children[0].disabled = True
eigo = 2
if selec == 'štiri kompleksne lastne vrednosti':
eig1c.children[0].children[0].disabled = True
eig2c.children[1].children[0].disabled = False
eig3c.children[0].children[0].disabled = True
eig4c.children[0].children[0].disabled = False
eig4c.children[1].children[0].disabled = False
eigc = 4
if seleo == 'štiri kompleksne lastne vrednosti':
eig1o.children[0].children[0].disabled = True
eig2o.children[1].children[0].disabled = False
eig3o.children[0].children[0].disabled = True
eig4o.children[0].children[0].disabled = False
eig4o.children[1].children[0].disabled = False
eigo = 4
return eigc, eigo
def method_choice(selm):
if selm == 'Nastavi K in L':
method = 1
selec.disabled = True
seleo.disabled = True
if selm == 'Nastavi lastne vrednosti':
method = 2
selec.disabled = False
seleo.disabled = False
return method
c = 0.8
m = 1.8
omega = 32
xi = 0.62
g = 9.81
A = numpy.matrix([[0, 1, 0, 0],
[0, -c/m, g*omega**2, 0],
[0, 0, 0, 1],
[0, 0, -omega**2, -2*xi*omega]])
B = numpy.matrix([[0],[0],[0],[1]])
C = numpy.matrix([[1,0,0,0]])
OLpoles, OLvectors = numpy.linalg.eig(A)
def main_callback2(X0w, K, L, eig1c, eig2c, eig3c, eig4c, eig1o, eig2o, eig3o, eig4o, u, period, selm, selec, seleo, selu, simTime, DW):
eigc, eigo = eigen_choice(selec,seleo)
method = method_choice(selm)
if method == 1:
solc = numpy.linalg.eig(A-B*K)
solo = numpy.linalg.eig(A-L*C)
if method == 2:
# for better numerical stability of place
if eig1c[0,0]==eig2c[0,0] or eig1c[0,0]==eig3c[0,0] or eig1c[0,0]==eig4c[0,0]:
eig1c[0,0] *= 1.01
if eig2c[0,0]==eig3c[0,0] or eig2c[0,0]==eig4c[0,0]:
eig3c[0,0] *= 1.015
if eig1o[0,0]==eig2o[0,0] or eig1o[0,0]==eig3o[0,0] or eig1o[0,0]==eig4o[0,0]:
eig1o[0,0] *= 1.01
if eig2o[0,0]==eig3o[0,0] or eig2o[0,0]==eig4o[0,0]:
eig3o[0,0] *= 1.015
if eigc == 0:
K = control.acker(A, B, [eig1c[0,0], eig2c[0,0], eig3c[0,0], eig4c[0,0]])
Kw.setM(K)
if eigc == 2:
K = control.acker(A, B, [eig3c[0,0],
eig1c[0,0],
complex(eig2c[0,0], eig2c[1,0]),
complex(eig2c[0,0],-eig2c[1,0])])
Kw.setM(K)
if eigc == 4:
K = control.acker(A, B, [complex(eig4c[0,0], eig4c[1,0]),
complex(eig4c[0,0],-eig4c[1,0]),
complex(eig2c[0,0], eig2c[1,0]),
complex(eig2c[0,0],-eig2c[1,0])])
Kw.setM(K)
if eigo == 0:
L = control.place(A.T, C.T, [eig1o[0,0], eig2o[0,0], eig3o[0,0], eig4o[0,0]]).T
Lw.setM(L)
if eigo == 2:
L = control.place(A.T, C.T, [eig3o[0,0],
eig1o[0,0],
complex(eig2o[0,0], eig2o[1,0]),
complex(eig2o[0,0],-eig2o[1,0])]).T
Lw.setM(L)
if eigo == 4:
L = control.place(A.T, C.T, [complex(eig4o[0,0], eig4o[1,0]),
complex(eig4o[0,0],-eig4o[1,0]),
complex(eig2o[0,0], eig2o[1,0]),
complex(eig2o[0,0],-eig2o[1,0])]).T
Lw.setM(L)
sys = sss(A,B,[[1,0,0,0],[0,0,0,0]],[[0],[1]])
syse = sss(A-L*C,numpy.hstack((B,L)),numpy.eye(4),numpy.zeros((4,2)))
sysc = sss(0,[0,0,0,0],0,-K)
sys_append = control.append(sys,syse,sysc)
sys_CL = control.connect(sys_append,
[[1,7],[2,2],[3,1],[4,3],[5,4],[6,5],[7,6]],
[1],
[1,2])
X0w1 = numpy.zeros((8,1))
X0w1[4,0] = X0w[0,0]
X0w1[5,0] = X0w[1,0]
X0w1[6,0] = X0w[2,0]
X0w1[7,0] = X0w[3,0]
u1 = u
try:
DCgain = control.dcgain(sys_CL[0,0])
u = u/DCgain
except:
print("Error computing the DC gain of the closed-loop controlled system. The feedforward gain is set to 1.")
DCgain = 1
if simTime != 0:
T = numpy.linspace(0, simTime, 10000)
else:
T = numpy.linspace(0, 1, 10000)
if selu == 'impulzna funkcija': #selu
U = [0 for t in range(0,len(T))]
U[0] = u
U1 = [0 for t in range(0,len(T))]
U1[0] = u1
T, yout, xout = control.forced_response(sys_CL,T,U,X0w1)
if selu == 'koračna funkcija':
U = [u for t in range(0,len(T))]
U1 = [u1 for t in range(0,len(T))]
T, yout, xout = control.forced_response(sys_CL,T,U,X0w1)
if selu == 'sinusoidna funkcija':
U = u*numpy.sin(2*numpy.pi/period*T)
U1 = u1*numpy.sin(2*numpy.pi/period*T)
T, yout, xout = control.forced_response(sys_CL,T,U,X0w1)
if selu == 'kvadratni val':
U = u*numpy.sign(numpy.sin(2*numpy.pi/period*T))
U1 = u1*numpy.sign(numpy.sin(2*numpy.pi/period*T))
T, yout, xout = control.forced_response(sys_CL,T,U,X0w1)
try:
step_info_dict = control.step_info(sys_CL[0,0],SettlingTimeThreshold=0.05,T=T)
print('Open-loop system poles: ', numpy.round(OLpoles,3))
print('Closed-loop system')
print('System response information: \n\tRise time [s] =',step_info_dict['RiseTime'],'\n\tSettling time (5%) [s] =',step_info_dict['SettlingTime'],'\n\tOvershoot [%] =',step_info_dict['Overshoot'])
print('')
print('Maximum value of theta (share of the 20° limit) =', max(abs(xout[2]*omega**2))/(numpy.pi/180*20)*100)
except:
print("Error computing the system response information.")
print("Closed-loop system gain =",DCgain)
fig = plt.figure(num='Simulacija 1', figsize=(14,12))
fig.add_subplot(221)
plt.title('System response')
plt.ylabel('Output')
plt.plot(T,yout[0],T,U1,'r--')
plt.xlabel('$t$ [s]')
plt.axvline(x=0,color='black',linewidth=0.8)
plt.axhline(y=0,color='black',linewidth=0.8)
plt.legend(['$y$','Reference'])
plt.grid()
fig.add_subplot(222)
plt.title('Input')
plt.ylabel('$u$')
plt.plot(T,yout[1])
plt.xlabel('$t$ [s]')
plt.axvline(x=0,color='black',linewidth=0.8)
plt.axhline(y=0,color='black',linewidth=0.8)
plt.grid()
fig.add_subplot(223)
plt.title('Third state response')
plt.ylabel(r'$x_3$')
plt.plot(T,xout[2],
T,[20*numpy.pi/180/omega**2 for i in range(len(T))],'r--',
T,[-20*numpy.pi/180/omega**2 for i in range(len(T))],'r--')
plt.xlabel('$t$ [s]')
plt.axvline(x=0,color='black',linewidth=0.8)
plt.axhline(y=0,color='black',linewidth=0.8)
plt.grid()
fig.add_subplot(224)
plt.title('State estimation error')
plt.ylabel('State estimation error')
plt.plot(T,xout[4]-xout[0])
plt.plot(T,xout[5]-xout[1])
plt.plot(T,xout[6]-xout[2])
plt.plot(T,xout[7]-xout[3])
plt.xlabel('$t$ [s]')
plt.axvline(x=0,color='black',linewidth=0.8)
plt.axhline(y=0,color='black',linewidth=0.8)
plt.legend(['$e_{1}$','$e_{2}$','$e_{3}$','$e_{4}$'])
plt.grid()
#plt.tight_layout()
alltogether2 = widgets.VBox([widgets.HBox([selm,
selec,
seleo,
selu]),
widgets.Label(' ',border=3),
widgets.HBox([widgets.HBox([widgets.Label('K:',border=3), Kw,
widgets.Label('Eigenvalues:',border=3),
widgets.HBox([eig1c,
eig2c,
eig3c,
eig4c])])]),
widgets.Label(' ',border=3),
widgets.HBox([widgets.VBox([widgets.HBox([widgets.Label('L:',border=3), Lw, widgets.Label(' ',border=3),
widgets.Label(' ',border=3),
widgets.Label('Eigenvalues:',border=3),
eig1o,
eig2o,
eig3o,
eig4o,
widgets.Label(' ',border=3),
widgets.Label(' ',border=3),
widgets.Label('X0 est.:',border=3), X0w]),
widgets.Label(' ',border=3),
widgets.HBox([
widgets.VBox([widgets.Label('Simulation time [s]:',border=3)]),
widgets.VBox([simTime])])]),
widgets.Label(' ',border=3)]),
widgets.Label(' ',border=3),
widgets.HBox([widgets.Label('Reference [m]:',border=3),
u,
period,
START])])
out2 = widgets.interactive_output(main_callback2, {'X0w':X0w, 'K':Kw, 'L':Lw,
'eig1c':eig1c, 'eig2c':eig2c, 'eig3c':eig3c, 'eig4c':eig4c,
'eig1o':eig1o, 'eig2o':eig2o, 'eig3o':eig3o, 'eig4o':eig4o,
'u':u, 'period':period, 'selm':selm, 'selec':selec, 'seleo':seleo, 'selu':selu, 'simTime':simTime, 'DW':DW})
out2.layout.height = '900px'
display(out2, alltogether2)
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Azure Machine Learning Pipeline with HyperDriveStep
This notebook demonstrates the use of HyperDriveStep in an AML pipeline.
## Prerequisites and Azure Machine Learning Basics
If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration Notebook](https://aka.ms/pl-config) first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc.
## Azure Machine Learning and Pipeline SDK-specific imports
```
import azureml.core
from azureml.core import Workspace, Experiment
from azureml.core.datastore import Datastore
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.exceptions import ComputeTargetException
from azureml.data.data_reference import DataReference
from azureml.pipeline.steps import HyperDriveStep, HyperDriveStepRun
from azureml.pipeline.core import Pipeline, PipelineData
from azureml.train.dnn import TensorFlow
# from azureml.train.hyperdrive import *
from azureml.train.hyperdrive import RandomParameterSampling, BanditPolicy, HyperDriveConfig, PrimaryMetricGoal
from azureml.train.hyperdrive import choice, loguniform
import os
import shutil
import urllib
import numpy as np
import matplotlib.pyplot as plt
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
```
## Initialize workspace
Initialize a workspace object from persisted configuration. If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure the config file is present at .\config.json
```
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep = '\n')
```
## Create an Azure ML experiment
Let's create an experiment named "tf-mnist" and a folder to hold the training scripts.
> The best practice is to use separate folders for scripts and their dependent files for each step. This helps reduce the size of the snapshot created for the step (only the specific folder is snapshotted). Since changes in any files in the `source_directory` trigger a re-upload of the snapshot, keeping folders separate preserves reuse of the step when nothing in its `source_directory` has changed.
> The script runs will be recorded under the experiment in Azure.
```
script_folder = './tf-mnist'
os.makedirs(script_folder, exist_ok=True)
exp = Experiment(workspace=ws, name='Hyperdrive_sample')
```
## Download MNIST dataset
In order to train on the MNIST dataset we will first need to download it directly from Yann LeCun's web site and save the files in a `data` folder locally.
```
os.makedirs('./data/mnist', exist_ok=True)
urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz', filename = './data/mnist/train-images.gz')
urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz', filename = './data/mnist/train-labels.gz')
urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz', filename = './data/mnist/test-images.gz')
urllib.request.urlretrieve('http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz', filename = './data/mnist/test-labels.gz')
```
## Show some sample images
Let's load the downloaded compressed files into numpy arrays using some utility functions included in the `utils.py` library file from the current folder. Then we use `matplotlib` to plot 30 random images from the dataset along with their labels.
```
from utils import load_data
# note we also shrink the intensity values (X) from 0-255 to 0-1. This helps the neural network converge faster.
X_train = load_data('./data/mnist/train-images.gz', False) / 255.0
y_train = load_data('./data/mnist/train-labels.gz', True).reshape(-1)
X_test = load_data('./data/mnist/test-images.gz', False) / 255.0
y_test = load_data('./data/mnist/test-labels.gz', True).reshape(-1)
count = 0
sample_size = 30
plt.figure(figsize = (16, 6))
for i in np.random.permutation(X_train.shape[0])[:sample_size]:
count = count + 1
plt.subplot(1, sample_size, count)
plt.axhline('')
plt.axvline('')
plt.text(x = 10, y = -10, s = y_train[i], fontsize = 18)
plt.imshow(X_train[i].reshape(28, 28), cmap = plt.cm.Greys)
plt.show()
```
## Upload MNIST dataset to blob datastore
A [datastore](https://docs.microsoft.com/azure/machine-learning/service/how-to-access-data) is a place where data can be stored and then made accessible to a Run, either by mounting or by copying the data to the compute target. In the next step, we will use Azure Blob Storage and upload the training and test sets into the Azure Blob datastore, which we will later mount on the compute cluster for training.
```
ds = ws.get_default_datastore()
ds.upload(src_dir='./data/mnist', target_path='mnist', overwrite=True, show_progress=True)
```
## Retrieve or create an Azure Machine Learning compute
Azure Machine Learning Compute is a service for provisioning and managing clusters of Azure virtual machines for running machine learning workloads. Let's create a new Azure Machine Learning Compute in the current workspace, if it doesn't already exist. We will then run the training script on this compute target.
If a compute target with the given name cannot be found, we will create a new one here. This process is broken down into the following steps:
1. Create the configuration
2. Create the Azure Machine Learning compute
**This process will take a few minutes and provides only sparse output along the way. Please make sure to wait until the call returns before moving to the next cell.**
```
cluster_name = "gpu-cluster"
try:
compute_target = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing compute target {}.'.format(cluster_name))
except ComputeTargetException:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size="STANDARD_NC6",
max_nodes=4)
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True, timeout_in_minutes=20)
print("Azure Machine Learning Compute attached")
```
## Copy the training files into the script folder
The TensorFlow training script is already created for you. You can simply copy it into the script folder, together with the utility library used to load compressed data file into numpy array.
```
# the training logic is in the tf_mnist.py file.
shutil.copy('./tf_mnist.py', script_folder)
# the utils.py just helps loading data from the downloaded MNIST dataset into numpy arrays.
shutil.copy('./utils.py', script_folder)
```
## Create TensorFlow estimator
Next, we construct a [TensorFlow](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn.tensorflow?view=azure-ml-py) estimator object.
The TensorFlow estimator provides a simple way of launching a TensorFlow training job on a compute target. It will automatically provide a docker image that has TensorFlow installed -- if additional pip or conda packages are required, their names can be passed in via the `pip_packages` and `conda_packages` arguments and they will be included in the resulting docker image.
The TensorFlow estimator also takes a `framework_version` parameter -- if no version is provided, the estimator will default to the latest version supported by AzureML. Use `TensorFlow.get_supported_versions()` to get a list of all versions supported by your current SDK version or see the [SDK documentation](https://docs.microsoft.com/en-us/python/api/azureml-train-core/azureml.train.dnn?view=azure-ml-py) for the versions supported in the most current release.
```
est = TensorFlow(source_directory=script_folder,
compute_target=compute_target,
entry_script='tf_mnist.py',
use_gpu=True,
framework_version='1.13')
```
## Intelligent hyperparameter tuning
Now let's try hyperparameter tuning by launching multiple runs on the cluster. First let's define the parameter space using random sampling.
In this example we will use random sampling to try different configuration sets of hyperparameters to maximize our primary metric, the best validation accuracy (`validation_acc`).
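For intuition, `loguniform(-6, -1)` draws a value whose natural logarithm is uniform on $[-6, -1]$, i.e. learning rates between roughly $e^{-6} \approx 0.0025$ and $e^{-1} \approx 0.37$, spread evenly across orders of magnitude. A minimal stand-in sketch of that sampling rule (not the Azure ML implementation):

```python
import math
import random

def loguniform_sample(lo, hi, rng=random):
    # A value whose log is uniform on [lo, hi]: exp(u) with u ~ Uniform(lo, hi)
    return math.exp(rng.uniform(lo, hi))

# Every sampled learning rate lands between e^-6 and e^-1
samples = [loguniform_sample(-6, -1) for _ in range(100)]
```

This is why log-uniform sampling is the usual choice for learning rates: a plain uniform distribution over the same interval would spend almost all of its samples near the upper end.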
```
ps = RandomParameterSampling(
{
'--batch-size': choice(25, 50, 100),
'--first-layer-neurons': choice(10, 50, 200, 300, 500),
'--second-layer-neurons': choice(10, 50, 200, 500),
'--learning-rate': loguniform(-6, -1)
}
)
```
Now we will define an early termination policy. The `BanditPolicy` essentially checks the job every 2 iterations. If the primary metric (defined later) falls outside of the top 10% range, Azure ML terminates the job. This saves us from continuing to explore hyperparameters that don't show promise of helping reach our target metric.
Refer [here](https://docs.microsoft.com/azure/machine-learning/service/how-to-tune-hyperparameters#specify-an-early-termination-policy) for more information on the BanditPolicy and other policies available.
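For intuition, the `slack_factor` cutoff can be sketched as follows. This is a simplified stand-in assuming a maximized metric; the actual policy is evaluated by the Azure ML service, not by user code:

```python
def bandit_should_terminate(run_metric, best_metric, slack_factor=0.1):
    # With a maximized metric, a run is cancelled when its metric drops
    # below best_metric / (1 + slack_factor)
    return run_metric < best_metric / (1.0 + slack_factor)

# With a best validation_acc of 0.95 and slack_factor 0.1,
# the cutoff is 0.95 / 1.1 ≈ 0.8636
```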
```
early_termination_policy = BanditPolicy(evaluation_interval=2, slack_factor=0.1)
```
Now we are ready to configure a run configuration object, and specify the primary metric `validation_acc` that's recorded in your training runs. If you go back to visit the training script, you will notice that this value is being logged after every epoch (a full batch set). We also want to tell the service that we are looking to maximize this value. We set the maximum total runs to 4 and the maximum concurrent runs to 4, which is the same as the number of nodes in our compute cluster.
```
hd_config = HyperDriveConfig(estimator=est,
hyperparameter_sampling=ps,
policy=early_termination_policy,
primary_metric_name='validation_acc',
primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
max_total_runs=4,
max_concurrent_runs=4)
```
## Add HyperDrive as a step of pipeline
### Set up an input for the HyperDrive step
Let's set up a data reference for the inputs of the HyperDrive step.
```
data_folder = DataReference(
datastore=ds,
data_reference_name="mnist_data")
```
### HyperDriveStep
HyperDriveStep can be used to run a HyperDrive job as a step in a pipeline. It takes the following parameters:
- **name:** Name of the step
- **hyperdrive_config:** A HyperDriveConfig that defines the configuration for this HyperDrive run
- **estimator_entry_script_arguments:** List of command-line arguments for estimator entry script
- **inputs:** List of input port bindings
- **outputs:** List of output port bindings
- **metrics_output:** Optional value specifying the location to store HyperDrive run metrics as a JSON file
- **allow_reuse:** whether to allow reuse
- **version:** version
```
metrics_output_name = 'metrics_output'
metrics_data = PipelineData(name='metrics_data',
datastore=ds,
pipeline_output_name=metrics_output_name)
hd_step_name='hd_step01'
hd_step = HyperDriveStep(
name=hd_step_name,
hyperdrive_config=hd_config,
estimator_entry_script_arguments=['--data-folder', data_folder],
inputs=[data_folder],
metrics_output=metrics_data)
```
### Run the pipeline
```
pipeline = Pipeline(workspace=ws, steps=[hd_step])
pipeline_run = exp.submit(pipeline)
```
### Monitor using widget
```
from azureml.widgets import RunDetails
RunDetails(pipeline_run).show()
```
### Wait for the completion of this Pipeline run
```
pipeline_run.wait_for_completion()
```
### Retrieve the metrics
Outputs of the above run can be used as inputs for other steps in the pipeline. In this tutorial, we will show the resulting metrics.
```
metrics_output = pipeline_run.get_pipeline_output(metrics_output_name)
num_file_downloaded = metrics_output.download('.', show_progress=True)
import pandas as pd
import json
with open(metrics_output._path_on_datastore) as f:
metrics_output_result = f.read()
deserialized_metrics_output = json.loads(metrics_output_result)
df = pd.DataFrame(deserialized_metrics_output)
df
```
## Find and register best model
When all the jobs finish, we can find out the one that has the highest accuracy.
```
hd_step_run = HyperDriveStepRun(step_run=pipeline_run.find_step_run(hd_step_name)[0])
best_run = hd_step_run.get_best_run_by_primary_metric()
best_run
```
Now let's list the model files uploaded during the run.
```
print(best_run.get_file_names())
```
We can then register the folder (and all files in it) as a model named `tf-dnn-mnist` under the workspace for deployment.
```
model = best_run.register_model(model_name='tf-dnn-mnist', model_path='outputs/model')
```
## Deploy the model in ACI
Now we are ready to deploy the model as a web service running in Azure Container Instance [ACI](https://azure.microsoft.com/en-us/services/container-instances/).
### Create score.py
First, we will create a scoring script that will be invoked by the web service call.
* Note that the scoring script must have two required functions, `init()` and `run(input_data)`.
* In `init()` function, you typically load the model into a global object. This function is executed only once when the Docker container is started.
* In `run(input_data)` function, the model is used to predict a value based on the input data. The input and output to `run` typically use JSON as serialization and de-serialization format but you are not limited to that.
```
%%writefile score.py
import json
import numpy as np
import os
import tensorflow as tf
from azureml.core.model import Model
def init():
global X, output, sess
tf.reset_default_graph()
model_root = Model.get_model_path('tf-dnn-mnist')
saver = tf.train.import_meta_graph(os.path.join(model_root, 'mnist-tf.model.meta'))
X = tf.get_default_graph().get_tensor_by_name("network/X:0")
output = tf.get_default_graph().get_tensor_by_name("network/output/MatMul:0")
sess = tf.Session()
saver.restore(sess, os.path.join(model_root, 'mnist-tf.model'))
def run(raw_data):
data = np.array(json.loads(raw_data)['data'])
# make prediction
out = output.eval(session=sess, feed_dict={X: data})
y_hat = np.argmax(out, axis=1)
return y_hat.tolist()
```
### Create myenv.yml
We also need to create an environment file so that Azure Machine Learning can install the necessary packages in the Docker image which are required by your scoring script. In this case, we need to specify the `numpy` and `tensorflow` packages.
```
from azureml.core.runconfig import CondaDependencies
cd = CondaDependencies.create()
cd.add_conda_package('numpy')
cd.add_tensorflow_conda_package()
cd.save_to_file(base_directory='./', conda_file_path='myenv.yml')
print(cd.serialize_to_string())
```
### Deploy to ACI
Now we can deploy. **This cell will run for about 7-8 minutes**. Behind the scenes, AzureML will build a Docker container image with the given configuration, if one is not already available. This image will be deployed to the ACI infrastructure, and the scoring script and model will be mounted on the container. The model will then be available as a web service with an HTTP endpoint to accept REST client calls.
```
%%time
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice
from azureml.core.webservice import Webservice
from azureml.core.model import Model
inference_config = InferenceConfig(runtime = "python",
entry_script = "score.py",
conda_file = "myenv.yml")
aciconfig = AciWebservice.deploy_configuration(cpu_cores=1,
memory_gb=1,
tags={'name':'mnist', 'framework': 'TensorFlow DNN'},
description='Tensorflow DNN on MNIST')
service = Model.deploy(ws, 'tf-mnist-svc', [model], inference_config, aciconfig)
service.wait_for_deployment(show_output=True)
```
**Tip: If something goes wrong with the deployment, the first thing to look at is the logs from the service by running the following command:**
```
print(service.get_logs())
```
This is the scoring web service endpoint:
```
print(service.scoring_uri)
```
### Test the deployed model
Let's test the deployed model. Pick 30 random samples from the test set and send them to the web service hosted in ACI. Note here we are using the `run` API in the SDK to invoke the service. You can also make raw HTTP calls using any HTTP tool such as curl.
After the invocation, we print the returned predictions and plot them along with the input images. We use a red font color and an inverted image (white on black) to highlight the misclassified samples. Note that since the model accuracy is pretty high, you might have to run the cell below a few times before you see a misclassified sample.
```
import json
# find 30 random samples from test set
n = 30
sample_indices = np.random.permutation(X_test.shape[0])[0:n]
test_samples = json.dumps({"data": X_test[sample_indices].tolist()})
test_samples = bytes(test_samples, encoding='utf8')
# predict using the deployed model
result = service.run(input_data=test_samples)
# compare actual value vs. the predicted values:
i = 0
plt.figure(figsize = (20, 1))
for s in sample_indices:
plt.subplot(1, n, i + 1)
plt.axhline('')
plt.axvline('')
# use different color for misclassified sample
font_color = 'red' if y_test[s] != result[i] else 'black'
clr_map = plt.cm.gray if y_test[s] != result[i] else plt.cm.Greys
plt.text(x=10, y=-10, s=result[i], fontsize=18, color=font_color)
plt.imshow(X_test[s].reshape(28, 28), cmap=clr_map)
i = i + 1
plt.show()
```
We can also send raw HTTP request to the service.
```
import requests
# send a random row from the test set to score
random_index = np.random.randint(0, len(X_test)-1)
input_data = "{\"data\": [" + str(list(X_test[random_index])) + "]}"
headers = {'Content-Type':'application/json'}
resp = requests.post(service.scoring_uri, input_data, headers=headers)
print("POST to url", service.scoring_uri)
print("input data:", input_data)
print("label:", y_test[random_index])
print("prediction:", resp.text)
```
Let's look at the workspace after the web service was deployed. You should see
* a registered model named 'tf-dnn-mnist'
* an image with a docker image location pointing to your workspace's Azure Container Registry (ACR)
* a webservice called 'tf-mnist-svc' with some scoring URL
```
models = ws.models
for name, model in models.items():
print("Model: {}, ID: {}".format(name, model.id))
images = ws.images
for name, image in images.items():
print("Image: {}, location: {}".format(name, image.image_location))
webservices = ws.webservices
for name, webservice in webservices.items():
print("Webservice: {}, scoring URI: {}".format(name, webservice.scoring_uri))
```
## Clean up
You can delete the ACI deployment with a simple delete API call.
```
service.delete()
```
# Hide your messy video background using neural nets, Part 2
> "Using our trained model to blur the background of video frames with OpenCV."
- toc: true
- branch: master
- badges: true
- comments: false
- categories: [fastai, privacy, opencv]
- image: images/articles/2021-backgroundblur-2/thumbnail.jpg
- hide: false
```
#hide
!pip install fastai==2.2.5 opencv-python==4.5.1.48 -q
#hide
from fastai.vision.all import *
import cv2
```
In [Part 1](https://deeplearning.berlin/fastai/privacy/getting%20started/2021/02/09/Background-Blur-Part-1.html) we created our own dataset of webcam pictures and trained a model that separates the person from the background. Now, we're going to use this model to blur the background of a webcam video.
<video width="640" height="360" controls autoplay loop muted playsinline>
<source src="/images/articles/2021-backgroundblur-2/smooth.mp4" type="video/mp4">
<source src="/images/articles/2021-backgroundblur-2/smooth.webm" type="video/webm">
Your browser does not support the video tag.
</video>
## Load Learner
The `Learner` expects to find all functions that were defined when creating it, in our case that is `create_mask`. We don't need any custom functionality however, so we define an empty `create_mask` function.
```
def create_mask(): pass
```
Load the `Learner` we exported in Part 1. If you have not trained a model in part 1, you can download [my model](https://www.dropbox.com/s/nl8u2veoa1bywwl/unet-resnet18-person-background.pkl?dl=0) and play around. I can't guarantee that it works under any conditions other than my living room though 😀
```
learn = load_learner('unet-resnet18-person-background.pkl')
```
## Practicing Predictions
> Note: You can skip this part and jump to the [OpenCV part](#Constructing-the-Image-With-Blurred-Background). I included this section because I wanted to see and show the different outputs of the `predict` function.
Let's pick a random file from our training images to practice getting the model predictions:
```
fnames = get_image_files('training')
image = fnames[0]
PILImage.create(image).show();
```
Get predictions of one training image:
```
preds = learn.predict(image)
#collapse_output
preds
```
There are different tensors in the predictions. `preds[0]` contains the output after `argmax`, so it picks the class with the higher probability. Every pixel is either a `0` or a `1` in line with our two classes.
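As a toy illustration of that `argmax` step (made-up 2×2 probabilities, not actual model output):

```python
import numpy as np

# Channel 0: P(background), channel 1: P(person), for a tiny 2x2 "image"
probs = np.array([[[0.9, 0.2],
                   [0.4, 0.1]],
                  [[0.1, 0.8],
                   [0.6, 0.9]]])

# argmax over the class axis picks the more probable class per pixel
mask = probs.argmax(axis=0)
# mask -> [[0, 1], [1, 1]]
```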
```
preds[0].show(cmap='Blues', vmin=0, vmax=1);
#collapse
print(f'''unique values: {np.unique(preds[0])}
type: {type(preds[0])}
data type: {preds[0].dtype}''')
```
`preds[1]` contains the same values, just in a different type (`TensorImage` instead of `TensorMask`)
```
preds[1].show(cmap='Blues', vmin=0, vmax=1);
#collapse
print(f'''unique values: {np.unique(preds[1])}
type: {type(preds[1])}
data type: {preds[1].dtype}''')
```
`preds[2]` is a tensor with three dimensions. It contains the probabilities of the two classes as float values.
```
preds[2].shape
#collapse
print(f'''unique values: {np.unique(preds[2])}
type: {type(preds[2])}
data type: {preds[2].dtype}''')
```
Probabilities for the `background` class:
```
preds[2][0].show(cmap='Blues');
```
Probabilities for the `person` class:
```
preds[2][1].show(cmap='Blues');
```
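To make the relationship between the probability tensor and the hard mask concrete, here is a tiny NumPy sketch (toy values, not actual model output): taking `argmax` over the class axis of a class-probability tensor yields the 0/1 mask we saw in `preds[0]`.

```python
import numpy as np

# Toy (2, 2, 2) probability tensor: axis 0 holds the two classes
probs = np.array([[[0.9, 0.2],
                   [0.4, 0.7]],   # class 0 (background)
                  [[0.1, 0.8],
                   [0.6, 0.3]]])  # class 1 (person)

# argmax over the class axis gives the hard 0/1 mask (like preds[0])
mask = probs.argmax(axis=0)
print(mask)  # [[0 1]
             #  [1 0]]
```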
## Constructing the Image With Blurred Background
We could use the clean predictions in `preds[1]`, with just `0`s and `1`s, as a simple mask. I tried that initially and it worked, but it resulted in some rough edges.
Instead, we will use the raw probabilities from `preds[2][1]`, since they produce a smoother image. You can try for yourself which one you like better.
Let's define a simple blur function.
```
def blur(img: np.ndarray, kernel_size=5, sigma_x=0) -> np.ndarray:
# Make sure that kernel size is an odd number
if kernel_size % 2 == 0:
kernel_size += 1
return cv2.GaussianBlur(img, (kernel_size, kernel_size), sigma_x)
```
We now define a function that blurs the background and blends in the original frame with an alpha mask. Thank you to [learnopencv.com](https://learnopencv.com/alpha-blending-using-opencv-cpp-python/) for their useful code!
```
def masked_blur(image: np.ndarray, mask: TensorImage) -> np.ndarray:
"mask must have dimensions (360,640)"
foreground = cv2.resize(image, (640,360), interpolation=cv2.INTER_AREA)
background = blur(foreground, kernel_size=61)
# Convert uint8 to float
foreground = foreground.astype(np.float32)
background = background.astype(np.float32)
# Some transforms to match the dimensions and type of the cv2 image
alpha = to_np(mask.unsqueeze(2).repeat(1,1,3)).astype(np.float32)
# Multiply the foreground with the alpha matte
foreground = cv2.multiply(alpha, foreground)
# Multiply the background with ( 1 - alpha )
background = cv2.multiply(1.0 - alpha, background)
# Add the masked foreground and background.
result = cv2.add(foreground, background)
# Convert to integer
result = result.astype(np.uint8)
return result
```
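The core of `masked_blur` is standard alpha blending: `result = alpha * foreground + (1 - alpha) * background`, applied per pixel. A minimal NumPy illustration with made-up pixel values:

```python
import numpy as np

# One pixel, 75% foreground weight: result = alpha*fg + (1 - alpha)*bg
alpha = 0.75
fg, bg = np.array([200.0]), np.array([40.0])
result = alpha * fg + (1 - alpha) * bg
print(result)  # [160.]
```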
Read an image and create predictions:
```
frame = cv2.imread(str(image))
preds = learn.predict(image)
alpha = preds[2][1]
```
Create the resulting image and have a look:
```
output = masked_blur(frame, alpha)
output_rgb = cv2.cvtColor(output, cv2.COLOR_BGR2RGB)
PILImage.create(output_rgb)
```
Apart from my grumpy look, I think this is quite a nice result!
## Processing a Video Clip
For now, we work with a saved video file. To handle live webcam video, we would have to speed up inference considerably. On my current Paperspace Gradient machine (P4000), it runs at about 0.5 FPS.
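To quantify that throughput, a small timing helper can be used (not part of the original notebook; `process_frame` is a stand-in for the predict-plus-blur pipeline):

```python
import time

def measure_fps(process_frame, frames, warmup=1):
    """Return the average frames per second of a frame-processing callable."""
    # Warm-up calls absorb one-time setup cost (e.g. first GPU inference)
    for frame in frames[:warmup]:
        process_frame(frame)
    start = time.perf_counter()
    for frame in frames:
        process_frame(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed
```

For the pipeline here, `process_frame` would wrap `learn.predict` followed by `masked_blur`.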
Setting up video files. `testclip.mp4` is a video I shot with my webcam. The arguments for the `VideoWriter` are framerate and dimensions. I chose 25 because I think this is the framerate of my webcam, and 640x360 are the dimensions we used to train the neural net.
```
cap = cv2.VideoCapture('testclip.mp4')
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
out = cv2.VideoWriter('output/testclip-output.mp4', fourcc, 25, (640, 360))
```
### Main Loop
We use this while loop to capture every frame of the video. For every frame we
1. Resize it to 640x360
2. Convert it to from cv2 BGR to RGB
3. Use the model to predict the mask
4. Create the image with blurred background
5. Write this image to the output video
Additionally, we save some frames as `jpg` files to inspect them.
```
i = 0
while cap.isOpened():
# Capture frame
ret, frame = cap.read()
# Break loop at end of video
if ret == False:
break
# Resize frame and convert to RGB
frame = cv2.resize(frame, (640,360), interpolation=cv2.INTER_AREA)
frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
# Run inference and create alpha mask from result
preds = learn.predict(frame_rgb)
mask = preds[2][1]
# Blur background and convert it to integer type
output = masked_blur(frame, mask)
# Write frame to video
out.write(output)
# Save every 25th output as jpg, just to find a good thumbnail :)
if i == 0 or i%25 == 0:
cv2.imwrite('output/output_'+str(i)+'.jpg', output)
# Increase counter
i += 1
# Release opened files
cap.release()
out.release()
```
### Results
Let's look at a single frame:
```
PILImage.create('output/output_0.jpg')
```
And the resulting video:
<video width="640" height="360" controls autoplay loop muted playsinline>
<source src="/images/articles/2021-backgroundblur-2/smooth.mp4" type="video/mp4">
<source src="/images/articles/2021-backgroundblur-2/smooth.webm" type="video/webm">
Your browser does not support the video tag.
</video>
I think that looks quite good. There are some rough edges and my arms are not recognized well, but overall I'm happy with the result for this little project.
## To Do
There are many aspects which we could improve:
- The biggest thing to improve now is inference speed. As I mentioned, the current implementation works only with video files, not live video, and it runs at about 0.5 frames per second 🥴
- The U-Net is a pretty heavy model, even with the relatively small Resnet18 backbone. The saved weights are 167MB. This alone is reason enough for the model to run slow. Since we run the model frame by frame, the GPU is not helping much because there is no parallelization.
- The next step would be better generalization. I suspect that this model is currently very much optimized for myself. If we wanted to roll this out as a feature for many people, we would have to include many people in our training dataset, as well as different backgrounds, cameras, and lighting situations.
- Aesthetics could be improved. There is a "shadow" around the person in the foreground, an artifact of blurring the whole picture including the person.
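One direction for the speed problem, as a rough sketch: batch several frames together so the GPU processes them in parallel instead of one at a time. This chunking helper is an assumption on my part, not code from the notebook; the batched inference itself would use the model's batch-prediction API.

```python
def batched(frames, batch_size=8):
    """Yield successive fixed-size chunks of frames for batched inference."""
    for i in range(0, len(frames), batch_size):
        yield frames[i:i + batch_size]

# Example: 20 frames in batches of 8 -> chunks of 8, 8 and 4
sizes = [len(chunk) for chunk in batched(list(range(20)))]
print(sizes)  # [8, 8, 4]
```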
Let me know if you found this helpful or implemented something similar yourself, or if you're stuck. I'd be happy to hear from you on [Twitter](https://twitter.com/daflowjoe)!
```
from facenet_pytorch import MTCNN
import cv2
from PIL import Image
import numpy as np
from matplotlib import pyplot as plt
from tqdm.notebook import tqdm
import matplotlib.image as mpimg
import os
os.getcwd()
mtcnn = MTCNN(margin=20, keep_all=True, post_process=False, device='cuda:0')
image = "test_image/6_faces.jpg"
image = mpimg.imread(image)
# image = Image.fromarray(image)
plt.imshow(image)
type(image), image.shape
# Detect faces
faces = mtcnn(image)
# Visualize
fig, axes = plt.subplots(1, len(faces))
for face, ax in zip(faces, axes):
    ax.imshow(face.permute(1, 2, 0).int().numpy())
    ax.axis('off')
fig.show()
list_of_faces = []
for face in faces:
face_array = face.permute(1, 2, 0).int().numpy()
list_of_faces.append(face_array)
image_1 = list_of_faces[0]
plt.imshow(image_1)
image_1
image_to_write = cv2.cvtColor(image_1.astype(np.uint8), cv2.COLOR_RGB2BGR)
# cv2.imread expects a file path, not an array, so write the face to disk instead
cv2.imwrite('face_1.jpg', image_to_write)
# image_1 is one of face
from matplotlib import pyplot as plt
from PIL import Image
# Use PIL instead of cv2.
image = Image.fromarray((image_1).astype(np.uint8))
plt.imshow(image)
image.save("test.jpg")
"""
PredictFace(face: Image.fromarray) -> dict
for face in faces:
- face = ToArray(face)
- data = PredictFace(face)
- blobUrl = UploadFaceToBucket(face)
- InsertDataToDB(data, blobURL)
"""
mtcnn(image, save_path='test.jpg')
from model_architecture import *
import os
import cv2
import mtcnn
import pickle
import numpy as np
from sklearn.preprocessing import Normalizer
from tensorflow.keras.models import load_model
import tensorflow as tf
tf.__version__
tf.config.set_visible_devices([], 'GPU')
face_data = 'image/'
required_shape = (160,160)
face_encoder = InceptionResNetV2()
path = "model/facenet_keras_weights.h5"
face_encoder.load_weights(path)
face_detector = mtcnn.MTCNN()
encodes = []
encoding_dict = dict()
l2_normalizer = Normalizer('l2')
def normalize(img):
mean, std = img.mean(), img.std()
return (img - mean) / std
for face_names in os.listdir(face_data):
person_dir = os.path.join(face_data,face_names)
for image_name in os.listdir(person_dir):
image_path = os.path.join(person_dir,image_name)
img_BGR = cv2.imread(image_path)
img_RGB = cv2.cvtColor(img_BGR, cv2.COLOR_BGR2RGB)
# GETTING THE FACE ONLY
x = face_detector.detect_faces(img_RGB)
x1, y1, width, height = x[0]['box']
x1, y1 = abs(x1) , abs(y1)
x2, y2 = x1+width , y1+height
face = img_RGB[y1:y2 , x1:x2]
# NORMALIZE THE DATA AND DO PREDICTION AND ENCODING
face = normalize(face)
face = cv2.resize(face, required_shape)
face_d = np.expand_dims(face, axis=0)
encode = face_encoder.predict(face_d)[0]
encodes.append(encode)
if encodes:
encode = np.sum(encodes, axis=0 )
encode = l2_normalizer.transform(np.expand_dims(encode, axis=0))[0]
encoding_dict[face_names] = encode
path = 'encodings.pkl'
with open(path, 'wb') as file:
pickle.dump(encoding_dict, file)
list_of_faces[0]
image_1=list_of_faces[0]
image = Image.fromarray((image_1).astype(np.uint8))
plt.imshow(image)
type(image_1)
im = cv2.imread("X:/bangkit-project/ml-project/test_image/yusril1.jpeg")
img = cv2.cvtColor(im, cv2.COLOR_BGR2RGB)
type(img)
```
# Part 1 - Introduction to Grid
##### Grid is a platform to **train**, **share** and **manage** models and datasets in a **distributed**, **collaborative** and **secure way**.
The Grid platform aims to be a secure peer-to-peer platform. It was created to use PySyft's features to perform federated learning without the need to manage distributed workers directly. Currently, to run a machine learning process with the PySyft library, the user needs to manage all of the workers directly (start nodes, manage node connections, shut down nodes, etc.). Grid solves this in a transparent way: the user doesn't need to know how the nodes are connected or where a specific dataset lives.
Authors:
- Ionésio Junior - Github: [IonesioJunior](https://github.com/IonesioJunior)
## Why should we use Grid?
As mentioned before, Grid is a platform that uses the PySyft library to manage distributed workers while providing some special features.
**We should use grid to:**
- Train models using datasets that we've never seen (without getting access to its real values).
- Train a model with encrypted datasets.
- Provide Secure MLaaS running encrypted model inferences across grid network.
- We can serve an encrypted model without giving its real weights to anyone.
- We can run encrypted inferences without sending our private data to anyone.
- Mitigate risks and impacts using Federated Learning's **"privacy by design"** property.
- Manage the privacy level of datasets stored at grid network allowing/disallowing access to them.
## How does it work?
We have two concepts of grid: **Private Grid Platform** and **Public Grid Platform**
### Private Grid
###### Private Grid is used to build your own private grid platform.
It will empower you with the control to manage the entire platform, you'll be able to create, remove and manage all nodes connected on your grid network. However, with power and control, you'll need to take care of the grid platform by yourself.
- To build it, you'll need to know in advance where each grid node in your infrastructure is located.
- You will need to configure scale up/scale down routines (nº of nodes) by yourself.
- You can add or remove nodes.
- You will be connected directly with these nodes.
<p align="center">
<img height="600px" width="600px" src="https://github.com/OpenMined/rfcs/blob/master/20190821-grid-platform/DHT-grid.png?raw=true">
</p>
```
import syft as sy
import torch as th
from syft.workers.node_client import NodeClient
hook = sy.TorchHook(th)
# How to build / use a private grid network
# 1 - Start the grid nodes.
# 2 - Connect to them directly
# 3 - Create Private Grid using their instances.
# We need to know the address of every node.
node1 = NodeClient(hook, "ws://localhost:3000")
node2 = NodeClient(hook, "ws://localhost:3001")
node3 = NodeClient(hook, "ws://localhost:3002")
node4 = NodeClient(hook, "ws://localhost:3003")
my_grid = sy.PrivateGridNetwork(node1,node2,node3,node4)
```
### Public Grid
###### Public Grid offers the opportunity to work as a real collaborative platform.
Unlike the private grid, no single user controls all of the nodes connected to the public grid; the platform is managed by the grid gateway. This component updates the network automatically and performs queries across the nodes. It's important to note that the grid gateway can **only perform non-privileged commands** on grid nodes, which avoids some vulnerabilities.
Therefore, anyone can register a new node and upload new datasets to their nodes, sharing them with everyone in a secure way.
Public Grid should work as a **Secure Data Science platform** (such as Kaggle, but using Privacy-Preserving concepts):
- We send pointers to datasets instead of real datasets.
- We can share our models across the network in an encrypted way.
- We can run inferences using our sensitive datasets without sending their real values to anyone.
<p align="center">
<img height="600px" width="600px" src="https://github.com/OpenMined/rfcs/blob/master/20190821-grid-platform/partially_grid.png?raw=true">
</p>
```
# How to build/use a public grid network
# 1 - Start the grid nodes
# 2 - Register them at grid gateway component
# 3 - Use grid gateway to perform queries.
# You just need to know the address of the grid gateway.
my_grid = sy.PublicGridNetwork(hook, "http://localhost:5000")
```
# Congratulations!!! - Time to Join the Community!
Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
### Star PySyft on GitHub
The easiest way to help our community is just by starring the GitHub repos! This helps raise awareness of the cool tools we're building.
- [Star PySyft](https://github.com/OpenMined/PySyft)
### Join our Slack!
The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org)
### Join a Code Project!
The best way to contribute to our community is to become a code contributor! At any time you can go to PySyft GitHub Issues page and filter for "Projects". This will show you all the top level Tickets giving an overview of what projects you can join! If you don't want to join a project, but you would like to do a bit of coding, you can also look for more "one off" mini-projects by searching for GitHub issues marked "good first issue".
- [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)
- [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
### Donate
If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
[OpenMined's Open Collective Page](https://opencollective.com/openmined)
```
# @title Installation
!curl -L https://raw.githubusercontent.com/facebookresearch/habitat-sim/master/examples/colab_utils/colab_install.sh | NIGHTLY=true bash -s
!wget -c http://dl.fbaipublicfiles.com/habitat/mp3d_example.zip && unzip -o mp3d_example.zip -d /content/habitat-sim/data/scene_datasets/mp3d/
!pip uninstall --yes pyopenssl
!pip install pyopenssl
# @title Colab Setup and Imports { display-mode: "form" }
# @markdown (double click to see the code)
import os
import random
import sys
import git
import numpy as np
from gym import spaces
%matplotlib inline
from matplotlib import pyplot as plt
%cd "/content/habitat-lab"
if "google.colab" in sys.modules:
# This tells imageio to use the system FFMPEG that has hardware acceleration.
os.environ["IMAGEIO_FFMPEG_EXE"] = "/usr/bin/ffmpeg"
repo = git.Repo(".", search_parent_directories=True)
dir_path = repo.working_tree_dir
%cd $dir_path
from PIL import Image
import habitat
from habitat.core.logging import logger
from habitat.core.registry import registry
from habitat.sims.habitat_simulator.actions import HabitatSimActions
from habitat.tasks.nav.nav import NavigationTask
from habitat_baselines.common.baseline_registry import baseline_registry
from habitat_baselines.config.default import get_config as get_baselines_config
# @title Define Observation Display Utility Function { display-mode: "form" }
# @markdown A convenient function that displays sensor observations with matplotlib.
# @markdown (double click to see the code)
# Change to do something like this maybe: https://stackoverflow.com/a/41432704
def display_sample(
rgb_obs, semantic_obs=np.array([]), depth_obs=np.array([])
): # noqa B006
from habitat_sim.utils.common import d3_40_colors_rgb
rgb_img = Image.fromarray(rgb_obs, mode="RGB")
arr = [rgb_img]
titles = ["rgb"]
if semantic_obs.size != 0:
semantic_img = Image.new(
"P", (semantic_obs.shape[1], semantic_obs.shape[0])
)
semantic_img.putpalette(d3_40_colors_rgb.flatten())
semantic_img.putdata((semantic_obs.flatten() % 40).astype(np.uint8))
semantic_img = semantic_img.convert("RGBA")
arr.append(semantic_img)
titles.append("semantic")
if depth_obs.size != 0:
depth_img = Image.fromarray(
(depth_obs / 10 * 255).astype(np.uint8), mode="L"
)
arr.append(depth_img)
titles.append("depth")
plt.figure(figsize=(12, 8))
for i, data in enumerate(arr):
ax = plt.subplot(1, 3, i + 1)
ax.axis("off")
ax.set_title(titles[i])
plt.imshow(data)
plt.show(block=False)
```
## Setup PointNav Task
```
cat "./configs/test/habitat_all_sensors_test.yaml"
if __name__ == "__main__":
config = habitat.get_config(
config_paths="./configs/test/habitat_all_sensors_test.yaml"
)
try:
env.close()
except NameError:
pass
env = habitat.Env(config=config)
action = None
obs = env.reset()
valid_actions = ["TURN_LEFT", "TURN_RIGHT", "MOVE_FORWARD", "STOP"]
interactive_control = False # @param {type:"boolean"}
while action != "STOP":
display_sample(obs["rgb"])
print(
"distance to goal: {:.2f}".format(
obs["pointgoal_with_gps_compass"][0]
)
)
print(
"angle to goal (radians): {:.2f}".format(
obs["pointgoal_with_gps_compass"][1]
)
)
if interactive_control:
action = input(
"enter action out of {}:\n".format(", ".join(valid_actions))
)
assert (
action in valid_actions
), "invalid action {} entered, choose one amongst {}".format(
action, ", ".join(valid_actions)
)
else:
action = valid_actions.pop()
obs = env.step(
{
"action": action,
}
)
env.close()
print(env.get_metrics())
```
## RL Training
```
if __name__ == "__main__":
config = get_baselines_config(
"./habitat_baselines/config/pointnav/ppo_pointnav_example.yaml"
)
# set random seeds
if __name__ == "__main__":
seed = "42" # @param {type:"string"}
steps_in_thousands = "10" # @param {type:"string"}
config.defrost()
config.TASK_CONFIG.SEED = int(seed)
config.TOTAL_NUM_STEPS = int(steps_in_thousands)
config.LOG_INTERVAL = 1
config.freeze()
random.seed(config.TASK_CONFIG.SEED)
np.random.seed(config.TASK_CONFIG.SEED)
if __name__ == "__main__":
trainer_init = baseline_registry.get_trainer(config.TRAINER_NAME)
trainer = trainer_init(config)
trainer.train()
# @markdown (double click to see the code)
# example tensorboard visualization
# for more details refer to [link](https://github.com/facebookresearch/habitat-lab/tree/master/habitat_baselines#additional-utilities).
try:
from IPython import display
with open("./res/img/tensorboard_video_demo.gif", "rb") as f:
display.display(display.Image(data=f.read(), format="png"))
except ImportError:
pass
```
## Key Concepts
All the concepts link to their definitions:
1. [`habitat.sims.habitat_simulator.HabitatSim`](https://github.com/facebookresearch/habitat-lab/blob/master/habitat/sims/habitat_simulator/habitat_simulator.py#L159)
Thin wrapper over `habitat_sim` providing seamless integration with experimentation framework.
2. [`habitat.core.env.Env`](https://github.com/facebookresearch/habitat-lab/blob/master/habitat/core/env.py)
Abstraction for the universe of agent, task and simulator. Agents that you train and evaluate operate inside the environment.
3. [`habitat.core.env.RLEnv`](https://github.com/facebookresearch/habitat-lab/blob/71d409ab214a7814a9bd9b7e44fd25f57a0443ba/habitat/core/env.py#L278)
Extends the `Env` class for reinforcement learning by defining the reward and other required components.
4. [`habitat.core.embodied_task.EmbodiedTask`](https://github.com/facebookresearch/habitat-lab/blob/71d409ab214a7814a9bd9b7e44fd25f57a0443ba/habitat/core/embodied_task.py#L242)
Defines the task that the agent needs to solve. This class holds the definition of observation space, action space, measures, simulator usage. Eg: PointNav, ObjectNav.
5. [`habitat.core.dataset.Dataset`](https://github.com/facebookresearch/habitat-lab/blob/4b6da1c4f8eb287cea43e70c50fe1d615a261198/habitat/core/dataset.py#L63)
Wrapper over information required for the dataset of embodied task, contains definition and interaction with an `episode`.
6. [`habitat.core.embodied_task.Measure`](https://github.com/facebookresearch/habitat-lab/blob/master/habitat/core/embodied_task.py#L82)
Defines the metrics for embodied task, eg: [SPL](https://github.com/facebookresearch/habitat-lab/blob/d0db1b55be57abbacc5563dca2ca14654c545552/habitat/tasks/nav/nav.py#L533).
7. [`habitat_baselines`](https://github.com/facebookresearch/habitat-lab/tree/71d409ab214a7814a9bd9b7e44fd25f57a0443ba/habitat_baselines)
RL, SLAM, heuristic baseline implementations for the different embodied tasks.
## Create a new Task
```
if __name__ == "__main__":
config = habitat.get_config(
config_paths="./configs/test/habitat_all_sensors_test.yaml"
)
@registry.register_task(name="TestNav-v0")
class NewNavigationTask(NavigationTask):
def __init__(self, config, sim, dataset):
logger.info("Creating a new type of task")
super().__init__(config=config, sim=sim, dataset=dataset)
def _check_episode_is_active(self, *args, **kwargs):
logger.info(
"Current agent position: {}".format(self._sim.get_agent_state())
)
collision = self._sim.previous_step_collided
stop_called = not getattr(self, "is_stop_called", False)
return collision or stop_called
if __name__ == "__main__":
config.defrost()
config.TASK.TYPE = "TestNav-v0"
config.freeze()
try:
env.close()
except NameError:
pass
env = habitat.Env(config=config)
action = None
env.reset()
valid_actions = ["TURN_LEFT", "TURN_RIGHT", "MOVE_FORWARD", "STOP"]
interactive_control = False # @param {type:"boolean"}
while env.episode_over is not True:
display_sample(obs["rgb"])
if interactive_control:
action = input(
"enter action out of {}:\n".format(", ".join(valid_actions))
)
assert (
action in valid_actions
), "invalid action {} entered, choose one amongst {}".format(
action, ", ".join(valid_actions)
)
else:
action = valid_actions.pop()
obs = env.step(
{
"action": action,
"action_args": None,
}
)
print("Episode over:", env.episode_over)
env.close()
```
## Create a new Sensor
```
@registry.register_sensor(name="agent_position_sensor")
class AgentPositionSensor(habitat.Sensor):
def __init__(self, sim, config, **kwargs):
super().__init__(config=config)
self._sim = sim
# Defines the name of the sensor in the sensor suite dictionary
def _get_uuid(self, *args, **kwargs):
return "agent_position"
# Defines the type of the sensor
def _get_sensor_type(self, *args, **kwargs):
return habitat.SensorTypes.POSITION
# Defines the size and range of the observations of the sensor
def _get_observation_space(self, *args, **kwargs):
return spaces.Box(
low=np.finfo(np.float32).min,
high=np.finfo(np.float32).max,
shape=(3,),
dtype=np.float32,
)
# This is called whenever reset is called or an action is taken
def get_observation(self, observations, *args, episode, **kwargs):
return self._sim.get_agent_state().position
if __name__ == "__main__":
config = habitat.get_config(
config_paths="./configs/test/habitat_all_sensors_test.yaml"
)
config.defrost()
# Now define the config for the sensor
config.TASK.AGENT_POSITION_SENSOR = habitat.Config()
# Use the custom name
config.TASK.AGENT_POSITION_SENSOR.TYPE = "agent_position_sensor"
# Add the sensor to the list of sensors in use
config.TASK.SENSORS.append("AGENT_POSITION_SENSOR")
config.freeze()
try:
env.close()
except NameError:
pass
env = habitat.Env(config=config)
obs = env.reset()
obs.keys()
print(obs["agent_position"])
env.close()
```
## Create a new Agent
```
# An example agent which can be submitted to habitat-challenge.
# To participate and for more details refer to:
# - https://aihabitat.org/challenge/2020/
# - https://github.com/facebookresearch/habitat-challenge
class ForwardOnlyAgent(habitat.Agent):
def __init__(self, success_distance, goal_sensor_uuid):
self.dist_threshold_to_stop = success_distance
self.goal_sensor_uuid = goal_sensor_uuid
def reset(self):
pass
def is_goal_reached(self, observations):
dist = observations[self.goal_sensor_uuid][0]
return dist <= self.dist_threshold_to_stop
def act(self, observations):
if self.is_goal_reached(observations):
action = HabitatSimActions.STOP
else:
action = HabitatSimActions.MOVE_FORWARD
return {"action": action}
```
### Other Examples
[Create a new action space](https://github.com/facebookresearch/habitat-lab/blob/master/examples/new_actions.py)
```
# @title Sim2Real with Habitat { display-mode: "form" }
try:
from IPython.display import HTML
HTML(
'<iframe width="560" height="315" src="https://www.youtube.com/embed/Hun2rhgnWLU" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>'
)
except ImportError:
pass
```
Deploy habitat-sim trained models on real robots with the [habitat-pyrobot bridge](https://github.com/facebookresearch/habitat-lab/blob/71d409ab214a7814a9bd9b7e44fd25f57a0443ba/habitat/sims/pyrobot/pyrobot.py)
```python
# Are we in sim or reality?
if args.use_robot: # Use LoCoBot via PyRobot
config.SIMULATOR.TYPE = "PyRobot-Locobot-v0"
else: # Use simulation
config.SIMULATOR.TYPE = "Habitat-Sim-v0"
```
Paper: [https://arxiv.org/abs/1912.06321](https://arxiv.org/abs/1912.06321)
```
#importing libraries
import pandas as pd
import numpy as np
import scipy.stats as stats
import statsmodels.formula.api as smf
```
__Q1: Descriptive analysis__
__Q1.1: Summary statistics__
```
#Read the data
data = pd.read_csv('progresa-sample.csv.bz2')
#Checking all the columns of the data
data.columns
#Validating the data
data.head()
#Verifying the shape
data.shape
#Recoding the progresa variable
data['progresa'] = np.where(data['progresa'] == 'basal' ,1.0,0.0)
#Checking the validity of the variable
data['progresa'].nunique()
#Dropping the null variables
data = data.dropna()
data['progresa'].nunique()
#Shape after dropping the null variables.
data.shape
```
_In total, 10,128 rows contained null values and were dropped_
```
#Putting all the necessary variables in the data to check the mean, median etc
data2 = pd.DataFrame(data[['sex', 'indig', 'dist_sec','sc','grc','fam_n', 'min_dist',
'dist_cap', 'poor', 'progresa', 'hohedu', 'hohwag', 'welfare_index',
'hohsex', 'hohage', 'age', 'grc97', 'sc97']].describe())
#Validating the data
data2.head()
#Resetting index
data2 = data2.reset_index()
#Validating the data
data2.head()
#Pivoting the table
data2.pivot_table(columns=['index'])
```
_All the variables are displayed in a neat tabular format_
_The variables appear in alphabetic order_
__1.2 : Differences at baseline?__
```
from scipy import stats
from scipy.stats import t
```
__Q1.2.1 and Q1.2.2__
```
#Reading the data again to make changes to the data and not copying it to the original
data2 = pd.read_csv('progresa-sample.csv.bz2')
#Mapping poor and progresa
data2['poor'] = data2['poor'].map({'pobre': 1, 'no pobre': 0})
data2['progresa'] = data2['progresa'].map({'basal' : 1, '0':0})
# Segregating into treatment and Control data
treatment_97 = pd.DataFrame(data2[(data2.year == 97) & (data2.poor == 1) & (data2.progresa == 1)])
control_97 = pd.DataFrame(data2[(data2.year == 97) & (data2.poor == 1) & (data2.progresa == 0)])
# Selecting rows where poor=1 and the year=97, and then grouping by 'progresa' column
new_data = data2[(data2.year == 97) & (data2.poor == 1)].groupby('progresa').mean()
new_data.drop(new_data.columns[[0,9,16,17]], axis =1,inplace=True)
new_data = new_data.transpose()
# swapping columns to match the structure of the required table
new_data = new_data[[1.0,0.0]]
# Resetting Index
new_data.reset_index(level=0, inplace=True)
new_data.rename(columns={'index' : 'Variable name', 0: 'Average value (Control villages)', 1: 'Average value (Treatment villages)'}, inplace=True)
# List of all Variables
var_list = list(new_data['Variable name'])
# Calculating T test for the Treatment, Control
tt = list(stats.ttest_ind(treatment_97[var_list], control_97[var_list], nan_policy='omit'))
# Adding the remaining two columns.
new_data['Difference (Treat - Control)'] = tt[0]
new_data['p-value'] = tt[1]
# for a better look at the insignificant data with respect to the value of p
new_data['p<0.05'] = new_data['p-value'] < 0.05
new_data.sort_values('Variable name')
```
_Displayed above are all the means differences and p-values with Variable name in ascending order_
__Q1.2.3, Q1.2.4, Q1.2.5__
```
new_data[new_data['p<0.05']==True].sort_values('Variable name')
```
_Q1.2.3 There are 8 variables shown above which are statistically significant between the control and the treatment variables_
_They are namely: dist_cap, dist_sec, hohage, hohwag, min_dist, sex, welfare_index_
_Q1.2.4 It matters that there are baseline differences because if the differences are too large, we can say that the data is biased thus making the causality of the progresa program weak_
_However, the differences are not too large for the statistically significant variables_
_Q1.2.5 The impact cannot be accurately measured by the baseline differences alone; we need to explore the linear relationship between the control and treatment variables to correctly identify the impact._
__Q2: Measuring Impact__
```
#Checking the shape
data.shape
#Taking only the poor data and creating a variable called after
newdata = data[data['poor'] == 'pobre']
newdata['after'] = np.where(newdata['year']==98, True, False)
#Validating the data
newdata.head()
#Checking the shape
newdata.shape
#Dropping NAs
newdata = newdata.dropna()
#Describing the data
newdata.after.describe()
```
__Q2.1.1 compute the estimator by just comparing the average schooling rates for these villages.__
```
#Average schooling rates
newdata[newdata.progresa==1.0].groupby('after').sc.mean()
```
_Displayed the average schooling rates:_
_The average schooling rate before 1998 was 82.2% and the average schooling rate after 1998 was 84.92%_
__Q2.1.2 now re-compute the estimator using linear regression, and individual schooling rates. Do not include other regressors.__
```
#Linear model on sc and after
m = smf.ols(formula = 'sc~after', data=newdata[newdata.progresa == 1.0])
m.fit().summary()
```
_Above displayed is the linear regression model of the school rates with the after variable_
_We can see that the estimate increases by 0.0266 when the after is true. The value for After=True is statistically significant_
_Therefore, the average schooling rate before 1998 was 82.27% and the average schooling rate after 1998 was 84.87%, which is comparable to the mean model_
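The agreement between the mean model and the regression is no accident: regressing an outcome on a constant and a 0/1 dummy always recovers the two group means (intercept = mean when the dummy is 0, intercept + slope = mean when it is 1). A quick NumPy check with toy numbers (not the Progresa data):

```python
import numpy as np

# Toy enrollment rates: first three observations have dummy d=0, last three d=1
y = np.array([0.80, 0.84, 0.82, 0.86, 0.88, 0.90])
d = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

# OLS of y on a constant and the dummy
X = np.column_stack([np.ones_like(d), d])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.round(4))  # [0.82 0.06] -> group means 0.82 and 0.82 + 0.06 = 0.88
```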
__Q2.1.3 finally, estimate a multiple regression model that includes other covariates.__
```
#Multiple regression model on sc and other covariates
m = smf.ols(formula = 'sc~after + dist_sec + sex + min_dist + dist_cap + hohedu', data=newdata[newdata.progresa == 1.0])
m.fit().summary()
#r.summary()
```
_In the multiple regression model, the coefficient on after = True is 0.0255, meaning that individual schooling rates increased by 0.0255 after 1998; this is slightly smaller than in the linear model without covariates._
_All the variables in the above model are statistically significant, with p-values near 0.00._
_Therefore, The average schooling rate before 1998 was 71.47% and the average schooling rate after 1998 was 73.97% which is comparable to the mean model_
__Compare all the estimators. Are your estimates statistically significant? What do they suggest about the efficacy of Progresa?__
_The average enrollment rate before and after 1998 compares as follows:_
| Estimator | Before | After |
| --- | --- | --- |
| Simple mean | 82.2% | 84.92% |
| Linear model | 82.27% | 84.87% |
| Multiple model | 71.47% | 73.97% |
_In all the cases, the enrollment rate increases over time under Progresa. The simple mean model and the linear model agree closely; the multiple-regression estimates are lower because the covariates absorb part of the level._
_In all three cases, the values for After are statistically significant._
__2.2: Cross-sectional estimator__
```
#Validating the data
data.shape
#Selecting only the poor households
newdata = data[data['poor'] == 'pobre']
#Dropping NAs
newdata = newdata.dropna()
#Making an after variable
newdata['after'] = np.where(newdata['year']==98, True, False)
#Describing the after variable
newdata.after.describe()
```
__Begin by estimating the impact of Progresa by comparing the average enrollment rate among poor households in the treatment villages and the average enrollment rate among poor households in the control villages. What do you find?__
```
#Getting the average enrollment rate for treatment and control villages
newdata[newdata['after']==True].groupby('progresa').sc.mean()
```
_We find that the average enrollment rate for the treatment villages (Progresa = 1) is 84%, whereas the average enrollment rate for the control villages (Progresa = 0) is 81%._
__Now repeat the estimator using simple regression.__
```
#Making a linear model
m = smf.ols(formula = 'sc ~ progresa', data=newdata)
r = m.fit()
r.summary()
```
_From the results above, the Progresa coefficient shows the average enrollment rate increasing by 0.0218, i.e. about 2.2 percentage points._
_Without the effect of Progresa, the enrollment rate is 81.32%; with Progresa, it is therefore about 83.5%._
__Third, use multiple regression to get the same estimate.__
```
#Using multiple regression model
m = smf.ols(formula = 'sc ~ progresa + dist_sec + sex + min_dist + dist_cap + hohedu', data=newdata)
r = m.fit()
r.summary()
```
_In the multiple regression model, the impact of Progresa is slightly smaller than in the linear model: enrollment increases by about 1.9% here, versus 2.1% in the linear model._
_Without the effect of Progresa or any other factors, the average enrollment according to the model above is 70.41%. Therefore, the enrollment rate for treatment villages with Progresa is 72.34%._
_Many factors affect this model, thereby diminishing the measured effect of Progresa._
__Finally, as above, compare your three estimators. What do you find? Are the effects statistically significant?__
_The average enrollment rate for the treatment and control villages compares as follows:_
| Estimator | Treatment | Control |
| --- | --- | --- |
| Simple mean | 84% | 81% |
| Linear model | 83.5% | 81.32% |
| Multiple model | 72.34% | 70.41% |
_In all the cases, enrollment is higher with Progresa. The simple mean model and the linear model agree closely; the multiple model is lower because of the effect of the other covariates._
_In all three cases, the values for progresa are statistically significant_
__2.3: Differences-in-differences estimator__
```
#Validating the data
data.shape
#Selecting only the poor households
newdata = data[data['poor'] == 'pobre']
#Dropping NAs
newdata = newdata.dropna()
#Making an after variable
newdata['after'] = np.where(newdata['year']==98, True, False)
#Describing the after variable
newdata.after.describe()
```
__Start with the simple table. However, DiD requires a four-way comparison, so compare the average enrollment rate among poor households in the treatment villages with that among poor households in the control villages, in both 1997 and 1998. What do you find?__
```
#Displaying the diff table
newdata.groupby(['progresa', 'after'], as_index = False).sc.mean()
```
_We found that:_
_The change over time for the control sample (Progresa = 0) is:_
0.810923 - 0.815066 = -0.004143
_The change over time for the treatment sample (Progresa = 1) is:_
0.849257 - 0.822697 = 0.026560
_The estimate of the impact (diff-in-diff) is:_
0.026560 - (-0.004143) = 0.030703
_Equivalently, the treatment-control gap in 1997 is:_
0.822697 - 0.815066 = 0.007631
_The gap in 1998 is:_
0.849257 - 0.810923 = 0.038334
_And the estimate of the impact (diff-in-diff) is:_
0.038334 - 0.007631 = 0.030703
_This indicates that the increase in the enrollment rate can be credited to the Progresa treatment._
_Without Progresa and the passage of time, the enrollment rate is 81.5%; with both, it is 84.57%._
_In both orderings, the estimate of the impact is 0.030703._
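The four-cell arithmetic above can be reproduced directly from the group means. A minimal sketch; the means are hard-coded here from the table, whereas the notebook computes them with `groupby`:

```python
import pandas as pd

# Group means from the table above (rows: progresa, columns: before/after 1998)
means = pd.DataFrame({'before': [0.815066, 0.822697],
                      'after':  [0.810923, 0.849257]},
                     index=pd.Index([0, 1], name='progresa'))

change = means['after'] - means['before']  # within-group change, 1997 -> 1998
did = change.loc[1] - change.loc[0]        # treatment change minus control change
print(round(did, 6))  # → 0.030703
```

Laying the means out as a two-by-two frame makes the "difference of differences" a one-line subtraction.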
__2.3.2:Now repeat the estimator using simple regression.__
```
#Cross linear model of progresa and after
m = smf.ols(formula = 'sc ~ progresa * after', data=newdata)
r = m.fit()
r.summary()
```
_The quantity of interest in the regression above is the progresa:after[T.True] interaction, whose coefficient of 0.0307 matches the diff-in-diff computed from the table._
_From the above table:_
_The enrollment rate without the effect of Progresa or time: 81.51%_
_The enrollment rate with Progresa, before 1998: 82.27%_
_The enrollment rate in 1998, without Progresa: 80.75%_
_The enrollment rate with Progresa in 1998: 84.58%_
_Enrollment is therefore highest with Progresa in 1998. These values are comparable to the tabular diff-in-diff values._
_The difference-in-differences estimate is more credible than the simple differences because the interaction term considers the treatment and time dimensions jointly, whereas the simple-difference methods consider them in isolation._
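The equivalence between the tabular diff-in-diff and the interaction coefficient can be checked on synthetic data: when the OLS model is saturated (intercept, both dummies, and their interaction), the interaction coefficient reproduces the four-cell estimate exactly. A sketch with made-up effect sizes, not the Progresa data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({'progresa': np.repeat([0, 0, 1, 1], 250),
                   'after':    np.tile(np.repeat([0, 1], 250), 2)})
# Outcome: baseline 0.81, +0.02 treatment-group gap, -0.004 time trend,
# +0.03 true treatment effect, plus noise (all effect sizes invented).
df['sc'] = (0.81 + 0.02*df.progresa - 0.004*df.after
            + 0.03*df.progresa*df.after + rng.normal(0, 0.01, len(df)))

r = smf.ols('sc ~ progresa * after', data=df).fit()
cells = df.groupby(['progresa', 'after']).sc.mean().unstack()
tabular_did = (cells.loc[1, 1] - cells.loc[1, 0]) - (cells.loc[0, 1] - cells.loc[0, 0])
print(np.isclose(r.params['progresa:after'], tabular_did))  # → True
```

The match is exact because the saturated regression's fitted values are just the four cell means.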
__And as above, use multiple regression to get the same estimate.__
```
#Using multiple regression model
m = smf.ols(formula = 'sc ~ progresa * after + dist_sec + sex + min_dist + dist_cap + hohedu', data=newdata)
r = m.fit()
r.summary()
```
_From the above table:_
_The enrollment rate without the effect of Progresa or time: 70.67%_
_The enrollment rate with Progresa, before 1998: 71.19%_
_The enrollment rate in 1998, without Progresa: 70.14%_
_The enrollment rate with Progresa in 1998: 73.74%_
_Enrollment is again highest with Progresa in 1998, though these levels are considerably lower than in the linear and tabular models because of the covariates._
__Finally, as above, compare your three estimators. What do you find? Are the effects statistically significant?__
_The average enrollment rate with progresa and time in diff-in-diff models compares as follows:_
| Estimator | Without Progresa and time | With Progresa in 1998 |
| --- | --- | --- |
| Simple diff-in-diff | 81.5% | 84.57% |
| Linear model | 81.51% | 84.58% |
| Multiple model | 70.68% | 73.74% |
_In all the cases, enrollment increases with Progresa and time. The simple diff-in-diff and the linear model agree closely; the multiple model is lower because of the covariates._
_In the linear and the multiple models, the main effects of after and progresa are not statistically significant on their own._
_The progresa:after interaction, however, is statistically significant in both models._
__Q 2.4 Compare the estimators__
__List the identifying assumptions (counterfactual assumptions) behind all three models. Which ones do you find more/less plausible?__
_Counterfactual assumptions for all three models are as follows:_
_1. Before-and-after: The counterfactual argument would be that the avg. enrollment would be the same in 1997 and 1998 with or without the effect of progresa. Time will not be taken into consideration._
_2. Cross sectional estimator: The counterfactual argument would be that avg. enrollment would be the same with or without the effect of progresa. Therefore, The average difference in outcomes between Treated and Control group is solely due to the treatment and no other factor._
_3. Diff-in-Diff: The counterfactual argument would be that the avg. enrollment would be the same with or without progresa. It would also remain the same for 1997 and 1998._
_The counterfactual assumptions of the cross-sectional and before-and-after estimators are the least plausible. Average enrollment rises slightly with Progresa in the models above, but other trends or confounding variables could raise average school enrollment over time, so the treatment villages' enrollment could have changed even if the Progresa program had not occurred._
_This can be checked against a control group unaffected by Progresa. That comparison removes the confounding effects and isolates the real causal impact of Progresa._
__Compare the estimates of all three models. Do your analysis suggest that progresa program had a positive impact on schooling rates?__
_From the analysis I observed the following -_
_There was a statistical difference between treatment and control groups. The division is not completely random. Hence, there is a flaw in our baseline and it makes our further analysis less reliable._
_All three estimators (before-and-after, cross-sectional, and differences-in-differences) show a positive impact of Progresa on average school enrollment. The impact is largest under the differences-in-differences estimator, which is also the most reliable of the three: it relaxes the assumptions underlying the before-and-after and cross-sectional estimators by comparing the average enrollment rate among poor households in treatment villages with that in control villages, in both 1997 and 1998._
```
# Author: Xiang Zhang (zhan6668)
# Description: This IPython notebook pre-process the movie data for Avatar-Project1-Phase3
import os, sys, re
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from scipy import stats
# Design a function to rename the titles
def renameTitle(x):
title = x
year = ''
if "(" in x:
#print(x.split(" ("))
title = x.rsplit(" (",1)[0]
year = x.rsplit(" (",1)[-1][:-1]
return [title, year]
# Design a function to rename the genres
def renameGenres(x):
genre_list = [x]
if "|" in x:
genre_list = x.split("|")
return genre_list
# 1. links.csv
df_link = pd.read_csv('./GroupLens-MovieLens-25m/links.csv', index_col=0)
df_link
# 2. movies.csv
df_movie = pd.read_csv('./GroupLens-MovieLens-25m/movies.csv', index_col=0)
mv_title_list = []
mv_year_list = []
for mv in df_movie['title'].tolist():
mv = mv.replace(u'\xa0', u' ')
[title, year] = renameTitle(mv)
mv_title_list.append(title)
mv_year_list.append(year)
df_movie['movie_title'] = mv_title_list
df_movie['movie_year'] = mv_year_list
mv_genre_list = []
for mv in df_movie['genres'].tolist():
genre_list = renameGenres(mv)
mv_genre_list.append(genre_list)
df_movie['movie_genres'] = mv_genre_list
df_movie
df_movie_new = df_movie.join(df_link, on='movieId')[['imdbId','movie_title', 'movie_year', 'movie_genres']]
def renameIMDbId(x):
x_len = len(str(x))
if x_len == 7:
x = 'tt' + str(x)
elif x_len<7:
x = 'tt' + (7-x_len)*'0' + str(x)
else:
x = 'tt' + str(x)
#print('larger than 7')
#print(x)
return x
df_movie_new['imdbId'].apply(renameIMDbId)
df_movie_new['imdbId'] = df_movie_new['imdbId'].apply(renameIMDbId)
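# Aside (sketch): the three padding branches in renameIMDbId collapse into one
# expression, since str.zfill left-pads with zeros and leaves longer strings as-is
assert 'tt' + str(114709).zfill(7) == 'tt0114709'
assert 'tt' + str(12345678).zfill(7) == 'tt12345678'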
# 3. read in IMDb ratings
df_imdb_rating = pd.read_csv('./IMDb/title.ratings.tsv', sep='\t', index_col=0)
df_movie_new = df_movie_new.join(df_imdb_rating['averageRating'], on='imdbId')
df_movie_new.set_index('imdbId').to_csv('movies_info.csv')
df_movie_new
# 4. now we turn to human - actor/actress/director
df_name = pd.read_csv('./IMDb/name.basics.tsv',sep='\t',index_col=0)
df_name
print(df_name.shape[0])
print(df_name[df_name['primaryProfession'].isnull()].shape[0])
# identify whether this person is an actor/actress/director
label_list = []
counter = 0
for row in df_name.iterrows():
#idx = row[0]
local_label_list = []
if pd.isnull(row[1]['primaryProfession']):
local_label_list.append('')
else:
if 'actor' in row[1]['primaryProfession']:
local_label_list.append('actor')
if 'actress' in row[1]['primaryProfession']:
local_label_list.append('actress')
if 'director' in row[1]['primaryProfession']:
local_label_list.append('director')
if len(local_label_list) > 1:
#print(local_label_list)
counter += 1
label_list.append(local_label_list)
counter
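# Aside (sketch): the iterrows loop above can be vectorized with pandas'
# str.contains, which is far faster on a large name table; toy example only
# (the _prof values below are made up, not taken from name.basics.tsv):
_prof = pd.Series(['actor,producer', 'actress,director', None])
_labels = pd.DataFrame({p: _prof.str.contains(p, na=False)
                        for p in ['actor', 'actress', 'director']})
# each row of _labels mirrors one local_label_list; row sums give label counts
assert int(_labels.sum().sum()) == 3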
df_name['professionLabel'] = label_list
df_name_simp = df_name[['primaryName', 'knownForTitles','professionLabel']]
df_name_simp_1work = df_name_simp[df_name_simp['professionLabel'].str.len()==1]
df_name_simp_1work = df_name_simp_1work[df_name_simp_1work['knownForTitles'].str.len()>2]
link_list = []
for row in df_name_simp_1work.iterrows():
profess = row[1]['professionLabel'][0]
if profess == 'actor' or profess == 'actress':
link_list.append('acted_in')
elif profess == 'director':
link_list.append('directed')
else:
link_list.append('null')
len(link_list)
df_name_simp_1work['link'] = link_list
df_name_simp_1work = df_name_simp_1work[df_name_simp_1work['link']!='null']
df_name_simp_1work.to_csv('./input_Neo4j/workers_1label.csv')
df_name_simp_1work[df_name_simp_1work['link']=='acted_in'].to_csv('./input_Neo4j/workers_1label_act.csv')
df_name_simp_1work[df_name_simp_1work['link']=='directed'].to_csv('./input_Neo4j/workers_1label_direct.csv')
# now look at multi-label
df_name_simp[df_name_simp['professionLabel'].str.len()>2]
# manually revise after checking
# chained indexing (df.loc[i][col] = ...) writes to a copy; .at persists the change
df_name_simp.at['nm5510978', 'professionLabel'] = ['actress', 'director']
df_name_simp.at['nm7434057', 'professionLabel'] = ['actress', 'director']
df_name_simp[df_name_simp['professionLabel'].str.len()>2]
df_name_simp[df_name_simp['professionLabel'].str.len()==2]
df_name_simp[df_name_simp['professionLabel'].str.len()==2].to_csv('./input_Neo4j/workers_2labels.csv')
df_name_simp_2work = df_name_simp[df_name_simp['professionLabel'].str.len()==2]
df_movie_new
df_crew = pd.read_csv('./IMDb/title.crew.tsv', sep='\t',index_col=0)
df_crew
df_movie_new.join(df_crew['directors'], on='imdbId').set_index('imdbId').to_csv('./input_Neo4j/movies_info.csv')
df_movie_new = df_movie_new.join(df_crew['directors'], on='imdbId')
df_movie_new
# 0320 change the format of movie_genres
df_movie_new = pd.read_csv('./input_Neo4j/movies_info.csv',index_col=0)
def reformat_genres(x):
return ",".join(eval(x))
df_movie_new['movie_genres'] = df_movie_new['movie_genres'].apply(reformat_genres)
df_movie_new
df_movie_new.to_csv('./input_Neo4j/movies_info_0321.csv')
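# Aside (sketch): ast.literal_eval is a safer stand-in for eval here, since the
# genre cells are plain list literals and literal_eval refuses to run arbitrary code
import ast
assert ",".join(ast.literal_eval("['Comedy', 'Romance']")) == 'Comedy,Romance'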
df_name_simp_2work
# 5. customer rating
df_cratings = pd.read_csv('./GroupLens-MovieLens-25m/ratings.csv')
df_cratings.shape
df_cratings = df_cratings.join(df_movie_new, on='movieId')[['userId','imdbId','rating']]
df_cratings
def renameCustId(x):
x_len = len(str(x))
if x_len == 6:
x = 'c' + str(x)
elif x_len<6:
x = 'c' + (6-x_len)*'0' + str(x)
else:
x = 'c' + str(x)
#print('larger than 6')
#print(x)
return x
df_cratings['userId'] = df_cratings['userId'].apply(renameCustId)
df_cratings.set_index('userId').to_csv('./input_Neo4j/customer_ratings.csv')
df_cratings.set_index('userId')
df_movie_new.set_index('imdbId')
df_df_name_simp_1work_performer = df_name_simp_1work[df_name_simp_1work['link']=='acted_in']
df_df_name_simp_1work_performer.shape[0]
df_df_name_simp_1work_performer['professionLabel_2'] = 'performer'  # scalar broadcasts to every row
df_df_name_simp_1work_performer.to_csv('./input_Neo4j/workers_1label_act.csv')
df_df_name_simp_1work_performer
df_name_simp_1work_director = df_name_simp_1work[df_name_simp_1work['link']=='directed']
df_name_simp_1work_director.shape
df_name_simp_1work_director['professionLabel_2'] = 'director'  # scalar broadcasts to every row
df_name_simp_1work_director.to_csv('./input_Neo4j/workers_1label_direct.csv')
df_name_simp_1work_director
df_name_simp_2work
# 0322: get the subset of customer ratings
df_movie_new = pd.read_csv('./input_Neo4j/movies_info_0321.csv',index_col=0)
df_movie_new
df_movie_new[df_movie_new['averageRating'].isnull()]
df_movie_new.loc['tt0154827']
df_ratings = pd.read_csv('./input_Neo4j/customer_ratings.csv',index_col=0)
df_ratings
len(df_ratings['imdbId'].unique())
np.intersect1d(df_movie_new.index, df_ratings['imdbId'].unique()).shape
df_ratings['imdbId'].value_counts()
(df_ratings['imdbId'].value_counts()<=3).value_counts()
# partition to 10 pieces
df_ratings.iloc[:2500000].to_csv('./input_Neo4j/customer_ratings_part1.csv')
df_ratings.iloc[:2500000]
df_ratings.iloc[2500000:5000000].to_csv('./input_Neo4j/customer_ratings_part2.csv')
df_ratings.iloc[5000000:7500000].to_csv('./input_Neo4j/customer_ratings_part3.csv')
df_ratings.iloc[7500000:10000000].to_csv('./input_Neo4j/customer_ratings_part4.csv')
df_ratings.iloc[10000000:12500000].to_csv('./input_Neo4j/customer_ratings_part5.csv')
df_ratings.iloc[12500000:15000000].to_csv('./input_Neo4j/customer_ratings_part6.csv')
df_ratings.iloc[15000000:17500000].to_csv('./input_Neo4j/customer_ratings_part7.csv')
df_ratings.iloc[17500000:20000000].to_csv('./input_Neo4j/customer_ratings_part8.csv')
df_ratings.iloc[20000000:22500000].to_csv('./input_Neo4j/customer_ratings_part9.csv')
df_ratings.iloc[22500000:].to_csv('./input_Neo4j/customer_ratings_part10.csv')
```
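The ten hand-written `iloc` slices at the end can be generated in a loop. A minimal sketch, using a toy frame in place of the 25M-row `df_ratings` and a small chunk size in place of the notebook's 2,500,000 (the commented-out output path is the one assumed above):

```python
import pandas as pd

# Toy stand-in for df_ratings; the real frame has ~25M rows and chunk = 2_500_000
df_toy = pd.DataFrame({'rating': [3.5] * 23})
chunk = 3
n_parts = -(-len(df_toy) // chunk)  # ceiling division -> 8 parts here

for i in range(n_parts):
    part = df_toy.iloc[i * chunk:(i + 1) * chunk]
    # part.to_csv(f'./input_Neo4j/customer_ratings_part{i + 1}.csv')
    print(i + 1, len(part))
```

This keeps the chunk size in one place, so repartitioning only means changing `chunk`.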
## Dependencies
```
!pip install --quiet /kaggle/input/kerasapplications
!pip install --quiet /kaggle/input/efficientnet-git
import warnings, glob
from tensorflow.keras import Sequential, Model
import efficientnet.tfkeras as efn
from cassava_scripts import *
seed = 0
seed_everything(seed)
warnings.filterwarnings('ignore')
```
### Hardware configuration
```
# TPU or GPU detection
# Detect hardware, return appropriate distribution strategy
strategy, tpu = set_up_strategy()
AUTO = tf.data.experimental.AUTOTUNE
REPLICAS = strategy.num_replicas_in_sync
print(f'REPLICAS: {REPLICAS}')
```
# Model parameters
```
BATCH_SIZE = 8 * REPLICAS
HEIGHT = 512
WIDTH = 512
CHANNELS = 3
N_CLASSES = 5
TTA_STEPS = 0 # Do TTA if > 0
```
# Augmentation
```
def data_augment(image, label):
p_spatial = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_1 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_2 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_3 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_crop = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# Flips
image = tf.image.random_flip_left_right(image)
image = tf.image.random_flip_up_down(image)
if p_spatial > .75:
image = tf.image.transpose(image)
# Rotates
if p_rotate > .75:
image = tf.image.rot90(image, k=3) # rotate 270º
elif p_rotate > .5:
image = tf.image.rot90(image, k=2) # rotate 180º
elif p_rotate > .25:
image = tf.image.rot90(image, k=1) # rotate 90º
# # Pixel-level transforms
# if p_pixel_1 >= .4:
# image = tf.image.random_saturation(image, lower=.7, upper=1.3)
# if p_pixel_2 >= .4:
# image = tf.image.random_contrast(image, lower=.8, upper=1.2)
# if p_pixel_3 >= .4:
# image = tf.image.random_brightness(image, max_delta=.1)
# Crops
if p_crop > .7:
if p_crop > .9:
image = tf.image.central_crop(image, central_fraction=.7)
elif p_crop > .8:
image = tf.image.central_crop(image, central_fraction=.8)
else:
image = tf.image.central_crop(image, central_fraction=.9)
elif p_crop > .4:
crop_size = tf.random.uniform([], int(HEIGHT*.8), HEIGHT, dtype=tf.int32)
image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
# # Crops
# if p_crop > .6:
# if p_crop > .9:
# image = tf.image.central_crop(image, central_fraction=.5)
# elif p_crop > .8:
# image = tf.image.central_crop(image, central_fraction=.6)
# elif p_crop > .7:
# image = tf.image.central_crop(image, central_fraction=.7)
# else:
# image = tf.image.central_crop(image, central_fraction=.8)
# elif p_crop > .3:
# crop_size = tf.random.uniform([], int(HEIGHT*.6), HEIGHT, dtype=tf.int32)
# image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
return image, label
```
## Auxiliary functions
```
# Datasets utility functions
def resize_image(image, label):
image = tf.image.resize(image, [HEIGHT, WIDTH])
image = tf.reshape(image, [HEIGHT, WIDTH, CHANNELS])
return image, label
def process_path(file_path):
name = get_name(file_path)
img = tf.io.read_file(file_path)
img = decode_image(img)
img, _ = scale_image(img, None)
# img = center_crop(img, HEIGHT, WIDTH)
return img, name
def get_dataset(files_path, shuffled=False, tta=False, extension='jpg'):
dataset = tf.data.Dataset.list_files(f'{files_path}*{extension}', shuffle=shuffled)
dataset = dataset.map(process_path, num_parallel_calls=AUTO)
if tta:
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.map(resize_image, num_parallel_calls=AUTO)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTO)
return dataset
```
# Load data
```
database_base_path = '/kaggle/input/cassava-leaf-disease-classification/'
submission = pd.read_csv(f'{database_base_path}sample_submission.csv')
display(submission.head())
TEST_FILENAMES = tf.io.gfile.glob(f'{database_base_path}test_tfrecords/ld_test*.tfrec')
NUM_TEST_IMAGES = count_data_items(TEST_FILENAMES)
print(f'GCS: test: {NUM_TEST_IMAGES}')
model_path_list = glob.glob('/kaggle/input/123-cassava-leaf-effnetb3-scl-no-aux-2-512x512/*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep='\n')
```
# Model
```
class UnitNormLayer(L.Layer):
"""
Normalize vectors (euclidean norm) in batch to unit hypersphere.
"""
def __init__(self, **kwargs):
super(UnitNormLayer, self).__init__(**kwargs)
def call(self, input_tensor):
norm = tf.norm(input_tensor, axis=1)
return input_tensor / tf.reshape(norm, [-1, 1])
def encoder_fn(input_shape):
inputs = L.Input(shape=input_shape, name='input_image')
base_model = efn.EfficientNetB3(input_tensor=inputs,
include_top=False,
weights=None,
pooling='avg')
norm_embeddings = UnitNormLayer()(base_model.output)
model = Model(inputs=inputs, outputs=norm_embeddings)
return model
def classifier_fn(input_shape, N_CLASSES, encoder, trainable=True):
for layer in encoder.layers:
layer.trainable = trainable
unfreeze_model(encoder) # unfreeze all layers except "batch normalization"
inputs = L.Input(shape=input_shape, name='input_image')
features = encoder(inputs)
features = L.Dropout(.5)(features)
features = L.Dense(512, activation='relu')(features)
features = L.Dropout(.5)(features)
output = L.Dense(N_CLASSES, activation='softmax', name='output')(features)
output_healthy = L.Dense(1, activation='sigmoid', name='output_healthy')(features)
output_cmd = L.Dense(1, activation='sigmoid', name='output_cmd')(features)
model = Model(inputs=inputs, outputs=[output, output_healthy, output_cmd])
return model
with strategy.scope():
encoder = encoder_fn((None, None, CHANNELS))
model = classifier_fn((None, None, CHANNELS), N_CLASSES, encoder, trainable=True)
model.summary()
```
# Test set predictions
```
files_path = f'{database_base_path}test_images/'
test_size = len(os.listdir(files_path))
test_preds = np.zeros((test_size, N_CLASSES))
for model_path in model_path_list:
print(model_path)
K.clear_session()
model.load_weights(model_path)
if TTA_STEPS > 0:
test_ds = get_dataset(files_path, tta=True).repeat()
ct_steps = TTA_STEPS * ((test_size/BATCH_SIZE) + 1)
preds = model.predict(test_ds, steps=ct_steps, verbose=1)[0][:(test_size * TTA_STEPS)]
preds = np.mean(preds.reshape(test_size, TTA_STEPS, N_CLASSES, order='F'), axis=1)
test_preds += preds / len(model_path_list)
else:
test_ds = get_dataset(files_path, tta=False)
x_test = test_ds.map(lambda image, image_name: image)
test_preds += model.predict(x_test)[0] / len(model_path_list)
test_preds = np.argmax(test_preds, axis=-1)
test_names_ds = get_dataset(files_path)
image_names = [img_name.numpy().decode('utf-8') for img, img_name in iter(test_names_ds.unbatch())]
submission = pd.DataFrame({'image_id': image_names, 'label': test_preds})
submission.to_csv('submission.csv', index=False)
display(submission.head())
```
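The `order='F'` reshape in the TTA branch is worth unpacking: with `.repeat()` on the dataset, prediction rows come out pass-major (every image at TTA pass 0, then every image at pass 1, and so on), so a Fortran-order reshape regroups the rows per image before averaging. A minimal sketch with toy shapes in place of the real prediction array:

```python
import numpy as np

test_size, tta_steps, n_classes = 2, 3, 5
# Rows laid out pass-major: [img0@pass0, img1@pass0, img0@pass1, img1@pass1, ...]
preds = np.arange(test_size * tta_steps * n_classes, dtype=float).reshape(
    test_size * tta_steps, n_classes)

# Fortran-order reshape gathers row i, i+test_size, i+2*test_size, ... per image
grouped = preds.reshape(test_size, tta_steps, n_classes, order='F')
mean_preds = grouped.mean(axis=1)
print(mean_preds.shape)   # → (2, 5)
print(mean_preds[0, 0])   # → 10.0  (mean of rows 0, 2, 4 in column 0)
```

A C-order reshape would instead average consecutive rows, mixing predictions from different images.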
<p align="center">
<img src="https://github.com/GeostatsGuy/GeostatsPy/blob/master/TCG_color_logo.png?raw=true" width="220" height="240" />
</p>
## Demonstration of Lorenz Coefficient for Quantifying Spatial, Subsurface Heterogeneity
#### Alan Scherman, Rice University, UT PGE 2020 SURI
#### Supervised by:
#### Michael Pyrcz, Associate Professor, University of Texas at Austin
##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) | [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy)
***
#### Introduction
This is a demonstration of how to calculate the Lorenz Coefficient of a subsurface sample from well log porosity and permeability measurements. The Lorenz Coefficient is a useful heuristic imported from macroeconomics and used to quantify the spatial heterogeneity of a subsurface sample. It is obtained by doubling the area between the Lorenz Curve and the homogeneous line, and it ranges from 0 to 1, where by convention a coefficient below 0.3 suggests low heterogeneity and a coefficient above 0.6 indicates high heterogeneity. In practical terms, low spatial heterogeneity allows for simple displacement of subsurface fluids and a high recovery factor.
***
#### Objective
To understand and apply the methodology to calculate the Lorenz Curve and Coefficient from porosity and permeability measurements through Python functionalities.
***
#### Calculation procedure
The following list contains the necessary steps to determine the Lorenz Coefficient of a subsurface sample:
**(1)** Sort porosity ($\phi$) and permeability (**_k_**) in descending order of ratio **_k_**/$\phi$;
**(2)** Calculate storage (**_S.C._**) and flow capacity (**_F.C._**) of each layer (region between depth measurements) with:
<br>
\begin{equation}
S.C. = \phi \, h
\end{equation}
\begin{equation}
F.C. = k \, h
\end{equation}
where **_h_** is the layer thickness;
**(3)** Calculate the _cumulative_ storage (**_C.S.C._**) and flow capacities (**_C.F.C._**) of each layer with:
<br>
\begin{equation}
C.S.C. = \sum_{i = 1}^{\text{current layer}} \phi_i \, h_i
\end{equation}
\begin{equation}
C.F.C. = \sum_{i = 1}^{\text{current layer}} k_i \, h_i
\end{equation}
**(4)** Normalize the cumulative storage and flow capacities by dividing them by the largest cumulative storage and flow capacities (i.e. the last cumulative capacities calculated), respectively;
**(5)** _(Optional)_ Plot the normalized cumulative flow capacities against the normalized cumulative storage capacities (i.e. the Lorenz Curve);
**(6)** Find a curve fit for the normalized capacities in the Cartesian plane (usually a 3rd degree polynomial is sufficient);
**(7)** Integrate to find the area between the Lorenz Curve and the Homogenous Line;
**(8)** Divide the result found in **(7)** by 0.5 to obtain the Lorenz Coefficient:
\begin{equation}
\text{Lorenz Coefficient} = \frac{\int_{0}^{1} \left( \text{Lorenz Curve} - \text{Homogeneous Line} \right) \, dS}{0.5}
\end{equation}
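Steps (6)-(8) can also be carried out numerically, without the polynomial fit, by integrating the piecewise-linear Lorenz Curve with the trapezoid rule. A minimal sketch on toy normalized capacities (not the well log data used below):

```python
import numpy as np

# Toy normalized cumulative capacities (the output of step 4). The homogeneous
# case lies on the 45-degree line, so its coefficient is 0.
frac_storage    = np.array([0.0, 0.5, 1.0])
frac_flow_homog = np.array([0.0, 0.5, 1.0])
frac_flow_heter = np.array([0.0, 0.9, 1.0])

def lorenz_coefficient(flow, storage):
    # Area between the Lorenz Curve and the homogeneous line, divided by 0.5
    return (np.trapz(flow, storage) - 0.5) / 0.5

print(round(lorenz_coefficient(frac_flow_homog, frac_storage), 3))  # → 0.0
print(round(lorenz_coefficient(frac_flow_heter, frac_storage), 3))  # → 0.4
```

The trapezoid rule is exact here because the sample Lorenz Curve is piecewise linear between the measured points.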
***
#### Load the required libraries
The program below utilizes some standard Python packages. These should be previously installed if you have Anaconda or other similar software.
```
import pandas as pd # To import data from .xlsx or .csv file
import numpy as np # For numerical array management
from matplotlib import pyplot as plt # For graphical display of Lorenz Curve
```
If you get a package import error, you may have to first install some of these packages. This can usually be accomplished by opening up a command window on Windows and then typing 'python -m pip install [package-name]'. More assistance is available with the respective package docs.
***
#### Importing porosity and permeability data
The data file for this demonstration is available [here](https://github.com/GeostatsGuy/GeoDataSets/blob/master/WellPorPermSample_3.xlsx).
The well log data will be imported from an Excel (.xlsx) file. There are similar Pandas methods which allow the importation of data from .csv and other common file types. First, the Excel data is stored as a Pandas data frame. Then, the depth, porosity, and permeability data are placed into separate data frames by means of their Excel column names. Make sure that the strings match every character of the column titles in the Excel file.
```
df = pd.read_excel('WellPorPermSample_3.xlsx') # Include file directory with os.chdir() method if necessary
depth = pd.DataFrame(df, columns = ['Depth (m)'])
poros = pd.DataFrame(df, columns = ['Por (%)'])
perme = pd.DataFrame(df, columns = ['Perm (mD)'])
```
To certify that the subsurface data have been imported appropriately:
```
df.head()
```
***
#### Convert data frames to NumPy arrays
In order to facilitate data management and allow the use of helpful mathematical methods, we'll convert the Pandas data frames to NumPy arrays. Also, let's transpose the arrays to row format for conventionality.
```
depth = np.transpose(depth.to_numpy()[:,0])
poros = np.transpose(poros.to_numpy()[:,0])
perme = np.transpose(perme.to_numpy()[:,0])
```
Once again, let's take a look at the resulting arrays:
```
print('depth =', depth[:5], '\nporos =', poros[:5], '\nperme =', perme[:5]) # To visualize the first 5 elements of each array
```
***
#### Determine thicknesses of layers
The layer thickness is found by subtracting a shallower depth from a deeper depth. For most well logs, the layers will have a uniform thickness. However, irregular thicknesses are also possible and should be accounted for. Whichever is the case, the layer thicknesses array can be calculated in the following manner:
```
layer_thick = depth - np.concatenate(([0],depth[:-1]))
```
Note that the layer_thick array should have the same dimensions as the porosity and permeability arrays.
***
#### Step (1) - Sort data in decreasing order of k/$\phi$
Recall that the first step to calculate the Lorenz Coefficient is to sort all layers in descending order of their ratio of permeability by porosity. Because both the porosity and permeability values are expressed as NumPy arrays, these ratios can be easily obtained by:
```
ratio = perme/poros # Equivalent to np.divide(perme, poros)
```
Now, it's possible to sort the layer thickness, porosity and permeability arrays in the desired pattern:
```
layer_thick = np.flip(layer_thick[np.argsort(ratio)], axis = 0)
poros = np.flip(poros[np.argsort(ratio)], axis = 0)
perme = np.flip(perme[np.argsort(ratio)], axis = 0)
```
***
#### Step (2) - Calculate storage and flow capacities
The storage and flow capacities are found by performing element-wise multiplication of the porosity and permeability arrays by the thickness array. Hence:
```
storage_cap = poros*layer_thick
flow_cap = perme*layer_thick
```
***
#### Step (3) - Calculate the cumulative storage and flow capacities
The next step is to compute the cumulative capacities for each layer. An easy solution to this assignment is provided by a for-loop. First, we pre-allocate the cumulative storage and flow capacity arrays. Then, we set the cumulative capacities of the first layer equal to its own capacities. Finally, we run a for-loop to compute the remaining cumulative capacities for the sequential layers.
```
cumul_storage_cap = np.zeros(len(flow_cap)+1)
cumul_flow_cap = np.zeros(len(flow_cap)+1)
cumul_storage_cap[1] = storage_cap[0]
cumul_flow_cap[1] = flow_cap[0]
for i in range(2,len(flow_cap)+1):
cumul_storage_cap[i] = cumul_storage_cap[i-1] + storage_cap[i-1]
cumul_flow_cap[i] = cumul_flow_cap[i-1] + flow_cap[i-1]
```
Note that the cumulative capacity arrays are one element longer than the flow and storage capacity arrays. In fact, the first entry of both arrays is set to 0 (zero) in order to ensure the future Lorenz Curve passes through the origin.
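The loop above is deliberately explicit; the same cumulative capacities can also be obtained without a loop via `np.cumsum`, padding a leading zero so the curve starts at the origin. A sketch with made-up capacity values:

```python
import numpy as np

storage_cap = np.array([0.3, 0.5, 0.2])  # hypothetical storage capacities
flow_cap = np.array([5.0, 3.0, 1.0])     # hypothetical flow capacities

# Leading zero + running sums, matching the loop-based construction
cumul_storage_cap = np.concatenate(([0.0], np.cumsum(storage_cap)))
cumul_flow_cap = np.concatenate(([0.0], np.cumsum(flow_cap)))
```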
***
#### Step (4) - Normalize the cumulative storage and flow capacities
To normalize the cumulative storage and flow capacities (i.e. limit their range to [0,1]), we'll perform element-wise division of the cumulative storage and flow capacities by the sum of all layers' storage and flow capacities, respectively. Thus, the normalized (also known as fractional) storage and flow capacities are given by:
```
frac_storage_cap = cumul_storage_cap/cumul_storage_cap[-1]
frac_flow_cap = cumul_flow_cap/cumul_flow_cap[-1]
```
***
#### Step (5) - Plot the Lorenz Curve
Although we say we'll plot the Lorenz Curve, we'll actually plot the fractional flow capacities against the fractional storage capacities (i.e. we'll plot a sample of points on the Lorenz Curve).
Let's instantiate a figure to hold our Lorenz Curve points:
```
plt.figure(figsize = (16,12))
plt.style.use('fivethirtyeight') # Just a plot style I find pretty
plt.plot([0,1], [0,1], color = 'red', label = 'Homogeneous Line') # Homogeneous line
plt.plot(frac_storage_cap, frac_flow_cap, color = 'black', \
label = 'Lorenz Curve points', linestyle = 'dotted') # Sample points of Lorenz Curve
plt.title('Lorenz Curve', size = 50)
plt.xlabel('Fraction of Total Storage Capacity (m)', size = 36)
plt.ylabel('Fraction of Total Flow Capacity (mD*m)', size = 36)
plt.legend(loc = 4, facecolor = '#C0C0C0', fontsize = 25)
plt.show()
```
***
#### Step (6) - Find a curve fit
Now that we have some of the data points of the Lorenz Curve, we can apply a curve fit to predict the behavior of this entire subsurface sample. We'll use a 3rd degree polynomial to fit the Lorenz Curve.
```
weights = np.ones(len(frac_storage_cap))
weights[0] = 1000; weights[-1] = 1000 # To ensure the fit converges to (0,0) and (1,1)
poly_fit = np.polyfit(frac_storage_cap, frac_flow_cap, deg=3, w=weights)
print('poly_fit = ', poly_fit)
```
The resulting array contains the coefficients of the polynomial fit. To better visualize this polynomial in standard equation form:
```
poly_fit = np.poly1d(poly_fit)
print(poly_fit)
```
***
#### Step (7) - Integrate to find the area between the Lorenz Curve and the Homogeneous Line
The advantage of having applied the _numpy.poly1d()_ method in Step (6) is that we now have a callable object that we can evaluate over any domain (here [0,1]). Then:
```
integral = np.polyint(poly_fit)
print(integral)
```
The result above can be easily validated since the fit has polynomial behavior. Now, we can evaluate the definite integral for the area under the Lorenz Curve and then subtract the area under the homogeneous line (i.e. the area of the right triangle with legs of length 1, which is 0.5):
```
lorenz_area = integral(1) - integral(0) - 0.5
print(lorenz_area)
```
Note that the integral(0) term could have been omitted, since _numpy.polyint()_ sets the integration constant to 0.
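As an alternative that skips the polynomial fit entirely, the area between the sampled curve and the homogeneous line can also be approximated directly with the trapezoidal rule. A sketch with illustrative fractional capacities (the values below are made up):

```python
import numpy as np

# Hypothetical normalized (fractional) capacities, already sorted and starting at 0
frac_storage_cap = np.array([0.0, 0.4, 0.7, 1.0])
frac_flow_cap = np.array([0.0, 0.6, 0.85, 1.0])

# Trapezoidal rule: area under the sampled Lorenz Curve, minus the 0.5 under the homogeneous line
dx = np.diff(frac_storage_cap)
mean_heights = 0.5 * (frac_flow_cap[1:] + frac_flow_cap[:-1])
lorenz_area = np.sum(mean_heights * dx) - 0.5
```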
***
#### Step (8) - Obtain the Lorenz Coefficient
The final step of this procedure is to divide the _lorenz_area_ calculated in Step (7) by 0.5, which is the total area between the lines **y = 1** and **y = x** in the domain [0,1]:
```
lorenz_coefficient = lorenz_area/0.5
print('Lorenz Coefficient = %.3f' %lorenz_coefficient)
```
Because the Lorenz Coefficient sits below the conventional threshold of 0.3, we can assume that the subsurface sample near the well exhibits simple displacement of fluids and a potentially high recovery factor.
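For reuse, Steps (1)–(8) can be collected into a single helper function. A sketch (not part of the original exercise; it uses the trapezoidal rule on the sample points instead of a polynomial fit, so its result can differ slightly from the fitted value):

```python
import numpy as np

def lorenz_coefficient(depth, poros, perme):
    """Lorenz Coefficient from bottom depths, porosities and permeabilities."""
    layer_thick = np.diff(depth, prepend=0)          # layer thicknesses
    order = np.argsort(perme / poros)[::-1]          # Step (1): descending k/phi
    storage_cap = (poros * layer_thick)[order]       # Step (2)
    flow_cap = (perme * layer_thick)[order]
    # Steps (3)-(4): cumulative capacities with a leading zero, then normalize
    frac_storage = np.concatenate(([0.0], np.cumsum(storage_cap))) / storage_cap.sum()
    frac_flow = np.concatenate(([0.0], np.cumsum(flow_cap))) / flow_cap.sum()
    # Step (7): trapezoidal rule, minus the area under the homogeneous line
    area = np.sum(0.5 * (frac_flow[1:] + frac_flow[:-1]) * np.diff(frac_storage)) - 0.5
    return area / 0.5                                # Step (8)
```

A perfectly homogeneous sample (constant porosity and permeability) yields a coefficient of 0.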
Let's replicate the plot from Step (5) with shaded areas to better illustrate this Lorenz Coefficient:
```
fig = plt.figure(figsize = (16,12))
plt.style.use('fivethirtyeight')
plt.plot([0,1],[0,1], label='Homogeneous Curve', color = 'red', linewidth = '3') # Homogeneous line
plt.plot(frac_storage_cap, frac_flow_cap, label = 'Lorenz Curve points', \
linestyle = 'dotted', color = 'black')
plt.plot(np.linspace(0,1,100), poly_fit(np.linspace(0,1,100)), linestyle = 'dashed',\
linewidth = '3', label = 'Polynomial Fit', color = 'white') # Polynomial fit
plt.fill_between(np.linspace(0,1,100), poly_fit(np.linspace(0,1,100)),\
np.linspace(0,1,100), alpha = 0.75) # Blue area
plt.fill_between(np.linspace(0,1,100), poly_fit(np.linspace(0,1,100)), 1,\
alpha = 0.75, color = 'orange') # Orange area
plt.title('Lorenz Curve and Coefficient', size = 50)
plt.xlabel('Fraction of Total Storage Capacity (m)', size = 36)
plt.ylabel('Fraction of Total Flow Capacity (mD*m)', size = 36)
plt.legend(loc = 4, facecolor = '#C0C0C0', fontsize = 25)
plt.text(0.3, 0.8, 'Lorenz Coefficient = \n %.3f' %lorenz_coefficient, size = 40, horizontalalignment = 'center', \
bbox=dict(facecolor='none', edgecolor='black'))
plt.show()
```
It's now clear that, graphically, the Lorenz Coefficient is equivalent to the blue area divided by the sum of the blue and orange areas.
***
#### Conclusion
This exercise detailed the procedure for calculating the Lorenz Coefficient of a subsurface sample from its porosity and permeability measurements. We also covered some of the Python resources available to improve the management of well data, perform the necessary mathematical transformations, and visualize the solution curve and the Lorenz Coefficient.
If you use the GeostatsPy package, the calculation and display of the Lorenz Coefficient are available through the **lorenz_curve()** and **lorenz_display()** functions. Please check the documentation on those for more detail.
***
#### More on Michael Pyrcz and the Texas Center for Geostatistics:
### Michael Pyrcz, Associate Professor, University of Texas at Austin
*Novel Data Analytics, Geostatistics and Machine Learning Subsurface Solutions*
With over 17 years of experience in subsurface consulting, research and development, Michael has returned to academia driven by his passion for teaching and enthusiasm for enhancing engineers' and geoscientists' impact in subsurface resource development.
For more about Michael check out these links:
#### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
#### Want to Work Together?
I hope this content is helpful to those that want to learn more about subsurface modeling, data analytics and machine learning. Students and working professionals are welcome to participate.
* Want to invite me to visit your company for training, mentoring, project review, workflow design and / or consulting? I'd be happy to drop by and work with you!
* Interested in partnering, supporting my graduate student research or my Subsurface Data Analytics and Machine Learning consortium (co-PIs including Profs. Foster, Torres-Verdin and van Oort)? My research combines data analytics, stochastic modeling and machine learning theory with practice to develop novel methods and workflows to add value. We are solving challenging subsurface problems!
* I can be reached at mpyrcz@austin.utexas.edu.
I'm always happy to discuss,
*Michael*
Michael Pyrcz, Ph.D., P.Eng. Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin
#### More Resources Available at: [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
| github_jupyter |

[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/healthcare/DEID_EHR_DATA.ipynb)
# **De-identify Structured Data**
To run this yourself, you will need to upload your license keys to the notebook. Otherwise, you can look at the example outputs at the bottom of the notebook. To upload license keys, open the file explorer on the left side of the screen and upload `workshop_license_keys.json` to the folder that opens.
## 1. Colab Setup
Import license keys
```
import os
import json
with open('/content/spark_nlp_for_healthcare.json', 'r') as f:
license_keys = json.load(f)
license_keys.keys()
secret = license_keys['SECRET']
os.environ['SPARK_NLP_LICENSE'] = license_keys['SPARK_NLP_LICENSE']
os.environ['AWS_ACCESS_KEY_ID'] = license_keys['AWS_ACCESS_KEY_ID']
os.environ['AWS_SECRET_ACCESS_KEY'] = license_keys['AWS_SECRET_ACCESS_KEY']
sparknlp_version = license_keys["PUBLIC_VERSION"]
jsl_version = license_keys["JSL_VERSION"]
print ('SparkNLP Version:', sparknlp_version)
print ('SparkNLP-JSL Version:', jsl_version)
```
Install dependencies
```
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install Spark NLP
! pip install --ignore-installed spark-nlp==$sparknlp_version
! python -m pip install --upgrade spark-nlp-jsl==$jsl_version --extra-index-url https://pypi.johnsnowlabs.com/$secret
```
Import dependencies into Python
```
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ['PATH'] = os.environ['JAVA_HOME'] + "/bin:" + os.environ['PATH']
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp_jsl.annotator import *
from sparknlp.base import *
import sparknlp_jsl
```
Start the Spark session
```
spark = sparknlp_jsl.start(secret)
```
## 2. Select the NER model and construct the pipeline
Select the models:
* NER Deidentification models: **ner_deid_enriched, ner_deid_large**
* Deidentification models: **deidentify_large, deidentify_rb, deidentify_rb_no_regex**
For more details: https://github.com/JohnSnowLabs/spark-nlp-models#pretrained-models---spark-nlp-for-healthcare
```
# Change this to the model you want to use and re-run the cells below.
# Anatomy models: ner_anatomy
MODEL_NAME = "ner_deid_large"
DEID_MODEL_NAME = "deidentify_large"
```
Create the pipeline
```
documentAssembler = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentenceDetector = SentenceDetector()\
.setInputCols(["document"])\
.setOutputCol("sentence")
tokenizer = Tokenizer()\
.setInputCols(["sentence"])\
.setOutputCol("token")
# Clinical word embeddings trained on PubMED dataset
word_embeddings = WordEmbeddingsModel.pretrained("embeddings_clinical", "en", "clinical/models")\
.setInputCols(["sentence", "token"])\
.setOutputCol("embeddings")
# NER model trained on n2c2 datasets
clinical_ner = NerDLModel.pretrained(MODEL_NAME, "en", "clinical/models") \
.setInputCols(["sentence", "token", "embeddings"]) \
.setOutputCol("ner")
# NER Overwriter to ensure all the entities are deidentified.
# Use this if the NER does not recognize entities.
neroverwriter = NerOverwriter() \
.setInputCols(["ner"]) \
.setOutputCol("ner_overwrited") \
.setStopWords(['AIQING', 'YBARRA']) \
.setNewResult("B-NAME")
ner_converter = NerConverterInternal()\
.setInputCols(["sentence", "token", "ner_overwrited"])\
.setOutputCol("ner_chunk")
nlp_pipeline = Pipeline(stages=[
documentAssembler,
sentenceDetector,
tokenizer,
word_embeddings,
clinical_ner,
neroverwriter,
ner_converter])
empty_df = spark.createDataFrame([['']]).toDF('text')
pipeline_model = nlp_pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
```
## 3. Create example inputs
```
# Enter examples as strings in this array
df = pd.DataFrame({'Name': ['Dave'], 'DOB':['1970-01-01'], 'Address': ['Kensington Street'],
'Summary':['Mr. Dave said he has cut his alcohol back to 6 pack once a week. He has cut back his cigarettes to one time per week. His PCP was M.D William Boss who had suggested some tests.']
})
```
# 4. De-identify using Obfuscation Method
Define De-identification Model
```
deidentification = DeIdentificationModel.pretrained(DEID_MODEL_NAME, "en", "clinical/models") \
.setInputCols(["sentence", "token", "ner_chunk"]) \
.setOutputCol("deidentified") \
.setObfuscateDate(True)\
.setMode('obfuscate')
#helper function
def deid_row(df):
res_m = {}
for col in df.columns:
result = pipeline_model.transform(spark.createDataFrame(pd.DataFrame({'text':[df[col].values[0]]})))
deid_text = deidentification.transform(result)
res1 = deid_text.toPandas()
sent = ''
for r in res1['deidentified'].iloc[0]:
sent = sent + ' ' + r[3]
res_m[col] = sent
return pd.DataFrame([res_m])
result_obfuscated = deid_row(df)
```
Visualize
```
result_obfuscated
```
# 5. De-identify using Masking Method
Define De-identification Model
```
deidentification = DeIdentificationModel.pretrained(DEID_MODEL_NAME, "en", "clinical/models") \
.setInputCols(["sentence", "token", "ner_chunk"]) \
.setOutputCol("deidentified") \
.setObfuscateDate(True)\
.setMode('mask')
result_masked = deid_row(df)
```
Visualize
```
result_masked
```
| github_jupyter |
# Beautiful Charts
**Content:** Some chart formatting
**Required skills:** First steps with Pandas
**Learning goals:**
- Get to know basic parameters of the plot function
- Format charts with additional commands
- Intro to ready-made styles and custom styles
- Export charts
**Further resources:**
- All resources: see https://github.com/MAZ-CAS-DDJ/kurs_19_20/tree/master/08%20Pandas%20Teil%201/material
- Simon's cheat sheet: https://github.com/MAZ-CAS-DDJ/kurs_19_20/blob/master/08%20Pandas%20Teil%201/material/plotting.md
## Charts in Pandas
We have already covered a number of basic chart functionalities:
- line plots
- bar charts
- histograms
- etc.
If we want to go beyond that, things can get complicated very quickly. There are dozens of different ways to access the functions and format charts.
- The function we already know is called `plot()`. We can call it on a DataFrame or a Series. Here is the official reference: https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html
- Behind the `plot()` function sits the Matplotlib library: https://matplotlib.org/index.html. For some formatting options we have to use commands from there directly.
## Setup
This time we import several libraries:
- Pandas
```
import pandas as pd
```
- and Matplotlib, so that we can access a few special functions
```
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import matplotlib.image as mpimg
import matplotlib.ticker as ticker
```
As always, we tell the notebook to display the output of the plot function directly as an image
```
%matplotlib inline
```
## The example
A list of countries with their size, GDP per capita and life expectancy
```
path = "dataprojects/countries/countries.csv"
df = pd.read_csv(path)
df.head(3)
```
## Elements of a chart
A chart consists of a surprising number of elements.
Most programming languages use similar names for them.
Here are the terms used by Pandas / Matplotlib:
(Source: https://matplotlib.org/tutorials/introductory/usage.html#sphx-glr-tutorials-introductory-usage-py)
```
from IPython.display import display, Image
img = Image(filename='BeautifulCharts/anatomy.png')
display(img)
```
## A simple scatter plot
We already know this one:
```
df.plot(kind='scatter',
x='gdp_per_capita',
y='life_expectancy',
figsize=(10,7))
```
## Beautifying the chart
(or making it worse, depending on how you look at it...)
### Variant 1: plot() parameters only
The plot function itself already has a few parameters we can play with:
```
df.plot(kind='scatter',
        x='gdp_per_capita',
        y='life_expectancy',
        alpha=0.5,             # transparency of the fill color
        s=40,                  # size of the points
        color='purple',        # color of the points
        linewidth=2,           # width of the outline
        xlim=(-2000,52000),    # min and max of the x-axis
        ylim=(38, 82),         # min and max of the y-axis
        xticks=[0,10000,20000,30000,40000,50000],  # specify the x-ticks individually
        yticks=[0,40,50,60,70,80],                 # specify the y-ticks individually
        figsize=(11,8),        # size of the figure
        grid=True,             # gridlines yes/no
        fontsize=14,           # font size of the tick labels
        title='Above a GDP per capita of 20,000, life expectancy no longer rises')
```
### Variant 2: plot() parameters and matplotlib functions
On top of that, there are dozens of further settings that can be defined or changed afterwards.
For that, we have to store the output of the `plot()` function in its own variable, typically called `ax`.
```
# Everything that can be done with the pandas function
ax = df.plot(kind='scatter',
x='gdp_per_capita',
y='life_expectancy',
alpha=0.5,
s=40,
color='darkblue',
linewidth=2,
xlim=(-2000,62000),
ylim=(38, 82),
xticks=[0,20000,40000,60000],
yticks=[40,60,80],
figsize=(11,8),
grid=True,
fontsize=14)
# What can be set separately:
# - Title
title_font = {'fontsize': 20, 'fontweight': 'bold', 'fontname': 'Comic Sans MS'}
ax.set_title('Rich people live longer, but not forever', fontdict=title_font, loc='left')
# - Axis labels
label_font = {'fontsize': 14, 'fontweight': 'bold', 'fontname': 'Comic Sans MS'}
ax.set_ylabel("Life expectancy", fontdict=label_font)
ax.set_xlabel("GDP per capita", fontdict=label_font)
ax.yaxis.set_label_position('left')
# - Ticks
ax.xaxis.set_ticks_position('none')
ax.yaxis.set_ticks_position('none')
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('${x:,}'))
# Format the grid
ax.grid(which='major', linestyle='-.', linewidth='0.5', color='black', )
ax.minorticks_on()
ax.grid(which='minor', linestyle='-.', linewidth='0.2', color='blue', )
# - Turn off the frame lines (spines)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.spines['bottom'].set_visible(False)
# - Background color
ax.set_facecolor('#EEEEEE')
```
### Variant 3: matplotlib functions only
Sometimes we don't create a plot via plot() at all, but via matplotlib directly, e.g. for small multiples.
In that case we unfortunately have to set practically all parameters via Matplotlib.
```
# First create a figure and an axes object
fig, ax = plt.subplots()
# Then create a plot from the axes object.
# The scatter() function is similar to, but not identical with, plot(kind='scatter')
ax.scatter(x=df['gdp_per_capita'], y=df['life_expectancy'],
           alpha=0.5,
           s=40,
           color='darkgreen',
           linewidth=2)
# A whole range of chart formatting parameters could not be passed above
# We have to set them again separately
# - The size of the figure
fig.set_size_inches(11, 8)  # NEW
# - The title
ax.set_title('Rich people live longer, but not forever', fontsize=20, fontname='Impact', loc='left')
# - The axes
ax.set_xlim([0, 50000])  # NEW
ax.set_ylim([40, 80])  # NEW
ax.set_ylabel("Life expectancy", fontsize=14, fontname='Impact', fontweight='bold')
ax.set_xlabel("GDP per capita", fontsize=14, fontname='Impact', fontweight='bold')
# - The ticks
ax.xaxis.set_ticks([0, 10000, 20000, 30000, 40000, 50000])  # NEW
ax.yaxis.set_ticks([40, 50, 60, 70, 80])  # NEW
ax.xaxis.set_ticks_position('none')
ax.yaxis.set_ticks_position('none')
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('${x:,}'))
# - The grid
ax.grid(which='major', linestyle='-.', linewidth='0.5', color='grey')
# - Etc.: everything we used in variant 2 can also be used here.
```
Sounds complicated...? **Yes, it is!** So: ideally only play around with formatting at the very end, when it's really time to present a chart somewhere. For quickly checking and trying out charts it simply isn't worth it.
A faster way to play with styles is shown further below.
But first, something else.
## Legend and colors
How do we proceed if we want to color the individual points according to a category, e.g. by continent? Here is one solution.
### For the colors
```
df['continent'].unique()
colors = {
'Asia': 'green',
'Europe': 'blue',
'Africa': 'brown',
'N. America': 'yellow',
'S. America': 'red',
'Oceania': 'purple'
}
colorlist = df['continent'].apply(lambda continent: colors[continent])
```
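The `apply` with a lambda works; an equivalent, shorter pandas idiom for dictionary lookups is `Series.map`. A minimal self-contained sketch (the toy `df_toy` frame is made up for illustration):

```python
import pandas as pd

colors = {'Asia': 'green', 'Europe': 'blue'}
df_toy = pd.DataFrame({'continent': ['Asia', 'Europe', 'Asia']})

# map() looks each value up in the dict; keys missing from the dict would become NaN
colorlist = df_toy['continent'].map(colors)
```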
### For the legend
```
patches = []
for continent, color in colors.items():
this_patch = mpatches.Patch(color=color, label=continent, alpha=0.5)
patches.append(this_patch)
```
### For the point size
```
area = df['population'] / 400000
```
### Plotting
```
# Everything that can be done with the pandas function
ax = df.plot(kind='scatter',
x='gdp_per_capita',
y='life_expectancy',
alpha=0.5,
s=area,
color=colorlist,
linewidth=2,
xlim=(-2000,52000),
ylim=(38, 82),
xticks=[0,10000,20000,30000,40000,50000],
yticks=[0,40,50,60,70,80],
figsize=(11,8),
grid=True,
fontsize=14)
# What can be set separately: - Title
ax.set_title('Rich countries eventually stop getting older', fontsize=16, fontweight='bold')
# - Axis labels
ax.set_ylabel("Life expectancy", fontsize=14, fontweight='bold')
ax.set_xlabel("GDP per capita", fontsize=14, fontweight='bold')
# - Turn off ticks
ax.xaxis.set_ticks_position('none')
ax.yaxis.set_ticks_position('none')
# - Legend (this is really an ugly way to do this)
ax.legend(handles=patches, frameon=False, fontsize=14)
```
Hans Rosling would be so proud!! https://www.ted.com/playlists/474/the_best_hans_rosling_talks_yo
### Important
Once more: getting an exact and complete list of the parameters is next to impossible (tell me if you find one!).
For that reason, among others, it generally isn't worth spending too much time on formatting charts. Better: export the data or a PDF and continue working elsewhere.
Another option is to work with a predefined style.
## Exporting
We can export individual plots as files. To do so, run this setting once:
```
matplotlib.rcParams['pdf.fonttype'] = 42 #important for the fonts
```
And then export.
- as a PDF
```
df.plot(kind='scatter',
x='gdp_per_capita',
y='life_expectancy',
title='Life expectancy and prosperity')
plt.savefig("BeautifulCharts/Lebenserwartung-Wohlstand.pdf")
```
- as an SVG vector graphic
```
df.plot(kind='scatter',
x='gdp_per_capita',
y='life_expectancy',
title='Life expectancy and prosperity')
plt.savefig("BeautifulCharts/Lebenserwartung-Wohlstand.svg")
```
## Predefined styles
These styles are quite handy. We can display a list of them:
```
print(plt.style.available)
```
To use a particular style:
```
plt.style.use('seaborn')
```
Applied, it then looks like this:
```
df.plot(kind='scatter',
x='gdp_per_capita',
y='life_expectancy',
title='Life expectancy and prosperity')
```
The new style stays active until we reset it.
```
plt.style.use('default')
```
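If a style should only apply to a single chart, `plt.style.context` restores the previous settings automatically once the block ends. A small sketch (the `Agg` backend line is only there so the snippet also runs without a display):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, so this also runs headless
import matplotlib.pyplot as plt

# The style applies only inside this with-block
with plt.style.context('ggplot'):
    fig, ax = plt.subplots()
    ax.plot([0, 1], [0, 1])
# Outside the block, the previous (default) style is active again
```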
## Custom Style Sheets
If you really want to go deep with Matplotlib, you can also create your own style sheet.
Step 1: Create a file with this name (or any other name):
`my_style.mplstyle`
Into that file, write your own default values for particular style elements:
`axes.titlesize : 20
axes.labelsize : 16
lines.linewidth : 3
lines.markersize : 10
xtick.labelsize : 14
ytick.labelsize : 14
axes.grid : True
grid.color : red`
etc.
The documentation on all possible parameters is available here: https://matplotlib.org/tutorials/introductory/customizing.html#sphx-glr-tutorials-introductory-customizing-py
Load the style:
```
plt.style.use('BeautifulCharts/my_style.mplstyle')
```
Test:
```
df.plot(kind='scatter',
x='gdp_per_capita',
y='life_expectancy',
title='Life expectancy and prosperity')
```
| github_jupyter |
# Microsoft Azure Computer Vision API with Python
This Jupyter Notebook is almost a verbatim copy of that found here:
- https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/quickstarts/python
In order to use this notebook, you must obtain a subscription key:
- https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/vision-api-how-to-topics/howtosubscribe
## Computer Vision Python Quick Starts
This article provides information and code samples to help you quickly get started using the Computer Vision API with Python to accomplish the following tasks:
* [Analyze an image](#AnalyzeImage)
* [Use a domain-specific Model](#DomainSpecificModel)
* [Intelligently generate a thumbnail](#GetThumbnail)
* [Detect and extract printed text from an image](#OCR)
* [Detect and extract handwritten text from an image](#RecognizeText)
To use the Computer Vision API, you need a subscription key. You can get free subscription keys [here](https://docs.microsoft.com/azure/cognitive-services/Computer-vision/Vision-API-How-to-Topics/HowToSubscribe).
You can run this example locally in your own Jupyter Notebook or online via [MyBinder](https://mybinder.org) by clicking on the launch Binder badge:
[](https://mybinder.org/v2/gh/Microsoft/cognitive-services-notebooks/master?filepath=VisionAPI.ipynb)
## Analyze an image with Computer Vision API using Python
<a name="AnalyzeImage"> </a>
With the [Analyze Image method](https://westcentralus.dev.cognitive.microsoft.com/docs/services/56f91f2d778daf23d8ec6739/operations/56f91f2e778daf14a499e1fa), you can extract visual features based on image content. You can upload an image or specify an image URL and choose which features to return, including:
* A detailed list of tags related to the image content.
* A description of image content in a complete sentence.
* The coordinates, gender, and age of any faces contained in the image.
* The ImageType (clip art or a line drawing).
* The dominant color, the accent color, or whether an image is black & white.
* The category defined in this [taxonomy](https://docs.microsoft.com/azure/cognitive-services/computer-vision/category-taxonomy).
* Whether the image contains adult or sexually suggestive content.
### Analyze an image
To begin analyzing images, replace `subscription_key` with a valid API key that you obtained earlier.
```
subscription_key = None
assert subscription_key
```
Next, ensure that region in `vision_base_url` corresponds to the one where you generated the API key (`westus`, `westcentralus`, etc.). If you are using a free trial subscription key, you do not need to make any changes here.
```
vision_base_url = "https://westcentralus.api.cognitive.microsoft.com/vision/v1.0/"
```
The image analysis URL looks like the following (see REST API docs [here](https://westus.dev.cognitive.microsoft.com/docs/services/56f91f2d778daf23d8ec6739/operations/56f91f2e778daf14a499e1fa)):
<code>
https://[location].api.cognitive.microsoft.com/vision/v1.0/<b>analyze</b>[?visualFeatures][&details][&language]
</code>
```
vision_analyze_url = vision_base_url + "analyze"
```
To begin analyzing an image, set `image_url` to the URL of any image that you want to analyze.
```
# some example images from my lab:
# https://makeabilitylab.umiacs.umd.edu/media/banner/IMG_1988653f5fb4981f4a50bd91eb39a16ae970.JPG
# https://makeabilitylab.umiacs.umd.edu/media/banner/3_copy44eb2137a25b42ddbef7aa2bf94c9733.jpg
image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/1/12/Broadway_and_Times_Square_by_night.jpg/450px-Broadway_and_Times_Square_by_night.jpg"
```
The following block uses the `requests` library in Python to call out to the Computer Vision `analyze` API and return the results as a JSON object. The API key is passed in via the `headers` dictionary and the types of features to recognize via the `params` dictionary. To see the full list of options that can be used, refer to the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/56f91f2d778daf23d8ec6739/operations/56f91f2e778daf14a499e1fa) documentation for image analysis.
```
import requests
headers = {'Ocp-Apim-Subscription-Key': subscription_key }
params = {'visualFeatures': 'Categories,Description,Color'}
data = {'url': image_url}
response = requests.post(vision_analyze_url, headers=headers, params=params, json=data)
response.raise_for_status()
analysis = response.json()
```
The `analysis` object contains various fields that describe the image. The most relevant caption for the image can be obtained from the `descriptions` property.
```
image_caption = analysis["description"]["captions"][0]["text"].capitalize()
print(image_caption)
```
The following lines of code display the image and overlay it with the inferred caption.
```
%matplotlib inline
from PIL import Image
from io import BytesIO
import matplotlib.pyplot as plt
image = Image.open(BytesIO(requests.get(image_url).content))
plt.imshow(image)
plt.axis("off")
_ = plt.title(image_caption, size="x-large", y=-0.1)
```
## Use a domain-specific model <a name="DomainSpecificModel"> </a>
A [domain-specific model](https://westus.dev.cognitive.microsoft.com/docs/services/56f91f2d778daf23d8ec6739/operations/56f91f2e778daf14a499e1fd) is a model trained to identify a specific set of objects in an image. The two domain-specific models that are currently available are _celebrities_ and _landmarks_.
To view the list of domain-specific models supported, you can make the following request against the service.
```
model_url = vision_base_url + "models"
headers = {'Ocp-Apim-Subscription-Key': subscription_key}
models = requests.get(model_url, headers=headers).json()
[model["name"] for model in models["models"]]
```
### Landmark identification
To begin using the domain-specific model for landmarks, set `image_url` to point to an image to be analyzed.
```
image_url = "https://upload.wikimedia.org/wikipedia/commons/f/f6/Bunker_Hill_Monument_2005.jpg"
```
The service end point to analyze images for landmarks can be constructed as follows:
```
landmark_analyze_url = vision_base_url + "models/landmarks/analyze"
print(landmark_analyze_url)
```
The image in `image_url` can now be analyzed for any landmarks. The identified landmark is stored in `landmark_name`.
```
headers = {'Ocp-Apim-Subscription-Key': subscription_key}
params = {'model': 'landmarks'}
data = {'url': image_url}
response = requests.post(landmark_analyze_url, headers=headers, params=params, json=data)
response.raise_for_status()
analysis = response.json()
assert analysis["result"]["landmarks"] != []
landmark_name = analysis["result"]["landmarks"][0]["name"].capitalize()
image = Image.open(BytesIO(requests.get(image_url).content))
plt.imshow(image)
plt.axis("off")
_ = plt.title(landmark_name, size="x-large", y=-0.1)
```
### Celebrity identification
Along the same lines, the domain-specific model for identifying celebrities can be invoked as shown next. First set `image_url` to point to the image of a celebrity.
```
image_url = "https://upload.wikimedia.org/wikipedia/commons/d/d9/Bill_gates_portrait.jpg"
```
The service end point for detecting celebrity images can be constructed as follows:
```
celebrity_analyze_url = vision_base_url + "models/celebrities/analyze"
print(celebrity_analyze_url)
```
Next, the image in `image_url` can be analyzed for celebrities
```
headers = {'Ocp-Apim-Subscription-Key': subscription_key}
params = {'model': 'celebrities'}
data = {'url': image_url}
response = requests.post(celebrity_analyze_url, headers=headers, params=params, json=data)
response.raise_for_status()
analysis = response.json()
print(analysis)
```
The following lines of code extract the name and bounding box for one of the celebrities found:
```
assert analysis["result"]["celebrities"] != []  # "is not []" would always pass; compare for emptiness instead
celebrity_info = analysis["result"]["celebrities"][0]
celebrity_name = celebrity_info["name"]
celebrity_face = celebrity_info["faceRectangle"]
```
Next, this information can be overlaid on top of the original image using the following lines of code:
```
from matplotlib.patches import Rectangle
plt.figure(figsize=(5,5))
image = Image.open(BytesIO(requests.get(image_url).content))
ax = plt.imshow(image, alpha=0.6)
origin = (celebrity_face["left"], celebrity_face["top"])
p = Rectangle(origin, celebrity_face["width"], celebrity_face["height"],
fill=False, linewidth=2, color='b')
ax.axes.add_patch(p)
plt.text(origin[0], origin[1], celebrity_name, fontsize=20, weight="bold", va="bottom")
_ = plt.axis("off")
```
## Get a thumbnail with Computer Vision API
<a name="GetThumbnail"> </a>
Use the [Get Thumbnail method](https://westcentralus.dev.cognitive.microsoft.com/docs/services/56f91f2d778daf23d8ec6739/operations/56f91f2e778daf14a499e1fb) to crop an image based on its region of interest (ROI) to the height and width you desire. The aspect ratio you set for the thumbnail can be different from the aspect ratio of the input image.
To generate the thumbnail for an image, first set `image_url` to point to its location.
```
image_url = "https://upload.wikimedia.org/wikipedia/commons/9/94/Bloodhound_Puppy.jpg"
```
The service end point to generate the thumbnail can be constructed as follows:
```
thumbnail_url = vision_base_url + "generateThumbnail"
print(thumbnail_url)
```
Next, a 50-by-50 pixel thumbnail for the image can be generated by calling this service endpoint.
```
headers = {'Ocp-Apim-Subscription-Key': subscription_key}
params = {'width': '50', 'height': '50','smartCropping': 'true'}
data = {'url': image_url}
response = requests.post(thumbnail_url, headers=headers, params=params, json=data)
response.raise_for_status()
```
You can verify that the thumbnail is indeed 50-by-50 pixels using the Python Image Library.
```
thumbnail = Image.open(BytesIO(response.content))
print("Thumbnail is {0}-by-{1}".format(*thumbnail.size))
thumbnail
```
## Optical character recognition (OCR) with Computer Vision API <a name="OCR"> </a>
Use the [Optical Character Recognition (OCR) method](https://westcentralus.dev.cognitive.microsoft.com/docs/services/56f91f2d778daf23d8ec6739/operations/56f91f2e778daf14a499e1fc) to detect text in an image and extract recognized characters into a machine-usable character stream.
To illustrate the OCR API, set `image_url` to point to the text to be recognized.
```
image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/a/af/Atomist_quote_from_Democritus.png/338px-Atomist_quote_from_Democritus.png"
```
The service end point for OCR for your region can be constructed as follows:
```
ocr_url = vision_base_url + "ocr"
print(ocr_url)
```
Next, you can call into the OCR service to get the text that was recognized along with bounding boxes. In the parameters shown, `"language": "unk"` automatically detects the language in the text and `"detectOrientation": "true"` automatically aligns the image. For more information, see the [REST API documentation](https://westus.dev.cognitive.microsoft.com/docs/services/56f91f2d778daf23d8ec6739/operations/56f91f2e778daf14a499e1fc).
```
headers = {'Ocp-Apim-Subscription-Key': subscription_key}
params = {'language': 'unk', 'detectOrientation': 'true'}
data = {'url': image_url}
response = requests.post(ocr_url, headers=headers, params=params, json=data)
response.raise_for_status()
analysis = response.json()
```
The word bounding boxes and text from the results of analysis can be extracted using the following lines of code:
```
line_infos = [region["lines"] for region in analysis["regions"]]
word_infos = []
for line in line_infos:
for word_metadata in line:
for word_info in word_metadata["words"]:
word_infos.append(word_info)
word_infos
```
Finally, the recognized text can be overlaid on top of the original image using the `matplotlib` library.
```
plt.figure(figsize=(5,5))
image = Image.open(BytesIO(requests.get(image_url).content))
ax = plt.imshow(image, alpha=0.5)
for word in word_infos:
bbox = [int(num) for num in word["boundingBox"].split(",")]
text = word["text"]
origin = (bbox[0], bbox[1])
patch = Rectangle(origin, bbox[2], bbox[3], fill=False, linewidth=2, color='y')
ax.axes.add_patch(patch)
plt.text(origin[0], origin[1], text, fontsize=20, weight="bold", va="top")
_ = plt.axis("off")
```
## Text recognition with Computer Vision API <a name="RecognizeText"> </a>
Use the [RecognizeText method](https://ocr.portal.azure-api.net/docs/services/56f91f2d778daf23d8ec6739/operations/587f2c6a154055056008f200) to detect handwritten or printed text in an image and extract recognized characters into a machine-usable character stream.
Set `image_url` to point to the image to be recognized.
```
image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Cursive_Writing_on_Notebook_paper.jpg/800px-Cursive_Writing_on_Notebook_paper.jpg"
```
The service end point for the text recognition service can be constructed as follows:
```
text_recognition_url = vision_base_url + "RecognizeText"
print(text_recognition_url)
```
The handwritten text recognition service can be used to recognize the text in the image. In the `params` dictionary, set `handwriting` to `False` to recognize only printed text.
```
headers = {'Ocp-Apim-Subscription-Key': subscription_key}
params = {'handwriting': True}
data = {'url': image_url}
response = requests.post(text_recognition_url, headers=headers, params=params, json=data)
response.raise_for_status()
```
The text recognition service does not return the recognized text by itself. Instead, it returns immediately with an "Operation Location" URL in the response header that must be polled to get the result of the operation.
```
operation_url = response.headers["Operation-Location"]
```
After obtaining the `operation_url`, you can query it for the analyzed text. The following lines of code implement a polling loop in order to wait for the operation to complete. Notice that the polling is done via an HTTP `GET` method instead of `POST`.
```
import time
analysis = {}
while "recognitionResult" not in analysis:
response_final = requests.get(response.headers["Operation-Location"], headers=headers)
analysis = response_final.json()
time.sleep(1)
```
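The loop above spins forever if the service reports a failure instead of a result. A more defensive, generic polling helper is easy to sketch; this utility and its names are ours, not part of the Azure client libraries:

```python
import time

def poll(fetch, is_done, interval=1.0, max_tries=30):
    """Call fetch() until is_done(result) is true or the attempt
    budget runs out, sleeping between attempts."""
    for _ in range(max_tries):
        result = fetch()
        if is_done(result):
            return result
        time.sleep(interval)
    raise TimeoutError("operation did not complete in time")
```

In the notebook's case, `fetch` would wrap the `requests.get` call on the Operation-Location URL and `is_done` would be the membership test for `"recognitionResult"`.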
Next, the recognized text along with the bounding boxes can be extracted as shown in the following line of code. An important point to note is that the handwritten text recognition API returns bounding boxes as **polygons** instead of **rectangles**. Each polygon _p_ is defined by its vertices, specified using the following convention:
<i>p</i> = [<i>x</i><sub>1</sub>, <i>y</i><sub>1</sub>, <i>x</i><sub>2</sub>, <i>y</i><sub>2</sub>, ..., <i>x</i><sub>N</sub>, <i>y</i><sub>N</sub>]
```
polygons = [(line["boundingBox"], line["text"]) for line in analysis["recognitionResult"]["lines"]]
```
Finally, the recognized text can be overlaid on top of the original image using the extracted polygon information. Notice that `matplotlib` requires the vertices to be specified as a list of tuples of the form:
<i>p</i> = [(<i>x</i><sub>1</sub>, <i>y</i><sub>1</sub>), (<i>x</i><sub>2</sub>, <i>y</i><sub>2</sub>), ..., (<i>x</i><sub>N</sub>, <i>y</i><sub>N</sub>)]
and the post-processing code transforms the polygon data returned by the service into the form required by `matplotlib`.
```
from matplotlib.patches import Polygon
plt.figure(figsize=(15,15))
image = Image.open(BytesIO(requests.get(image_url).content))
ax = plt.imshow(image)
for polygon in polygons:
vertices = [(polygon[0][i], polygon[0][i+1]) for i in range(0,len(polygon[0]),2)]
text = polygon[1]
patch = Polygon(vertices, closed=True,fill=False, linewidth=2, color='y')
ax.axes.add_patch(patch)
plt.text(vertices[0][0], vertices[0][1], text, fontsize=20, va="top")
_ = plt.axis("off")
```
## Analyze an image stored on disk
The Computer Vision REST APIs don't just accept URLs to publicly accessible images. They can also be provided the image to be analyzed as part of the HTTP body. For more details of this feature, see the documentation [here](https://westus.dev.cognitive.microsoft.com/docs/services/56f91f2d778daf23d8ec6739/operations/56f91f2e778daf14a499e1fa).
The code in this section uses this feature to analyze a sample image on disk. The primary difference between passing in an image URL vs. image data is that the header to the request must contain an entry of the form:
```py
{"Content-Type": "application/octet-stream"}
```
and the binary image data must be passed in via the `data` parameter to `requests.post` as opposed to the `json` parameter.
First, download a sample image from the [Computer Vision API](https://azure.microsoft.com/services/cognitive-services/computer-vision/) page to the local file system and make `image_path` point to it.
```
%%bash
mkdir -p images
curl -Ls https://aka.ms/csnb-house-yard -o images/house_yard.jpg
```
Note that the Python assignment must live in its own cell; inside a `%%bash` cell it would be interpreted as a shell command.
```
image_path = "images/house_yard.jpg"
```
Then, read it into a byte array and send it to the Vision service to be analyzed.
```
image_data = open(image_path, "rb").read()
headers = {'Ocp-Apim-Subscription-Key': subscription_key,
"Content-Type": "application/octet-stream" }
params = {'visualFeatures': 'Categories,Description,Color'}
response = requests.post(vision_analyze_url,
headers=headers,
params=params,
data=image_data)
response.raise_for_status()
analysis = response.json()
image_caption = analysis["description"]["captions"][0]["text"].capitalize()
image_caption
```
As before, the caption can be easily overlaid on the image. Notice that since the image is already available locally, the process is slightly shorter.
```
image = Image.open(image_path)
plt.imshow(image)
plt.axis("off")
_ = plt.title(image_caption, size="x-large", y=-0.1)
```
| github_jupyter |
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/JavaScripts/CloudMasking/Landsat8SurfaceReflectance.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/JavaScripts/CloudMasking/Landsat8SurfaceReflectance.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/JavaScripts/CloudMasking/Landsat8SurfaceReflectance.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front end and the backend, which lets the map capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionality for capturing user input (e.g., mouse clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except ImportError:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# This example demonstrates the use of the pixel QA band to mask
# clouds in surface reflectance (SR) data. It is suitable
# for use with any of the Landsat SR datasets.
# Function to cloud mask from the pixel_qa band of Landsat 8 SR data.
def maskL8sr(image):
# Bits 3 and 5 are cloud shadow and cloud, respectively.
cloudShadowBitMask = 1 << 3
cloudsBitMask = 1 << 5
# Get the pixel QA band.
qa = image.select('pixel_qa')
# Both flags should be set to zero, indicating clear conditions.
mask = qa.bitwiseAnd(cloudShadowBitMask).eq(0) \
.And(qa.bitwiseAnd(cloudsBitMask).eq(0))
# Return the masked image, scaled to reflectance, without the QA bands.
return image.updateMask(mask).divide(10000) \
.select("B[0-9]*") \
.copyProperties(image, ["system:time_start"])
# Map the function over one year of data.
collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR') \
.filterDate('2016-01-01', '2016-12-31') \
.map(maskL8sr)
composite = collection.median()
# Display the results.
Map.addLayer(composite, {'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 0.3})
```
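The masking logic above relies on bit arithmetic: `bitwiseAnd` does per pixel exactly what Python's `&` does on plain integers, so the QA test can be sanity-checked locally:

```python
# Bits 3 and 5 of pixel_qa flag cloud shadow and cloud, respectively.
CLOUD_SHADOW_BIT = 1 << 3
CLOUD_BIT = 1 << 5

def is_clear(qa: int) -> bool:
    """A pixel is clear when neither the cloud-shadow bit nor the
    cloud bit is set in its QA value."""
    return (qa & CLOUD_SHADOW_BIT) == 0 and (qa & CLOUD_BIT) == 0

print(is_clear(0b000000))  # True: nothing flagged
print(is_clear(0b001000))  # False: cloud shadow (bit 3) set
print(is_clear(0b100000))  # False: cloud (bit 5) set
```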
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
from meijer import Meijer
m = Meijer()
self = m
def get_list(self):
request = dict()
request["url"] = "https://mservices.meijer.com/listmanagement/api/list"
request["headers"] = {
"Accept": "application/meijer.shoppingList.ShoppingList-v1.0+json",
}
r = self.get(**request)
return r
def get_favorites(self):
request = dict()
request["url"] = "https://mservices.meijer.com/listmanagement/api/favorites"
request["headers"] = {
"Accept": "application/vnd.meijer.favorites-v1.0+json",
}
r = self.get(**request)
return r
r = get_favorites(self)
r
r = get_list(self)
r
def list_add(self, description):
request = dict()
request["url"] = "https://mservices.meijer.com/listmanagement/api/list"
request["headers"] = {
"Accept": "application/vnd.meijer.listManagement.list-v1.0+json",
"Content-Type": "application/vnd.meijer.listManagement.list-v1.0+json",
}
request["json"] = {'listItems': [{
'itemDescription': description,
'quantity': 1,
}]}
r = self.post(**request)
return r
list_add(self, "FooBaz100")
for i in range(10):
list_add(self, f"Hello World {str(i)}")
import json
for list_item in r["listItems"]:
break
def list_complete(self, item_id):
request = dict()
request["url"] = f"https://mservices.meijer.com/listmanagement/api/listItem/markascompleted/{item_id}"
request["headers"] = {
"Accept": "application/vnd.meijer.listManagement.list-v1.0+json",
"Content-Type": "application/vnd.meijer.listManagement.list-v1.0+json",
}
r = self.session.put(**request)
return r
def list_uncomplete(self, item_id):
request = dict()
request["url"] = f"https://mservices.meijer.com/listmanagement/api/listItem/markasnotcompleted/{item_id}"
request["headers"] = {
"Accept": "application/vnd.meijer.listManagement.list-v1.0+json",
"Content-Type": "application/vnd.meijer.listManagement.list-v1.0+json",
}
r = self.session.put(**request)
return r
list_ = get_list(self)
for item in list_["listItems"]:
list_complete(self, item["listItemId"])
class MeijerList(object):
"""Meijer Shopping List."""
def __init__(self, meijer=None):
if meijer is None:
self.meijer=Meijer()
else:
self.meijer=meijer
def get(self):
request = dict()
request["url"] = "https://mservices.meijer.com/listmanagement/api/list"
request["headers"] = {
"Accept": "application/meijer.shoppingList.ShoppingList-v1.0+json",
}
r = self.meijer.get(**request)
return r
def add(self, description):
request = dict()
request["url"] = "https://mservices.meijer.com/listmanagement/api/list"
request["headers"] = {
"Accept": "application/vnd.meijer.listManagement.list-v1.0+json",
"Content-Type": "application/vnd.meijer.listManagement.list-v1.0+json",
}
request["json"] = {'listItems': [{
'itemDescription': description,
'quantity': 1,
}]}
r = self.meijer.post(**request)
return r
def complete(self, item: "Union[dict, int]"):
if isinstance(item, dict) and "listItemId" in item:
listItemId = item["listItemId"]
elif isinstance(item, int):
listItemId = item
request = dict()
request["url"] = f"https://mservices.meijer.com/listmanagement/api/listItem/markascompleted/{listItemId}"
request["headers"] = {
"Accept": "application/vnd.meijer.listManagement.list-v1.0+json",
"Content-Type": "application/vnd.meijer.listManagement.list-v1.0+json",
}
r = self.meijer.put(**request)
return r
def uncomplete(self, item: "Union[dict, int]"):
if isinstance(item, dict) and "listItemId" in item:
listItemId = item["listItemId"]
elif isinstance(item, int):
listItemId = item
request = dict()
request["url"] = f"https://mservices.meijer.com/listmanagement/api/listItem/markasnotcompleted/{listItemId}"
request["headers"] = {
"Accept": "application/vnd.meijer.listManagement.list-v1.0+json",
"Content-Type": "application/vnd.meijer.listManagement.list-v1.0+json",
}
r = self.meijer.put(**request)
return r
@property
def items(self):
return self.get()["listItems"]
@property
def count(self):
r = self.get()
assert r["totalCount"] == len(r["listItems"])
return r["totalCount"]
shopping_list = MeijerList(meijer=self)
for item in shopping_list.items:
shopping_list.uncomplete(item)
shopping_list.get()
shopping_list.count
l = shopping_list.get()
```
| github_jupyter |
# Working with brainsight module
```
from pynetstim.brainsight import BrainsightProject, chunk_samples, plot_chunks
from pynetstim.plotting import plotting_points
from pynetstim.coordinates import FreesurferCoords
from pynetstim.freesurfer_files import Surf
from pynetstim.utils import clean_plot
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import os
```
## Brainsight project
```
subject = 'Broad_70_MW'
base_dir = '/users/ehsantadayon/Desktop/Broad_70_MW'
freesurfer_dir = '/Users/ehsantadayon/Desktop/Broad_70_MW/freesurfer2'
project_dir = os.path.join(base_dir,'{subject}/pynetstim_output'.format(subject=subject))
brainsight_file = os.path.join(base_dir, 'brainsight/brainsight.txt')
project = BrainsightProject(subject, freesurfer_dir = freesurfer_dir,
project_dir= project_dir,
brainsight_file = brainsight_file)
#project.summary(plot_pulses=True,overwrite=True,heightpx=250,widthpx=900)
```
#### samples and targets
```
targets = project.brainsight_targets.to_freesurfer_coords()
samples = project.brainsight_samples
def get_coords_df(targets,coord_types='all', by=None,vals=None,to_df=True):
if coord_types=='all':
coord_types = targets.coordinates.keys()
if by:
targets = targets.subset(by,vals)
results = []
for coord_type in coord_types:
coords = targets.coordinates[coord_type]
coords_df = pd.DataFrame(coords,index=targets.name,columns=[coord_type+'_{axis}'.format(axis=a) for a in ['X','Y','Z']])
print(coords_df.head())
results.append(coords_df)
return pd.concat(results,axis=1)
import pandas as pd
get_coords_df(targets,by='name',vals=['R_DLPFC'])
### getting samples for L_DLPFC
lipl_samples = samples.get_target_stims('L_IPL')
lipl_samples.head()
### plotting samples target errors
fig,ax = plt.subplots()
ax.plot(lipl_samples.target_error,'*-')
ax.set_xlabel('Stimulation Pulse')
ax.set_ylabel('LIPL Target Error')
fig,ax = clean_plot(fig,ax)
```
As the jumps in the plot show, L_IPL has been stimulated in different sessions. We can chunk the samples.
```
chunks = chunk_samples(lipl_samples, thr=50)
plot_chunks(chunks)
```
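The chunking idea (split a sequence of samples wherever consecutive values jump by more than a threshold) can be illustrated on plain numbers. This is a toy reimplementation of the concept, not pynetstim's actual `chunk_samples`:

```python
def chunk_by_gap(values, thr):
    """Split a sorted sequence into runs wherever the gap between
    neighbouring values exceeds thr."""
    chunks, current = [], []
    for v in values:
        if current and v - current[-1] > thr:
            chunks.append(current)
            current = []
        current.append(v)
    if current:
        chunks.append(current)
    return chunks

chunk_by_gap([1, 2, 3, 100, 101, 250], thr=50)
# -> [[1, 2, 3], [100, 101], [250]]
```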
## visualization of stimulations
```
targets.get_name()
fstargets = targets.to_freesurfer_coords()
names_to_plot = ['c_Def','c_DAN','L_DLPFC','L_IPL','V1_M1_3']
### subseting targets based on name
fstargets_to_plot = fstargets.subset('name',names_to_plot)
p = plotting_points(fstargets_to_plot,hemi='both',surf='white',
show_roi=True,show_name=True,scale_factor=1,name_scale=4,
opacity=1,annot='Yeo2011_7Networks_N1000',show_directions=True)
p.brain.save_imageset('sample_',views=['dor','med','lat'])
p.show()
```

| github_jupyter |
# Webscraping Color Palette
## Scraping rules
- You should check a site's terms and conditions before you scrape them. It's their data and they likely have some rules to govern it.
- Be nice - A computer will send web requests much quicker than a user can. Make sure you space out your requests a bit so that you don't hammer the site's server.
- Scrapers break - Sites change their layout all the time. If that happens, be prepared to rewrite your code.
- Web pages are inconsistent - There's sometimes some manual clean up that has to happen even after you've gotten your data.
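Following the second rule, requests can be spaced out with a small randomized delay. A minimal helper (ours, not from any scraping library):

```python
import random
import time

def polite_delay(base=1.0, jitter=0.5):
    """Sleep for base seconds plus a random jitter so consecutive
    requests are spaced out instead of hitting the server at machine speed."""
    delay = base + random.uniform(0, jitter)
    time.sleep(delay)
    return delay
```

Calling `polite_delay()` at the end of each iteration of a scraping loop keeps the request rate roughly human.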
<h3>Import necessary modules</h3>
```
import numpy as np
import pandas as pd
import os
import requests
# from bs4 import BeautifulSoup
import selenium
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
# from selenium.webdriver.support.ui import WebDriverWait
# from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
```
### Selenium headless driver options
```
chrome_options = Options()
chrome_options.add_argument("--headless")
chrome_options.add_argument("--incognito")
chrome_options.binary_location = '/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary'
driver_dir = '../_driver_headless/chromedriver'
```
### Selenium browser (not headless) options
```
browser_options = Options()
browser_options.add_argument("--incognito")
browser_options.binary_location = '/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary'
```
### Start chrome browser instance
```
browser = webdriver.Chrome(executable_path=os.path.abspath(driver_dir), chrome_options=browser_options)
```
### Scrape with browser
```
url = 'http://www.color-hex.com/color-palette'
browser.get(url)
browser.current_url
```
### Test X-path
```
url = 'http://www.color-hex.com/color-palette/'
keyword = 61326
url += str(keyword)
browser.get(url)
[tag.text for tag in browser.find_elements_by_xpath('//td/a')]
[tag.text for tag in browser.find_elements_by_xpath('//em')]
url = 'http://www.color-hex.com/color-palette/{}'.format(150)
browser.get(url)
```
### Test GET status code
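This section was left without code. One way to test the status code, reusing the `requests` import from above (the helper name is ours):

```python
def should_save(status_code: int) -> bool:
    """Treat only HTTP 200 responses as real palette pages worth parsing."""
    return status_code == 200

# With the requests library (needs network access):
# resp = requests.get('http://www.color-hex.com/color-palette/150', timeout=5)
# print(resp.status_code, should_save(resp.status_code))
```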
### Scrape all palettes: #0 to #100000 and write to store
### Restart browser
```
browser = webdriver.Chrome(executable_path=os.path.abspath(driver_dir), chrome_options=browser_options)
palettes = []
url = 'http://www.color-hex.com/color-palette/'
browser.set_page_load_timeout(10)
for i in range(60000, 70000):
try:
url = 'http://www.color-hex.com/color-palette/{}'.format(i)
browser.get(url)
print('.', end='', flush=True)
pal_name = browser.find_elements_by_xpath('//em')
if pal_name:
name = [tag.text for tag in pal_name]
hexs = [tag.text for tag in browser.find_elements_by_xpath('//td/a')]
item = (i, ''.join(name), url, hexs)
print(item)
palettes.append(item)
except TimeoutException as ex:
print(ex)
continue
# store results in batches during scraping and append dataframe
if i % 20 == 0:
print('.', end='', flush=True)
%store palettes
df_palettes = pd.DataFrame(palettes, columns=['number', 'name', 'url', 'hexs'])
df_palettes['hexs'] = df_palettes['hexs'].astype(list)
df_palettes.to_csv('../_data/col_hex_palettes.csv', mode='a', index=False)
palettes = []
# Preview a few of the scraped palettes
import seaborn as sns
for i in df_palettes.index[:5]:
    pal = sns.color_palette(df_palettes.loc[i, 'hexs'])
    sns.palplot(pal)
```
### Refresh palettes
```
%store -r
palettes
import pandas as pd
df_palettes = pd.DataFrame(palettes)
df_palettes
# Append to csv file
df_palettes.to_csv('../_data/color_hex_palettes.csv', mode='a', index=False)
# df = pd.read_csv('../_data/color_hex_palettes.csv')
# df.columns
# df = df[['0', '1', '2', '3']]
# df.to_csv('../_data/color_hex_palettes.csv', index=False)
df_palettes = pd.read_csv('../_data/color_hex_palettes.csv')
import seaborn as sns
import re
for pal in df_palettes.loc[:5, '3']:
    pall = re.sub(r"[\[\]' ]", '', pal)
    pall = pall.split(',')
    print(pall, type(pall))
    sns.palplot(sns.color_palette(pall))
```
| github_jupyter |
## Copy your notebook version
[](https://colab.research.google.com/github/Building-ML-Pipelines/building-machine-learning-pipelines/blob/master/chapters/adv_tfx/Custom_TFX_Components.ipynb)
Bit.ly: https://bit.ly/custom_TFX_components
Colab: https://colab.research.google.com/github/Building-ML-Pipelines/building-machine-learning-pipelines/blob/master/chapters/adv_tfx/Custom_TFX_Components.ipynb
# Workshop - Developing TensorFlow Extended Components
TLDR: TensorFlow Extended (TFX) allows data scientists to assemble production pipelines for model updates and then run the pipelines on a variety of orchestration tools.
TFX provides basic components to ingest, validate, and transform data, as well as for model training, tuning, validation, and deployment.

Figure taken from "Building Machine Learning Pipelines", O'Reilly July 2020, Hapke, Nelson
One of the strengths of TFX is the extensibility of the framework by building custom components.
### Applications for custom components include:
* Ingestion of user specific data (e.g. images or custom database tables)
* Compiling specific pipeline reports
* Communicating pipeline results (e.g. via Slack or MS Teams)
* Generating additional pipeline artifacts, e.g. model and data cards
## Workshop Outline
In this workshop, we'll introduce two ways of building your TFX components for your ML pipelines. In particular, we'll focus on:
* Brief overview of TFX and pipelines
* Presentation how to build a component from scratch
* Workshop how to extend existing components
In this workshop, we will implement a TFX component that ingests images directly into the ML pipeline and generates labels for each image, instead of converting the images to TFRecord representations outside of the pipeline.
What are the benefits of the implementation?
* Conversion is tracked in the ML Metadata store
* Component output can be cached
* No "glue code" required to connect the images to the pipeline

Figure taken from "Building Machine Learning Pipelines", O'Reilly July 2020, Hapke, Nelson
## TFX - Quick Intro
TFX provides a variety of stand-alone tools and pipeline components.

Figure taken from "Building Machine Learning Pipelines", O'Reilly July 2020, Hapke, Nelson
## TFX Components
TFX components consist of three parts:
* Component driver
* Component executor
* Component publisher
The driver and publisher communicate with the ML Metadata store and retrieve the ML artifact references. Components pass data references from component to component, not the actual data!
The action happens in the component executor. More about that later ...

## How can you implement custom components?
### Python-implementation based components
* Easiest way of building TFX components
* Newly added `@component` decorator
* Function defines the executor behavior of a component
More info: https://github.com/tensorflow/tfx/blob/master/docs/guide/custom_function_component.md
Example from the TFX docs
```
@component
def MyValidationComponent(
model: InputArtifact[Model],
blessing: OutputArtifact[Model],
accuracy_threshold: Parameter[int] = 10,
) -> OutputDict(accuracy=float):
'''My simple custom model validation component.'''
accuracy = evaluate_model(model)
if accuracy >= accuracy_threshold:
write_output_blessing(blessing)
return {
'accuracy': accuracy
}
```
### Container-based components
* Language independent
* Docker image required
* Execute container via `create_container_component` with inputs, outputs and parameters defined
* Great for including non-Python code in your pipeline
More info: https://github.com/tensorflow/tfx/blob/master/docs/guide/container_component.md
### Fully implemented components
* Best for reusing existing components
More details below ...
## How to implement a component?

Figures taken from "Building Machine Learning Pipelines", O'Reilly July 2020, Hapke, Nelson
## Extending existing TFX components

### Benefits
* Less boilerplate code
* Reuse of existing component drivers and publishers
* Faster implementation
## Where to find more details?
If you are interested in a detailed introduction to TensorFlow Extended and other TensorFlow libraries, check out:
* [TensorFlow and TFX documentation](https://www.tensorflow.org/tfx)
* O'Reilly publication on machine learning pipelines with TFX
<img src="https://drive.google.com/uc?export=view&id=17Rtpso9UrE6HmhxCmtyd0aETr3WKSZ0e" width="450">
* [Amazon.com](https://www.amazon.com/dp/1492053198/)
* [Powells.com](https://www.powells.com/book/building-machine-learning-pipelines-9781492053194)
## Code Outline
* Download example dataset
* Install required Python packages
* Restart notebook kernel
* Import required packages & modules
* Define helper functions
* Walk through a component implementation from scratch
* Implement a component by overwriting the component executor
* Create a pipeline with the component
## Download example dataset
For this workshop, we'll use the public cats & dogs dataset created by Microsoft. The dataset contains two folders: "Dog" and "Cat".
```
!rm -rf /content/PetImages/
!rm *.zip
!wget https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip
!unzip -q -d /content/ /content/kagglecatsanddogs_3367a.zip
!echo "Count images"
!ls -U /content/PetImages/Cat | wc -l
!ls -U /content/PetImages/Dog | wc -l
!echo "Reduce images for demo purposes"
!cd /content/PetImages/Cat && ls -U | head -12000 | xargs rm
!cd /content/PetImages/Dog && ls -U | head -12000 | xargs rm
!echo "Count images after removal"
!ls -U /content/PetImages/Cat | wc -l
!ls -U /content/PetImages/Dog | wc -l
```
## Install required Python packages
```
!pip install -qU tfx
import tfx
%load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip
```
## Restart notebook kernel
```
%%skip_for_export
import IPython
IPython.Application.instance().kernel.do_shutdown(True)
```
## Import required packages & modules
```
import base64
import logging
import os
import random
import re
import sys
from typing import Any, Dict, Iterable, List, Text
import absl
import apache_beam as beam
import tensorflow as tf
import tensorflow_model_analysis as tfma
import tfx
from google.protobuf import json_format
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
from tensorflow_transform.saved import saved_transform_io
from tensorflow_transform.tf_metadata import (dataset_metadata, dataset_schema,
metadata_io, schema_utils)
from tfx import types
from tfx.components import (Evaluator, Pusher, ResolverNode, StatisticsGen,
Trainer)
from tfx.components.base import (base_component, base_driver, base_executor,
executor_spec)
from tfx.components.example_gen import driver
from tfx.components.example_gen.base_example_gen_executor import (
INPUT_KEY, BaseExampleGenExecutor)
from tfx.components.example_gen.component import FileBasedExampleGen
from tfx.components.example_gen.import_example_gen.component import \
ImportExampleGen
from tfx.components.example_gen.utils import dict_to_example
from tfx.components.example_validator.component import ExampleValidator
from tfx.components.schema_gen.component import SchemaGen
from tfx.components.statistics_gen.component import StatisticsGen
from tfx.components.trainer.executor import GenericExecutor
from tfx.components.transform.component import Transform
from tfx.dsl.experimental import latest_blessed_model_resolver
from tfx.orchestration import data_types, metadata, pipeline
from tfx.orchestration.beam.beam_dag_runner import BeamDagRunner
from tfx.orchestration.experimental.interactive.interactive_context import \
InteractiveContext
from tfx.proto import evaluator_pb2, example_gen_pb2, pusher_pb2, trainer_pb2
from tfx.types import (Channel, artifact_utils, channel_utils,
standard_artifacts)
from tfx.types.component_spec import ChannelParameter, ExecutionParameter
from tfx.types.standard_artifacts import Model, ModelBlessing
from tfx.utils import io_utils
from tfx.utils.dsl_utils import external_input
%load_ext tfx.orchestration.experimental.interactive.notebook_extensions.skip
%%skip_for_export
logger = logging.getLogger()
logger.setLevel(logging.CRITICAL)
```
## Define helper functions
```
%%skip_for_export
%%writefile {"helpers.py"}
import tensorflow as tf
def _int64_feature(value):
"""Wrapper for inserting int64 features into Example proto."""
if not isinstance(value, list):
value = [value]
return tf.train.Feature(int64_list=tf.train.Int64List(value=value))
def _bytes_feature(value):
"""Wrapper for inserting bytes features into Example proto."""
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def get_label_from_filename(filename):
""" Function to set the label for each image. In our case, we'll use the file
path of a label indicator. Based on your initial data
Args:
filename: string, full file path
Returns:
int - label
Raises:
NotImplementedError if not label category was detected
"""
lowered_filename = filename.lower()
if "dog" in lowered_filename:
label = 0
elif "cat" in lowered_filename:
label = 1
else:
raise NotImplementedError("Found unknown image")
return label
def _convert_to_example(image_buffer, label):
"""Function to convert image byte strings and labels into tf.Example structures
Args:
image_buffer: byte string representing the image
label: int
Returns:
TFExample data structure containing the image (byte string) and the label (int encoded)
"""
example = tf.train.Example(
features=tf.train.Features(
feature={
'image/raw': _bytes_feature(image_buffer),
'label': _int64_feature(label)
}))
return example
def get_image_data(filename):
"""Process a single image file.
Args:
filename: string, path to an image file e.g., '/path/to/example.JPG'.
Returns:
TFExample data structure containing the image (byte string) and the label (int encoded)
"""
label = get_label_from_filename(filename)
byte_content = tf.io.read_file(filename)
rs = _convert_to_example(byte_content.numpy(), label)
return rs
```
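The label routing above keys off the directory name embedded in the file path. The same routing in isolation (plain Python, no TensorFlow; `label_from_path` is an illustrative name, not one of the notebook's helpers):

```python
def label_from_path(filename):
    """Map a file path to an integer label: 0 for dogs, 1 for cats,
    mirroring get_label_from_filename above."""
    lowered = filename.lower()
    if "dog" in lowered:
        return 0
    if "cat" in lowered:
        return 1
    raise NotImplementedError("Found unknown image: %s" % filename)

labels = [label_from_path(p) for p in
          ["/content/PetImages/Dog/1.jpg", "/content/PetImages/Cat/2.jpg"]]
```

Note that any path containing neither category name raises, which is how the executor surfaces stray files early.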
## Walk through a component implementation from scratch

### Things to know:
* Component channels: https://github.com/tensorflow/tfx/blob/master/tfx/types/channel.py
* ChannelParameter vs. ExecutionParameter
* TFX `standard_artifacts`: https://github.com/tensorflow/tfx/blob/master/tfx/types/standard_artifacts.py
* TFX `base_driver.BaseDriver`: https://github.com/tensorflow/tfx/blob/master/tfx/components/base/base_driver.py
## 4 Steps
1. Define the component specifications
2. Define the custom executor
3. Define the custom driver
4. Put the entire component together
### Custom Component Specifications
TFX artifact definitions: https://github.com/tensorflow/tfx/blob/master/tfx/types/standard_artifacts.py
Note the difference between `ChannelParameter` (typed artifact inputs and outputs exchanged between components) and `ExecutionParameter` (plain runtime configuration values).
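A dependency-free sketch of that distinction (a conceptual analogy only, not TFX code; `ExecutionParam` and `ChannelParam` are illustrative names): an `ExecutionParameter` carries a plain value that is type-checked when the pipeline is constructed, while a `ChannelParameter` is a typed handle to artifacts that another component produces.

```python
class ExecutionParam:
    """Analogue of ExecutionParameter: a plain value, type-checked up front."""
    def __init__(self, type_):
        self.type = type_

    def validate(self, value):
        if not isinstance(value, self.type):
            raise TypeError("expected %s" % self.type.__name__)
        return value


class ChannelParam:
    """Analogue of ChannelParameter: a handle to typed artifacts produced by
    another component; only the artifact type is checked here."""
    def __init__(self, artifact_type):
        self.artifact_type = artifact_type

    def validate(self, channel):
        if channel.get("type") != self.artifact_type:
            raise TypeError("artifact type mismatch")
        return channel


# The 'name' parameter is a plain string; the 'examples' output is a channel.
name_param = ExecutionParam(str).validate("ImageIngestionComponent")
examples_channel = ChannelParam("Examples").validate(
    {"type": "Examples", "artifacts": []})
```

The real `ComponentSpec` below wires these two kinds into its `PARAMETERS`, `INPUTS`, and `OUTPUTS` dicts.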
```
%%skip_for_export
class CustomIngestionComponentSpec(types.ComponentSpec):
"""ComponentSpec for Custom Ingestion Component."""
PARAMETERS = {
'name': ExecutionParameter(type=Text),
}
INPUTS = {
'input': ChannelParameter(type=standard_artifacts.ExternalArtifact),
}
OUTPUTS = {
'examples': ChannelParameter(type=standard_artifacts.Examples),
}
```
### Custom Component Executor
```
%%skip_for_export
from helpers import get_image_data
class CustomIngestionExecutor(base_executor.BaseExecutor):
"""Executor for CustomIngestionComponent."""
def Do(self, input_dict: Dict[Text, List[types.Artifact]],
output_dict: Dict[Text, List[types.Artifact]],
exec_properties: Dict[Text, Any]) -> None:
input_base_uri = artifact_utils.get_single_uri(input_dict['input'])
image_files = tf.io.gfile.listdir(input_base_uri)
random.shuffle(image_files)
train_images, eval_images = image_files[100:], image_files[:100]
splits = [('train', train_images), ('eval', eval_images)]
for split_name, images in splits:
output_dir = artifact_utils.get_split_uri(
output_dict['examples'], split_name)
tfrecords_filename = os.path.join(output_dir, 'images.tfrecords')
options = tf.io.TFRecordOptions(compression_type=None)
      # Use a context manager so the TFRecord writer is flushed and closed.
      with tf.io.TFRecordWriter(tfrecords_filename, options=options) as writer:
        for image_filename in images:
          image_path = os.path.join(input_base_uri, image_filename)
          example = get_image_data(image_path)
          writer.write(example.SerializeToString())
```
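The executor's split strategy is: shuffle once, hold out the first 100 files for eval, and keep the rest for train. The same pattern in isolation (stdlib only; `split_train_eval` is an illustrative helper, not a TFX API):

```python
import random

def split_train_eval(files, eval_size=100, seed=None):
    """Shuffle a copy of the file list and hold out the first
    eval_size entries for the eval split, as the executor above does."""
    files = list(files)
    random.Random(seed).shuffle(files)
    return files[eval_size:], files[:eval_size]

train, evaluation = split_train_eval([f"img_{i}.jpg" for i in range(500)],
                                     eval_size=100, seed=42)
```

Seeding is optional; the executor shuffles unseeded, so its split differs between runs.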
### Custom Component Driver
```
%%skip_for_export
class CustomIngestionDriver(base_driver.BaseDriver):
"""Custom driver for CustomIngestion component.
  This driver supports file-based ExampleGen: it registers the external file
  path as an artifact, similar to the CsvExampleGen and ImportExampleGen use cases.
"""
def resolve_input_artifacts(
self,
input_channels: Dict[Text, types.Channel],
exec_properties: Dict[Text, Any],
driver_args: data_types.DriverArgs,
pipeline_info: data_types.PipelineInfo,
) -> Dict[Text, List[types.Artifact]]:
"""Overrides BaseDriver.resolve_input_artifacts()."""
del driver_args # unused
del pipeline_info # unused
input_config = example_gen_pb2.Input()
input_dict = channel_utils.unwrap_channel_dict(input_channels)
for input_list in input_dict.values():
for single_input in input_list:
self._metadata_handler.publish_artifacts([single_input])
return input_dict
```
### Component Setup
Putting all the pieces together.
```
%%skip_for_export
class CustomIngestionComponent(base_component.BaseComponent):
"""CustomIngestion Component."""
SPEC_CLASS = CustomIngestionComponentSpec
EXECUTOR_SPEC = executor_spec.ExecutorClassSpec(CustomIngestionExecutor)
DRIVER_CLASS = CustomIngestionDriver
def __init__(self,
input: types.Channel = None,
output_data: types.Channel = None,
name: Text = None):
if not output_data:
examples_artifact = standard_artifacts.Examples()
examples_artifact.split_names = artifact_utils.encode_split_names(['train', 'eval'])
output_data = channel_utils.as_channel([examples_artifact])
spec = CustomIngestionComponentSpec(input=input,
examples=output_data,
name=name)
super(CustomIngestionComponent, self).__init__(spec=spec)
```
## Basic Pipeline
```
%%skip_for_export
test_context = InteractiveContext()
data_root = os.path.join("/content/", 'PetImages', 'Dog')
examples = external_input(data_root)
ingest_images = CustomIngestionComponent(
input=examples, name='ImageIngestionComponent')
test_context.run(ingest_images)
%%skip_for_export
statistics_gen = StatisticsGen(
examples=ingest_images.outputs['examples'])
test_context.run(statistics_gen)
test_context.show(statistics_gen.outputs['statistics'])
```
## Implement a component by overwriting the component executor

### Things to know:
* Decorator `@beam.ptransform_fn`: https://github.com/apache/beam/blob/master/sdks/python/apache_beam/transforms/ptransform.py
* `BaseExampleGenExecutor` class: https://github.com/tensorflow/tfx/blob/v0.22.1/tfx/components/example_gen/base_example_gen_executor.py#L90-L243
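Conceptually, the PTransform defined below is a glob followed by a map: match the split pattern, fail if nothing matches, then convert each file to an example. A plain-Python analogue of that shape (no Beam; `image_to_records` and the inline label rule are illustrative):

```python
import fnmatch

def image_to_records(file_names, split_pattern):
    """Match files against the split pattern, then map each file to a
    (path, label) record -- the same glob-then-map shape as ImageToExample."""
    matched = [f for f in file_names if fnmatch.fnmatch(f, split_pattern)]
    if not matched:
        raise RuntimeError(
            "Split pattern %s does not match any files." % split_pattern)

    def label(path):
        return 0 if "dog" in path.lower() else 1

    return [(f, label(f)) for f in matched]

records = image_to_records(
    ["Dog/1.jpg", "Dog/2.jpg", "Cat/1.jpg", "Cat/notes.txt"], "*/*.jpg")
```

In the Beam version the list comprehension becomes `beam.Create` piped into `beam.Map`, which lets the same logic scale out to many workers.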
```
from helpers import get_image_data
@beam.ptransform_fn
def ImageToExample(
pipeline: beam.Pipeline,
input_dict: Dict[Text, List[types.Artifact]],
exec_properties: Dict[Text, Any],
split_pattern: Text) -> beam.pvalue.PCollection:
"""Read jpeg files and transform to TF examples.
Note that each input split will be transformed by this function separately.
Args:
pipeline: beam pipeline.
input_dict: Input dict from input key to a list of Artifacts.
- input_base: input dir that contains the image data.
exec_properties: A dict of execution properties.
split_pattern: Split.pattern in Input config, glob relative file pattern
that maps to input files with root directory given by input_base.
Returns:
PCollection of TF examples.
"""
input_base_uri = artifact_utils.get_single_uri(input_dict['input'])
image_pattern = os.path.join(input_base_uri, split_pattern)
absl.logging.info(
'Processing input image data {} to TFExample.'.format(image_pattern))
image_files = tf.io.gfile.glob(image_pattern)
if not image_files:
raise RuntimeError(
'Split pattern {} does not match any files.'.format(image_pattern))
return (
pipeline
| beam.Create(image_files)
      | 'ConvertImagesToTFExample' >> beam.Map(get_image_data)
)
class ImageExampleGenExecutor(BaseExampleGenExecutor):
"""TFX example gen executor for processing jpeg format.
Example usage:
from tfx.components.example_gen.component import
FileBasedExampleGen
from tfx.utils.dsl_utils import external_input
example_gen = FileBasedExampleGen(
input=external_input("/content/PetImages/"),
input_config=input_config,
output_config=output,
custom_executor_spec=executor_spec.ExecutorClassSpec(_Executor))
"""
def GetInputSourceToExamplePTransform(self) -> beam.PTransform:
"""Returns PTransform for image to TF examples."""
return ImageToExample
```
## Building your ML Pipeline
```
%%skip_for_export
pipeline_name = "dogs_cats_pipeline"
context = InteractiveContext(pipeline_name=pipeline_name)
output = example_gen_pb2.Output(
split_config=example_gen_pb2.SplitConfig(splits=[
example_gen_pb2.SplitConfig.Split(name='train', hash_buckets=4),
example_gen_pb2.SplitConfig.Split(name='eval', hash_buckets=1)
]))
input_config = example_gen_pb2.Input(splits=[
example_gen_pb2.Input.Split(name='images', pattern='*/*.jpg'),
])
example_gen = FileBasedExampleGen(
input=external_input("/content/PetImages/"),
input_config=input_config,
output_config=output,
custom_executor_spec=executor_spec.ExecutorClassSpec(ImageExampleGenExecutor))
%%skip_for_export
context.run(example_gen)
statistics_gen = StatisticsGen(
examples=example_gen.outputs['examples'])
%%skip_for_export
context.run(statistics_gen)
%%skip_for_export
context.show(statistics_gen.outputs['statistics'])
schema_gen = SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=True)
%%skip_for_export
context.run(schema_gen)
%%skip_for_export
context.show(schema_gen.outputs['schema'])
example_validator = ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_gen.outputs['schema'])
%%skip_for_export
context.run(example_validator)
%%skip_for_export
%%writefile constants.py
from typing import Text
def transformed_name(key: Text) -> Text:
"""Generate the name of the transformed feature from original name."""
return key + '_xf'
# Keys
LABEL_KEY = 'label'
INPUT_KEY = 'image/raw'
# Feature keys
RAW_FEATURE_KEYS = [INPUT_KEY]
# Constants
IMG_SIZE = 160
%%skip_for_export
%%writefile transform.py
import tensorflow as tf
import tensorflow_transform as tft
import logging
from typing import Union, Dict
import constants
import numpy as np
def convert_image(raw_image: tf.Tensor) -> tf.Tensor:
if tf.io.is_jpeg(raw_image):
image = tf.io.decode_jpeg(raw_image, channels=3)
image = tf.cast(image, tf.float32)
image = (image / 127.5) - 1
image = tf.image.resize(image, [constants.IMG_SIZE, constants.IMG_SIZE])
else:
image = tf.constant(np.zeros((constants.IMG_SIZE, constants.IMG_SIZE, 3)), tf.float32)
return image
def fill_in_missing(x: Union[tf.Tensor, tf.SparseTensor]) -> tf.Tensor:
"""Replace missing values in a SparseTensor.
Fills in missing values of `x` with '' or 0, and converts to a dense tensor.
Args:
x: A `SparseTensor` of rank 2. Its dense shape should have size at most 1
in the second dimension.
Returns:
A rank 1 tensor where missing values of `x` have been filled in.
"""
if isinstance(x, tf.sparse.SparseTensor):
default_value = "" if x.dtype == tf.string else 0
x = tf.sparse.to_dense(
tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
default_value,
)
return tf.squeeze(x, axis=1)
def preprocessing_fn(inputs: Dict[str, Union[tf.Tensor, tf.SparseTensor]]) -> Dict[str, tf.Tensor]:
"""tf.transform's callback function for preprocessing inputs.
"""
outputs = {}
for key in constants.RAW_FEATURE_KEYS:
image = fill_in_missing(inputs[key])
outputs[constants.transformed_name(key)] = tf.map_fn(convert_image, image, dtype=tf.float32)
outputs[constants.transformed_name(constants.LABEL_KEY)] = inputs[constants.LABEL_KEY]
return outputs
transform = Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=os.path.abspath("transform.py"))
%%skip_for_export
context.run(transform)
%%skip_for_export
%%writefile {"trainer.py"}
from typing import List, Text, Dict
import os
import absl
import tensorflow as tf
import tensorflow_transform as tft
from datetime import datetime
from tfx.components.trainer.executor import TrainerFnArgs
import constants
TRAIN_BATCH_SIZE = 32
EVAL_BATCH_SIZE = 32
def _gzip_reader_fn(filenames):
"""Small utility returning a record reader that can read gzip'ed files."""
return tf.data.TFRecordDataset(filenames, compression_type='GZIP')
def _get_label_for_image(model, tf_transform_output):
"""Returns a function that parses a raw byte image and applies TFT."""
model.tft_layer = tf_transform_output.transform_features_layer()
@tf.function
def serve_images_fn(image_raw):
"""Returns the output to be used in the serving signature."""
image_raw = tf.reshape(image_raw, [-1, 1])
    parsed_features = {constants.INPUT_KEY: image_raw}  # key must match the raw feature spec
transformed_features = model.tft_layer(parsed_features)
return model(transformed_features)
return serve_images_fn
def _get_serve_tf_examples_fn(model, tf_transform_output):
"""Returns a function that parses a serialized tf.Example and applies TFT."""
model.tft_layer = tf_transform_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(serialized_tf_examples):
"""Returns the output to be used in the serving signature."""
feature_spec = tf_transform_output.raw_feature_spec()
feature_spec.pop(constants.LABEL_KEY)
parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
transformed_features = model.tft_layer(parsed_features)
return model(transformed_features)
return serve_tf_examples_fn
def _input_fn(file_pattern: List[Text],
tf_transform_output: tft.TFTransformOutput,
batch_size: int = 32,
is_train: bool = False) -> tf.data.Dataset:
"""Generates features and label for tuning/training.
Args:
file_pattern: input tfrecord file pattern.
tf_transform_output: A TFTransformOutput.
batch_size: representing the number of consecutive elements of returned
dataset to combine in a single batch
Returns:
A dataset that contains (features, indices) tuple where features is a
dictionary of Tensors, and indices is a single Tensor of label indices.
"""
transformed_feature_spec = (
tf_transform_output.transformed_feature_spec().copy())
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=file_pattern,
batch_size=batch_size,
features=transformed_feature_spec,
reader=_gzip_reader_fn,
label_key=constants.transformed_name(constants.LABEL_KEY))
return dataset
def get_model() -> tf.keras.Model:
"""Creates a CNN Keras model based on transfer learning for classifying image data.
Returns:
A keras Model.
"""
img_shape = (constants.IMG_SIZE, constants.IMG_SIZE, 3)
# Create the base model from the pre-trained model MobileNet V2
base_model = tf.keras.applications.MobileNetV2(input_shape=img_shape,
include_top=False,
weights='imagenet')
base_model.trainable = False
base_model.summary()
global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
output = tf.keras.layers.Dense(1)
model = tf.keras.Sequential([
tf.keras.layers.Input(shape=img_shape, name=constants.transformed_name(constants.INPUT_KEY)),
base_model,
global_average_layer,
tf.keras.layers.Dropout(0.2),
output
])
model.compile(optimizer=tf.optimizers.RMSprop(lr=0.01),
loss=tf.losses.BinaryCrossentropy(from_logits=True),
metrics=[tf.metrics.BinaryAccuracy(name='accuracy')])
model.summary()
return model
def run_fn(fn_args: TrainerFnArgs):
"""Train the model based on given args.
Args:
fn_args: Holds args used to train the model as name/value pairs.
"""
tf_transform_output = tft.TFTransformOutput(fn_args.transform_output)
train_dataset = _input_fn(fn_args.train_files, tf_transform_output,
TRAIN_BATCH_SIZE, is_train = True)
eval_dataset = _input_fn(fn_args.eval_files, tf_transform_output,
EVAL_BATCH_SIZE)
  # Check for available TPU and GPU units.
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
except ValueError:
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
model = get_model()
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps,
)
signatures = {
'serving_default':
_get_serve_tf_examples_fn(model,
tf_transform_output).get_concrete_function(
tf.TensorSpec(
shape=[None],
dtype=tf.string,
name='examples')),
}
model.save(fn_args.serving_model_dir, save_format='tf', signatures=signatures)
trainer = Trainer(
module_file=os.path.abspath("trainer.py"),
custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor),
examples=transform.outputs['transformed_examples'],
transform_graph=transform.outputs['transform_graph'],
schema=schema_gen.outputs['schema'],
train_args=trainer_pb2.TrainArgs(num_steps=160),
eval_args=trainer_pb2.EvalArgs(num_steps=200))
%%skip_for_export
context.run(trainer)
eval_config = tfma.EvalConfig(
model_specs=[
tfma.ModelSpec(label_key='label')
],
metrics_specs=[
tfma.MetricsSpec(
metrics=[
tfma.MetricConfig(class_name='ExampleCount'),
tfma.MetricConfig(class_name='AUC'),
]
)
],
slicing_specs=[
tfma.SlicingSpec()
])
model_resolver = ResolverNode(
instance_name='latest_blessed_model_resolver',
resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver,
model=Channel(type=Model),
model_blessing=Channel(type=ModelBlessing))
%%skip_for_export
context.run(model_resolver)
evaluator = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
baseline_model=model_resolver.outputs['model'],
eval_config=eval_config)
%%skip_for_export
context.run(evaluator)
%%skip_for_export
context.show(evaluator.outputs['evaluation'])
_serving_model_dir = "/content/exported_model"
pusher = Pusher(
model=trainer.outputs['model'],
model_blessing=evaluator.outputs['blessing'],
push_destination=pusher_pb2.PushDestination(
filesystem=pusher_pb2.PushDestination.Filesystem(
base_directory=_serving_model_dir)))
%%skip_for_export
context.run(pusher)
!ls /tmp/tfx-dogs_cats_pipeline-5n22n4m3/Pusher/pushed_model/9
```
---
## Export Pipeline to use with Apache Beam
```
components = [
example_gen,
statistics_gen,
schema_gen,
example_validator,
transform,
trainer,
model_resolver,
evaluator,
pusher,
]
_pipeline_name = "dogs_cats_pipeline"
# pipeline inputs
_base_dir = os.getcwd()
_pipeline_dir = os.path.join(_base_dir, "pipeline")
# pipeline outputs
_output_base = os.path.join(_pipeline_dir, "output", _pipeline_name)
_pipeline_root = os.path.join(_output_base, "pipeline_root")
_metadata_path = os.path.join(_pipeline_root, "metadata.sqlite")
%%skip_for_export
import sys
if 'google.colab' in sys.modules:
from google.colab import drive
drive.mount('/content/drive')
%%skip_for_export
notebook_filepath = (
'/content/drive/My Drive/Colab Notebooks/Beam Summit Workshop - Creating Custom TFX Components.ipynb')
pipeline_export_filepath = 'exported_pipeline_{}.py'.format(pipeline_name)
context.export_to_pipeline(notebook_filepath=notebook_filepath,
export_filepath=pipeline_export_filepath,
runner_type="beam")
%%skip_for_export
!python3 {pipeline_export_filepath}
```
---
## Entire End-to-End Pipeline with Apache Beam
```
%%skip_for_export
import IPython
IPython.Application.instance().kernel.do_shutdown(True)
%%skip_for_export
import base64
import logging
import os
import random
import re
import sys
from typing import Any, Dict, Iterable, List, Text
import absl
import apache_beam as beam
import tensorflow as tf
import tensorflow_model_analysis as tfma
import tfx
from google.protobuf import json_format
from tensorflow_transform.beam.tft_beam_io import transform_fn_io
from tensorflow_transform.saved import saved_transform_io
from tensorflow_transform.tf_metadata import (dataset_metadata, dataset_schema,
metadata_io, schema_utils)
from tfx import types
from tfx.components import (Evaluator, Pusher, ResolverNode, StatisticsGen,
Trainer)
from tfx.components.base import (base_component, base_driver, base_executor,
executor_spec)
from tfx.components.example_gen import driver
from tfx.components.example_gen.base_example_gen_executor import (
INPUT_KEY, BaseExampleGenExecutor)
from tfx.components.example_gen.component import FileBasedExampleGen
from tfx.components.example_gen.import_example_gen.component import \
ImportExampleGen
from tfx.components.example_gen.utils import dict_to_example
from tfx.components.example_validator.component import ExampleValidator
from tfx.components.schema_gen.component import SchemaGen
from tfx.components.statistics_gen.component import StatisticsGen
from tfx.components.trainer.executor import GenericExecutor
from tfx.components.transform.component import Transform
from tfx.dsl.experimental import latest_blessed_model_resolver
from tfx.orchestration import data_types, metadata, pipeline
from tfx.orchestration.beam.beam_dag_runner import BeamDagRunner
from tfx.orchestration.experimental.interactive.interactive_context import \
InteractiveContext
from tfx.proto import evaluator_pb2, example_gen_pb2, pusher_pb2, trainer_pb2
from tfx.types import (Channel, artifact_utils, channel_utils,
standard_artifacts)
from tfx.types.component_spec import ChannelParameter, ExecutionParameter
from tfx.types.standard_artifacts import Model, ModelBlessing
from tfx.utils import io_utils
from tfx.utils.dsl_utils import external_input
%%skip_for_export
%%writefile {"component_helper.py"}
import tensorflow as tf
def _int64_feature(value):
"""Wrapper for inserting int64 features into Example proto."""
if not isinstance(value, list):
value = [value]
return tf.train.Feature(int64_list=tf.train.Int64List(value=value))
def _bytes_feature(value):
"""Wrapper for inserting bytes features into Example proto."""
return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
def get_label_from_filename(filename):
""" Function to set the label for each image. In our case, we'll use the file
path of a label indicator. Based on your initial data
Args:
filename: string, full file path
Returns:
int - label
Raises:
NotImplementedError if not label category was detected
"""
lowered_filename = filename.lower()
if "dog" in lowered_filename:
label = 0
elif "cat" in lowered_filename:
label = 1
else:
raise NotImplementedError("Found unknown image")
return label
def _convert_to_example(image_buffer, label):
"""Function to convert image byte strings and labels into tf.Example structures
Args:
image_buffer: byte string representing the image
label: int
Returns:
TFExample data structure containing the image (byte string) and the label (int encoded)
"""
example = tf.train.Example(
features=tf.train.Features(
feature={
'image/raw': _bytes_feature(image_buffer),
'label': _int64_feature(label)
}))
return example
def get_image_data(filename):
"""Process a single image file.
Args:
filename: string, path to an image file e.g., '/path/to/example.JPG'.
Returns:
TFExample data structure containing the image (byte string) and the label (int encoded)
"""
label = get_label_from_filename(filename)
byte_content = tf.io.read_file(filename)
rs = _convert_to_example(byte_content.numpy(), label)
return rs
%%skip_for_export
from component_helper import get_image_data
@beam.ptransform_fn
def ImageToExample(
pipeline: beam.Pipeline,
input_dict: Dict[Text, List[types.Artifact]],
exec_properties: Dict[Text, Any],
split_pattern: Text) -> beam.pvalue.PCollection:
"""Read jpeg files and transform to TF examples.
Note that each input split will be transformed by this function separately.
Args:
pipeline: beam pipeline.
input_dict: Input dict from input key to a list of Artifacts.
- input_base: input dir that contains the image data.
exec_properties: A dict of execution properties.
split_pattern: Split.pattern in Input config, glob relative file pattern
that maps to input files with root directory given by input_base.
Returns:
PCollection of TF examples.
"""
input_base_uri = artifact_utils.get_single_uri(input_dict['input'])
image_pattern = os.path.join(input_base_uri, split_pattern)
absl.logging.info(
'Processing input image data {} to TFExample.'.format(image_pattern))
image_files = tf.io.gfile.glob(image_pattern)
if not image_files:
raise RuntimeError(
'Split pattern {} does not match any files.'.format(image_pattern))
return (
pipeline
| beam.Create(image_files)
      | 'ConvertImagesToTFExample' >> beam.Map(get_image_data)
)
class ImageExampleGenExecutor(BaseExampleGenExecutor):
"""TFX example gen executor for processing jpeg format.
Example usage:
from tfx.components.example_gen.component import
FileBasedExampleGen
from tfx.utils.dsl_utils import external_input
example_gen = FileBasedExampleGen(
input=external_input("/content/PetImages/"),
input_config=input_config,
output_config=output,
custom_executor_spec=executor_spec.ExecutorClassSpec(_Executor))
"""
def GetInputSourceToExamplePTransform(self) -> beam.PTransform:
"""Returns PTransform for image to TF examples."""
return ImageToExample
%%skip_for_export
output = example_gen_pb2.Output(
split_config=example_gen_pb2.SplitConfig(splits=[
example_gen_pb2.SplitConfig.Split(name='train', hash_buckets=4),
example_gen_pb2.SplitConfig.Split(name='eval', hash_buckets=1)
]))
input_config = example_gen_pb2.Input(splits=[
example_gen_pb2.Input.Split(name='images', pattern='*/*.jpg'),
])
eval_config = tfma.EvalConfig(
model_specs=[
tfma.ModelSpec(label_key='label')
],
metrics_specs=[
tfma.MetricsSpec(
metrics=[
tfma.MetricConfig(class_name='ExampleCount'),
tfma.MetricConfig(class_name='AUC'),
tfma.MetricConfig(class_name='BinaryAccuracy',
threshold=tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={'value': 0.65}),
change_threshold=tfma.GenericChangeThreshold(
direction=tfma.MetricDirection.HIGHER_IS_BETTER,
absolute={'value': 0.01})))
]
)
],
slicing_specs=[
tfma.SlicingSpec()
])
_serving_model_dir = "/content/exported_model"
%%skip_for_export
example_gen = FileBasedExampleGen(
input=external_input("/content/PetImages/"),
input_config=input_config,
output_config=output,
custom_executor_spec=executor_spec.ExecutorClassSpec(ImageExampleGenExecutor))
statistics_gen = StatisticsGen(
examples=example_gen.outputs['examples'])
schema_gen = SchemaGen(
statistics=statistics_gen.outputs['statistics'],
infer_feature_shape=True)
example_validator = ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=schema_gen.outputs['schema'])
transform = Transform(
examples=example_gen.outputs['examples'],
schema=schema_gen.outputs['schema'],
module_file=os.path.abspath("transform.py"))
trainer = Trainer(
module_file=os.path.abspath("trainer.py"),
custom_executor_spec=executor_spec.ExecutorClassSpec(GenericExecutor),
examples=transform.outputs['transformed_examples'],
transform_graph=transform.outputs['transform_graph'],
schema=schema_gen.outputs['schema'],
train_args=trainer_pb2.TrainArgs(num_steps=500),
eval_args=trainer_pb2.EvalArgs(num_steps=200))
model_resolver = ResolverNode(
instance_name='latest_blessed_model_resolver',
resolver_class=latest_blessed_model_resolver.LatestBlessedModelResolver,
model=Channel(type=Model),
model_blessing=Channel(type=ModelBlessing))
evaluator = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
baseline_model=model_resolver.outputs['model'],
eval_config=eval_config)
pusher = Pusher(
model=trainer.outputs['model'],
model_blessing=evaluator.outputs['blessing'],
push_destination=pusher_pb2.PushDestination(
filesystem=pusher_pb2.PushDestination.Filesystem(
base_directory=_serving_model_dir)))
%%skip_for_export
pipeline_name = "dogs_cats_pipeline"
# pipeline inputs
base_dir = os.getcwd()
pipeline_dir = os.path.join(base_dir, "pipeline")
# pipeline outputs
output_base = os.path.join(pipeline_dir, "output", pipeline_name)
pipeline_root = os.path.join(output_base, "pipeline_root")
metadata_path = os.path.join(pipeline_root, "metadata.sqlite")
def init_beam_pipeline(
components, pipeline_root: Text, direct_num_workers: int
) -> pipeline.Pipeline:
absl.logging.info(f"Pipeline root set to: {pipeline_root}")
beam_arg = [
f"--direct_num_workers={direct_num_workers}",
]
p = pipeline.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
components=components,
enable_cache=True,
metadata_connection_config=metadata.sqlite_metadata_connection_config(
metadata_path
),
beam_pipeline_args=beam_arg,
)
return p
%%skip_for_export
logger = logging.getLogger()
logger.setLevel(logging.INFO)
components = [
example_gen,
statistics_gen,
schema_gen,
example_validator,
transform,
trainer,
model_resolver,
evaluator,
pusher,
]
p = init_beam_pipeline(components, pipeline_root, direct_num_workers=1)
%%skip_for_export
BeamDagRunner().run(p)
```
# Comparison of robustness curves for different models
```
import os
os.chdir("../")
import sys
import json
from argparse import Namespace
import numpy as np
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import foolbox
from sklearn import metrics
from sklearn.metrics import pairwise_distances as dist
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(context='paper')
import provable_robustness_max_linear_regions.data as dt
from provable_robustness_max_linear_regions import models
from provable_robustness_max_linear_regions.models import load_model
from robustness_curves import generate_curve_data
from utils import NumpyEncoder
```
## Plot settings:
```
SMALL_SIZE = 4.2
MEDIUM_SIZE = 5.8
BIGGER_SIZE = 6.0
TEXT_WIDTH = 4.8041
TICK_LABEL_TO_TICK_DISTANCE = -2 # the lower the closer
LINE_WIDTH = 0.6
def calc_fig_size(n_rows, n_cols, text_width=TEXT_WIDTH):
ax_width = text_width / 3
ax_height = text_width / 5
extra_height = text_width / 4 * 2 - text_width / 5 * 2
fig_width = n_cols * ax_width
fig_height = n_rows * ax_height
if fig_width > text_width:
factor = text_width / fig_width
fig_width *= factor
fig_height *= factor
fig_height += extra_height
return fig_width, fig_height
def tex_rob(sub, sup, arg):
return 'R_{{{}}}^{{{}}}({{{}}})'.format(sub, sup, arg)
X_EPS = r'perturbation size $\varepsilon$'
X_EPS_INF = r'$\ell_\infty$ perturbation size $\varepsilon$'
X_EPS_ONE = r'$\ell_1$ perturbation size $\varepsilon$'
X_EPS_TWO = r'$\ell_2$ perturbation size $\varepsilon$'
Y_ROB = '${}$'.format(tex_rob('', '', r'\varepsilon'))
Y_ROB_INF = '${}$'.format(tex_rob(r'\|\cdot\|_\infty', '', r'\varepsilon'))
Y_ROB_ONE = '${}$'.format(tex_rob(r'\|\cdot\|_1', '', r'\varepsilon'))
Y_ROB_TWO = '${}$'.format(tex_rob(r'\|\cdot\|_2', '', r'\varepsilon'))
# plt.rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=MEDIUM_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
plt.rc('text', usetex=True)
colors = {
"orange": sns.xkcd_rgb["yellowish orange"],
"red": sns.xkcd_rgb["pale red"],
"green": sns.xkcd_rgb["medium green"],
"blue": sns.xkcd_rgb["denim blue"],
"yellow": sns.xkcd_rgb["amber"],
"purple": sns.xkcd_rgb["dusty purple"],
"cyan": sns.xkcd_rgb["cyan"]
}
```
## Calculate robustness curves:
Estimated runtime (if no file with data is present): 10 days
```
def load_from_json(file_name):
    if not os.path.exists("res/" + file_name + ".json"):
        return None
    else:
        with open("res/" + file_name + ".json", 'r') as fp:
            loaded_json = json.load(fp)
        for key in loaded_json.keys():
            loaded_json[key]["x"] = np.array(loaded_json[key]["x"])
            loaded_json[key]["y"] = np.array(loaded_json[key]["y"])
            loaded_json[key]["y"][np.isnan(loaded_json[key]["x"])] = 1.0
            loaded_json[key]["x"] = np.nan_to_num(loaded_json[key]["x"], nan=np.nanmax(loaded_json[key]["x"]))
        return loaded_json
def save_to_json(dictionary, file_name):
    if not os.path.exists("res"):
        os.makedirs("res")
    with open("res/" + file_name + ".json", 'w') as fp:
        json.dump(dictionary, fp, cls=NumpyEncoder)
training_method_to_model_path = {"ST": "provable_robustness_max_linear_regions/models/plain/2019-02-19 01_20_16 dataset=cifar10 nn_type=cnn_lenet_small p_norm=inf lmbd=0.0 gamma_rb=0.0 gamma_db=0.0 ae_frac=0.0 epoch=100.mat",
"MMR+AT_l_inf": "provable_robustness_max_linear_regions/models/mmr+at/2019-02-17 23_20_04 dataset=cifar10 nn_type=cnn_lenet_small p_norm=inf lmbd=0.1 gamma_rb=3.0 gamma_db=3.0 ae_frac=0.5 epoch=100.mat",
"MMR+AT_l_2": "provable_robustness_max_linear_regions/models/mmr+at/2019-02-24 10_56_36 dataset=cifar10 nn_type=cnn_lenet_small p_norm=2 lmbd=0.5 gamma_rb=0.15 gamma_db=0.15 ae_frac=0.5 lr=0.001 epoch=100.mat",
"KW_l_inf": "provable_robustness_max_linear_regions/models/kw/p_norm=inf dataset=cifar10_model=cnn_lenet_small_method=robust_eps=0.007843_checkpoint.mat",
"KW_l_2": "provable_robustness_max_linear_regions/models/kw/p_norm=2 dataset=cifar10_model=cnn_lenet_small_method=robust_eps=0.1_checkpoint.mat",
"AT_l_inf": "provable_robustness_max_linear_regions/models/at/2019-02-19 01_20_16 dataset=cifar10 nn_type=cnn_lenet_small p_norm=inf lmbd=0.0 gamma_rb=0.0 gamma_db=0.0 ae_frac=0.5 epoch=100.mat",
"AT_l_2": "provable_robustness_max_linear_regions/models/at/2019-02-22 02_40_47 dataset=cifar10 nn_type=cnn_lenet_small p_norm=2 lmbd=0.0 gamma_rb=0.0 gamma_db=0.0 ae_frac=0.5 epoch=100.mat",
"MMRUNIV": "experiments/additional_models/mmr_univ_cifar10_gammas_6.0_6.0_lmabdas_1.0_6.0.mat"}
n_points = 10000
_, x_test, _, y_test = dt.get_dataset("cifar10")
x_test = x_test[:n_points]
y_test = y_test[:n_points]
x_test = x_test.reshape(n_points, 1, 32, 32, 3)
model_args = Namespace()
n_test_ex, one, model_args.height, model_args.width, model_args.n_col = x_test.shape
model_args.n_in, model_args.n_out = model_args.height * model_args.width * model_args.n_col, y_test.shape[1]
model_args.n_hs = []
model_args.seed = 1
model_args.nn_type = "cnn"
model_args.dataset = "cifar10"
robustness_curve_data = dict()
for training_method in ["ST", "MMR+AT_l_inf", "MMR+AT_l_2", "KW_l_inf", "KW_l_2", "AT_l_inf", "AT_l_2", "MMRUNIV"]:
    robustness_curve_data[training_method] = load_from_json("rob_curve_data_{}_n_points={}".format(training_method, n_points))
    if not robustness_curve_data[training_method]:
        sess = tf.InteractiveSession()
        model, _input, _logits, _ = load_model(sess, model_args, training_method_to_model_path[training_method])
        f_model = foolbox.models.TensorFlowModel(_input, _logits, (0, 1))
        args = Namespace()
        args.inputs = x_test
        args.labels = y_test
        args.f_model = f_model
        args.norms = ["inf", "2"]
        args.save = False
        args.plot = False
        robustness_curve_data[training_method] = generate_curve_data(args)
        save_to_json(robustness_curve_data[training_method], "rob_curve_data_{}_n_points={}".format(training_method, n_points))
        tf.reset_default_graph()
        sess.close()
```
## Plot:
```
# name to save the plot
save_name = "fig_rc_model_comparison_crossings_transfer_cifar"
training_method_to_color = {"ST": colors["blue"],
"MMR+AT_l_inf": colors["red"],
"MMR+AT_l_2": colors["green"],
"MMRUNIV": colors["yellow"],
"KW_l_inf": colors["green"],
"KW_l_2": colors["green"],
"AT_l_inf": colors["purple"],
"AT_l_2": colors["purple"]}
training_method_to_col_title = {"ST": "Training: Standard Training",
"MMR+AT_l_inf": "Training: MMR+AT\nThreat Model: $\ell_\infty(\epsilon=2/255)$",
"MMR+AT_l_2": "Training: MMR+AT\nThreat Model: $\ell_2(\epsilon=0.1)$",
"MMRUNIV": "Training: MMR UNIVERSAL\nThreat Model: all $\ell_p$ norms",
"KW_l_inf": "Training: KW\nThreat Model: $\ell_\infty(\epsilon=2/255)$",
"KW_l_2": "Training: KW\nThreat Model: $\ell_2(\epsilon=0.1)$",
"AT_l_inf": "Training: AT\nThreat Model: $\ell_\infty(\epsilon=2/255)$",
"AT_l_2": "Training: AT\nThreat Model: $\ell_2(\epsilon=0.1)$"}
color_map = [colors["blue"], colors["red"], colors["green"]]
# number of model types and parameter combinations
n_cols = 2
n_rows = 1
fig, ax = plt.subplots(n_rows,
n_cols,
figsize=calc_fig_size(n_rows, n_cols+1))
for training_method in ["ST", "MMR+AT_l_inf", "MMR+AT_l_2", "MMRUNIV"]:
    ax[0].plot(robustness_curve_data[training_method]["inf"]["x"], robustness_curve_data[training_method]["inf"]["y"], c=training_method_to_color[training_method], label=training_method_to_col_title[training_method], linewidth=LINE_WIDTH)
    ax[1].plot(robustness_curve_data[training_method]["2"]["x"], robustness_curve_data[training_method]["2"]["y"], c=training_method_to_color[training_method], label=training_method_to_col_title[training_method], linewidth=LINE_WIDTH)
ax[0].set_ylabel(Y_ROB_INF)
ax[1].set_ylabel(Y_ROB_TWO)
ax[0].legend(loc="lower right")
ax[1].legend(loc="lower right")
ax[0].set_xlabel(X_EPS_INF)
ax[1].set_xlabel(X_EPS_TWO)
ax[0].set_xlim(right=0.105)
ax[1].set_xlim(right=0.42)
ax[0].tick_params(axis='both',
which='major',
pad=TICK_LABEL_TO_TICK_DISTANCE)
ax[1].tick_params(axis='both',
which='major',
pad=TICK_LABEL_TO_TICK_DISTANCE)
ax[0].set_ylim(-0.04, 1.04)
ax[1].set_ylim(-0.04, 1.04)
ax[0]._autoscaleXon = False
ax[0]._autoscaleYon = False
ax[1]._autoscaleXon = False
ax[1]._autoscaleYon = False
ax[0].plot(np.array([3/255, 3/255]), np.array([0.0, 1.0]), c="#000000dd", linewidth=0.3, linestyle='dashed')
ax[1].plot(np.array([3/255, 3/255]), np.array([0.0, 1.0]), c="#000000dd", linewidth=0.3, linestyle='dashed')
ax[0].text(3/255-0.003, -0.06, "$3/255$", fontsize=3)
ax[1].text(3/255-0.012, -0.06, "$3/255$", fontsize=3)
fig.tight_layout()
fig.savefig('res/{}.pdf'.format(save_name))
```
## 1. United Nations life expectancy data
<p>Life expectancy at birth is a measure of the average time a living being is expected to live. It takes into account several demographic factors like gender, country, or year of birth.</p>
<p>Life expectancy at birth can vary over time or between countries because of many causes: the evolution of medicine, the degree of development of countries, or the effect of armed conflicts. Life expectancy varies between genders, as well. The data shows that women live longer than men. Why? There are several potential factors, including biological reasons and the theory that women tend to be more health conscious.</p>
<p>Let's create some plots to explore the inequalities about life expectancy at birth around the world. We will use a dataset from the United Nations Statistics Division, which is available <a href="http://data.un.org/Data.aspx?d=GenderStat&f=inID:37&c=1,2,3,4,5,6&s=crEngName:asc,sgvEngName:asc,timeEngName:desc&v=1">here</a>.</p>
```
# This sets plot images to a nice size
options(repr.plot.width = 6, repr.plot.height = 6)
# Loading packages
library("dplyr")
library("tidyr")
library("ggplot2")
# Loading data
life_expectancy <- read.csv("datasets/UNdata.csv")
# Taking a look at the first few rows
head(life_expectancy)
```
## 2. Life expectancy of men vs. women by country
<p>Let's manipulate the data to make our exploration easier. We will build the dataset for our first plot in which we will represent the average life expectancy of men and women across countries for the last period recorded in our data (2000-2005).</p>
```
# Subsetting and reshaping the life expectancy data
subdata <- life_expectancy %>%
filter(Year == "2000-2005") %>%
select (Country.or.Area, Subgroup, Value) %>%
spread (Subgroup,Value)
# Taking a look at the first few rows
head(subdata)
nrow(subdata)
```
## 3. Visualize I
<p>A scatter plot is a useful way to visualize the relationship between two variables. It is a simple plot in which points are arranged on two axes, each of which represents one of those variables. </p>
<p>Let's create a scatter plot using <code>ggplot2</code> to represent life expectancy of males (on the x-axis) against females (on the y-axis). We will create a straightforward plot in this task, without many details. We will take care of these kinds of things shortly.</p>
```
# Plotting male and female life expectancy
ggplot(data=subdata,aes(x=Male,y=Female)) +
geom_point()
```
## 4. Reference lines I
<p>A good plot must be easy to understand. There are many tools in <code>ggplot2</code> to achieve this goal and we will explore some of them now. Starting from the previous plot, let's set the same limits for both axes as well as place a diagonal line for reference. After doing this, the difference between men and women across countries will be easier to interpret.</p>
<p>After completing this task, we will see how most of the points are arranged above the diagonal and how there is a significant dispersion among them. What does this all mean?</p>
```
# Adding an abline and changing the scale of axes of the previous plots
ggplot(data=subdata,aes(x=Male,y=Female)) +
geom_point()+geom_abline(intercept = 0, slope = 1,linetype = "dashed")+
xlim(35,85)+
ylim(35,85)
```
## 5. Plot titles and axis labels
<p>A key point to make a plot understandable is placing clear labels on it. Let's add titles, axis labels, and a caption to refer to the source of data. Let's also change the appearance to make it clearer.</p>
```
# Adding labels to previous plot
ggplot(subdata, aes(x=Male, y=Female))+
geom_point(colour="white", fill="chartreuse3", shape=21, alpha=.55, size=5)+
geom_abline(intercept = 0, slope = 1, linetype=2)+
scale_x_continuous(limits=c(35,85))+
scale_y_continuous(limits=c(35,85))+
labs(title="Life Expectancy at Birth by Country",
subtitle="Years. Period: 2000-2005. Average.",
caption="Source: United Nations Statistics Division",
x="Males",
y="Females")
```
## 6. Highlighting remarkable countries I
<p>Now, we will label some points of our plot with the names of their corresponding countries. We want to draw attention to some special countries where the gap in life expectancy between men and women is significantly high. These will be the final touches on this first plot.</p>
```
# Subsetting data to obtain countries of interest
top_male <- subdata %>% arrange(Male-Female) %>% head(3)
top_female <- subdata %>% arrange(Female-Male) %>% head(3)
# Adding text to the previous plot to label countries of interest
ggplot(subdata, aes(x=Male, y=Female, label=Country.or.Area))+
geom_point(colour="white", fill="chartreuse3", shape=21, alpha=.55, size=5)+
geom_abline(intercept = 0, slope = 1, linetype=2)+
scale_x_continuous(limits=c(35,85))+
scale_y_continuous(limits=c(35,85))+
labs(title="Life Expectancy at Birth by Country",
subtitle="Years. Period: 2000-2005. Average.",
caption="Source: United Nations Statistics Division",
x="Males",
y="Females")+
geom_text(data=top_male, size=3)+
geom_text(data=top_female, size=3)+
theme_bw()
```
## 7. How has life expectancy by gender evolved?
<p>Since our data contains historical information, let's see now how life expectancy has evolved in recent years. Our second plot will represent the difference between men and women across countries between two periods: 2000-2005 and 1985-1990.</p>
<p>Let's start building a dataset called <code>subdata2</code> for our second plot. </p>
```
# Subsetting, mutating and reshaping the life expectancy data
subdata2 <- life_expectancy %>%
filter(Year %in% c("1985-1990", "2000-2005")) %>%
mutate(Sub_Year=paste(Subgroup, Year, sep="_")) %>%
mutate(Sub_Year=gsub("-", "_", Sub_Year)) %>%
select(-Subgroup, -Year) %>%
spread(Sub_Year,Value)%>%
mutate(diff_Female=Female_2000_2005 - Female_1985_1990,diff_Male=Male_2000_2005 - Male_1985_1990)
# Taking a look at the first few rows
head(subdata2)
```
## 8. Visualize II
<p>Now let's create our second plot in which we will represent average life expectancy differences between "1985-1990" and "2000-2005" for men and women.</p>
```
# Doing a nice first version of the plot with abline, scaling axis and adding labels
ggplot(subdata2, aes(x=diff_Male, y=diff_Female, label=Country.or.Area))+
geom_point(colour="white", fill="chartreuse3", shape=21, alpha=.55, size=5)+
geom_abline(intercept = 0, slope = 1, linetype=2)+
scale_x_continuous(limits=c(-25,25))+
scale_y_continuous(limits=c(-25,25))+
labs(title="Life Expectancy at Birth by Country in Years",
subtitle="Difference between 1985-1990 and 2000-2005. Average.",
caption="Source: United Nations Statistics Division",
x="Males",
y="Females")+
theme_bw()
```
## 9. Reference lines II
<p>Adding reference lines can make plots easier to understand. We already added a diagonal line to visualize differences between men and women more clearly. Now we will add two more lines to help to identify in which countries people increased or decreased their life expectancy in the period analyzed.</p>
```
# Adding an hline and vline to previous plots
ggplot(subdata2, aes(x=diff_Male, y=diff_Female, label=Country.or.Area))+
geom_point(colour="white", fill="chartreuse3", shape=21, alpha=.55, size=5)+
geom_abline(intercept = 0, slope = 1, linetype=2)+
scale_x_continuous(limits=c(-25,25))+
scale_y_continuous(limits=c(-25,25))+
geom_hline(yintercept = 0,linetype=2)+
geom_vline(xintercept = 0,linetype=2)+
labs(title="Life Expectancy at Birth by Country",
subtitle="Years. Difference between 1985-1990 and 2000-2005. Average.",
caption="Source: United Nations Statistics Division",
x="Males",
y="Females")+
theme_bw()
```
## 10. Highlighting remarkable countries II
<p>As we did in the first plot, let's label some points. Concretely, we will label the three countries where the aggregated average life expectancy for men and women increased the most and the three where it decreased the most over the period.</p>
```
# Subsetting data to obtain countries of interest
top <- subdata2 %>% arrange(diff_Male+diff_Female) %>% head(3)
bottom <- subdata2 %>% arrange(-(diff_Male+diff_Female)) %>% head(3)
# Adding text to the previous plot to label countries of interest
ggplot(subdata2, aes(x=diff_Male, y=diff_Female, label=Country.or.Area), guide=FALSE)+
geom_point(colour="white", fill="chartreuse3", shape=21, alpha=.55, size=5)+
geom_abline(intercept = 0, slope = 1, linetype=2)+
scale_x_continuous(limits=c(-25,25))+
scale_y_continuous(limits=c(-25,25))+
geom_hline(yintercept=0, linetype=2)+
geom_vline(xintercept=0, linetype=2)+
labs(title="Life Expectancy at Birth by Country",
subtitle="Years. Difference between 1985-1990 and 2000-2005. Average.",
caption="Source: United Nations Statistics Division",
x="Males",
y="Females")+
geom_text(data=top,size=3)+
geom_text(data=bottom,size=3)+
theme_bw()
```
# Post-Processing
<img src="../images/post-processing.png" alt="Drawing" style="width: 600px;"/>
```
from aif360.metrics.classification_metric import ClassificationMetric
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
import warnings
import joblib
from utils import make_dataset, display_results
warnings.filterwarnings('ignore')
BIAS_INFO = {'favorable_label':0,
'unfavorable_label':1,
'protected_columns':['race']
}
PRIVILEGED_INFO = {'unprivileged_groups':[{'race': 2},
{'race': 1},
{'race': 4},
{'race': 5},
{'race': 6}],
'privileged_groups':[{'race': 3}]
}
data = pd.read_csv('../data/processed/compas-scores-two-years-processed.csv')
DROP_COLS = ['two_year_recid','compas_score','decile_score','compas_class']
FEATURE_COLS = data.drop(DROP_COLS, axis=1).columns.tolist()
train, test = train_test_split(data, test_size=0.2, random_state=1234)
X_train, y_train = train[FEATURE_COLS], train['two_year_recid']
X_test, y_test = test[FEATURE_COLS], test['two_year_recid']
clf = LogisticRegression(random_state=1234)
clf.fit(X_train, y_train)
y_train_pred = clf.predict_proba(X_train)
train['recid_prediction_score'] = y_train_pred[:,1]
train['recid_prediction_class'] = (train['recid_prediction_score'] >0.5).astype(int)
y_test_pred = clf.predict_proba(X_test)
test['recid_prediction_score'] = y_test_pred[:,1]
test['recid_prediction_class'] = (test['recid_prediction_score'] >0.5).astype(int)
ground_truth_train = make_dataset(train[FEATURE_COLS], train['two_year_recid'], **BIAS_INFO, **PRIVILEGED_INFO)
prediction_train = make_dataset(train[FEATURE_COLS], train['recid_prediction_class'], **BIAS_INFO, **PRIVILEGED_INFO)
ground_truth_test = make_dataset(test[FEATURE_COLS], test['two_year_recid'], **BIAS_INFO, **PRIVILEGED_INFO)
prediction_test = make_dataset(test[FEATURE_COLS], test['recid_prediction_class'], **BIAS_INFO, **PRIVILEGED_INFO)
```
# Equal Odds
## Method
* Modifies scores from the classifier to optimize the **equal odds** fairness metric
* Optimization is done by making the ROC curves for the privileged and unprivileged groups match
## Pros and Cons
* Directly optimizes for TPR/FPR difference so returns "fair" classifier (equal odds, equal opportunity)
* Runs on top of model predictions so no need to retrain anything
* No option to tweak fairness-accuracy tradeoff
* Returns non-calibrated scores
## Materials
* Paper ["Equality of Opportunity in Supervised Learning" by Hardt, Price and Srebro](https://papers.nips.cc/paper/6374-equality-of-opportunity-in-supervised-learning.pdf)
* Blog post ["A Tutorial on Fairness in Machine Learning" by Ziyuan Zhong](https://towardsdatascience.com/a-tutorial-on-fairness-in-machine-learning-3ff8ba1040cb)
```
from aif360.algorithms.postprocessing import EqOddsPostprocessing
calibrator = EqOddsPostprocessing(**PRIVILEGED_INFO)
calibrator.fit(ground_truth_train, prediction_train)
prediction_test = calibrator.predict(prediction_test)
acc = accuracy_score(y_test, (prediction_test.labels==0).astype(int))
clf_metric = ClassificationMetric(ground_truth_test, prediction_test,**PRIVILEGED_INFO)
joblib.dump((clf_metric,acc), '../results/1.1-equal_odds.pkl')
display_results('../results/1.1-equal_odds.pkl')
```
# Calibrated Equal Odds
## Method
* Adds a calibration restriction on top of the equal odds method
## Pros and Cons
* Directly optimizes for TPR/FPR difference so returns a "fair" classifier
* Runs on top of model predictions so no need to retrain anything
* No option to tweak the fairness-accuracy tradeoff
* Improves calibration
* Calibration/equal odds trade-off -> cannot be perfect on both
## Materials
* Paper ["On Fairness and Calibration" by Pleiss, Raghavan, Wu, Kleinberg, Weinberger](https://papers.nips.cc/paper/7151-on-fairness-and-calibration.pdf)
```
from aif360.algorithms.postprocessing import CalibratedEqOddsPostprocessing
calibrator = CalibratedEqOddsPostprocessing(**PRIVILEGED_INFO)
calibrator.fit(ground_truth_train, prediction_train)
prediction_test = calibrator.predict(prediction_test)
acc = accuracy_score(y_test, prediction_test.labels)
clf_metric = ClassificationMetric(ground_truth_test, prediction_test,**PRIVILEGED_INFO)
joblib.dump((clf_metric,acc), '../results/1.1-calibrated_equal_odds.pkl')
display_results('../results/1.1-calibrated_equal_odds.pkl')
```
# Rejection Option
## Method
* based on the idea that predictions close to the classification boundary (critical region) are more likely to be biased
* observations under a chosen certainty threshold are rejected, or predicted labels are flipped in favour of the unprivileged group
## Pros and Cons
* Runs on top of model predictions so no need to retrain anything
* We can decide on the threshold under which the observations can be rejected
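The critical-region idea can be sketched in a few lines. This is a toy illustration only (not AIF360's actual implementation), and it assumes the favourable label is 1:

```python
import numpy as np

def reject_option_flip(scores, groups, threshold=0.5, margin=0.1, privileged=1):
    """Toy sketch: flip labels inside the critical region |score - threshold| < margin."""
    labels = (scores > threshold).astype(int)
    critical = np.abs(scores - threshold) < margin
    # Inside the uncertain band, favour the unprivileged group ...
    labels[critical & (groups != privileged)] = 1
    # ... and disfavour the privileged group.
    labels[critical & (groups == privileged)] = 0
    return labels

labels = reject_option_flip(np.array([0.45, 0.90, 0.55]),
                            groups=np.array([0, 0, 1]))
# Only the two samples inside the critical region are flipped: [1, 1, 0]
```

Widening `margin` trades more accuracy for more fairness, which is exactly the threshold choice mentioned above.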
## Materials
* Paper ["Decision Theory for Discrimination-aware Classification" by Kamiran, Karim, and Zhang](https://mine.kaust.edu.sa/Documents/papers/ICDM_2012.pdf)
```
from aif360.algorithms.postprocessing import RejectOptionClassification
calibrator = RejectOptionClassification(**PRIVILEGED_INFO)
calibrator.fit(ground_truth_train, prediction_train)
prediction_test = calibrator.predict(prediction_test)
acc = accuracy_score(y_test, prediction_test.labels)
clf_metric = ClassificationMetric(ground_truth_test, prediction_test, **PRIVILEGED_INFO)
joblib.dump((clf_metric, acc), '../results/1.1-rejection_option.pkl')
display_results('../results/1.1-rejection_option.pkl')
```
## Questions
* Check what happens with different metrics used for optimizing. Check ?RejectionOption?
# Automatic Data Augmentation
## Overview
In addition to letting users customize data augmentation, MindSpore also provides automatic data augmentation, which can apply augmentation to images automatically based on specific policies.
Automatic data augmentation falls into two main categories: probability-based and callback-parameter-based.
## Probability-Based Automatic Data Augmentation
MindSpore provides a series of probability-based automatic augmentation APIs. Users can randomly select and combine various augmentation operations, making augmentation more flexible.
For detailed API descriptions, see the [API documentation](https://www.mindspore.cn/doc/api_python/zh-CN/master/mindspore/mindspore.dataset.transforms.html).
### RandomApply
This API receives a list of augmentation operations, `transforms`, and with a given probability (0.5 by default) executes all operations in the list in order; otherwise, none of them are executed.
In the code example below, `RandomCrop` and `RandomColorAdjust` are executed in order with probability 0.5; otherwise, neither is executed.
```
import mindspore.dataset.vision.c_transforms as c_vision
from mindspore.dataset.transforms.c_transforms import RandomApply
rand_apply_list = RandomApply([c_vision.RandomCrop(512), c_vision.RandomColorAdjust()])
```
### RandomChoice
This API receives a list of augmentation operations, `transforms`, and randomly selects one of them to execute.
In the code example below, one of `CenterCrop` and `RandomCrop` is selected with equal probability and executed.
```
import mindspore.dataset.vision.c_transforms as c_vision
from mindspore.dataset.transforms.c_transforms import RandomChoice
rand_choice = RandomChoice([c_vision.CenterCrop(512), c_vision.RandomCrop(512)])
```
### RandomSelectSubpolicy
This API receives a preset policy list containing a series of sub-policy combinations; each sub-policy consists of several data augmentation operations executed in order, together with their execution probabilities.
For each image, a sub-policy is first selected at random with equal probability, and then each operation in the sub-policy is executed in order according to its probability.
In the code example below, two sub-policies are preset. Sub-policy 1 contains the `RandomRotation`, `RandomVerticalFlip`, and `RandomColorAdjust` operations with probabilities 0.5, 1.0, and 0.8, respectively; sub-policy 2 contains the `RandomRotation` and `RandomColorAdjust` operations with probabilities 1.0 and 0.2.
```
import mindspore.dataset.vision.c_transforms as c_vision
from mindspore.dataset.vision.c_transforms import RandomSelectSubpolicy
policy_list = [
[(c_vision.RandomRotation((45, 45)), 0.5), (c_vision.RandomVerticalFlip(), 1.0), (c_vision.RandomColorAdjust(), 0.8)],
[(c_vision.RandomRotation((90, 90)), 1.0), (c_vision.RandomColorAdjust(), 0.2)]
]
policy = RandomSelectSubpolicy(policy_list)
```
## Callback-Parameter-Based Automatic Data Augmentation
MindSpore's `sync_wait` interface supports dynamically adjusting the data augmentation policy during training at batch or epoch granularity; users can set a blocking condition to trigger a specific augmentation operation.
`sync_wait` blocks the whole data processing pipeline until `sync_update` triggers the user-defined `callback` function. The two must be used together, as described below:
- sync_wait(condition_name, num_batch=1, callback=None)
  This API adds a blocking condition `condition_name` to the dataset; the specified `callback` function is executed when `sync_update` is called.
- sync_update(condition_name, num_batch=None, data=None)
  This API releases the block associated with `condition_name` and triggers the specified `callback` function on `data`.
The following demonstrates the usage of callback-parameter-based automatic data augmentation.
1. The user first defines an `Augment` class, where `preprocess` is a custom data augmentation function and `update` is the callback function that updates the augmentation policy.
```
import mindspore.dataset.vision.py_transforms as transforms
import mindspore.dataset as ds
import numpy as np
class Augment:
    def __init__(self):
        self.ep_num = 0
        self.step_num = 0

    def preprocess(self, input_):
        return (np.array((input_ + self.step_num ** self.ep_num - 1), ))

    def update(self, data):
        self.ep_num = data['ep_num']
        self.step_num = data['step_num']
```
2. The data processing pipeline first invokes the custom policy-update callback `update`, and the `map` operation then applies the data augmentation defined in `preprocess` according to the updated policy.
```
arr = list(range(1, 4))
dataset = ds.NumpySlicesDataset(arr, shuffle=False)
aug = Augment()
dataset = dataset.sync_wait(condition_name="policy", callback=aug.update)
dataset = dataset.map(operations=[aug.preprocess])
```
3. Call `sync_update` in each step to update the data augmentation policy.
```
epochs = 5
itr = dataset.create_tuple_iterator(num_epochs=epochs)
step_num = 0
for ep_num in range(epochs):
    for data in itr:
        print("epoch: {}, step: {}, data: {}".format(ep_num, step_num, data))
        step_num += 1
        dataset.sync_update(condition_name="policy", data={'ep_num': ep_num, 'step_num': step_num})
```
# Modification of object properties
### Customize simulation
The aim of this session is to give a better understanding of how our solver works.
Above all, we will present the different properties of the classes, which allows setting up very individual simulations according to one's interests.
We recommend having a look at the other example sessions in the [wiki](https://github.com/udcm-su/NTMpy/wiki), in particular the *Getting Started* one, before reading through this session.
### General work flow
The software is set up in a way that there are 3 different classes:
* Source **S**
* Simulation **Sim**
* Visualization **V**
Such that **S $\rightarrow$ Sim** $\rightarrow$ **V**
First a source has to be defined and passed as an input argument to the simulation; the simulation should then be passed as an input argument to the visualization.
In this session we will go through the properties of each class, giving a brief description of how to use them and what they represent physically.
Note that the numerical units package is not strictly needed, but we are using it here since it comes in handy and better shows physical dimensions.
```
from NTMpy import NTMpy as ntm
from matplotlib import pyplot as plt
import numpy as np
import numericalunits as u
u.reset_units('SI')
#Creating the Source object with default configuration
s = ntm.source()
```
In order to access the properties of this class one can execute `s.getProperties()`.
```
s.getProperties()
```
The default option for `spaceprofile` is `"TMM"`, i.e. the transfer matrix method is used to calculate the absorption in space. This takes multiple reflections at the layer boundaries as well as the incident angle and the wavelength of the laser into consideration.
The default option for `timeprofile` is `"Gaussian"`,
i.e. a Gaussian distribution of the amplitude around $t_0$ with respect to the time dimension.
For the space dimension we consider Lambert-Beer's law, i.e. an exponential decay in space.
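The Lambert-Beer profile is simply an exponential decay with depth. A minimal sketch (the penetration depth value here is an arbitrary placeholder, not an NTMpy default):

```python
import numpy as np

def lambert_beer(z, i0=1.0, delta=20e-9):
    # I(z) = I0 * exp(-z / delta), with delta the optical penetration depth (m)
    return i0 * np.exp(-z / delta)

depths = np.linspace(0, 100e-9, 6)
absorbed = lambert_beer(depths)  # monotonically decaying from i0
```

The `"TMM"` option replaces this simple profile with the full multiple-reflection calculation.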
* `spaceprofile` lets the user select one out of two absorption options: `"TMM"` or `"LB"`, where "LB" stands for the Lambert-Beer law. (Exponentially decaying according to 1/optical penetration depth)
* For the `timeprofile` we independently select an option from: `"Gaussian"`, `"repGaussian"` or `"Custom"`
* `"repGaussian"` takes a series of repeated pulses under consideration.
* `"Custom"` allows the user to provide the software with their own data in order to simulate the shape of the heating source in time, see the example *Custom time source* in the [wiki](https://github.com/udcm-su/NTMpy/wiki).
* `fluence` is a number and determines the fluence (Energy/Area) of the laser, responsible for the heating. Typically given in mJ/cm^2
* `t0` is the time around which the Gaussian is centered (in s). It is a number.
* `FWHM` is the full width at half maximum of the Gaussian, i.e. the width at which the amplitude is half its peak value (in s). It is a number.
Note that in the program the Gaussian source is characterized with respect to the fluence and the FWHM.
Therefore, in the program we compute:
* $A = \frac{\mathcal{F}}{\sqrt{2\pi\sigma^2}}$ , where $\mathcal{F}$ is the fluence $\left(\frac{J}{m^2}\right)$
* $\sigma^2 = \frac{\mathcal{FWHM}^2}{2\log(2)}$ , where $\mathcal{FWHM}$ is the full width at half maximum (s)
* `loadData` provides a data array with the time profile of the pulse under consideration. (Only relevant if the option for the `timeprofile = "Custom"`)
* `multipulse` is an option to give multiple pulse excitations as shown [here](https://nbviewer.jupyter.org/github/udcm-su/NTMpy/blob/master/Examples/CostumTimePulse2.ipynb). It can be either `"on"` or `False`.
If `multipulse = "on"`, the user can decide whether they want to give a pulse frequency or set a fixed number of pulses.
* `frequency` is the frequency with which the pulses should reappear (in 1/s)
* `num_of_pulses` is the number of pulses which should be placed between time 0 and the end of the simulation.
* `adjusted_grid` is an option which injects more points in the timegrid for the simulation.
The use of it is shown in the *Custom time source* example in the [wiki](https://nbviewer.jupyter.org/github/udcm-su/NTMpy/blob/master/Examples/CostumTimePulse2.ipynb). The reason why this is important is to make sure the shape of the pulse is correctly captured in time. This option is **important if pulses with small FWHM are under consideration**.
* `dt0` is the interval $\Delta t_0$ around which one wants to inject extra time-grid points (in s). Only works if `adjusted_grid = "on"`.
* `extra_points` defines how many extra points should be injected. This has to be an integer. Note that if it is a high number the shape of the pulse will be captured extremely well but might slow down the simulation.
The following two parameters are only relevant if the `spaceprofile = "TMM"`. Then the user has to provide
* `lambda_vac` is the wavelength in vacuum of the incident laser **given in nm!**
* `polarization` is either `"s"` or `"p"`.
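The amplitude and variance relations above can be checked with a few lines. This is illustrative only; NTMpy performs this conversion internally:

```python
import numpy as np

def gaussian_source_params(fluence, fwhm):
    # sigma^2 = FWHM^2 / (2 ln 2);  A = F / sqrt(2 pi sigma^2)
    sigma2 = fwhm**2 / (2 * np.log(2))
    amplitude = fluence / np.sqrt(2 * np.pi * sigma2)
    return amplitude, sigma2

A, sigma2 = gaussian_source_params(fluence=150.0, fwhm=0.1e-12)
# The Gaussian A*exp(-(t - t0)^2 / (2*sigma2)) then integrates to the fluence.
```

By construction, integrating the resulting Gaussian over time recovers the fluence $\mathcal{F}$.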
```
#Modifying the source properties
s.fluence = 15*10
s.FWHM = 0.1*1e-12
s.t0 = 5*1e-12
s.polarization = "p"
s.theta_in = np.pi/4 #rad (0 is grazing)
s.lambda_vac = 400 #nm
s.getProperties()
```
Next we will have a look at the simulation class.
In order to initialize it we will have to give two input arguments:
`sim = simulation(number of systems, source object)`
where `number of systems` can be either
* 1 => only the electron system will be taken under consideration
* 2 => electron and lattice system will be taken into consideration
* 3 => electron - lattice - spin
```
#1 Temperature model: Set up simulation
sim = ntm.simulation(1,s)
sim.getProperties()
```
We can see that currently the `num_of_temp` is set to 1. (Only electron system will be under consideration).
Other than that we can modify
* `start_time` = starting time of simulation in s
* `final_time` = ending time of the simulation in s
* `time_step` = if someone wants to use a specific time step in the simulation. If no specific time step is given, the program will automatically do an estimate for the stability region in the explicit euler loop and choose a time step accordingly. If the user is interested in the fast dynamics in specific then it is recommended to choose a time step well below the stability limit.
* `left_BC` = boundary condition at the left end of the space grid, that is $T(x = 0,t) = f(t)$. I.e. the value at the left end of the entire material under consideration is fixed for all times. Note: $f(t)$ can be a `lambda` function (`sim.left_BC = lambda x: 2+x**2`) or simply a constant (`sim.left_BC = 0`).
* `right_BC` = boundary condition at the right end of the space grid, that is $T(x = L, t) = g(t)$.
An alternative way to modify the boundary conditions, using the `changeBC_Type()` and `changeBC_Value()` functions, is shown [here](https://nbviewer.jupyter.org/github/udcm-su/heat-diffusion-1D/blob/master/Examples/BCN1.ipynb)
Arguments:
1. Number of subsystem to be considered (int)
2. Side where we want to modify the BC. ("left", "right")
3. Which type of BC is to be considered ("neumann", "dirichlet")
4. For `changeBC_Value` only: the value to change it to (either a function or a constant)
```
sim.changeBC_Type(1,"left","dirichlet")
sim.changeBC_Type(1,"right","neumann")
sim.changeBC_Value(1,"left",0)
sim.changeBC_Value(1,"right",0)
```
* `stability_lim` is a parameter which is used to determine the stability limit. It should reach from the expected minimum temperature of the sample to the expected maximum temperature. The input has to be a list and the temperature should be given in K.
The more specific simulation parameters for the electron system are accessible via
`sim.temp_data.getProperties()`. (Note that the lattice system has not been initialized, since `num_of_temp = 1`; therefore the parameter `temp_data_Lat`, corresponding to lattice-specific simulation parameters, is currently empty.)
```
sim.temp_data.getProperties()
```
* `plt_points` corresponds to the number of points per layer in the plot. (The higher the number, the more points get interpolated between the actual solutions.)
* `length` is the length of each layer (in m)
* `Left_BC_Type` and `Right_BC_Type` correspond to the two different types of boundary conditions we are considering, as mentioned above:
`left_BC_Type = 0` => Dirichlet boundary condition. The temperature at the left/right end is known and fixed for all times. Note: to modify this option it is easier to follow the instructions given above.
$T(x=0,t) = f(t)$ for the left- or $T(x=L,t) = g(t)$ for the right hand side.
`left_BC_Type = 1` => Neumann boundary condition. The temperature flux at the left/right end is known and fixed for all times.
$\frac{\partial T(x=0,t)}{\partial x} = f(t)$ for the left- or $\frac{\partial T(x=L,t)}{\partial x} = g(t)$ for the right hand side.
Note that further information can be found [here](https://nbviewer.jupyter.org/github/udcm-su/heat-diffusion-1D/blob/master/Examples/BCN1.ipynb)
* `init` is the initial condition of the system. It can be a lambda function `sim.temp_data.init = lambda x : 1+x` or a constant `sim.temp_data.init = 300` K (setting it to room temperature). However, we recommend using the `sim.changeInit(systemnumber,function(x))` function instead.
* `conductivity` is the conductivity $k_i(T)$ of each layer. (in $\frac{W}{mK}$)
It can be a constant value, over the entire layer (in space), or a lambda function with respect to the temperature $T$.
The following parameters are modified by `sim.addLayer(length,n_index,[k],[C_e],density)` and so represent the corresponding parameters of the layer stack under consideration.
* `heatCapacity` is the heat capacity $C_i(T)$ of each layer. (in $\frac{J}{kgK}$)
It can be a constant value over the entire layer, or a lambda function with respect to the temperature $T$.
* `rho` is the density $\rho_i$ of each layer. It is a constant. (in $\frac{kg}{m^3}$)
* `collocpts` is the number of collocation points used to approximate the solution in space. In between those, the solution gets interpolated. If one wants to do simulations with high resolution in space, this value can be changed via:
`sim.temp_data.collocpts = integer`. Note that this will slow down the simulation!
Note that each parameter can be changed for the electron and for the lattice system individually.
To add the parameters one can simply use the `sim.addLayer(length,refractive_index,conductivity,heatCapacity,density)` function.
This is the recommended way and also the way it is shown in all the other example sessions.
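As a small sketch of the `sim.changeInit(systemnumber,function(x))` route mentioned above, a non-uniform initial condition can be given as a callable of the space coordinate. All numbers below are purely illustrative (not taken from this notebook), and the profile would be passed as `sim.changeInit(1, init_profile)` once the simulation object exists:

```python
import math

# Illustrative Gaussian hot spot on a 300 K background; x is the space
# coordinate in m. The width/position parameters are made up for this sketch.
T0, dT, x0, sigma = 300.0, 50.0, 40e-9, 10e-9
init_profile = lambda x: T0 + dT * math.exp(-(x - x0) ** 2 / (2 * sigma ** 2))
```

Far away from `x0` the profile decays back to the 300 K background, so it stays compatible with the default boundary treatment.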
We now leave the boundary conditions at their default (Neumann boundary conditions) and the initial temperature at its default (300 K over the entire space).
```
#Platinum
length_Pt = 30*u.nm #Length of the Material
n_Pt = 1.0433+3.0855j
k_el_Pt = 73*u.W/(u.m*u.K);#Heat conductivity
rho_Pt = 1e3*21*u.kg/(u.m**3)#Density
C_el_Pt = lambda Te: (740*u.J/(u.m**3*u.K**2))/(1e3*21*u.kg/(u.m**3)) *Te #Electron heat capacity
C_lat_Pt = 2.8e6*u.J/(u.m**3*u.K**2)/rho_Pt#Lattice heat capacity
G_Pt = 1e16*25*u.W/(u.m**3*u.K) #Lattice electron coupling constant
#Cobalt
length_Co = 50*u.nm;
n_Co = 1.0454+3.2169j
k_el_Co = 100*u.W/(u.m*u.K);
rho_Co = 1e3*8.86*u.kg/(u.m**3)
C_el_Co = lambda Te: (704*u.J/(u.m**3*u.K**2))/(1e3*8.86*u.kg/(u.m**3)) *Te
C_lat_Co = 4e6*u.J/(u.m**3*u.K**2)/rho_Co
G_Co = 1e16*93*u.W/(u.m**3*u.K)
#Adding parameters for two layers
sim.addLayer(length_Pt,n_Pt,[k_el_Pt,k_el_Pt],[C_el_Pt,C_lat_Pt],rho_Pt,G_Pt) #Platinum
sim.addLayer(length_Co,n_Co,[k_el_Co,k_el_Co],[C_el_Co,C_lat_Co],rho_Co,G_Co) #Cobalt
#adjusting the time span
sim.final_time = 50*1e-12
#setting an initial temperature for the electron system
sim.temp_data.init = 300
```
In the very beginning we initialized the source and passed it on; we then created a one-temperature simulation object and added two layers to our simulation with the `.addLayer()` function.
We are keeping the boundary conditions in insulator configuration on both sides, i.e. $\frac{\partial T(x=0,t)}{\partial x} = 0$ and $\frac{\partial T(x=L,t)}{\partial x} = 0$, corresponding to 'no heat is escaping the material'.
We also set the initial temperature of the entire system to 300 K, i.e. room temperature.
#### T(x,t)
In order to access $T(x,t)$ the dynamics of the temperature in time and space, one has to execute
`[x,t,T] = sim.run()`
* `T` = matrix of the temperature evolution in space and time
* `x` = x-grid
* `t` = time grid
```
[x,t,T] = sim.run()
```
Now the matrix `T` contains all the information on the system, and together with the `x` and `t` grids one can visualize the result in many different ways.
To make this easier, we provide a few premade visualization tools.
The visual object can be created, using the simulation object as an input argument.
```
v = ntm.visual(sim)
```
Plotting the source we created in the beginning.
```
#output of v.source is the full matrix of the source(x,t)
so = v.source()
#A contour plot of the dynamics
v.contour('1')
#weighted average in space
[tt,avT] = v.average()
#An animation of the dynamics.
#where the input argument is an integer and corresponds to the speed of the animation
v.animation(1)
```
## Premade visualization functions
* `[T,R,A,absorption,xflat,grid] = v.localAbsorption()` shows the local absorption profile of the source in the material and gives back values such as the total transmission `T`, the total reflectance `R` and the absorption `A`. The absorption profile $a(t)$ can be plotted against `xflat`.
* `[so] = v.source()` shows a 3D plot of the injected heat and gives back an array of how much heat is injected in time and space.
* `contour("systemnumber")`, where systemnumber is a string and can be `"1"`, `"2"` or `"3"`.
* `[t,avT] = v.average()`
```
import numpy as np
DHESN_PCA = np.genfromtxt("DHESN_RESULTS/DHESN_data_VARIOUS_DHESN_WITH_PCA_2__2018-03-21.csv", delimiter=',', skip_header=1)
print(DHESN_PCA)
import pandas as pd
data_vae_2 = pd.read_csv("DHESN_RESULTS/DHESN_data_DHESN_WITH_VAE_GRID_SEARCH_epochfix_3_nostd__2018-03-23.csv", delimiter=',')
data_vae_2_no_zeros = data_vae_2.drop(data_vae_2[data_vae_2[data_vae_2.columns[0]] == 0].index)
len(data_vae_2_no_zeros)
data_vae_2_no_zeros.sort_values(by=[data_vae_2_no_zeros.columns[-2]])
data_pca = pd.read_csv("DHESN_RESULTS/DHESN_data_VARIOUS_DHESN_WITH_PCA_2__2018-03-21.csv", delimiter=',')
data_pca.sort_values(by=[data_pca.columns[-2]])
data_vae = pd.read_csv("DHESN_RESULTS/DHESN_data_VARIOUS_DHESN_WITH_VAE_GRID_SEARCH_3__2018-03-20.csv", delimiter=',')
data_vae_no_zeros = data_vae.drop(data_vae[data_vae[data_vae.columns[0]] == 0].index)
len(data_vae_no_zeros)
data_vae_no_zeros.sort_values(by=[data_vae_no_zeros.columns[-2]])
%load_ext autoreload
from ESN.ESN import DHESN
from Helper.utils import nrmse
from MackeyGlass.MackeyGlassGenerator import run
data = np.array([run(10100)]).reshape(-1, 1)
print(data)
MEAN_OF_DATA = np.mean(data)
split = 9100
X_train = np.array(data[:split-1])
y_train = np.array(data[1:split])
X_valid = np.array(data[split-1:-1])
y_valid = np.array(data[split:])
# data_pca.sort_values(by=[data_pca.columns[-2]])
data_vae_no_zeros.sort_values(by=[data_vae_no_zeros.columns[-2]])
import time
data_vae_2_no_zeros.sort_values(by=[data_vae_2_no_zeros.columns[-2]])
runs = 200
_errs = []
_times = []
for l in range(runs):
n=4
start_time = time.time()
dhesn = DHESN(1, 1, num_reservoirs = n,
reservoir_sizes=np.linspace(200, 400, n, endpoint=True).astype(int),
echo_params=np.linspace(0.5, 0.1, n, endpoint=True),
regulariser=1e-2,
init_echo_timesteps=100,
dims_reduce=np.linspace(30, 80, n-1, endpoint=True).astype(int).tolist(),
encoder_type='VAE', train_epochs=8)
dhesn.initialize_input_weights(
scales=np.linspace(0.5, 0.5, n, endpoint=True).tolist(),
strategies='uniform')
dhesn.initialize_reservoir_weights(spectral_scales=np.linspace(1.0, 0.7, n, endpoint=True).tolist(),
strategies=['uniform']*n,
sparsity=0.1)
# dhesn = DHESN(1, 1, num_reservoirs = n,
# reservoir_sizes=np.linspace(300, 300, n, endpoint=True).astype(int),
# echo_params=[0.2618, 0.6311, 0.2868, 0.6311, 0.2868, 0.6311, 0.2868, 0.6311],
# regulariser=1e-6,
# init_echo_timesteps=100,
# dims_reduce=np.linspace(30, 30, n-1, endpoint=True).astype(int).tolist(),
# encoder_type='VAE', train_epochs=4)
# dhesn.initialize_input_weights(
# scales=[0.7726, 0.4788, 0.6535, 0.4788, 0.6535, 0.4788, 0.6535, 0.4788], strategies='uniform')
# dhesn.initialize_reservoir_weights(
# spectral_scales=[0.8896, 0.8948, 0.3782, 0.8948, 0.3782, 0.8948, 0.3782, 0.8948],
# strategies=['uniform']*n,
# sparsity=0.1)
# dhesn = DHESN(1, 1, num_reservoirs = n,
# reservoir_sizes=np.linspace(100, 500, n, endpoint=True).astype(int),
# echo_params=np.linspace(0.5, 0.1, n, endpoint=True),
# regulariser=1e-6,
# init_echo_timesteps=100,
# dims_reduce=np.linspace(100, 10, n-1, endpoint=True).astype(int).tolist(),
# encoder_type='VAE', train_epochs=4)
# dhesn.initialize_input_weights(scales=np.linspace(0.5, 1.0, n, endpoint=True).tolist(), strategies='binary',
# sparsity=0.1)
# dhesn.initialize_reservoir_weights(spectral_scales=np.linspace(0.9, 0.3, n, endpoint=True).tolist(),
# strategies=['uniform']*n,
# sparsity=0.1)
# start_time = time.time()
# dhesn = DHESN(1, 1, num_reservoirs = n,
# reservoir_sizes=np.linspace(200, 400, n, endpoint=True).astype(int),
# echo_params=np.linspace(0.5, 0.1, n, endpoint=True),
# regulariser=1e-6,
# init_echo_timesteps=100,
# dims_reduce=np.linspace(30, 80, n-1, endpoint=True).astype(int).tolist(),
# encoder_type='PCA')
# dhesn.initialize_input_weights(scales=np.linspace(0.5, 0.5, n, endpoint=True).tolist(), strategies='uniform')
# dhesn.initialize_reservoir_weights(spectral_scales=np.linspace(0.4, 1.2, n, endpoint=True).tolist(),
# strategies=['uniform']*n,
# sparsity=1.0)
# start_time = time.time()
# dhesn = DHESN(1, 1, num_reservoirs = n,
# reservoir_sizes=np.linspace(200, 400, n, endpoint=True).astype(int),
# echo_params=np.linspace(0.5, 0.1, n, endpoint=True),
# regulariser=1e-6,
# init_echo_timesteps=100,
# dims_reduce=np.linspace(30, 80, n-1, endpoint=True).astype(int).tolist(),
# encoder_type='PCA')
# dhesn.initialize_input_weights(scales=np.linspace(0.5, 0.5, n, endpoint=True).tolist(), strategies='uniform')
# dhesn.initialize_reservoir_weights(spectral_scales=np.linspace(0.4, 1.2, n, endpoint=True).tolist(),
# strategies=['uniform']*n,
# sparsity=1.0)
# dhesn = DHESN(1, 1, num_reservoirs = n,
# reservoir_sizes=np.linspace(300, 300, n, endpoint=True).astype(int),
# echo_params=np.linspace(0.5, 0.1, n, endpoint=True),
# regulariser=1e-6,
# init_echo_timesteps=100,
# dims_reduce=np.linspace(60, 60, n-1, endpoint=True).astype(int).tolist(),
# encoder_type='PCA')
# dhesn.initialize_input_weights(scales=np.linspace(0.5, 0.5, n, endpoint=True).tolist(), strategies='uniform')
# dhesn.initialize_reservoir_weights(spectral_scales=np.linspace(1.2, 0.4, n, endpoint=True).tolist(),
# strategies=['uniform']*n,
# sparsity=0.1)
dhesn.train(X_train, y_train)
# generate
outs = []
u_n = X_valid[0]
print(u_n)
for _ in range(len(data[split:])):
u_n = dhesn.forward(u_n)
outs.append(u_n)
outs = np.array(outs).squeeze()
y_vals = y_valid.squeeze()
err = nrmse(y_vals, outs, MEAN_OF_DATA)
_errs.append(err)
print("({}) NRMSE: {}".format(l, err))
print("CULM MEAN: {}".format(np.mean(_errs)))
total_time = time.time() - start_time
_times.append(total_time)
print("({}) TIME: {}".format(l, total_time))
print(np.mean(_times))
print(np.max(_times), np.min(_times))
print(np.mean(_errs))
import matplotlib.pyplot as plt
import pickle as pkl
import seaborn as sns
pkl.dump((_errs, _times), open("RUNS_200_FOR_DHESN_VAE_batchshuff_sparsity1_4ep.pkl", "wb"))
sns.set_style("whitegrid")
sns.set_context('notebook', font_scale=1.5)
sns.despine()
_data = []
_title = [r'$\rho = 0.1$', r'$\rho = 1.0$']
_bar_clrs = [sns.color_palette("Blues")[2], sns.color_palette("Blues")[4]]
_ln_clrs = [sns.color_palette("Reds")[2], sns.color_palette("Reds")[4]]
_data1, _times1 = pkl.load(open("RUNS_200_FOR_DHESN_PCA_newData_2.pkl", "rb"))
_data2, _times2 = pkl.load(open("RUNS_200_FOR_DHESN_PCA_newData_sparsity1.pkl", "rb"))
_data.append(_data1)
_data.append(_data2)
print(len(_data1))
print(len(_data2))
f, ax = plt.subplots(1, 1, sharey=True, figsize=(6, 4))
for i,d in enumerate(_data):
# i = 0
hist, bins = np.histogram(d, bins=35)
centres = (bins[1:] + bins[:-1])/2.
width = (bins[1] - bins[0])
m = np.mean(d)
print(m)
ax.bar(centres, hist, width=width, label=_title[i], color=_bar_clrs[i])
# ymin, ymax = ax[i].get_ylim()
ax.plot([m]*2, [0, 32], linestyle='--', color=_ln_clrs[i])
ax.text(
0.223+0.02, 32-3*(i+1)+5, '%.3f' % m, color=_ln_clrs[i], fontsize=14  # 32 is the fixed y-limit set below
)
ax.set_xlim((0.0, 1.0))
ax.set_ylim((0, 32))
ax.set_xlabel('NRMSE', fontsize=14)
# ax.set_title(_title[i], fontsize=30)
plt.legend()
# f.tight_layout()
plt.show()
# f.savefig("DHESN_PCA_distribution_newData.pdf")
# _data, _times = pkl.load(open("RUNS_200_FOR_DHESN_PCA_2.pkl", "rb"))
f, ax = plt.subplots(figsize=(6, 4))
hist, bins = np.histogram(_errs, bins=35)
centres = (bins[1:] + bins[:-1])/2.
width = (bins[1] - bins[0])
m = np.mean(_errs)
print(m)
ax.bar(centres, hist, width=width)
ymin, ymax = ax.get_ylim()
ax.plot([m]*2, [ymin, ymax], linestyle='--', color='red', label='mean')
ax.text(
m+0.02, ymax-3, '%.3f' % m, color='red', fontsize=14
)
ax.set_xlim((0.0, 1.0))
ax.set_ylim((ymin, ymax))
ax.set_xlabel('NRMSE', fontsize=14)
plt.legend()
plt.show()
# f.savefig("DHESN_PCA_distribution.pdf")
```
# BIDMC Datathon Question #1
# English vs. Non-English Speaker MIMIC-III Cohort
# Notebook 2: Exploratory Analysis
In this notebook, we want to walk you through some basic steps on how to analyze the cohort which we generated in the first notebook. This notebook is meant to simply introduce a few first steps towards performing an exploratory analysis of the data as you begin building models and performing inference in your teams.
This notebook is just a quick introduction to analyzing the cohort in Python. It's up to you and your team to decide what and how you want to analyze the data!
This is where our paths diverge! Looking forward to seeing where we all end up in the next 3 hours.
# Setup
## Prerequisites
- If you do not have a Gmail account, please create one at http://www.gmail.com.
- If you have not yet signed the data use agreement (DUA) sent by the organizers, please do so now to get access to the dataset.
## Load libraries
Run the following cells to import some modules that we'll be using for analysis.
```
from google.colab import drive
import pandas as pd
import os
```
## Connect to Drive
We will mount our Google Drive to access the CSV file created in the earlier notebook.
```
drive.mount('/content/gdrive')
```
# Loading Data
The last notebook focused on extracting the actual cohort and various covariates of potential interest.
## Load from Drive
Here, we can just import the generated CSV file and directly start working with it.
```
FILE_NAME = 'dataset_datathon_28022020.csv'
MAIN_PATH = '/content/gdrive/My Drive/'
final_path = os.path.join(MAIN_PATH, FILE_NAME)
dataset = pd.read_csv(final_path)
```
(This is so much simpler than R!)
# Exploratory analysis
The first step in any data analysis should be **exploratory analysis** of the dataset. When performing an exploratory analysis, you should be checking high-level distributions of various covariates, marginal associations, and on the look out for potential messy data issues.
We can start by just taking a look at the first few rows in the table.
```
dataset.head(5)
```
We can rename the columns to all be uppercase for consistency, so we don't have to worry about remembering the case of different columns.
```
dataset.columns = [_.upper() for _ in dataset.columns]
```
## Language
We should start to analyze our cohort and find out if there is a significant relationship between end-of-life treatment between English and non-English speaking patients.
Let's start by just taking a look at how many of the patients in our cohort were labeled as English speakers.
```
english_col = 'ENGLISH'
dataset[english_col].value_counts()
```
We can also plot this to take a look.
```
# Count number of admissions for English vs. Non-English speakers
dataset[english_col].value_counts().plot(kind='bar', title='English-speaker admissions')
```
It looks like most patients in our cohort spoke English.
From this point, you can try to stratify by outcomes.
- What outcomes do you define as invasive procedures in our dataset?
- How do you want to group those invasive procedures? Maybe add a boolean (0/1) column to the dataset that indicates procedures?
- Compute the same statistics for each different type of invasive procedure (ventilation, CRRT, etc.)
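For the indicator-column idea above, here is a minimal pandas sketch. The procedure column names (`VENTILATION`, `CRRT`) are hypothetical; substitute the ones actually present in your cohort:

```python
import pandas as pd

# Hypothetical 0/1 procedure columns; ANY_INVASIVE flags admissions with
# at least one invasive procedure.
df = pd.DataFrame({'VENTILATION': [1, 0, 0], 'CRRT': [0, 1, 0]})
df['ANY_INVASIVE'] = (df[['VENTILATION', 'CRRT']].sum(axis=1) > 0).astype(int)
```

You can then pass `ANY_INVASIVE` to `value_counts()` or group by it, just like the `ENGLISH` column above.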
_Tip_: You can use both R and Python in Colab notebooks, so pick your favorite language to get started quickly!
# Creating a "Table 1"
In reports or papers for clinical trials or retrospective data studies, you'll often find a cohort summary table presented as _Table 1_. Table 1 describes the cohort's characteristics, such as age, sex, and ethnicity, stratified by the exposure under investigation. Often, the exposure groups are shown as two columns in the table (e.g. group A and group B). Creating these tables can be fairly tedious, especially when the list of potentially confounding covariates is long.
The good news: there are plenty of packages for generating these tables automatically (in both Python _and_ R), so you only need to reshape your dataset, handle missing data (depending on the package) and call the corresponding functions in the respective Python and R libraries.
The following are two examples of packages that can be used to generate a "Table 1" in Python and R:
- Python: [Table One](https://pypi.org/project/tableone/)
- R: [Table One](https://cran.r-project.org/web/packages/tableone/vignettes/introduction.html)
In this section of the notebook, consider generating a "Table 1", analyze the results and include it in your final presentation. To do this, however, you first need to define the stratifying exposure or variable (e.g. English vs. non-English speaking) and define the set of covariates (rows) you want to display.
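As a rough illustration of what such a package computes, a stratified summary is essentially a group-by aggregation. The sketch below hand-rolls a tiny "Table 1" with pandas (column names are hypothetical); the `tableone` package automates this, adding significance tests, missing-data handling and formatting:

```python
import pandas as pd

# Tiny hand-rolled "Table 1": count and mean age stratified by a
# hypothetical ENGLISH exposure indicator.
df = pd.DataFrame({'ENGLISH': [1, 1, 0, 0], 'AGE': [65, 70, 80, 75]})
table1 = df.groupby('ENGLISH')['AGE'].agg(['count', 'mean'])
```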
Keep these things in mind while performing your analysis and consider creating Table 1 and performing exploratory analysis extensively *early on*!
```
```
# Load ranked hyper-params and join selections
### Import/init
```
import os
import csv
import numpy as np
import pandas as pd
from collections import defaultdict
import matplotlib.pyplot as plt
%matplotlib inline
from notebook_helpers import load_params
# Shared base path
path = "/Users/type/Code/azad/data/wythoff/"
```
### Load param data
```
# Stumbler
exp6 = load_params(os.path.join(path, "exp6_ranked.csv"))
# Stumbler-strategist
# learning_rate_stumbler, num_stumbles, num_strategies
exp7 = load_params(os.path.join(path, "exp7_ranked.csv"))
# H/C thresholds
exp8 = load_params(os.path.join(path, "exp8_ranked.csv"))
# learning_rate_influence, num_hidden1, num_hidden2
exp12 = load_params(os.path.join(path, "exp12_ranked.csv"))
# exp6
```
# Select top_n rows from hand-picked columns
- Rename cols as needed so they match the `wythoff_stumbler_strategist` call signature.
```
top_n = 20
print(6, exp6.keys())
print(7, exp7.keys())
print(8, exp8.keys())
print(12, exp12.keys())
# Select params from each exp
# old : new name
exp6_cols = {
'gamma' : 'gamma',
'epsilon' : 'epsilon',
'learning_rate' : 'learning_rate_stumbler'
}
exp7_cols = {
'learning_rate_strategist' : 'learning_rate_strategist',
'num_stumbles' : 'num_stumbles',
'num_strategies' : 'num_strategies',
}
exp8_cols = {
'hot_threshold' : 'hot_threshold',
'cold_threshold' : 'cold_threshold',
}
exp12_cols = {
'learning_rate' : 'learning_rate_influence',
'num_hidden1' : 'num_hidden1',
'num_hidden2' : 'num_hidden2'
}
joint = defaultdict(list)
for k, new_k in exp6_cols.items():
joint[new_k] = exp6[k][0:top_n]
for k, new_k in exp7_cols.items():
joint[new_k] = exp7[k][0:top_n]
for k, new_k in exp8_cols.items():
joint[new_k] = exp8[k][0:top_n]
for k, new_k in exp12_cols.items():
joint[new_k] = exp12[k][0:top_n]
```
# Save the joint data to file
```
table = []
head = sorted(joint.keys())
values = [joint[k] for k in head]
f_name = os.path.join(path, "joint_ranked.csv")
head = ("row_code", *head)
with open(f_name, "w") as csv_file:
writer = csv.writer(csv_file, delimiter=',')
writer.writerow(head)
for i, row in enumerate(zip(*values)):
row = (i, *row)
writer.writerow(row)
```
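The `zip(*values)` call in the cell above transposes the per-parameter column lists into CSV rows; a self-contained illustration of that idiom:

```python
# Two ranked-parameter "columns" with their top-2 values each;
# zip(*values) yields one tuple per output row.
values = [[0.9, 0.8], [10, 20]]
rows = list(zip(*values))
# rows is now [(0.9, 10), (0.8, 20)]
```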
# Plot each of the selected params
```
for k, v in joint.items():
plt.figure(figsize=(3, 2))
_ = plt.hist(np.asarray(v), color='black', bins=10)
_ = plt.tight_layout()
_ = plt.ylabel("Counts")
_ = plt.xlabel(k)
# _ = plt.close()
```
<h1><center>Global stuff</center></h1>
```
# Eases updating libs
%load_ext autoreload
%autoreload 2
%matplotlib inline
# Imports
import sys
sys.path.append('../')
from IPython.display import clear_output
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
from google.colab import drive
drive.mount('/content/gdrive')
pwd = "/content/gdrive/My Drive/S5 Project: Air polution/air-polution-sensor"
%cd $pwd
!pip install -r requirements.txt
clear_output(wait = True)
from libraries.global_functions import *
else:
from libraries.global_functions import *
import numpy
import scipy.signal
import pygsp
import ipywidgets
# Random seed
numpy.random.seed(0)
# Useful constants
OUTPUT_DIR = "output/first_tests/"
```
<h1><center>Recreation of https://arxiv.org/pdf/1307.5708.pdf</center></h1>
<h2>Temporal version</h2>
```
# Constants
GRAPH_ORDER = 100
KERNEL_SCALE = 300
# Let's work on a path graph
graph = create_path_graph(GRAPH_ORDER)
# Plot
plot_graph(graph)
# We create 3 groups of vertices
groups = numpy.array([10] * (graph.N//3) +
[60] * (graph.N//3) +
[30] * (graph.N-2*graph.N//3))
# Plot
plot_graph(graph, groups)
# Signal to analyze is a mix between some frequencies
x = numpy.array([graph.U[i, int(groups[i])] for i in range(graph.N)])
x /= numpy.linalg.norm(x) ## Fourier function (f_hat)
# Plot
plot_graph(graph, x)
plot_stem(graph.igft(x))
plot_stem(graph.gft(x))
# Proof: Change f(i) = sin(i)
# y = numpy.sin(range(graph.N))
# plot_stem(y)
# x = graph.gft(y)
# # Plot
# plot_graph(graph, x)
# plot_stem(graph.igft(x))
# We use a window defined by a heat kernel
# Needs to be instantiated on a particular vertex to be the object we want
window_kernel = create_heat_kernel(graph, KERNEL_SCALE)
localized_kernel = window_kernel.localize(int(graph.N/2))
# Plot
plot_graph(graph, localized_kernel)
# Graph spectrogram of the signal
spectrogram = compute_graph_spectrogram(graph, x, window_kernel)
# Plot
plot_matrix(spectrogram,
cols_title="Vertex",
cols_labels=range(graph.N),
rows_title="Eigenvalue index",
rows_labels=range(graph.N),
title="Spectrogram",
colorbar=True)
```
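Independently of the project helpers used above, the heat-kernel window is just exp(-s·λ) applied to the Laplacian spectrum, and localizing it at a vertex amounts to applying the resulting operator to a Dirac. A minimal NumPy sketch on a 3-vertex path graph (this illustrates the underlying idea, not the helper API):

```python
import numpy as np

# Heat kernel exp(-s*lam) via the eigendecomposition of the Laplacian
# of a 3-vertex path graph; localizing at vertex v is
# U @ diag(exp(-s*lam)) @ U.T applied to a Dirac at v.
L = np.array([[1., -1., 0.],
              [-1., 2., -1.],
              [0., -1., 1.]])
lam, U = np.linalg.eigh(L)
s = 1.0
kernel = U @ np.diag(np.exp(-s * lam)) @ U.T
delta = np.array([0., 1., 0.])    # Dirac at the middle vertex
localized = kernel @ delta        # heat spread out from the middle vertex
```

Total heat is conserved (the λ=0 eigenvector is constant), and the localized window is largest at the source vertex and symmetric on this graph.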
<h2>Spatial version</h2>
```
# Constants
GRAPH_ORDER = 100
KERNEL_SCALE = 10
# Let's work on a SBM of 3 blocks
groups = numpy.array([0] * (GRAPH_ORDER//3) + [1] * (GRAPH_ORDER//3) + [2] * (GRAPH_ORDER-2*GRAPH_ORDER//3))
graph = pygsp.graphs.StochasticBlockModel(GRAPH_ORDER, k=3, z=groups, p=[0.4, 0.6, 0.3], q=0.02)
graph.set_coordinates(kind="spring", seed=numpy.random.randint(2**32))
graph.compute_fourier_basis()
# Plot
plot_graph(graph)
# We create 3 groups of vertices
# Same as those in SBM definition
groups = numpy.array([10] * (graph.N//3) + [60] * (graph.N//3) + [30] * (graph.N-2*graph.N//3))
# Plot
plot_graph(graph, groups)
# Signal to analyze is a mix between some frequencies
x = numpy.array([graph.U[i, int(groups[i])] for i in range(graph.N)])
x /= numpy.linalg.norm(x)
# Plot
plot_graph(graph, x)
# We use a window defined by a heat kernel
# Needs to be instanciated on a particular vertex to be the object we want
window_kernel = create_heat_kernel(graph, KERNEL_SCALE)
localized_kernel = window_kernel.localize(graph.N//2)
# Plot
plot_graph(graph, localized_kernel)
# Graph spectrogram of the signal
spectrogram = compute_graph_spectrogram(graph, x, window_kernel)
# Plot
plot_matrix(spectrogram,
cols_title="Vertex",
cols_labels=range(graph.N),
rows_title="Eigenvalue index",
rows_labels=range(graph.N),
title="Spectrogram",
colorbar=True)
```
<h1><center>Analysis of spatio-temporal signals</center></h1>
<h2>Create dataset with something varying in space across time</h2>
```
# Constants
SPACE_GRAPH_ORDER = 150
TIME_GRAPH_ORDER = 100
LOCAL_OBJECT_SIZE = 5
COMEBACK_PENALTY = 3
SPACE_KERNEL_SCALE = 300
TIME_KERNEL_SCALE = 10
# Create some graphs
g_space = create_sensor_graph(SPACE_GRAPH_ORDER)
g_time = create_path_graph(TIME_GRAPH_ORDER)
plot_graph(g_space)
plot_graph(g_time)
# Function to generate a local thing around a vertex
# Here we just consider a polynom of the graph
def local_object (graph, center, width=LOCAL_OBJECT_SIZE) :
dirac = scipy.signal.unit_impulse(graph.N, center)
signal = numpy.sum([numpy.power(graph.W, i).dot(dirac) for i in range(width)], axis=0)
signal /= numpy.linalg.norm(signal)
return signal
# We move an object around randomly
# We decrease the probability to go to already visited places
signals = numpy.zeros((g_space.N, g_time.N))
next_center = numpy.random.randint(g_space.N)
visited_counts = numpy.array([1 for i in range(g_space.N)])
for t in range(g_time.N) :
signals[:, t] = local_object(g_space, next_center)
visited_counts[next_center] += COMEBACK_PENALTY
neighbors = get_neighbors(g_space, next_center)
probabilities = 1.0 / visited_counts[neighbors]
probabilities /= numpy.linalg.norm(probabilities, 1)
next_center = numpy.random.choice(neighbors, p=probabilities)
# Plot
plot_matrix(signals,
rows_title="Vertex",
rows_labels=range(g_space.N),
cols_title="Instant",
cols_labels=range(g_time.N),
title="Dataset of spatio-temporal signals",
colorbar=True)
```
<h2>Study it with spectrogram considering dimensions independently</h2>
```
# Kernel for the spectrogram
window_space_kernel = create_heat_kernel(g_space, SPACE_KERNEL_SCALE)
# Update function for the slider
def update (instant) :
# Compute spectrogram
spectrogram = compute_graph_spectrogram(g_space, signals[:, instant], window_space_kernel)
# Plot
plot_matrix(spectrogram,
cols_title="Vertex",
cols_labels=range(g_space.N),
rows_title="Eigenvalue index",
rows_labels=range(g_space.N),
title="Graph spectrogram of all values observed at time " + str(instant),
colorbar=True)
# Slider
ipywidgets.widgets.interact(update, instant=range(g_time.N))
# Kernel for the spectrogram
window_time_kernel = create_heat_kernel(g_time, TIME_KERNEL_SCALE)
# Update function for the slider
def update (vertex) :
# Compute spectrogram
spectrogram = compute_graph_spectrogram(g_time, signals[vertex, :], window_time_kernel)
# Plot
plot_matrix(spectrogram,
cols_title="Instant",
cols_labels=range(g_time.N),
rows_title="Eigenvalue index",
rows_labels=range(g_time.N),
title="Time spectrogram of all values observed at vertex " + str(vertex),
colorbar=True)
# Slider
ipywidgets.widgets.interact(update, vertex=range(g_space.N))
```
<h2>Study it with spectrogram considering dimensions jointly</h2>
```
# Graphs used
graphs = [g_space, g_time]
kernel_scales = [SPACE_KERNEL_SCALE, TIME_KERNEL_SCALE]
# Compute JFT of all signals
spectrums = compute_jft(graphs, signals)
# Plot
plot_matrix(spectrums,
rows_title="Space eigenvalue index",
rows_labels=range(g_space.N),
cols_title="Time eigenvalue index",
cols_labels=range(g_time.N),
title="Spectrum of spatio-temporal signals",
colorbar=True)
# We localize a heat kernel
window_kernel = create_joint_heat_kernel(graphs, kernel_scales)
localized_kernel = localize_joint_heat_kernel(graphs, window_kernel, [graphs[i].N//2 for i in range(len(graphs))])
# Update function for the slider
def update (instant, vertex) :
# Plot
plot_graph(graphs[0],
localized_kernel[:, instant],
title="Looking at spatial graph for fixed instant " + str(instant))
plot_graph(graphs[1],
localized_kernel[vertex, :],
title="Looking at time graph for fixed vertex " + str(vertex))
# Slider
ipywidgets.widgets.interact(update, instant=range(g_time.N), vertex=range(g_space.N))
# We localize a heat kernel
window_kernels = create_joint_heat_kernel(graphs, kernel_scales)
# Spectrogram joint estimate (hard procedure)
spectrogram_joint = compute_joint_graph_spectrogram(graphs, signals, window_kernels)
# Update function for the slider
def update (instant, vertex) :
# Plot
plot_matrix(spectrogram_joint[:, :, vertex, instant],
cols_title="Time eigenvalue index",
cols_labels=range(graphs[1].N),
rows_title="Space eigenvalue index",
rows_labels=range(graphs[0].N),
title="Graph spectrogram of all values observed at time " + str(instant) + " and vertex " + str(vertex),
colorbar=True)
# Slider
ipywidgets.widgets.interact(update, instant=range(g_time.N), vertex=range(g_space.N))
```
```
# This notebook generates barplot with evaluation metrics for all groups specified in groups_eval variable.
basic_metrics = {('wtkappa', 'trim'): [0.7],
('corr', 'trim'): [0.7],
('DSM', 'trim_round'): [0.1, -0.1],
('DSM', 'trim'): [0.1, -0.1],
('R2', 'trim'): [],
('RMSE', 'trim'): []}
colprefix = 'scale' if use_scaled_predictions else 'raw'
metrics = dict([('{}.{}_{}'.format(k[0], colprefix, k[1]), v) for k,v in basic_metrics.items()])
num_metrics = len(metrics)
for group in groups_eval:
display(Markdown('### Evaluation by {}'.format(group)))
eval_group_file = join(output_dir, '{}_eval_by_{}.{}'.format(experiment_id, group, file_format))
df_eval_group_all = DataReader.read_from_file(eval_group_file, index_col=0)
df_eval_group_all.index.name = group
df_eval_group_all.reset_index(inplace=True)
# If we have threshold per group, apply it now. Keep "All data" in any case.
if group in min_n_per_group:
display(Markdown("The report only shows the results for groups with "
"at least {} responses in the evaluation set.".format(min_n_per_group[group])))
df_eval_group = df_eval_group_all[(df_eval_group_all['N'] >= min_n_per_group[group]) |
(df_eval_group_all[group] == 'All data')].copy()
else:
df_eval_group = df_eval_group_all.copy()
# Define the order of the bars: put 'All data' first and 'No info' last.
group_levels = list(df_eval_group[group])
group_levels = [level for level in group_levels if level != 'All data']
# We only want to show the report if we have anything other than All data
if len(group_levels) > 0:
if 'No info' in group_levels:
bar_names = ['All data'] + [level for level in group_levels if level != 'No info'] + ['No info']
else:
bar_names = ['All data'] + group_levels
fig = plt.figure()
(figure_width,
figure_height,
num_rows,
num_columns,
wrapped_bar_names) = compute_subgroup_plot_params(bar_names, num_metrics)
fig.set_size_inches(figure_width, figure_height)
with sns.axes_style('white'), sns.plotting_context('notebook', font_scale=1.2):
for i, metric in enumerate(sorted(metrics.keys())):
df_plot = df_eval_group[[group, metric]]
ax = fig.add_subplot(num_rows, num_columns, i + 1)
for lineval in metrics[metric]:
ax.axhline(y=float(lineval), linestyle='--', linewidth=0.5, color='black')
sns.barplot(x=df_plot[group], y=df_plot[metric], color='grey', ax=ax, order=bar_names)
ax.set_xticklabels(wrapped_bar_names, rotation=90)
ax.set_xlabel('')
ax.set_ylabel('')
# set the y-limits of the plots appropriately
if metric.startswith('corr') or metric.startswith('wtkappa'):
if df_plot[metric].min() < 0:
y_limits = (-1.0, 1.0)
ax.axhline(y=0.0, linestyle='--', linewidth=0.5, color='black')
else:
y_limits = (0.0, 1.0)
ax.set_ylim(y_limits)
elif metric.startswith('R2'):
min_value = df_plot[metric].min()
if min_value < 0:
y_limits = (min_value - 0.1, 1.0)
ax.axhline(y=0.0, linestyle='--', linewidth=0.5, color='black')
else:
y_limits = (0.0, 1.0)
ax.set_ylim(y_limits)
elif metric.startswith('RMSE'):
max_value = df_plot[metric].max()
y_limits = (0.0, max(max_value + 0.1, 1.0))
ax.set_ylim(y_limits)
elif metric.startswith('DSM'):
min_value = df_plot[metric].min()
if min_value < 0:
ax.axhline(y=0.0, linestyle='--', linewidth=0.5, color='black')
# set the title
ax.set_title('{} by {}'.format(metric, group))
with warnings.catch_warnings():
warnings.simplefilter('ignore')
plt.tight_layout(h_pad=1.0)
imgfile = join(figure_dir, '{}_eval_by_{}.svg'.format(experiment_id, group))
plt.savefig(imgfile)
if use_thumbnails:
show_thumbnail(imgfile, next(id_generator))
else:
plt.show()
else:
display(Markdown("None of the groups in {} had {} or more responses.".format(group,
min_n_per_group[group])))
```
```
# default_exp models
```
# Models
> Tree ensemble and decision tree models.
```
#hide
def extra_model_fn():
pass
#export
from decision_tree.imports import *
from decision_tree.core import *
from decision_tree.data import *
```
## Decision Tree
```
#export
class Node():
def __init__(self, depth, pred, sample_idxs, split_score=np.inf):
self.depth, self.pred, self.sample_idxs, self.split_score = \
depth, pred, sample_idxs, split_score
def __repr__(self):
res = f'Node({self.depth}, {r3(self.pred)}, {self.sample_idxs}'
if self.split_score != np.inf:
res += f', {r3(self.split_score)}, {self.split_col_idx}, {r3(self.split_values)}, {self.split_idxs}, {r3(self.split_preds)}'
return res + ')'
assert np.inf == Node(1, 0.9, [1,2,3]).split_score
#export
def best_split_for_col(data, node, col_idx, min_leaf_samples=None):
"Returns the best split that can be made for this column/node"
_min_leaf = min_leaf_samples if min_leaf_samples else 1
x, y = data.get_sample(node.sample_idxs, col_idx)
sort_idx = np.argsort(x)
x, y = x[sort_idx], y[sort_idx]
aggs = Aggs(y)
stop = len(x) - _min_leaf
for i in range(stop):
aggs.upd(y[i])
if x[i] == x[i+1] or i < _min_leaf-1: continue
score = aggs.score()
if score < node.split_score:
node.split_score, node.split_col_idx = score, col_idx
node.split_values = x[i], x[i+1]
node.split_idxs = split_array(node.sample_idxs[sort_idx], i+1)
node.split_preds = tuple(arr.mean() for arr in split_array(y, i+1))
test_x = np.array(
[[23.2, 44.4], #0
[ 2. , 2. ], #1
[34.3, 77.3], #2
[-1.5, -0.5], #3
[ 1.5, 1.5], #4
[ 1.5, 9.2], #5
[ 2. , -2. ]])#6
test_y = np.array([0.0, 1.1, 2.2, 3.3, 4.4, 5.5, 6.6])
test_data = DataWrapper.from_pandas(pd.DataFrame(test_x), pd.Series(test_y))
test_node = Node(0, 0, np.arange(7))
best_split_for_col(test_data, test_node, 0, 3)
le_split, gt_split = test_node.split_idxs
assert np.array_equal(test_y[le_split], [3.3, 4.4, 5.5])
assert np.array_equal(test_y[gt_split], [1.1, 6.6, 0. , 2.2])
test_node = Node(0, 0, np.arange(7))
best_split_for_col(test_data, test_node, 1)
le_split, gt_split = test_node.split_idxs
assert np.array_equal(test_y[le_split], [6.6])
assert np.array_equal(test_y[gt_split], [3.3, 4.4, 1.1, 5.5, 0. , 2.2])
# not enough data to split with at least 4 values in each leaf
test_node = Node(0, 0, np.arange(7))
best_split_for_col(test_data, test_node, 1, 4)
assert test_node.split_score == np.inf
test_node = Node(0, 0, np.arange(7)[4:])
best_split_for_col(test_data, test_node, 0, 1)
le_split, gt_split = test_node.split_idxs
assert np.array_equal(test_y[le_split], [4.4, 5.5])
assert np.array_equal(test_y[gt_split], [6.6])
assert test_node.split_values == (1.5, 2.0)
#export
def best_split(data, node, col_idxs, min_leaf_samples=None):
for col_idx in col_idxs: best_split_for_col(data, node, col_idx, min_leaf_samples)
test_node = Node(0, 0, np.arange(7))
best_split(test_data, test_node, [0,1])
assert test_node.split_col_idx == 1
assert test_node.split_preds == (6.6, 2.75)
#export
class DecisionTree():
def __init__(self, data, max_depth=None, min_leaf_samples=3, col_idxs_fn=None):
self.data, self.max_depth, self.min_leaf_samples, self.col_idxs_fn = \
data, max_depth, min_leaf_samples, col_idxs_fn
def _col_idxs(self):
return self.col_idxs_fn(self.data.all_x_col_idxs) if self.col_idxs_fn else self.data.all_x_col_idxs
def _recursive_split(self, node):
best_split(self.data, node, self._col_idxs(), self.min_leaf_samples)
if node.split_score == np.inf: return
for op, value, idxs, pred in zip(['le', 'gt'], node.split_values, node.split_idxs, node.split_preds):
setattr(node, op, Node(node.depth+1, pred, idxs))
self._recursive_split(getattr(node, op))
def fit(self):
self.node = Node(1, self.data.y.mean(), self.data.all_x_row_idxs)
self._recursive_split(self.node)
return self
def predict_row(self, row):
return predict_row(row, self.node)
def predict(self, rows):
return np.array([self.predict_row(rows[i]) for i in range(len(rows))])
def __repr__(self):
return f'dTree(data={self.data} max_depth={self.max_depth} min_leaf_samples={self.min_leaf_samples})'
#export
def print_tree(tree):
"print tree with splits, depth first"
print(tree)
print('col_idxs_fn', tree.col_idxs_fn if tree.col_idxs_fn else 'default')
if not hasattr(tree, 'node'): return
queue = [tree.node]
while len(queue) != 0:
node = queue.pop(0)
print(node)
for k in ['le', 'gt']:
if getattr(node, k, False): queue.append(getattr(node, k))
print(test_y)
print_tree(DecisionTree(test_data, min_leaf_samples=2).fit())
print_tree(DecisionTree(test_data, min_leaf_samples=1).fit())
```
TODO: clean-up
Might we be able to generalize better by:
- adding a little randomness, using a split value that lies somewhere between the lower and upper boundary of the split (see np.random.uniform)
- using the average of the lower and upper boundary values
Both of these could be done at prediction time.
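As a sketch of the second idea, a hypothetical variant of `predict_row` could branch on the average of the two boundary values; the stub `Node` below stands in for the real class so the sketch runs on its own:

```python
import numpy as np

class Node:
    "Minimal stand-in for the Node class above, just for this sketch."
    def __init__(self, pred, split_score=np.inf):
        self.pred, self.split_score = pred, split_score

def predict_row_midpoint(row, node):
    "Like predict_row, but branch on the midpoint of the split boundaries."
    if node.split_score == np.inf: return node.pred
    lo, hi = node.split_values
    split_value = (lo + hi) / 2  # average of lower and upper boundary
    next_node = node.le if row[node.split_col_idx] <= split_value else node.gt
    return predict_row_midpoint(row, next_node)

# tiny hand-built tree: column 0 splits between 1.5 and 2.0, so the midpoint is 1.75
root = Node(pred=0.0, split_score=1.0)
root.split_values, root.split_col_idx = (1.5, 2.0), 0
root.le, root.gt = Node(pred=10.0), Node(pred=20.0)
print(predict_row_midpoint(np.array([1.6]), root))  # 10.0
print(predict_row_midpoint(np.array([1.9]), root))  # 20.0
```

The randomized variant would replace the midpoint with `np.random.uniform(lo, hi)`.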
```
#export
def predict_row(row, node):
"make a prediction for the specified row, using the specified node"
if node.split_score == np.inf: return node.pred
split_value = node.split_values[0] # TODO: use just lower value for now
split_col_idx = node.split_col_idx
row_value = row[split_col_idx]
next_node = node.le if row_value<=split_value else node.gt
return predict_row(row, next_node)
```
When we make a prediction for a row that was used in training and we have only 1 sample in each leaf, the tree should predict exactly the right answer
- grab a single row of data
- make a prediction for this row
- assert that the prediction we made matches the actual for this row
```
test_tree = DecisionTree(test_data, min_leaf_samples=1).fit()
for i in range(test_data.x_rows):
test_sample = test_data.get_sample(i)
assert predict_row(test_sample[0], test_tree.node) == test_sample[1]
```
Set-up some data for testing. This data is copied from the final model used in https://github.com/fastai/fastai/tree/master/courses/ml1/lesson2-rf_interpretation.ipynb
```
bulldozers_data = np.load('test/data/bulldozers.npy', allow_pickle=True)
train_data = DataWrapper(*bulldozers_data[:4])
valid_data = DataWrapper(*bulldozers_data[4:])
train_data, valid_data
```
Use a very small amount of data to train a decision tree, then print the root node so we can see how the data has been split.
It's interesting that the depth of this tree is greater than the expected `np.log2(test_tree.data.x_rows)` - because it's unbalanced.
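One way to check is with a small helper (hypothetical, not part of the module) that walks the fitted tree's `le`/`gt` children and returns the maximum `depth` attribute set by `_recursive_split`; the demo below uses stub nodes so it runs stand-alone:

```python
def tree_depth(node):
    "Maximum `depth` over all leaves reachable from `node`."
    children = [getattr(node, k) for k in ('le', 'gt') if getattr(node, k, None)]
    if not children:
        return node.depth
    return max(tree_depth(c) for c in children)

# demo on a tiny hand-built, unbalanced tree of plain stub objects
class N: pass
root, a, b, c = N(), N(), N(), N()
root.depth, a.depth, b.depth, c.depth = 1, 2, 2, 3
root.le, root.gt = a, b  # both children at depth 2...
b.le = c                 # ...but one branch goes a level deeper
print(tree_depth(root))  # 3
```

On a fitted tree, `tree_depth(test_tree.node)` would give the actual depth to compare against `np.log2(test_tree.data.x_rows)`.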
```
test_tree = DecisionTree(train_data.tail(10), min_leaf_samples=1).fit()
print_tree(test_tree)
```
Get predictions for all of the data we trained on.
Although we set `min_leaf_samples=1`, not every sample gets its own leaf. If 2 or more samples have:
- the same values for all independent variables, and
- different values for the dependent variable,
they will end up in the same leaf (because we can't find a value to split on), and that leaf will predict the mean of the dependent variable over all samples in it.
So we expect preds to be nearly 100% correct:
- loss nearly zero
- the predictions-vs-actuals plot a straight line with just a little variation
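This same-x, different-y case can be demonstrated with plain NumPy, independent of the tree code: no threshold can separate two rows that agree on every feature, so the best any leaf can do is their mean:

```python
import numpy as np

x = np.array([1.0, 1.0])   # identical independent variables
y = np.array([2.0, 4.0])   # different dependent variables
# no split threshold on x separates the two rows, so both share one leaf,
# and that leaf's prediction is the mean of their targets
leaf_pred = y.mean()
print(leaf_pred)  # 3.0
```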
```
test_tree = DecisionTree(train_data.tail(2000), min_leaf_samples=1).fit()
test_preds = test_tree.predict(test_tree.data.x)
loss = rmse(test_preds, test_tree.data.y); print('loss', loss)
import matplotlib.pyplot as plt
plt.scatter(test_preds, test_tree.data.y, alpha=.1);
```
Get predictions for all of the data we trained on again, but allow a minimum of 5 items in each leaf. Now we expect:
- a non-zero loss
- some variance in the predictions vs actual plot
```
test_tree = DecisionTree(train_data.tail(2000), min_leaf_samples=5).fit()
test_preds = test_tree.predict(test_tree.data.x)
loss = rmse(test_preds, test_tree.data.y); print('loss', loss)
plt.scatter(test_preds, test_tree.data.y, alpha=.1);
```
Get predictions for the validation data - we don't expect a single tree to be very good
```
test_preds = test_tree.predict(valid_data.x)
loss = rmse(test_preds, valid_data.y); print('loss', loss)
plt.scatter(test_preds, valid_data.y, alpha=.1);
```
## Tree Ensemble (AKA Random Forest)
```
#export
class TreeEnsemble():
def __init__(self, data, sample_size, max_depth=None, min_leaf_samples=3, n_trees=10, col_idxs_fn=None):
self.data, self.sample_size, self.max_depth, self.min_leaf_samples, self.n_trees = \
data, sample_size, max_depth, min_leaf_samples, n_trees
if col_idxs_fn is None:
n_cols = int(data.x_cols*0.5)
col_idxs_fn = partial(np.random.choice, size=n_cols, replace=False)
self.col_idxs_fn = col_idxs_fn  # keep a reference and pass the same sampler on to every tree
self.trees = []
for i in range(n_trees):
sample_idxs = np.random.permutation(data.x_rows)[:sample_size]
sample_data = DataWrapper.from_data_wrapper(data, sample_idxs)
self.trees.append(DecisionTree(sample_data, max_depth, min_leaf_samples, self.col_idxs_fn))
def fit(self, max_workers=12):
if max_workers == 0:
[t.fit() for t in self.trees]
else:
with ProcessPoolExecutor(max_workers=max_workers) as executor:
self.trees = list(executor.map(DecisionTree.fit, self.trees))
return self
def predict_row(self, row):
return np.array([t.predict_row(row) for t in self.trees]).mean()
def predict(self, rows):
return np.array([self.predict_row(rows[i]) for i in range(len(rows))])
def __repr__(self):
return f'tEnsemble(data={self.data} n_trees={self.n_trees} sample_size={self.sample_size} max_depth={self.max_depth} min_leaf_samples={self.min_leaf_samples})'
```
Create a tree ensemble and check that it has initialized correctly
```
test_ensemble = TreeEnsemble(train_data.tail(2000), sample_size=750, min_leaf_samples=5)
assert test_ensemble.sample_size == 750 == test_ensemble.trees[0].data.x_rows == len(test_ensemble.trees[0].data.y)
assert len(test_ensemble.col_idxs_fn(test_ensemble.data.all_x_col_idxs)) == 9
assert len(test_ensemble.trees) == 10
```
Fit the ensemble and get predictions for all of the data we trained on - TODO: I'd expect this to be better than a single tree but the loss is the same.
```
test_ensemble.fit()
test_preds = test_ensemble.predict(test_ensemble.data.x)
loss = rmse(test_preds, test_ensemble.data.y); print('loss', loss)
plt.scatter(test_preds, test_ensemble.data.y, alpha=.1);
```
Get predictions for the validation data - expect this to be better than a single tree.
```
test_preds = test_ensemble.predict(valid_data.x)
loss = rmse(test_preds, valid_data.y); print('loss', loss)
plt.scatter(test_preds, valid_data.y, alpha=.1);
```
TODO
- add classification capability
- confidence based on tree pred variance
- feature importance
- jumble single column -> create preds - which column makes preds the worst when jumbled
- WHAT are you forgetting?
- is it which split/feature contributes the biggest change from "bias" to "pred"
- avg depth of feature in tree <- i just made this up
- show dendrogram of rank correlation
- partial dependence
- ggplot if monotonic relationship
- do the "what if" preds - i.e. change year of sale to 1960 and see what things would have sold for
- pdp plot
- tree interpret
- for single row pred: print contribution of each split (feature) to final result
- waterfall chart
# Basic Usage of SciPy
This article covers the parts of SciPy beyond NumPy. Reference:
- [浅尝则止 - SciPy科学计算 in Python](https://zhuanlan.zhihu.com/p/102395401)
SciPy builds on NumPy and provides many modules for mathematical, scientific, and engineering computing, including but not limited to: linear algebra, ordinary differential equation solving, signal processing, image processing, and sparse matrices.
Installation:
```Shell
conda install -c conda-forge scipy
```
## Constants
First, the physical constants: scipy includes a large number of them.
```
#Constants.py
from scipy import constants as C
print("c =",C.c) #光在真空中的传播速度
print("g =",C.g) #重力常数
```
Here, physical_constants is a dictionary keyed by the name of each physical constant; each value is a three-element tuple of the constant's value, its unit, and its uncertainty. The program below prints all of the physical constants. For example, the electron volt constant gives the conversion between electron volts and joules.
```
#EnumConstants.py
from scipy import constants as C
for k,v in C.physical_constants.items():
print(k,v)
```
The constants module also helps with unit conversion: its conversion constants translate imperial units and non-standard metric units into standard metric (SI) units:
```
#Unit.py
from scipy import constants as C
print("C.mile =",C.mile) #一英里等于多少米
print("C.gram =",C.gram) #一克等于多少千克
print("C.pound =",C.pound) #一磅等于多少千克
print("C.gallon =",C.gallon) #一加仑等于多少立方米
```
## Interpolation
This section also draws on:
- [Interpolation (scipy.interpolate)](https://docs.scipy.org/doc/scipy/reference/tutorial/interpolate.html)
- [样条插值](https://zh.wikipedia.org/wiki/%E6%A0%B7%E6%9D%A1%E6%8F%92%E5%80%BC)
- [Scipy插值](https://www.yiibai.com/scipy/scipy_interpolate.html?app=post&act=new)
- [Scipy 学习 第一篇:插补](https://www.cnblogs.com/ljhdo/p/4531844.html)
Both interpolation and fitting (e.g. least-squares fitting) try to estimate unknown values from known discrete experimental data. Unlike fitting, interpolation requires the curve to pass through every known data point. The interpolate module is used for this.
Interpolation is the process of finding values between two points on a line or curve. As a mnemonic, think of the prefix "inter" as "enter", a reminder to look "inside" the original data. Interpolation is useful not only in statistics but also in science and business, or anywhere one needs to predict a value between two existing data points.
Start with the simplest case, one-dimensional interpolation. The example generates an arithmetic sequence of 10 elements on [0,10] together with their sines, simulating 10 experimental data points. Their scatter plot is the "points" subplot in the figure below.
Plotting requires the matplotlib library; visualization is covered in more detail later — for now, just install and use it:
```Shell
conda install -c conda-forge matplotlib
```
```
#Interpolate.py
from scipy import interpolate
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
x10 = np.linspace(0,10,10)
y10 = np.sin(x10)
plt.figure(figsize=(12,6))
ax = plt.subplot(231)  # 2 rows x 3 columns, create subplot at position 1
ax.scatter(x10,y10,c='black')  # scatter plot
ax.set_title("points")
```
Suppose these 10 points were measured and recorded in some scientific experiment. We now want to infer the functional relationship between y and x from them, and use it to compute y for other values of x. This process is what we call "interpolation". At first glance the points look scattered and irregular, and the underlying function is hard to guess. For a one-variable relationship, interpolation can be done with the interp1d type. Note that **interp1d is a class, not a function**; interp1d() invokes the class's constructor.
```
#Interpolate.py
x100 = np.linspace(0,10,100)  # arithmetic sequence of 100 elements on [0,10]
colors = ['red','green','blue','purple']
for i,kind in enumerate(['nearest','zero','slinear','quadratic']):
f = interpolate.interp1d(x10,y10,kind=kind)  # interpolate from the 10 experimental points
print("type of f:",type(f))
y100 = f(x100)  # apply the interpolant to compute "function" values at 100 points
ax = plt.subplot(232+i)  # 2 rows x 3 columns, subplot at position 2+i
ax.scatter(x10,y10)
ax.plot(x100,y100,c=colors[i])  # line plot of the 100 points to show the "function"
ax.set_title(kind)
plt.subplots_adjust(left=0.05,right=0.95,bottom=0.05,top=0.95,
wspace=0.2,hspace=0.2)  # adjust subplot spacing
```
The interpolate.interp1d constructor accepts (x, y, kind) among other parameters: x and y supply the experimental data points, and **kind specifies the interpolation type**. The constructor returns an object f that encapsulates the interpolated "functional relationship". f is callable, i.e. it acts like a function: f(x100) feeds the 100-element arithmetic sequence on [0,10] to f, yielding y100, whose values are the interpolated estimates.
Beyond basic one-dimensional interpolation, spline interpolation is also widely used. Spline interpolation is interpolation with a special piecewise polynomial called a spline. Because spline interpolation **achieves small interpolation error with low-degree polynomial pieces**, it avoids the Runge phenomenon that afflicts high-degree polynomials, which is why it became popular.
To draw smooth curves through data points, draughtsmen once used thin flexible strips of wood, hard rubber, metal, or plastic, called mechanical splines. To use a mechanical spline, pins were placed at selected points along the curve, and the spline was bent so that it touched each pin.
Clearly, in this construction the spline interpolates the curve at the pins, and it could be used to reproduce the curve in other drawings. The points where the pins sit are called knots; the shape of the curve defined by the spline can be changed by adjusting the knot positions.
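The Runge phenomenon mentioned above is easy to demonstrate: interpolate Runge's function 1/(1+25x²) at equally spaced points with one high-degree polynomial and with a cubic spline, and compare worst-case errors (a sketch for illustration; the exact numbers vary, but the polynomial error explodes near the interval ends):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.linspace(-1, 1, 11)
y = 1 / (1 + 25 * x**2)          # Runge's function at 11 equispaced points
fine = np.linspace(-1, 1, 401)
truth = 1 / (1 + 25 * fine**2)

poly = np.polyval(np.polyfit(x, y, 10), fine)     # one degree-10 polynomial
spline = UnivariateSpline(x, y, k=3, s=0)(fine)   # piecewise-cubic spline

print(np.abs(poly - truth).max())    # large oscillations near the endpoints
print(np.abs(spline - truth).max())  # much smaller error
```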
Working with a one-dimensional spline involves two basic steps: compute the spline representation of the curve, then evaluate it at the target points. scipy offers two ways to compute a curve's spline representation and smoothing coefficients: a direct way and a parametric way.
The direct way uses the **splrep() function** to find the spline representation of a curve in the 2-D plane, where x and y are the curve's coordinates in a two-dimensional coordinate system.
splrep() determines, for a given set of data points (x[i], y[i]), a smooth spline approximation of degree k on the interval xb <= x <= xe.
The function outputs a 3-tuple tck — the spline representation of the curve — containing the knot vector, the coefficients, and the spline degree. The default spline order is cubic, which can be changed via the k parameter. Once the curve's spline representation is determined, splev() can evaluate it at given x.
```
import matplotlib.pyplot as plt
from scipy.interpolate import splev, splrep
x = np.linspace(0, 10, 10)
y = np.sin(x)
spl = splrep(x, y)
x2 = np.linspace(0, 10, 200)
y2 = splev(x2, spl)
plt.plot(x, y, 'o', x2, y2)
```
The parametric way handles curves in N-dimensional space with the splprep() function, which parametrizes the curve. By default it returns two objects: the first is a 3-tuple (t, c, k) holding the curve's knot vector, coefficients, and spline degree; the second is a parameter array u. The spline representation returned by splprep() is evaluated with splev().
```
from scipy.interpolate import splprep, splev
tck, u = splprep([x, y], s=0)
new_points = splev(u, tck)
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(x, y, 'ro')
ax.plot(new_points[0], new_points[1], 'r-')
```
interp1d's interpolation curve must pass through all experimental data points, and it cannot extrapolate, i.e. compute function values outside the range of the data. The UnivariateSpline class is more capable than interp1d: it allows extrapolation and fitting (the curve need not pass through the experimental points).
A one-dimensional spline is an object of the UnivariateSpline class, constructed from the curve's x and y. The class defines __call__, so the object can be called with x-axis values, at which the spline is evaluated, returning the interpolated y values.
By supplying a nonzero value for the smoothing parameter s, the UnivariateSpline class can also be used to smooth the data, with the same meaning as the s keyword of the splrep function. This produces fewer knots than data points, so the result is no longer strictly an interpolating spline but a smoothing spline. If that is not desired, the InterpolatedUnivariateSpline class can be used. It is a subclass of UnivariateSpline that always passes through all points (equivalent to forcing the smoothing parameter to 0).
The LSQUnivariateSpline class is another subclass of UnivariateSpline. It lets the user explicitly specify the number and positions of the interior knots via the parameter t. This allows creating custom splines with nonlinear knot spacing — interpolating in some regions while smoothing in others — or otherwise changing the spline's character.
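A minimal sketch of LSQUnivariateSpline, assuming sine test data — the interior knots t are chosen by hand and must lie strictly inside the range of x:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

x = np.linspace(0, 10, 50)
y = np.sin(x)
t = [2, 4, 6, 8]                    # user-chosen interior knots, inside (0, 10)
spl = LSQUnivariateSpline(x, y, t)  # least-squares cubic spline on these knots
print(float(spl(5.0)))              # approximately sin(5.0)
```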
Interpolation and extrapolation on noiseless standard sine data points are shown in the figure below.
```
#UnivariateSpline.py
import numpy as np
from scipy import interpolate
from matplotlib import pyplot as plt
plt.figure(figsize=(12,4))
x = np.linspace(0,10,20)  # arithmetic sequence of 20 values on [0,10]
y = np.sin(x)  # y = sine of x
plt.scatter(x,y,s=20,label="points")  # scatter plot
xs = np.linspace(0,12,100)  # 100 values on [0,12]; 12 > 10, so we extrapolate
ys = interpolate.UnivariateSpline(x,y,s=0)(xs)  # interpolate from (x,y), apply to xs
plt.plot(xs,ys,lw=2,label="spline,s=0")  # plot (xs,ys); ys computed by the interpolant
plt.plot(xs,np.sin(xs),lw=2,label="sin(x)")  # plot the true sine (xs, np.sin(xs))
plt.legend()  # show the legend
plt.show()
```
ys = interpolate.UnivariateSpline(x,y,s=0)(xs): like interp1d, UnivariateSpline is a class whose constructor takes the experimental points (x, y) and returns an interpolation object. This object has type UnivariateSpline and is also callable, i.e. a function. For clarity, this line can be split in two:
```Python
f = interpolate.UnivariateSpline(x,y,s=0)
ys = f(xs)
```
The experimental x values span [0,10], while xs spans [0,12], beyond the original range.
The full constructor signature is UnivariateSpline(x, y, w=None, bbox=[None, None], k=3, s=None, ext=0, check_finite=False): w assigns a weight to each data point; k (default 3) specifies the spline degree; s is the smoothing factor.
When s > 0, the spline curve need not pass through the experimental points — this amounts to curve fitting. When **s = 0, the spline must pass through the experimental points**. In this example s=0, and we see that all the experimental points lie on the curve labeled "spline,s=0".
In practice, error is always present — experimental data points can be assumed to always contain noise. Next, we add some noise to the points and again try to interpolate and fit.
```
#UnivariateSpline2.py
import numpy as np
from scipy import interpolate
from matplotlib import pyplot as plt
plt.figure(figsize=(12,4))
x = np.linspace(0,20,200)  # arithmetic sequence of 200 values on [0,20]
y = np.sin(x) + np.random.standard_normal(len(x))*0.16  # noisy sine
plt.scatter(x,y,s=3,label="noisy points")  # scatter plot of the noisy experimental points
xs = np.linspace(0,23,2000)  # 2000 values on [0,23]
ys = interpolate.UnivariateSpline(x,y,s=8)(xs)  # interpolate from (x,y), apply to xs
plt.plot(xs,ys,lw=2,label="spline,s=8")  # plot (xs,ys); ys computed by the interpolant
plt.plot(xs,np.sin(xs),lw=2,label="sin(x)")  # plot the true sine (xs, np.sin(xs))
plt.legend()
plt.show()
```
With the smoothing parameter set to s=8, the spline is allowed to miss the experimental points. Because of the noise, even within the data range [0,20] the interpolant does not coincide exactly with the true sine curve; in the extrapolated range [20,23] it deviates far more.
## Permutations and Combinations
Computing the numeric value of a permutation or combination with scipy is straightforward:
```
from scipy.special import comb, perm
perm(3, 2)
comb(3, 2)
```
To enumerate the actual permutations or combinations, use itertools:
```
from itertools import combinations, permutations
permutations([1, 2, 3], 2)
list(permutations([1, 2, 3], 2))
list(combinations([1, 2, 3], 2))
```
String items work just as easily:
```
list(combinations(["a", "b", 2], 2))
```
Items of different data types can be combined too:
```
list(combinations([{"a":2}, "b", 2], 2))
```
## Simple Statistics
For example, the histogram.
```
from scipy import stats
```
The histogram function is scipy.stats.relfreq(a, numbins=10, defaultreallimits=None, weights=None), which computes a relative frequency histogram.
A relative frequency histogram reports, for each bin, the count of values in that bin as a proportion of the total count.
```
import numpy as np
a = np.array([2, 4, 1, 2, 3, 2])
res = stats.relfreq(a, numbins=4)
res
```
You can see that the bins start at a lower limit of 0.5, and each bin is 1 wide:
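This bin layout can also be read directly off the result object (assuming the same `a` as above): `lowerlimit` is the left edge of the first bin and `binsize` the width of each bin:

```python
import numpy as np
from scipy import stats

a = np.array([2, 4, 1, 2, 3, 2])
res = stats.relfreq(a, numbins=4)
print(res.lowerlimit, res.binsize)  # 0.5 1.0
```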
```
res.frequency
```
You can also set the interval yourself:
```
res = stats.relfreq(a, numbins=4, defaultreallimits=(0,6))
res
```
When the specified interval is smaller than the range of the data, values falling outside it are left out of the bins (they are counted in the result's extrapoints):
```
res = stats.relfreq(a, numbins=4, defaultreallimits=(2,3))
res
freq = stats.relfreq(a, numbins=4, defaultreallimits=(2,3)).frequency
freq
```
Another function for computing a histogram of the data is binned_statistic. A histogram divides the space into bins and reports the count of points in each; this scipy function can additionally compute the sum, mean, median, or another statistic of the values inside each bin.
In its signature, scipy.stats.binned_statistic(x, values, statistic='mean', bins=10, range=None), x holds the values to be binned, and values is what the statistic is computed over; it must have the same shape as x.
```
values = [1.0, 1.0, 2.0, 1.5, 3.0]
stats.binned_statistic([1, 1, 2, 5, 7], values, 'sum', bins=2)
```
```
%cd -q data/actr_reco
import pandas as pd
import datetime
import numpy as np
data = [
["user1", "song1", datetime.datetime(2000, 1, 1, 0)],
["user1", "song1", datetime.datetime(2000, 1, 1, 0)],
["user1", "song2", datetime.datetime(2000, 1, 1, 1)],
["user1", "song2", datetime.datetime(2000, 1, 1, 1)],
["user1", "song2", datetime.datetime(2000, 1, 1, 1)],
["user1", "song2", datetime.datetime(2000, 1, 1, 1)],
["user1", "song1", datetime.datetime(2000, 1, 1, 2)],
["user1", "song2", datetime.datetime(2000, 1, 1, 2)],
["user1", "song1", datetime.datetime(2000, 1, 1, 2)],
["user2", "song3", datetime.datetime(2000, 1, 1, 2)],
["user2", "song3", datetime.datetime(2000, 1, 1, 2)],
["user2", "song3", datetime.datetime(2000, 1, 1, 2)],
["user2", "song3", datetime.datetime(2000, 1, 1, 2)],
["user2", "song3", datetime.datetime(2000, 1, 1, 2)],
["user2", "song3", datetime.datetime(2000, 1, 1, 2)],
["user2", "song3", datetime.datetime(2000, 1, 1, 2)],
["user2", "song3", datetime.datetime(2000, 1, 1, 2)],
]
events = pd.DataFrame(data, columns=["user", "item", "timestamp"]).set_index("user")
events["v"] = np.random.rand(len(events))
events["a"] = np.random.rand(len(events))
events["d"] = np.random.rand(len(events))
events["reward"] = np.random.choice([-1] + 9*[1], len(events))
events["session"] = np.random.choice([1] + 2*[0], len(events)).cumsum()
events.head()
example_user = "user1"
user_events = events.loc[example_user]
```
# Algorithms
```
%%writefile baseline_models.py
import numpy as np
class MostRecent:
def __str__(self):
return type(self).__name__
def recommend_next(self, user_events):
return user_events["item"].values[-1]
def recommend(self, user_events, topn):
return user_events["item"].iloc[::-1].unique().tolist()[:topn]
%run baseline_models.py
mr = MostRecent()
# Next item prediction
assert mr.recommend_next(user_events) == user_events["item"].values[-1]
mr.recommend_next(user_events)
# TopN item predictions
assert mr.recommend(user_events, 3)[0] == mr.recommend_next(user_events)
mr.recommend(user_events, 3)
%%writefile transition_models.py
class UserBasedTransitionProbability:
def __str__(self):
return type(self).__name__
def recommend_next(self, user_events):
cur_item = user_events["item"].iloc[-1]
events_on_cur_item = user_events[(user_events["item"] == cur_item).shift().fillna(False)]
if not events_on_cur_item.empty:
return events_on_cur_item["item"].mode().values[-1]
else:
# Return no recommendation
return -1
def recommend(self, user_events, topn):
cur_item = user_events["item"].iloc[-1]
events_on_cur_item = user_events[(user_events["item"] == cur_item).shift().fillna(False)]
return events_on_cur_item["item"].value_counts().index.tolist()[:topn]
%run transition_models.py
ubtp = UserBasedTransitionProbability()
assert ubtp.recommend_next(user_events)
ubtp.recommend_next(user_events)
assert ubtp.recommend(user_events, 3)[0] == ubtp.recommend_next(user_events)
ubtp.recommend(user_events, 3)
%%writefile emomem_model.py
import numpy as np
import pandas as pd
from scipy import stats, special
import operator
class DecayFitterMixin:
def fit(self, events):
delta = events.groupby(["user", "item"])["timestamp"].diff().dropna().dt.total_seconds() / 3600
delta = delta[delta != 0]
delta_bins = delta.value_counts()
log_x = np.log10(delta_bins.index.tolist())
log_y = np.log10(delta_bins.values.tolist())
slope, intercept, r_value, p_value, std_err = stats.linregress(log_x, log_y)
self.decay = -slope
return slope
class ScoreToRecommenderMixin:
"""Requires a score(self, user_events) function."""
def recommend_next(self, user_events):
item_scores = self.score(user_events)
return item_scores.idxmax()
def recommend(self, user_events, topn):
item_scores = self.score(user_events)
return item_scores.nlargest(topn).index.tolist()
class BaseLevelComponent(ScoreToRecommenderMixin, DecayFitterMixin):
"""Models occurence."""
def __init__(self, decay=0.5, time_col="timestamp"):
self.decay = decay
self.time_col = time_col
def __str__(self):
if self.decay == 0.5:
return type(self).__name__
else:
return type(self).__name__ + str(self.decay)
def score(self, user_events):
user_events = user_events.copy()
ts_ref = user_events["timestamp"].iloc[-1]
user_events["ts_diff"] = (-(user_events[self.time_col] - ts_ref) + pd.Timedelta("1hour")).dt.total_seconds()/3600
bll_scores = user_events.groupby("item", sort=False)["ts_diff"].apply(lambda x: np.sum(np.power(x.values, -self.decay)))
return bll_scores
class AssociativeComponent(ScoreToRecommenderMixin):
"""Models co-occurence."""
def __init__(self, session_col="session"):
self.session_col = session_col
def __str__(self):
return type(self).__name__
def score(self, user_events):
context_item = user_events["item"].iloc[-1]
context_sessions = set(user_events[user_events["item"] == context_item][self.session_col].unique())
num_sessions = user_events[self.session_col].nunique()
probability_of_item = user_events.groupby("item")[self.session_col].nunique() / num_sessions
def overlap(sessions):
return len(set(sessions.unique()).intersection(context_sessions))
overlap_sessions = user_events.groupby("item")[self.session_col].apply(overlap)
conditional_probability = overlap_sessions/len(context_sessions)
return conditional_probability/probability_of_item
class PartialMatchingComponent(ScoreToRecommenderMixin):
"""Models similarity."""
def __init__(self, name=None, feature_cols=None, similarity_function=np.dot):
self.name = name if name else type(self).__name__
self.feature_cols = feature_cols
self.similarity_function = similarity_function
def __str__(self):
return self.name
def score(self, user_events):
context_features = user_events[self.feature_cols].iloc[-1]
items = user_events.drop_duplicates(subset=["item"])
item_index = items["item"].values
cand_features = items[self.feature_cols].values
pm_scores = self.similarity_function(cand_features, context_features)
return pd.Series(data=pm_scores, index=item_index)
class ValuationComponent(ScoreToRecommenderMixin):
"""Models affect."""
def __init__(self, name=None, learning_rate=0.2, initial_valuation=0, reward_col="reward"):
self.name = name if name else type(self).__name__
self.learning_rate = learning_rate
self.initial_valuation = initial_valuation
self.reward_col = reward_col
def __str__(self):
return self.name
def score(self, user_events):
def update_valuation(prev, reward=1, lr=0.05):
return prev + lr * (reward - prev)
def aggregate_valuation(reward_s):
valuation = self.initial_valuation
for reward in reward_s.values:
valuation = update_valuation(valuation, reward, self.learning_rate)
return valuation
valuation_scores = user_events.groupby("item")[self.reward_col].apply(aggregate_valuation)
return valuation_scores
class NoiseComponent(ScoreToRecommenderMixin):
"""Adds randomnes."""
def __init__(self, seed=42):
self.rng = np.random.default_rng(seed)
def __str__(self):
return type(self).__name__
def score(self, user_events):
return pd.Series(data=self.rng.random(user_events["item"].nunique()), index=user_events["item"].unique())
class ActrRecommender(ScoreToRecommenderMixin):
"""Combines multiple components."""
def __init__(self, components, weights=None, softmax=True, name=None, use_normalize_trick=False):
self.components = components
self.weights = weights if weights else [1]*len(components)
self.softmax = softmax
self.name = name if name else type(self).__name__ + "(" + ",".join(map(str, self.components)) + ")"
self.use_normalize_trick = use_normalize_trick
def __str__(self):
return self.name
def score(self, user_events):
scores = pd.Series(dtype=float)
for comp, w_c in zip(self.components, self.weights):
comp_scores = comp.score(user_events)
if self.softmax:
if self.use_normalize_trick:
# https://timvieira.github.io/blog/post/2014/02/11/exp-normalize-trick/
comp_scores = comp_scores - np.max(comp_scores)
comp_scores = special.softmax(comp_scores)
comp_scores = comp_scores * w_c
scores = scores.combine(comp_scores, operator.add, 0)
return scores
%run emomem_model.py
bll_new = BaseLevelComponent(decay=2)
assert bll_new.recommend_next(user_events) == bll_new.recommend(user_events, 3)[0]
assoc = AssociativeComponent()
assert assoc.recommend_next(user_events) == assoc.recommend(user_events, 3)[0]
emo_new = PartialMatchingComponent(feature_cols=["v", "a", "d"])
assert emo_new.recommend_next(user_events) == emo_new.recommend(user_events, 3)[0]
valu = ValuationComponent()
assert valu.recommend_next(user_events) == valu.recommend(user_events, 3)[0]
noise = NoiseComponent()
assoc.recommend(user_events, 3), bll_new.recommend(user_events, 3), emo_new.recommend(user_events, 3), valu.recommend(user_events, 3), noise.recommend(user_events, 3),
actr = ActrRecommender([bll_new, assoc, emo_new, valu], weights=[2, 1, 1, 1], softmax=True)
print(actr)
assert actr.recommend_next(user_events) == actr.recommend(user_events, 3)[0]
actr.recommend(user_events, 3)
def valuation(prev, reward=1, lr=0.05):
return prev + lr * (reward - prev)
val = 0
normal_reward = 1
alt_reward = -1
alt_sim = False
# alt_sim = 8*[0]+2*[1]
for i in range(100):
if alt_sim and np.random.choice(alt_sim): # simulate negative rewards
val = valuation(val, alt_reward)
print("alt: " + str(val))
continue
val = valuation(val, normal_reward)
print(val)
ts = range(1, 1000)
tas = 0
for t in ts:
a = np.power(t, -0.5)
tas += a
print(tas)
print(np.log(tas))
np.power(3, -np.log(tas)), tas
def valuation(prev, reward):
return prev + 0.05 * (reward - prev)
rew_list = np.random.choice([-1]+[1]*9, 100)
valuation_ufunc = np.frompyfunc(valuation, 2, 1)
valuation_ufunc.reduce(rew_list)
```
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from tqdm import tqdm as tqdm
%matplotlib inline
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import random
from torch.utils.data import Dataset, DataLoader
from google.colab import drive
drive.mount('/content/drive')
x11 = np.random.uniform(low=[0,0], high = [0.5,0.5],size =(15,2) )
x12 = np.random.uniform(low=[0.5,0.5], high = [1,1],size =(15,2) )
x2 = np.random.uniform(low = [0,1.5] , high = [1,2.5],size=(30,2))
x3 = np.random.uniform(low = [0,3] , high = [1,4],size=(30,2))
x4 = np.random.uniform(low = [2,0] , high = [3,1],size=(30,2))
x5 = np.random.uniform(low = [2,1.5] , high = [3,2.5],size=(30,2))
x6 = np.random.uniform(low = [2,3] , high = [3,4],size=(30,2))
x7 = np.random.uniform(low = [4,0] , high = [5,1],size=(30,2))
x8 = np.random.uniform(low = [4,1.5] , high = [5,2.5],size=(30,2))
x9 = np.random.uniform(low = [4,3] , high = [5,4],size=(30,2))
plt.scatter(x11[:,0],x11[:,1])
plt.scatter(x12[:,0],x12[:,1])
plt.scatter(x2[:,0],x2[:,1])
plt.scatter(x3[:,0],x3[:,1])
plt.scatter(x4[:,0],x4[:,1])
plt.scatter(x5[:,0],x5[:,1])
plt.scatter(x6[:,0],x6[:,1])
plt.scatter(x7[:,0],x7[:,1])
plt.scatter(x8[:,0],x8[:,1])
plt.scatter(x9[:,0],x9[:,1])
y11 = np.zeros(15)
y12 = np.ones(15)
Y2_ = []
for i in range(8):
idx = np.random.randint(0,30,size=15)
y2 = np.ones(30)
y2[idx] = 0
Y2_.append(y2)
Y2_ = np.concatenate(Y2_,axis=0)
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
X_train = np.concatenate((x11,x12,x2,x3,x4,x5,x6,x7,x8,x9))
Y_train = np.concatenate((y11,y12,Y2_))
X_train.shape,Y_train.shape
plt.scatter(X_train[Y_train==0,0],X_train[Y_train==0,1],label = "class_0")
plt.scatter(X_train[Y_train==1,0],X_train[Y_train==1,1],label = "class_1")
plt.legend()
x11 = np.random.uniform(low=[0,0], high = [0.5,0.5],size =(15,2) )
x12 = np.random.uniform(low=[0.5,0.5], high = [1,1],size =(15,2) )
x2 = np.random.uniform(low = [0,1.5] , high = [1,2.5],size=(30,2))
x3 = np.random.uniform(low = [0,3] , high = [1,4],size=(30,2))
x4 = np.random.uniform(low = [2,0] , high = [3,1],size=(30,2))
x5 = np.random.uniform(low = [2,1.5] , high = [3,2.5],size=(30,2))
x6 = np.random.uniform(low = [2,3] , high = [3,4],size=(30,2))
x7 = np.random.uniform(low = [4,0] , high = [5,1],size=(30,2))
x8 = np.random.uniform(low = [4,1.5] , high = [5,2.5],size=(30,2))
x9 = np.random.uniform(low = [4,3] , high = [5,4],size=(30,2))
plt.scatter(x11[:,0],x11[:,1])
plt.scatter(x12[:,0],x12[:,1])
plt.scatter(x2[:,0],x2[:,1])
plt.scatter(x3[:,0],x3[:,1])
plt.scatter(x4[:,0],x4[:,1])
plt.scatter(x5[:,0],x5[:,1])
plt.scatter(x6[:,0],x6[:,1])
plt.scatter(x7[:,0],x7[:,1])
plt.scatter(x8[:,0],x8[:,1])
plt.scatter(x9[:,0],x9[:,1])
yt11 = np.zeros(15)
yt12 = np.ones(15)
Yt2_ = []
for i in range(8):
idx = np.random.randint(0,30,size=15)
yt2 = np.ones(30)
yt2[idx] = 0
Yt2_.append(yt2)
Yt2_ = np.concatenate(Yt2_,axis=0)
Y_test = np.concatenate((yt11,yt12,Yt2_))
X_test = np.concatenate((x11,x12,x2,x3,x4,x5,x6,x7,x8,x9))
class Grid_data(Dataset):
def __init__(self,x,y):
self.x = torch.Tensor(x)
self.y = torch.Tensor(y).type(torch.LongTensor)
def __len__(self):
return len(self.x)
def __getitem__(self,idx):
self.dx = self.x[idx,:]
self.dy = self.y[idx]
return self.dx, self.dy
trainset = Grid_data(X_train,Y_train)
trainloader = DataLoader(trainset,batch_size=10,shuffle = False)
inputs, label = next(iter(trainloader))
class Net(nn.Module):
def __init__(self):
super(Net,self).__init__()
self.linear1 = nn.Linear(2,24)
self.linear2 = nn.Linear(24,64)
self.linear3 = nn.Linear(64,128)
self.linear4 = nn.Linear(128,256)
self.linear5 = nn.Linear(256,128)
self.linear6 = nn.Linear(128,64)
self.linear7 = nn.Linear(64,32)
self.linear8 = nn.Linear(32,16)
self.linear9 = nn.Linear(16,2)
def forward(self,x):
x = F.relu(self.linear1(x))
x = F.relu(self.linear2(x))
x = F.relu(self.linear3(x))
x = F.relu(self.linear4(x))
x = F.relu(self.linear5(x))
x = F.relu(self.linear6(x))
x = F.relu(self.linear7(x))
x = F.relu(self.linear8(x))
x = self.linear9(x)
return x
net = Net()
# net(inputs)
net = net.to("cuda")
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)#, momentum=0.9)
loss_curi = []
epochs_nos= 6000
for epoch in range(epochs_nos): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2 == 1:    # print every 2 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss/2 ))
ep_lossi.append(running_loss/2) # loss per minibatch
running_loss = 0.0
loss_curi.append(np.mean(ep_lossi)) #loss per epoch
# if (epoch%5 == 0):
# _,actis= inc(inputs)
# acti.append(actis)
print('Finished Training')
correct = 0
total = 0
out = []
pred = []
with torch.no_grad():
for data in trainloader:
images, labels = data
images, labels = images.to("cuda"), labels.to("cuda")
out.append(labels.cpu().numpy())
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d train images: %d %%' % (
    total, 100 * correct / total))
total,correct
out = np.concatenate(out,axis=0)
pred = np.concatenate(pred,axis=0)
out[:30]
pred[:30]
X_axis,Y_axis = np.meshgrid(np.arange(0,5,0.01),np.arange(0,5,0.01))
X_mesh = np.concatenate( (X_axis.reshape((-1,1)), Y_axis.reshape(-1,1)), axis=1 )
Y_mesh = np.zeros(X_mesh.shape[0])
mesh_set = Grid_data(X_mesh,Y_mesh)
meshloader = DataLoader(mesh_set,batch_size=1000,shuffle=False)
total = 0
mesh_pred = []
with torch.no_grad():
for data in meshloader:
images, _ = data
images = images.to("cuda")
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
mesh_pred.append(predicted.cpu().numpy())
total += images.size(0)
print("finished")
mesh_pred = np.concatenate(mesh_pred,axis=0)
# mesh_pred = mesh_pred.reshape(X.shape)
# plt.axis('equal')
plt.scatter(X_mesh[:,0],X_mesh[:,1],c= mesh_pred, cmap = 'RdGy' )
plt.scatter(X_train[Y_train==1,0], X_train[Y_train==1,1] ,c="red")
plt.scatter(X_train[Y_train==0,0], X_train[Y_train==0,1],c= "green" )
X_test.shape
testset = Grid_data(X_test,Y_test)
testloader = DataLoader(testset,batch_size=10,shuffle=False)
correct = 0
total = 0
out = []
pred = []
with torch.no_grad():
for data in testloader:
images, labels = data
images, labels = images.to("cuda"), labels.to("cuda")
out.append(labels.cpu().numpy())
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d test images: %d %%' % (270, 100 * correct / total))
plt.scatter(X_mesh[:,0],X_mesh[:,1],c= mesh_pred, cmap = "Greys")
plt.scatter(X_test[Y_test==1,0], X_test[Y_test==1,1] ,c="Blue")
plt.scatter(X_test[Y_test==0,0], X_test[Y_test==0,1],c= "green" )
torch.save(net.state_dict(),"/content/drive/My Drive/Research/confounded_noise/weights/model_"+str(epochs_nos)+".pkl")
```
# Trigonometry
```
import numpy as np
import matplotlib.pyplot as plt
```
## Contents
- [Sine, cosine and tangent](#Sine_cosine_and_tangent)
- [Measurements](#Measurements)
- [Small angle approximation](#Small_angle_approximation)
- [Trigonometric functions](#Trigonometric_functions)
- [More trigonometric functions](#More_trigonometric_functions)
- [Identities](#Identities)
- [Compound angles](#Compound_angles)
<a id='Sine_cosine_and_tangent'></a>
### Sine, cosine and tangent
Sine:
- $\sin\theta = \frac{opp}{hyp}$
- with triangle with angles A, B and C, and lines a, b and c opposite their respective angles
- $\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}$
Cosine:
- $\cos\theta = \frac{adj}{hyp}$
- with triangle with angles A, B and C, and lines a, b and c opposite their respective angles
- $a^2 = b^2 + c^2 - 2bc \cos A$
- $b^2 = a^2 + c^2 - 2ac \cos B$
- $c^2 = a^2 + b^2 - 2ab \cos C$
- $\cos A = \frac{b^2 + c^2 - a^2}{2bc}$
Tangent:
- $\tan\theta = \frac{opp}{adj}$
Area of triangle:
- $\frac{1}{2}ab\sin C$
<a id='Measurements'></a>
### Measurements
Radians:
- 1 radian = the angle subtended when the arc opposite the angle has length $r$ (between 2 points on the circumference)
- since $c = 2\pi r$, one full circumference corresponds to $2\pi$ radians
Arc Length:
- length, s, of arc on circumference with angle $\theta$ in radians
- $s = r\theta$
Area of Sector:
- area, a, of sector with angle $\theta$ in radians
- $\frac{1}{2} r^2\theta$
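A quick numeric check of the arc-length and sector-area formulas (the radius and angle below are chosen arbitrarily):

```python
import numpy as np

r, theta = 3.0, np.pi / 4       # radius 3, angle 45 degrees in radians
s = r * theta                   # arc length
area = 0.5 * r**2 * theta       # sector area
print(s, area)
```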
<a id='Small_angle_approximation'></a>
### Small angle approximation
when $\theta \approx 0$ (in radians)
or in the limit $\theta \to 0$
$\sin \theta \approx \theta$
$\cos \theta \approx 1 - \frac{\theta^2}{2} \approx 1$
$\tan \theta \approx \theta$
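These approximations are easy to verify numerically, e.g. at $\theta = 0.1$ rad:

```python
import numpy as np

theta = 0.1                                # a small angle, in radians
print(np.sin(theta), theta)                # very close
print(np.cos(theta), 1 - theta**2 / 2)     # very close
print(np.tan(theta), theta)                # very close
```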
<a id='Trigonometric_functions'></a>
### Trigonometric functions
arcsin, arccos and arctan are the inverse functions (mapping a ratio back to an angle)
Domains and ranges:
sin:
- $\theta \in \mathbb{R}$
- $-1 \le \sin\theta \le 1$
- $-\frac{\pi}{2} \le \arcsin x \le \frac{\pi}{2}$
cos:
- $\theta \in \mathbb{R}$
- $-1 \le \cos\theta \le 1$
- $0 \le \arccos x \le \pi$
tan:
- $\theta \not= \frac{\pi}{2}, \frac{3\pi}{2} \dots$
- $\tan\theta$ can take any real value (the range is all of $\mathbb{R}$)
- $-\frac{\pi}{2} < \arctan x < \frac{\pi}{2}$
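The stated ranges can be spot-checked with NumPy's inverse functions:

```python
import numpy as np

assert np.isclose(np.arcsin(1.0), np.pi / 2)     # top of arcsin's range
assert np.isclose(np.arccos(-1.0), np.pi)        # top of arccos's range
assert -np.pi / 2 < np.arctan(1e6) < np.pi / 2   # arctan never leaves (-pi/2, pi/2)
```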
#### Graphing:
```
fig, ax = plt.subplots(1, 3, figsize=(13,4))
x = np.linspace(0, 2*np.pi, 200).astype(np.float32)  # num must be an integer
ax[0].plot(x, np.sin(x), label='sin')
ax[1].plot(x, np.cos(x), label='cos')
ax[2].plot(x, np.tan(x), label='tan')
ax[0].plot(x, np.arcsin(np.sin(x)), label='arcsin')
ax[1].plot(x, np.arccos(np.cos(x)), label='arccos')
ax[2].plot(x, np.arctan(np.tan(x)), label='arctan')
for axes in ax:
axes.grid(True)
axes.legend()
plt.show()
```
<a id='More_trigonometric_functions'></a>
### More trigonometric functions
Secant:
- $\sec \theta = \frac{1}{\cos \theta}$
Cosecant:
- $\mathrm{cosec} \theta = \frac{1}{\sin \theta}$
Cotangent:
- $\cot \theta = \frac{1}{\tan\theta} = \frac{\cos\theta}{\sin\theta}$
#### Graphing:
```
fig, ax = plt.subplots(1, 3, figsize=(13,4))
x = np.linspace(0, 2*np.pi, 200)  # num must be an integer
ax[0].plot(x, 1/np.cos(x), label='$sec$')
ax[1].plot(x, 1/np.sin(x), label='cosec')
ax[2].plot(x, np.cos(x)/np.sin(x), label='cot')
for axes in ax:
axes.grid(True)
axes.set_ylim([-20,20])
axes.legend()
plt.show()
```
<a id='Identities'></a>
### Identities
$\tan\theta = \frac{\sin\theta}{\cos\theta}$
$\sin^2\theta + \cos^2\theta = 1$
$\sec^2\theta = 1 + \tan^2\theta$
$\mathrm{cosec}^2\theta = 1 + \cot^2\theta$
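All four identities can be spot-checked numerically at an arbitrary angle:

```python
import numpy as np

theta = 0.7
assert np.isclose(np.tan(theta), np.sin(theta) / np.cos(theta))
assert np.isclose(np.sin(theta)**2 + np.cos(theta)**2, 1)
assert np.isclose(1 / np.cos(theta)**2, 1 + np.tan(theta)**2)        # sec^2
assert np.isclose(1 / np.sin(theta)**2, 1 + 1 / np.tan(theta)**2)    # cosec^2
```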
<a id='Compound_angles'></a>
### Compound angles
Sin:
- $\sin(A+B) = \sin A\cos B + \cos A\sin B$
- $\sin(2A) = 2\sin A\cos A$
Cos:
- $\cos(A+B) = \cos A\cos B - \sin A\sin B$
- $\cos(2A) = \cos^2A - \sin^2A$
$= 2\cos^2A - 1$
$= 1 - 2\sin^2A$
Tan:
- $\tan(A+B) = \frac{\tan A + \tan B}{1 - \tan A\tan B}$
- $\tan(2A) = \frac{2\tan A}{1 - \tan^2A}$
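A numeric spot-check of the compound- and double-angle formulas:

```python
import numpy as np

A, B = 0.4, 1.1
assert np.isclose(np.sin(A + B), np.sin(A)*np.cos(B) + np.cos(A)*np.sin(B))
assert np.isclose(np.cos(A + B), np.cos(A)*np.cos(B) - np.sin(A)*np.sin(B))
assert np.isclose(np.cos(2*A), np.cos(A)**2 - np.sin(A)**2)
assert np.isclose(np.tan(2*A), 2*np.tan(A) / (1 - np.tan(A)**2))
```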
### $r\cos(\theta+\alpha)$
useful to rewrite:
$a\cos \theta + b\sin \theta = r\cos(\theta+\alpha)$
$r\cos(\theta + \alpha) = r \cos \alpha \cos \theta - r \sin \alpha \sin \theta$
so, matching coefficients:
$r \cos \alpha \cos \theta = a\cos \theta$
$\therefore$ $r \cos \alpha = a$
$- r \sin \alpha \sin \theta = b\sin \theta$
$\therefore$ $r \sin \alpha = -b$
Then solve as simultaneous equations:
solving for $\alpha$ (divide the two equations):
$\frac{\sin \alpha}{\cos \alpha} = \frac{-b}{a}$
$\tan \alpha = -\frac{b}{a}$
solving for $r$ (square and add):
$r^2\cos^2\alpha + r^2\sin^2\alpha = a^2+b^2$
$r^2 = a^2+b^2$
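A numeric check of the rewrite, writing the angle as `alpha` to keep it distinct from the coefficient `a` (since $r\cos\alpha = a$ and $r\sin\alpha = -b$, `np.arctan2(-b, a)` picks the correct quadrant):

```python
import numpy as np

a, b = 3.0, 4.0
r = np.sqrt(a**2 + b**2)               # 5.0
alpha = np.arctan2(-b, a)
theta = np.linspace(0, 2 * np.pi, 100)
assert np.allclose(a*np.cos(theta) + b*np.sin(theta), r*np.cos(theta + alpha))
```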
## __INTRODUCTION__
### __ARTIFICIAL NEURAL NETWORKS__
* ML models that have a graph structure, inspired by the structure of the brain, with many interconnected units called artificial neurons https://www.youtube.com/watch?v=3JQ3hYko51Y
* ANNs have the ability to learn from raw data inputs, but this also makes them slower
### __TENSORFLOW__
* Created and maintained by Google,
* Different APIs (Application Programming Interface)
* (a) low level graph API
* (b) High level Keras API
* TF on GPUs
* it requires a different version of the library,
* available in conda,
> conda install tensorflow-gpu==1.12.0 #or newer version
* requires: a compatible NVIDIA graphics card
* the list of available cards is here: https://developer.nvidia.com/cuda-gpus
### __COMPUTATION GRAPHS__
* basic concept used in TF to specify how different elements interact with each other
- example:
    + we wish to implement linear regression
      y = ax + b, where a and b are the slope and intercept parameters,
      x are input data,
      y^ are predictions, which will be compared with the true output
      y (y without a hat) using the Huber loss

       \
        a    loss      - each node of the graph is a step in our computation
         \   /         - in TF data values are called TENSORS (n-dimensional arrays)
          * -> + -> y^ - in TF we first define a graph, and then we feed the data
         /   |           through the graph
        x    b
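As a plain-NumPy sketch (not TF) of the graph above: each line below corresponds to one node. In TF the same steps would first be recorded as graph nodes, and data would only flow later:

```python
import numpy as np

def huber(y, y_hat, delta=1.0):
    # piecewise loss: quadratic near zero, linear in the tails
    r = y_hat - y
    quad = 0.5 * r**2
    lin = delta * (np.abs(r) - 0.5 * delta)
    return np.where(np.abs(r) <= delta, quad, lin).mean()

a, b = 2.0, 1.0                      # slope and intercept
x = np.array([0.0, 1.0, 2.0])        # input data
y = np.array([1.0, 3.0, 5.0])        # targets (here exactly on the line)
y_hat = a * x + b                    # the "*" and "+" nodes
loss = huber(y, y_hat)               # the "loss" node
print(loss)                          # 0.0 for a perfect fit
```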
### __LOSS FUNCTIONS__
* TF implements only a basic set of loss functions
* more can be added by hand, using numpy-like functions e.g. mean, sqrt etc... check the names, because these differ a bit from numpy
* https://www.tensorflow.org/api_docs/python/tf/keras/losses
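For example, a hand-rolled RMSE, sketched with NumPy so it runs anywhere; in TF the equivalent calls would be `tf.sqrt`, `tf.reduce_mean` and `tf.square` (note `reduce_mean`, not `mean`):

```python
import numpy as np

def rmse(y, y_hat):
    # root-mean-squared error, built from mean/sqrt primitives
    return np.sqrt(np.mean((y_hat - y)**2))

print(rmse(np.array([0.0, 0.0]), np.array([3.0, 4.0])))
```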
### __TF OPTIMIZERS__
* https://www.tensorflow.org/api_docs/python/tf/keras/optimizers#top_of_page
### __Code Examples__
* linear regression with tf2, from scratch https://towardsdatascience.com/get-started-with-tensorflow-2-0-and-linear-regression-29b5dbd65977
* classification example: https://stackabuse.com/tensorflow-2-0-solving-classification-and-regression-problems/
```
import matplotlib.pyplot as plt # for making plots,
import matplotlib as mpl # to get some basic functions, helping with plot making
import numpy as np # support for multi-dimensional arrays and matrices
import pandas as pd # library for data manipulation and analysis
import random # functions that use and generate random numbers
import glob # lists names in folders that match Unix shell patterns
import re # module to use regular expressions,
import os # allow changing, and navigating files and folders,
import seaborn as sns # advance plots, for statistics,
import scipy.stats as stats # library for statistics and technical programming,
%matplotlib inline
%config InlineBackend.figure_format ='retina' # For retina screens (mac)
import tensorflow as tf
print(tf.__version__)
```
## Example 1. implement linear regression with TF
```
from sklearn.datasets import make_regression
# create the data
X, y = make_regression(
n_samples=1000,
n_features=2,
n_informative=2
)
# check the data
print("data: ", X.shape)
print("labels: ", y.shape)
# plot the data
''' i tested different numbers of features,
thus this function handles them all,
sorry for small complications,
'''
if X.shape[1]==1:
plt.scatter(X,y, s=0.1, c="black")
else:
fig, axs = plt.subplots(nrows=1, ncols=2)
i=-1
for ax in axs.flat:
i+=1
if i<X.shape[1]:
ax.scatter(X[:,i],y, s=0.1, c="black")
ax.set_title(f'y ~ feature {i}')
else: pass
plt.show()
```
### Part 1. DEFINE THE MODEL FOR TF
#### Step 1. Define Variables
- the dtype defaults to that of initial_value; here we set tf.float32 explicitly
- variables are provided to session in list with operations
- they can be modified by the operations
- variables are returned at each session run, even if not changed
- they need an initial value
```
a0 = tf.Variable(initial_value=0, dtype=tf.float32) # Feature 0 coeff.
a1 = tf.Variable(initial_value=0, dtype=tf.float32) # Feature 1 coeff.
b = tf.Variable(initial_value=0, dtype=tf.float32) # Intercept
```
#### Step 2. Define Placeholders
A TensorFlow placeholder is simply a variable that we will assign data to at a later date. It allows us to create our operations and build our computation graph without needing the data.
- Must be provided externally to the session,
- IT WILL NOT BE CHANGED by the operations,
- NOT RETURNED,
- given to tf session as dictionary:
* {key:value}
* where the key is the placeholder (as defined below),
* value is name of df, array, constant, list etc,
```
# Step 2. Define Placeholders
"""https://indianaiproduction.com/create-tensorflow-placeholder/"""
# placeholders are not executable immediately, so in TF 2 (not TF 1) we need to disable eager execution
tf.compat.v1.disable_eager_execution()
x = tf.compat.v1.placeholder(dtype=tf.float32, shape=(None, 2)) # Input: (n_samples, 2 features)
y = tf.compat.v1.placeholder(dtype=tf.float32) # Target
lr = tf.compat.v1.placeholder(dtype=tf.float32) # Learning rate for optimizer
```
#### Step 3. Define Operations, in sub-steps a-d
* Four items are required:
    * (a) Define how we make predictions, e.g.: y_hat = 2a + 1
    * (b) Define the loss function, e.g.: MSE
    * (c) Define how you will optimize the parameters in (a), e.g. with SGD
    * (d) Define the training operation on the loss function
        * e.g.: minimize, maximize etc..
* important:
    * a, b, and d must be given to the session,
    * d is defined on c, so c doesn't have to be given, or even change,
PROBLEM WITH EAGER TENSOR
* __Eager execution__ is a powerful execution environment that evaluates operations immediately. It does not build graphs, and the operations return actual values instead of computational graphs to run later. With Eager execution, TensorFlow calculates the values of tensors as they occur in your code.
* a problem that often happens when using code from tf1 is that this code is no longer compatible with the eager environment used by tf.v2
* see more here: https://stackoverflow.com/questions/57968999/runtimeerror-attempting-to-capture-an-eagertensor-without-building-a-function
* potential solutions:
> tf.compat.v1.disable_eager_execution()
> tf.compat.v1.disable_v2_behavior()
```
# (a) Define how we make predictions (one coefficient per feature)
y_hat = a0*x[:, 0] + a1*x[:, 1] + b
# (b) Define Loss Function
loss = tf.compat.v1.losses.huber_loss(y, y_hat, delta=1.0)
# (c) Create/select the optimizer
gd = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=lr)
# (d) Define training operation on a loss function
train_op = gd.minimize(loss)
# important comments:
#  - operations such as (a) and (b) will return results in the session
#  - operation (d) will affect the variables, i.e. it returns no values
#  - because variables (a0, a1, b) are given in the fetch list, these will also
#    be returned at each session iteration
#  - some operations, such as tf.train.GradientDescentOptimizer,
#    may require new placeholders, e.g. lr, that we can change
```
### __Part 2. Run TF session__
#### Step 1. Prepare for tf session,
* python lists, or arrays to store loss values, coefficients etc..
* nr, of iterations,
```
# Create lists to store a/b, and loss values from each iteration
loss_values = []
a0_values = []
a1_values = []
b_values = []
# Number of iterations
n = 100
```
#### Step 2. Run Session
Session: performs n iterations with the training variables,
using the training operation and loss function created in Step 3.
Returns: - 5 objects,
- "_" - for the training op, which returns None,
- loss_val, a0_val, a1_val, b_val - for the values returned
by each of the other operations/variables
Inputs:
- [train_op, loss, a0, a1, b]
list with operations & variables;
y_hat is not one of them, because loss has its derivative
- Placeholders in a dictionary,
```
# Initialization operation,
initialization_op = tf.compat.v1.global_variables_initializer()
# run session,
with tf.compat.v1.Session() as sess:
# Initialize the graph - always with new session !
sess.run(initialization_op)
# Run n(times)
for _ in range(n):
# Run training operations and collect a/b and loss values
_, loss_val, a0_val, a1_val, b_val = sess.run(
[train_op, loss, a0, a1, b],
feed_dict={
x: X,
y: y,
lr: [1]
}
) # NOTE: loss, a and b do not have to be provided
# Save values at each iteration,
loss_values.append(loss_val)
a0_values.append(a0_val)
a1_values.append(a1_val)
b_values.append(b_val)
from sklearn.datasets import make_regression
# create the data
X, y = make_regression(
n_samples=1000,
n_features=5,
n_informative=5
)
# check the data
print("data: ", X.shape)
print("labels: ", y.shape)
# create train/test datasets
from sklearn.model_selection import train_test_split
X_tr , X_te ,y_tr, y_te = train_test_split( X , y , test_size=0.3)
# Function, ................................
def train_test_scatterplots(X_tr , X_te ,y_tr, y_te, max_features=5, figsize=(10,4)):
''' creates scatter plots for features
in a train and test dataset
. max_features - max nr of scatters plotted in one row
. X_tr , X_te ,y_tr, y_te - input data, numpy arrays and vectors,
. figsize - for the plt.subplots() function
'''
if X_tr.shape[1]==1:
plt.scatter(X_tr,y_tr, s=1, c="black", marker="o", label="train")
plt.scatter(X_te,y_te, s=1, c="red", marker="*", label="test")
plt.legend()
else:
# find out how many cols/rows of axis objects to plot
nrows = int(np.ceil(X_tr.shape[1]/max_features))
if X_tr.shape[1]<=max_features: ncols=X_tr.shape[1]
else: ncols=max_features
# create a figure and axes,
fig, axs = plt.subplots(nrows=nrows, ncols=ncols, figsize=figsize)
i=-1
# plot scatters,
for ax in axs.flat:
i+=1
if i<X.shape[1]:
ax.set_title(f'y ~ feature {i}')
ax.scatter(X_tr[:,i],y_tr, s=1, c="forestgreen", marker="o", label="train")
ax.scatter(X_te[:,i],y_te, s=1, c="red", marker="*", label="test")
ax.legend()
else: pass
plt.tight_layout()
plt.show()
train_test_scatterplots(X_tr , X_te ,y_tr, y_te, figsize=(15,3))
X_train = tf.constant( X_tr , dtype=tf.float32 )
y_train = tf.constant( y_tr , dtype=tf.float32 )
X_test = tf.constant( X_te , dtype=tf.float32 )
y_test = tf.constant( y_te , dtype=tf.float32 )
```
Creating the model in TF 2.0
We define 3 methods with TensorFlow's low-level APIs:
- Mean Squared Error function
- The derivative of the Mean Squared Error function
- Hypothesis function / regression function
which we have discussed earlier in raw math.
```
def mean_squared_error( y , y_hat ):
return tf.reduce_mean( tf.square( y_hat - y ) )
def mean_squared_error_deriv( y , y_hat ):
return tf.reshape( tf.reduce_mean( 2 * ( y_hat - y ) ) , [ 1 , 1 ] )
def h ( X , weights , bias ):
return tf.tensordot( X , weights , axes=1 ) + bias
num_epochs = 10
num_samples = X_tr.shape[0]
batch_size = 10
learning_rate = 0.001
dataset = tf.data.Dataset.from_tensor_slices(( X_tr , y_tr )) # allows transformations on tensors,
dataset = dataset.shuffle( 100 ).repeat( num_epochs ).batch( batch_size )
iterator = dataset.__iter__()  # needed below by iterator.get_next()
num_features = X_tr.shape[1]
weights = tf.random.normal( ( num_features , 1 ) )
bias = 0
epochs_plot = list()
loss_plot = list()
for i in range( num_epochs ) :
epoch_loss = list()
for b in range( int(num_samples/batch_size) ):
x_batch , y_batch = iterator.get_next()
output = h( x_batch , weights , bias )
epoch_loss.append( mean_squared_error( y_batch , output ).numpy() )
dJ_dH = mean_squared_error_deriv( y_batch , output)
dH_dW = x_batch
dJ_dW = tf.reduce_mean( dJ_dH * dH_dW )
dJ_dB = tf.reduce_mean( dJ_dH )
weights -= ( learning_rate * dJ_dW )
bias -= ( learning_rate * dJ_dB )
loss = np.array( epoch_loss ).mean()
epochs_plot.append( i + 1 )
loss_plot.append( loss )
print( 'Loss is {}'.format( loss ) )
```
```
import random
```
The first parameter, learn_speed, is used to control how fast our perceptron will learn. The lower the value, the longer it will take to learn, but the less one value will change each overall weight. If this parameter is too high, our program will change its weights so quickly that they are inaccurate. On the other hand, if learn_speed is too low, it will take forever to train the perceptron accurately. A good value for this parameter is about 0.01-0.05.
The second parameter, num_weights, controls how many weights the perceptron will have. Our perceptron will also have the same number of inputs as it does weights, because each input has its own weight.
Next, we need to create a function in our class to take in inputs, and turn them into an output. We do this by multiplying each input by its corresponding weight, summing all those together, and then checking if the sum is greater than 0.
The first function, feed_forward, is used to turn inputs into outputs. The term feed forward is commonly used in neural networks to describe this process of turning inputs into outputs. This method weights each input based on each corresponding weights. It sums them up, and then uses the activate function to return either 1 or -1.
The activate function is used to turn a number into 1 or -1. This is implemented because when we use a perceptron, we want to classify data. We classify it into two groups, one of which is represented by 1, and the other is represented by -1.
You might be wondering, "What's the use of this if the weights are random?" That's why we have to train the perceptron before we use it. In our train function, we want to make a guess based on the inputs provided, and then see how our guess compared to the output we wanted.
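One training update traced by hand, with made-up numbers (the weights, inputs and learning speed below are hypothetical, not taken from the class):

```python
speed = 0.01
weights = [0.5, -0.3]
inputs = [1.0, 2.0]
desired = 1

s = sum(w * i for w, i in zip(weights, inputs))    # 0.5*1 + (-0.3)*2 = -0.1
guess = 1 if s > 0 else -1                         # activate(-0.1) -> -1
error = desired - guess                            # 1 - (-1) = 2
weights = [w + error * i * speed for w, i in zip(weights, inputs)]
print(weights)                                     # roughly [0.52, -0.26]
```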
```
class Perceptron:
def __init__(self, learn_speed, num_weights):
self.speed = learn_speed
self.weights = []
for x in range(0, num_weights):
self.weights.append(random.random()*2-1)
def feed_forward(self, inputs):
sum = 0
# multiply inputs by weights and sum them
for x in range(0, len(self.weights)):
sum += self.weights[x] * inputs[x]
# return the 'activated' sum
return self.activate(sum)
def activate(self, num):
# turn a sum over 0 into 1, and below 0 into -1
if num > 0:
return 1
return -1
def train(self, inputs, desired_output):
guess = self.feed_forward(inputs)
error = desired_output - guess
# loop through each weight and adjust it by how much error we had.
for x in range(0, len(self.weights)):
self.weights[x] += error*inputs[x]*self.speed
```
### Training the Perceptron
Our perceptron has no use if we don't actually train it. We will do this by coding a quick Trainer class. In this example, we will train our perceptron to tell us whether a point is above a line or below a line. Our line, in this case, is represented by the equation y = 0.5x + 10. Once you know how to train a perceptron to recognize a line, you can represent x and y as different attributes, and above or below the line as results of those attributes.
For example, if you had a dataset on the GPAs and ACT scores of Harvard applicants, and whether they got accepted or not, you could train a perceptron to find a line on a graph where x=GPA score and y=ACT score. Above the line would be students that got accepted, and below the line would be students that got rejected. You could then use this perceptron to predict whether or not a student will get accepted into Harvard based on their GPA and ACT scores.
In this example, we'll stick with recognizing a line. To do this, we will create a Trainer class that trains a perceptron with points, and whether or not they are above the line. Below is the code for our Trainer class:
```
class Trainer:
def __init__(self):
self.perceptron = Perceptron(0.01, 3)
def f(self, x):
return 0.5*x + 10 # line: f(x) = 0.5x + 10
def train(self):
for x in range(0, 1000000):
x_coord = random.random()*500-250
y_coord = random.random()*500-250
line_y = self.f(x_coord)
if y_coord > line_y: # above the line
answer = 1
self.perceptron.train([x_coord, y_coord,1], answer)
else: # below the line
answer = -1
self.perceptron.train([x_coord, y_coord,1], answer)
return self.perceptron # return our trained perceptron
```
As you can see, the initializer for the Trainer class creates a perceptron with three inputs and a learning speed of 0.01. The first two inputs are x and y, but what is the last input? This is another core concept of neural networks and machine learning. That last input will always be set to 1. The weight that corresponds to it will determine how it affects our line. For example, if you look back at our equation: y = 0.5x + 10, we need some way of representing the y-intercept, 10. We do this by creating a third input that increases or decreases based on the weight that the perceptron trains it to have. Think of it as a threshold that helps the perceptron understand that the line is adjusted 10 units upward.
In our f function, we take in an x coordinate and return a y coordinate. This is used to find points on the line based on their x coordinate, which will come in handy in the next function.
This train function for the Trainer class is where all the magic happens, and we actually get to train our perceptron. We start off by looping 1 million times. Remember how we had a learning speed for our perceptron? The more times that we train our perceptron (in this case, 1 million times), the more accurate it will become, even with a low learning speed.
In each iteration of the loop, we create a point, determine if it is above or below the line, and then feed those inputs into the perceptron's train method. First, x and y coordinates are randomly generated between -250 and 250. Next, we find where the y coordinate would be on the line for that x value to see if our point is above the line. For example, if we picked a point at (1, 3), then we should get the y coordinate of the line for the x value of 3. We do this with our f function. If our random y coordinate is higher than the corresponding y coordinate on the line, we know that our random coordinate is above the line.
That's what we do in the if...else statement. If our point is above the line, we set the expected output, stored in answer to be 1. If our point is below the line, our expected output is -1. We then train our perceptron based on the x coordinate, the y coordinate, and our expected output. After the whole loop is done, we return our newly trained perceptron object.
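To see what the constant third input buys, here is a hypothetical set of weights (not the trained ones) that encodes y = 0.5x + 10 directly; without the third weight there would be no way to represent the intercept 10:

```python
w = [-0.5, 1.0, -10.0]   # hypothetical weights: s = y - (0.5x + 10)

def classify(x, y):
    s = w[0] * x + w[1] * y + w[2] * 1   # third input is always 1
    return 1 if s > 0 else -1

print(classify(0, 12))   # above the line -> 1
print(classify(0, 8))    # below the line -> -1
```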
```
trainer = Trainer()
p = trainer.train()
```
Let's pick two points, (-7, 9) and (3, 1). The first point is above the line, so it should return 1, and the second is below the line, so it should return -1. Let's see how we would run our perceptron:
```
print("(-7, 9): " + str(p.feed_forward([-7,9,1])))
print("(3, 1): " + str(p.feed_forward([3,1,1])))
```
```
import math
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import logit
from IPython.display import display
from keras.layers import (Input, Dense, Lambda, Flatten, Reshape, BatchNormalization, Layer,
Activation, Dropout, Conv2D, Conv2DTranspose,
Concatenate, Add, Multiply)
from keras.engine import InputSpec
from keras.losses import sparse_categorical_crossentropy
from keras.optimizers import RMSprop, Adam
from keras.models import Model
from keras import metrics
from keras import backend as K
from keras_tqdm import TQDMNotebookCallback
from keras.datasets import cifar10
from realnvp_helpers import Mask
%matplotlib inline
shape = (4, 4, 3)
samples = 10
train_data = np.random.normal(0.5, 3, size=(samples,) + (shape))
def conv_block(input_tensor, kernel_size, filters, stage, block):
''' Adapted from resnet50 implementation in Keras '''
filters1, filters2, filters3 = filters
if K.image_data_format() == 'channels_last':
bn_axis = 3
else:
bn_axis = 1
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
x = Conv2D(filters1, (1, 1),
kernel_initializer='he_normal',
name=conv_name_base + '2a')(input_tensor)
x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2a')(x)
x = Activation('relu')(x)
x = Conv2D(filters2, kernel_size,
padding='same',
kernel_initializer='he_normal',
name=conv_name_base + '2b')(x)
x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2b')(x)
x = Activation('relu')(x)
x = Conv2D(filters3, (1, 1),
kernel_initializer='he_normal',
name=conv_name_base + '2c')(x)
x = BatchNormalization(axis=bn_axis, name=bn_name_base + '2c')(x)
#x = add([x, input_tensor])
x = Activation('relu')(x)
return x
def coupling_step(input_tensor, mask_type, stage):
''' Implements (as per paper):
y = b * x + (1 - b) * [x * exp(s(b * x)) + t(b * x)]
'''
assert mask_type in ['check_even', 'check_odd', 'channel_even', 'channel_odd']
mask_prefix = 'check' if mask_type.startswith('check') else 'channel'
mask_opposite = 'odd' if mask_type.endswith('even') else 'even'
b0 = Mask(mask_type)
b1 = Mask(mask_prefix + '_' + mask_opposite)
mask_even_in = b0(input_tensor)
mask_odd_in = b1(input_tensor)
s = conv_block(mask_even_in, (3, 3), (32, 32, 3), stage, '_s')
t = conv_block(mask_even_in, (3, 3), (32, 32, 3), stage, '_t')
coupling = Lambda(lambda ins: ins[0] * K.exp(ins[1]) + ins[2])([input_tensor, s, t])
coupling_mask = b1(coupling)
# Return result + masked scale for loss function
return Add()([mask_even_in, coupling_mask]), b1(s)
def coupling_layer(input_tensor, mask_type, stage):
assert mask_type in ['check_even', 'check_odd', 'channel_even', 'channel_odd']
mask_prefix = 'check' if mask_type.startswith('check') else 'channel'
x, s1 = coupling_step(input_tensor, mask_prefix + '_even', stage=str(stage) + 'a')
x, s2 = coupling_step(x, mask_prefix + '_odd', stage=str(stage) + 'b')
x, s3 = coupling_step(x, mask_prefix + '_even', stage=str(stage) + 'c')
return x, [s1, s2, s3]
def realnvp_loss(target, output, shape):
# Extract x's and s's
print(output.shape)
z = output[:, :, :, :shape[-1]]
print(z.shape)
s = output[:, :, :, shape[-1]:]
print(s.shape)
# Prior is standard normal(mu=0, sigma=1)
    z_loss = -0.5 * np.log(2 * math.pi) - 0.5 * z**2  # log-density of standard normal
# Determinant is just sum of "s" params (already log-space)
det_loss = K.sum(s)
return -z_loss - det_loss
input_tensor = Input(shape=shape)
x, s = coupling_layer(input_tensor, 'check_even', stage=1)
out = Concatenate()([x] + s)
model = Model(inputs=input_tensor, outputs=out)
optimizer = Adam(lr=0.01)
model.compile(optimizer=optimizer,
loss=lambda target, output: realnvp_loss(target, output, shape=shape))
model.summary()
#early_stopping = keras.callbacks.EarlyStopping('val_loss', min_delta=50.0, patience=5)
#reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=2, min_lr=0.0001)
history = model.fit(
train_data, train_data,
batch_size=5,
epochs=50,
callbacks=[TQDMNotebookCallback()], #, early_stopping, reduce_lr],
verbose=0
)
df = pd.DataFrame(history.history)
display(df.describe(percentiles=[0.25 * i for i in range(4)] + [0.95, 0.99]))
col = 'val_loss' if 'val_loss' in df else 'loss'
df[col][-25:].plot(figsize=(8, 6))
```
# 2019-07-28
* Got some framework up to do coupling layers but having trouble passing the scale parameter to the loss function, getting some weird tensorflow error, needs more debugging
* Without the determinant in the loss function, it looks like loss goes down, so maybe on the right track?
* It's actually weird that we're not using the image in the output, but I guess that's what's great about this reversible model!
* TODO:
* Debug scale function in loss
* Add reverse (generator) network to functions above.
# 2019-07-29
* Explanation of how to estimate probability of continuous variables (relevant for computing bits/pixel without an explicit discrete distribution): https://math.stackexchange.com/questions/2818318/probability-that-a-sample-is-generated-from-a-distribution
* Idea for a post, explain likelihood estimation of discrete vs. continuous distributions (like pixels), include:
* Probability of observing a value from continuous distribution = 0
* https://math.stackexchange.com/questions/920241/can-an-observed-event-in-fact-be-of-zero-probability
    * Probability of observing a value from a set of discrete hypotheses (models) is non-zero via the epsilon trick (see the link above)
* Explain Equation 3 from "A NOTE ON THE EVALUATION OF GENERATIVE MODELS"
* Also include an example using a simpler case, like a bernoulli variable that we're estimating using a continuous distribution
* Bring it back to modelling pixels and how they usually do it
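The epsilon trick can be made concrete: the probability a continuous model assigns to an observed discrete value is approximated by the density times the bin width. A standalone sketch (the standard-normal density and 1/256 pixel grid are illustrative choices, not from the papers above):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Pixel intensities live on a 1/256 grid, so the probability of observing
# the exact value v under a continuous density is approximately
# pdf(v) * bin_width (the "epsilon trick").
bin_width = 1.0 / 256
v = 0.5
prob = normal_pdf(v) * bin_width
bits = -math.log2(prob)  # code length for this pixel under the model
print(prob, bits)
```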
---
*Analytical Information Systems*
# Descriptive Statistics in R - Baseball Salaries
Prof. Christoph M. Flath<br>
Lehrstuhl für Wirtschaftsinformatik und Informationsmanagement
SS 2019
<h1>Agenda<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Load-packages" data-toc-modified-id="Load-packages-1">Load packages</a></span></li><li><span><a href="#Download-and-preprocess-data" data-toc-modified-id="Download-and-preprocess-data-2">Download and preprocess data</a></span></li><li><span><a href="#Central-Tendency" data-toc-modified-id="Central-Tendency-3">Central Tendency</a></span></li><li><span><a href="#Variability" data-toc-modified-id="Variability-4">Variability</a></span></li><li><span><a href="#Shape" data-toc-modified-id="Shape-5">Shape</a></span></li></ul></div>
## Load packages
```
library(tidyverse)
library(moments)
```
## Download and preprocess data
```
file_url <- "https://www.dropbox.com/s/ysd0zljicq5yqfo/baseball.csv?dl=1"
file_url %>%
read_csv2() %>%
mutate(Salary = str_replace_all(Salary,"\\$","")) %>%
mutate(Salary = str_replace_all(Salary,",","")) %>%
mutate(Salary = as.numeric(Salary) / 1000000) -> salaries
```
Have a quick look at the data
```
glimpse(salaries)
```
## Central Tendency
```
salaries %>%
summarise(mean=mean(Salary),
median=median(Salary))
```
R has no direct function for the mode, so we count the most frequent values instead
```
salaries %>%
group_by(Salary) %>%
summarize(count = n()) %>%
arrange(-count) %>%
head(5)
```
## Variability
```
salaries %>%
summarise(range=max(Salary)-min(Salary),
var=var(Salary),
CoV=sd(Salary)/mean(Salary))
```
Tukey's five number summary (minimum, lower-hinge, median, upper-hinge, maximum)
```
fivenum(salaries$Salary)
```
Summary function
```
summary(salaries$Salary)
```
#### These measures are not meaningful without comparisons, so let's compute them at the team level
- range
```
salaries %>%
group_by(Team) %>%
summarize(range = diff(range(Salary))) %>%
arrange(range)
```
- coefficient of variation
```
salaries %>%
group_by(Team) %>%
summarize(cov = sd(Salary)/mean(Salary)) %>%
arrange(cov)
```
## Shape
```
salaries %>%
summarise(skew=skewness(Salary),
kurt=kurtosis(Salary))
salaries %>%
group_by(Team) %>%
summarize(skew = skewness(Salary)) %>%
arrange(-skew)
salaries %>%
group_by(Team) %>%
summarize(skew = skewness(Salary)) %>%
arrange(skew)
salaries %>%
group_by(Team) %>%
summarize(kurt = kurtosis(Salary)) %>%
arrange(-kurt)
salaries %>%
group_by(Team) %>%
summarize(kurt = kurtosis(Salary)) %>%
arrange(kurt)
```
---
```
import pandas as pd
import os, sys
import numpy as np
os.environ["KERAS_BACKEND"] = 'tensorflow'
from keras.utils import np_utils
from sklearn.ensemble import RandomForestClassifier
import pickle
from sklearn.externals import joblib
pd.options.mode.chained_assignment = None # default='warn'
import re
from floor.data.dataset_loader import TestData
import json
import matplotlib.pyplot as plt
from matplotlib.pyplot import savefig
from peakdetect import peakdetect
from scipy.spatial import distance
from sklearn.metrics.pairwise import cosine_similarity
import math
from keras.models import load_model
import seaborn as sns
%matplotlib inline
```
---
# Load models
```
random_forest = joblib.load('../best_model_weights/random_forest/weights/random_forest_final_1_0_trial_144.pkl')
svm = joblib.load('../best_model_weights/svm/weights/svm_final_1_0_trial_85.pkl')
log_reg = joblib.load('../best_model_weights/logistic_regression/weights/logistic_regression_final_1_0_trial_179.pkl')
hmm = joblib.load('../best_model_weights/hmm/weights/hmm_final_1_0_trial_59.pkl')
nn = load_model('../best_model_weights/fc_nn/weights/fc_nn_final_1_18_trial_18.h5')
lstm = load_model('../best_model_weights/lstm/weights/lstm_final_1_188_trial_188.h5')
lstm_window_size = 3
use_model = 'lstm' # 'lstm', 'nn', 'lg', 'svm', 'rf', 'hmm'
# True = m will be based on each building. False = m will be 4.02 (mean from bldgs dataset)
use_building_based_m_val = False
plot_classifications = False
```
---
# Load dataset
Load data from all training runs in array of pandas
```
def load_data(exp_name='_', data_path='../data/floor_prediction_test_data/data'):
frames = []
frames_names = []
for file_name in os.listdir(data_path):
if 'csv' in file_name and exp_name in file_name:
in_path = '%s/%s' % (data_path, file_name)
df = pd.read_csv(in_path)
df = df.fillna(0)
frames.append(df)
frames_names.append(file_name)
# add weather data
for df in frames:
df['weather_pressure'] = [100] * len(df)
# run predictions on each frame
rf_frames, rf_accuracies = predict_dfs_sklearn_model(random_forest, frames, 'RF')
svm_frames, svm_accuracies = predict_dfs_sklearn_model(svm, frames, 'SVM')
lg_frames, lg_accuracies = predict_dfs_sklearn_model(log_reg, frames, 'LG')
hmm_frames, hmm_accuracies = predict_dfs_sklearn_hmm(hmm, frames, 'HMM')
nn_frames, accuracies = predict_dfs_nn(frames)
lstm_frames, accuracies = predict_dfs_lstm(frames)
result_frames = None
if use_model == 'rf':
result_frames = rf_frames
if use_model == 'svm':
result_frames = svm_frames
if use_model == 'lg':
result_frames = lg_frames
if use_model == 'hmm':
result_frames = hmm_frames
if use_model == 'nn':
result_frames = nn_frames
if use_model == 'lstm':
result_frames = lstm_frames
return result_frames, frames_names
def filter_exps(exps, names, allowed):
    # keep only the experiments whose file name appears in the allowed list
    filtered_exps = []
    filtered_names = []
    for exp, name in zip(exps, names):
        if name in allowed:
            filtered_exps.append(exp)
            filtered_names.append(name)
    return filtered_exps, filtered_names
```
---
# Extract Features
Define a function that builds a window around each point.
This means each row of X becomes 3 consecutive points concatenated into a single feature vector.
```
def create_window_features_lstm(X, Y, window_length = lstm_window_size):
# make odd so we can take half on left, half on right of point
new_X = []
new_Y = np.zeros((len(X) - window_length, 1))
arr_i = 0
side_size = int((window_length - 1) /2)
for i in range(side_size, len(X)):
i_start = i - side_size
i_end = i + side_size + 1
y_i = i
dps = X[i_start:i_end, :]
new_x = dps
new_y = Y[y_i]
if i_end >= len(X):
break
new_X.append(new_x)
new_Y[arr_i] = new_y
arr_i += 1
new_X = np.asarray(new_X)
return new_X, new_Y
def create_window_features(X, Y, window_length = 3):
# make odd so we can take half on left, half on right of point
new_X = np.zeros((len(X) - window_length, window_length * len(X[0])))
new_Y = np.zeros((len(X) - window_length, 1))
arr_i = 0
side_size = int((window_length - 1) /2)
for i in range(side_size, len(X)):
i_start = i - side_size
i_end = i + side_size + 1
y_i = i
dps = X[i_start:i_end, :]
new_x = dps.flatten()
new_y = Y[y_i]
if i_end >= len(X):
break
new_X[arr_i] = new_x
new_Y[arr_i] = new_y
arr_i += 1
return new_X, new_Y
```
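As a quick illustration of the windowing idea (a simplified sketch; the notebook's implementation additionally drops one extra trailing window and preallocates its output arrays):

```python
import numpy as np

# Each sample becomes the concatenation of itself with its immediate
# left and right neighbours (window_length = 3).
X = np.arange(10).reshape(5, 2)  # 5 points, 2 features each
window_length = 3
side = (window_length - 1) // 2
rows = [X[i - side:i + side + 1].flatten() for i in range(side, len(X) - side)]
new_X = np.vstack(rows)
print(new_X.shape)  # (3, 6)
```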
Pick out the features we care about
```
def extract_features(df, features, y_label):
x_df = df[features]
y_df = df[[y_label]]
return x_df, y_df
```
---
# Format data
```
def convert_test_data_to_nn_format(X_train, Y_train):
nb_classes = 2
X_train = X_train.astype('float32')
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(Y_train.flatten(), nb_classes)
return X_train, Y_train
def create_test_data(df, X_features, Y_label, nn_format=False, hmm_format=False):
x_df, y_df = extract_features(df, X_features, Y_label)
X = x_df.as_matrix()
Y = y_df.as_matrix()
if not hmm_format:
X, Y = create_window_features(X, Y)
if nn_format:
X, Y = convert_test_data_to_nn_format(X, Y)
return X, Y
def create_lstm_test_data(df, X_features, Y_label):
x_df, y_df = extract_features(df, X_features, Y_label)
X = x_df.as_matrix()
Y = y_df.as_matrix()
X, Y = create_window_features_lstm(X, Y)
return X, Y
```
---
# Build prediction helper functions
Neural networks predict helpers
```
def predict_nn(nn, df, X, Y, window_length=3):
df_with_window = df[0:-window_length]
preds = nn.predict(X)
results = np.argmax(preds, axis=1).reshape(len(preds), 1)
# save to new df
df_with_window['indoors_prediction'] = results
# print acc stats
accuracy = nn.evaluate(X, Y, verbose=0)[1]
return df_with_window, accuracy
def predict_lstm(nn, df, X, Y, window_length=lstm_window_size):
df_with_window = df[0:-window_length]
preds = nn.predict(X)
results = np.argmax(preds, axis=1).reshape(len(preds), 1)
# save to new df
df_with_window['indoors_prediction'] = results
# print acc stats
accuracy = nn.evaluate(X, Y, verbose=0)[1]
return df_with_window, accuracy
def predict_dfs_lstm(frames):
accuracies = []
predicted = []
for df in frames:
X, Y = create_lstm_test_data(df, X_features =
['gps_vertical_accuracy',
'gps_horizontal_accuracy',
'gps_speed',
'rssi_strength',
'magnet_total'],
Y_label='indoors')
df, accuracy = predict_lstm(lstm, df, X, Y)
predicted.append(df)
accuracies.append(float("{0:.2f}".format(accuracy)))
avg_acc = np.mean(accuracies)
    print('avg acc LSTM: ', float("{0:.3f}".format(avg_acc)))
return predicted, accuracies
def predict_dfs_nn(frames):
accuracies = []
predicted = []
for df in frames:
X, Y = create_test_data(df, X_features =
['gps_vertical_accuracy',
'gps_horizontal_accuracy',
'gps_speed',
'rssi_strength',
'magnet_total'],
Y_label='indoors', nn_format=True)
df, accuracy = predict_nn(nn, df, X, Y)
predicted.append(df)
accuracies.append(float("{0:.2f}".format(accuracy)))
avg_acc = np.mean(accuracies)
print('avg acc NN: ', float("{0:.3f}".format(avg_acc)))
return predicted, accuracies
```
Random forest predict helpers
```
def predict_model(model, df, X, Y, window_length=3):
df_with_window = df[0:-window_length]
preds = model.predict(X)
results = preds.reshape(len(X), 1)
# save to new df
df_with_window['indoors_prediction'] = results
# print acc stats
accuracy = df_with_window[df_with_window.indoors == df_with_window.indoors_prediction].count()['indoors'] / float(len(df_with_window))
return df_with_window, accuracy
def predict_model_hmm(model, df, X, Y):
df_with_window = df
Y_hat = model.predict(X)
results = Y_hat.reshape(len(X), 1)
# save to new df
df_with_window['indoors_prediction'] = results
# print acc stats
accuracy = np.equal(Y_hat.flatten(), Y.flatten()).sum() / float(Y.shape[0])
return df_with_window, accuracy
def predict_dfs_sklearn_hmm(model, frames, model_name):
accuracies = []
predicted = []
for df in frames:
X, Y = create_test_data(df, X_features =
['gps_vertical_accuracy',
'gps_horizontal_accuracy',
'gps_speed',
'rssi_strength',
'magnet_total'],
Y_label='indoors', nn_format=False, hmm_format=True)
df, accuracy = predict_model_hmm(model, df, X, Y)
predicted.append(df)
accuracies.append(float("{0:.2f}".format(accuracy)))
avg_acc = np.mean(accuracies)
print('avg acc {}: '.format(model_name), float("{0:.3f}".format(avg_acc)))
return predicted, accuracies
def predict_dfs_sklearn_model(model, frames, model_name):
accuracies = []
predicted = []
for df in frames:
X, Y = create_test_data(df, X_features =
['gps_vertical_accuracy',
'gps_horizontal_accuracy',
'gps_speed',
'rssi_strength',
'magnet_total'],
Y_label='indoors', nn_format=False)
df, accuracy = predict_model(model, df, X, Y)
predicted.append(df)
accuracies.append(float("{0:.2f}".format(accuracy)))
avg_acc = np.mean(accuracies)
print('avg acc {}: '.format(model_name), float("{0:.3f}".format(avg_acc)))
return predicted, accuracies
def ground_truth_floor(test_name):
real_floor = test_name.split('_')
print(real_floor)
real_floor_start = int(real_floor[2])
real_floor_end = int(real_floor[-1].split('.')[0])
floor_delta = real_floor_end - real_floor_start
#floor_delta = floor_delta + 1 if real_floor_start < real_floor_end else floor_delta - 1
return floor_delta, real_floor_start, real_floor_end
```
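For reference, the filename parsing in `ground_truth_floor` above appears to assume a pattern like `<building>_<run>_<startfloor>_to_<endfloor>.csv`; this is an inferred convention, sketched here with a made-up name:

```python
def parse_ground_truth(test_name):
    # hypothetical filename pattern: <building>_<run>_<start>_to_<end>.csv
    parts = test_name.split('_')
    start = int(parts[2])
    end = int(parts[-1].split('.')[0])
    return end - start, start, end

print(parse_ground_truth('gsb_test_2_to_7.csv'))  # (5, 2, 7)
```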
---
# Find I/O intervals
Find in/out intervals from the classification results: slide a window over the predictions and score it against an in/out transition template (Jaccard distance in `find_io_intervals`, cosine similarity in `find_io_intervals_cosine`), keeping the best matches.
```
def find_io_intervals(dfa, min_similarity):
# ---------------
# define IO vector mask
target_vector_in_out = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
target_vector_in_out = np.add(target_vector_in_out, 1).tolist()
target_vector_out_in = target_vector_in_out[::-1]
window_size = len(target_vector_in_out)
preds = dfa['indoors_prediction'].tolist()
# ---------------
# find matches
matches = []
for i in range(0, len(preds) - window_size):
vec = preds[i: i + window_size]
vec = np.add(vec, 1)
dist_a = distance.jaccard(target_vector_in_out, vec)
dist_b = distance.jaccard(vec, target_vector_out_in)
if dist_a >= min_similarity:
matches.append(i)
elif dist_b >= min_similarity:
matches.append(i)
    matches.sort()
# ---------------
# Group matches
merged = [(matches[0], matches[0] + 2)]
for start in matches[1:]:
end = start+2
merged_start, merged_end = merged[-1]
if (start <= merged_end):
merged[-1] = (merged_start, max(merged_end, end))
else:
merged.append((start, end))
# --------------
# FIND PEAKS
#print('Merged interval groups:')
#print(merged)
stack = []
for x, y in merged:
avg = np.mean([x,y])
stack.append(avg)
#print('\nFinal Transition locations (ith datapoint):')
#print(stack)
peaks_detected_by_classifier = stack
return peaks_detected_by_classifier
def find_io_intervals_cosine(dfa, min_similarity):
# ---------------
# define IO vector mask
target_vector_in_out = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
target_vector_in_out = np.add(target_vector_in_out, 1).tolist()
window_size = len(target_vector_in_out)
target_vector_out_in = target_vector_in_out[::-1]
target_vector_in_out = np.asarray(target_vector_in_out).reshape(1,-1)
target_vector_out_in = np.asarray(target_vector_out_in).reshape(1,-1)
preds = dfa['indoors_prediction'].tolist()
# ---------------
# find matches
matches = []
for i in range(0, len(preds) - window_size):
vec = np.asarray(preds[i: i + window_size]).reshape(1,-1)
vec = np.add(vec, 1)
dist_a = cosine_similarity(target_vector_in_out, vec)
dist_b = cosine_similarity(vec, target_vector_out_in)
if dist_a >= min_similarity:
matches.append(i)
elif dist_b >= min_similarity:
matches.append(i)
    matches.sort()
# ---------------
# Group matches
merged = [(matches[0], matches[0] + 2)]
for start in matches[1:]:
end = start+2
merged_start, merged_end = merged[-1]
if (start <= merged_end):
merged[-1] = (merged_start, max(merged_end, end))
else:
merged.append((start, end))
# --------------
# FIND PEAKS
#print('Merged interval groups:')
#print(merged)
stack = []
for x, y in merged:
avg = np.mean([x,y])
stack.append(avg)
#print('\nFinal Transition locations (ith datapoint):')
#print(stack)
peaks_detected_by_classifier = stack
return peaks_detected_by_classifier
```
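The sliding-window template match above can be illustrated on a toy prediction sequence (a pure-Python sketch using element-wise agreement instead of `scipy`'s Jaccard distance, just to show where the transition template fires):

```python
def match_positions(preds, template, min_agreement=0.8):
    # slide the template over the predictions and record window starts
    # whose element-wise agreement meets the threshold
    w = len(template)
    hits = []
    for i in range(len(preds) - w):
        window = preds[i:i + w]
        agreement = sum(a == b for a, b in zip(window, template)) / w
        if agreement >= min_agreement:
            hits.append(i)
    return hits

preds = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # indoors -> outdoors
template = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]     # in->out transition mask
print(match_positions(preds, template))  # [0, 1]
```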
---
## Find floor location using baro pressure
Plot barometric timeseries
---
## Calculate floor level change from last IN/OUT transition
Pull the datapoint at the peak index
```
def get_last_transition_dp(dfa, peaks_detected_by_classifier):
last_transition_index = int(peaks_detected_by_classifier[-1])
dp = dfa.iloc[[last_transition_index]]
return dp, last_transition_index
```
Pull the datapoint at the last known index (where user is)
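The missing cell here is effectively a one-liner; a sketch consistent with how `last_dp` is obtained in `run()` later in this notebook:

```python
import pandas as pd

def get_current_dp(dfa):
    # the user's current location is simply the last recorded datapoint
    return dfa.iloc[[-1]]

df = pd.DataFrame({'baro_pressure': [101.3, 101.1, 100.9]})
print(get_current_dp(df).baro_pressure.values[0])  # 100.9
```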
---
### Determine the transition point at the local min/max depending on the direction of change
We look at a window before and after the last transition point and average these points to determine the direction of change. Then we'll find the max if the direction is negative or the min if the direction is positive
```
def find_optimal_transition_point(dfa, last_transition_index):
num_points_around_transition = 10
lookback_window = 20
#print('last transition idx: ', last_transition_index)
before = dfa[last_transition_index - num_points_around_transition : last_transition_index]
after = dfa[last_transition_index : last_transition_index + num_points_around_transition]
before_mean_pressure = np.mean(before.baro_pressure)
after_mean_pressure = np.mean(after.baro_pressure)
direction_change = "pos" if before_mean_pressure < after_mean_pressure else "neg"
    if direction_change == "pos":
optimal_point = np.argmin(dfa.baro_pressure[last_transition_index - lookback_window : last_transition_index])
else:
optimal_point = np.argmax(dfa.baro_pressure[last_transition_index - lookback_window : last_transition_index])
#print('optimal transition idx: ', optimal_point)
dp = dfa.iloc[[optimal_point]]
return dp
```
---
Calculate current floor from rel altitude change
```
def predict_current_floor(dp, last_dp, floor_height_meters):
# pressure readings at time of last transition
transition_device_pressure = dp.baro_pressure.values[0]
weather_pressure_at_transition = dp.weather_pressure.values[0]
# pressure readings at current point
current_device_pressure = last_dp.baro_pressure.values[0]
current_weather_pressure = last_dp.weather_pressure.values[0]
# calculate weather pressure delta
weather_pressure_delta = current_weather_pressure - weather_pressure_at_transition
# calculate device pressure delta
weather_adjusted_start_device_pressure = transition_device_pressure + weather_pressure_delta
# this is the difference between pressures formula
# the answer is in meters
total_meter_change = 44330 * (1 - (current_device_pressure/weather_adjusted_start_device_pressure)**(1/5.255))
# floor detection rule
floor_delta = int(total_meter_change / floor_height_meters)
if floor_delta >= 0:
current_floor = 1 + floor_delta
else:
current_floor = floor_delta
return current_floor, total_meter_change, floor_delta
def calculate_total_meter_change(dp, last_dp):
# pressure readings at time of last transition
transition_device_pressure = dp.baro_pressure.values[0]
weather_pressure_at_transition = dp.weather_pressure.values[0]
# pressure readings at current point
current_device_pressure = last_dp.baro_pressure.values[0]
current_weather_pressure = last_dp.weather_pressure.values[0]
# calculate weather pressure delta
weather_pressure_delta = current_weather_pressure - weather_pressure_at_transition
# calculate device pressure delta
weather_adjusted_start_device_pressure = transition_device_pressure + weather_pressure_delta
# this is the difference between pressures formula
# the answer is in meters
total_meter_change = 44330 * (1 - (current_device_pressure/weather_adjusted_start_device_pressure)**(1/5.255))
return total_meter_change
```
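The hypsometric expression used in both helpers can be sanity-checked on its own. Pressures here are in kPa, matching the `weather_pressure = 100` placeholder added in `load_data` (a standalone sketch):

```python
def altitude_change_m(p_current, p_reference):
    # international barometric formula; pressures in kPa, result in meters
    return 44330.0 * (1.0 - (p_current / p_reference) ** (1.0 / 5.255))

print(altitude_change_m(100.0, 101.0))   # ~84 m per 1 kPa near sea level
print(altitude_change_m(100.86, 101.0))  # ~12 m, roughly three 4 m floors
```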
---
Determine if outside or inside right now
```
def get_io_status(dfa, last_transition_index):
after_last_transition_points = dfa[last_transition_index : ]
indoor_preds = after_last_transition_points.indoors_prediction
pcnt_inside = indoor_preds[indoor_preds == 1].sum() / float(len(indoor_preds))
inside_status = 'indoors' if pcnt_inside > 0.5 else 'outdoors'
return inside_status
# colors
non_primary_alpha = 0.3
C_1 = (0,0,0,non_primary_alpha)
C_2 = (1.0, 0.0, 0.0, non_primary_alpha)
C_3 = (0.2,0.59,0.85,non_primary_alpha)
C_4 = (0.6,0.35,0.71,non_primary_alpha)
C_5 = (0.9,0.5,0.13,non_primary_alpha)
C_primary = 'blue'
C_primary_2 = 'orange'
C_secondary = (0, 0, 1, 0.3)
def plot_baro(dfa, dataset_name):
# -----------
# predicted in out
# Plot the indoor vs outdoor true state
plt.ylabel('pressure reading')
plt.plot(dfa['baro_pressure'].tolist(), label='Baro pressure', color=C_primary)
#plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
#plt.ylim((-1,2))
savefig('./images/{}_baro_pressure.png'.format(dataset_name), bbox_inches='tight')
plt.show()
# -----------
# predicted in out
# Plot the indoor vs outdoor true state
plt.ylabel('Rel altitude reading')
plt.plot(dfa['baro_relative_altitude'].tolist(), label='Relative altitude', color=C_primary)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.ylim((-1,2))
savefig('./images/{}_relative_altitude.png'.format(dataset_name), bbox_inches='tight')
plt.show()
def find_true_transition(dfa):
for i, row in dfa.iterrows():
if row.indoors == 1:
return i
def plot_classification(dfa, peaks_detected_by_classifier, optimal_trans_dp, dataset_name):
# -----------------------
    # FEATURES GRAPH
# vertical accuracy graph
hor_accu = (dfa['gps_horizontal_accuracy'] / max(dfa['gps_horizontal_accuracy']) ).tolist()
ver_accu = (dfa['gps_vertical_accuracy'] / max(dfa['gps_vertical_accuracy']) ).tolist()
magnet = (dfa['magnet_total'] / max(dfa['magnet_total']) ).tolist()
rssi = (dfa['rssi_strength'] / max(dfa['rssi_strength']) ).tolist()
gps_speed = (dfa['gps_speed'] / max(dfa['gps_speed']) ).tolist()
plt.plot(hor_accu, label='GPS Hor Accuracy', color=C_1)
plt.plot(ver_accu, label='GPS Vert Accuracy', color=C_2)
plt.plot(magnet, label='Magnetometer strength', color=C_3)
plt.plot(rssi, label='Cell RSSI strength', color=C_4)
plt.plot(gps_speed, label='GPS Speed', color=C_5)
plt.ylabel('Transitions via NNIO Classifier')
# These are peaks discovered via the classifier
# plot peaks via classifier method
for i, peak in enumerate(peaks_detected_by_classifier):
label = 'InOut transition via NNIO' if i == 0 else None
plt.axvline(x=peak, color=C_secondary, label=label, linestyle='--')
# plot the optimal transition point
plt.axvline(x=optimal_trans_dp, color=C_primary, label='Estimated InOut transition', linestyle='--')
# find true transition
true_transition = find_true_transition(dfa)
plt.axvline(x=true_transition, color=C_primary_2, label='True transition', linestyle='-')
plt.legend(loc=1)
plt.ylim((-1,2))
savefig('./images/{}_features.png'.format(dataset_name), bbox_inches='tight')
plt.show()
# ----------------------------
# IN/OUT GRAPH
# true in/out
plt.subplot(212)
plt.ylabel('1=in 0=out')
plt.plot(dfa['indoors'].tolist(), label='In/Out Truth')
plt.legend(loc=1)
plt.subplot(212)
# predicted in out
plt.ylabel('1=in 0=out')
plt.plot(dfa['indoors_prediction'].tolist(), label='In/Out prediction NNIO')
plt.legend(loc=0)
plt.ylim((-1,3))
savefig('./images/{}_IO_class.png'.format(dataset_name), bbox_inches='tight')
plt.show()
```
---
## TEST CLASSIFICATION
---
```
def run_flr_find(frames, frames_names):
meter_changes = []
for i, dfa in enumerate(frames):
dfa = frames[i]
test_name = frames_names[i]
# detect peaks
# peaks_detected_by_classifier = find_io_intervals_cosine(dfa, min_similarity=0.7) #0.1
peaks_detected_by_classifier = find_io_intervals(dfa, min_similarity=0.4) #0.1
# find last point a transition happened
last_transition_dp, last_transition_index = get_last_transition_dp(dfa, peaks_detected_by_classifier)
# last dp of user loc
last_dp = dfa.iloc[[-1]]
# find point where the optimal transition happened
# this is the lowest point around a transition window
optimal_trans_dp = find_optimal_transition_point(dfa, last_transition_index)
# make floor pred
total_meter_change = calculate_total_meter_change(optimal_trans_dp, last_dp)
meter_changes.append(abs(total_meter_change))
df = pd.DataFrame(meter_changes)
df.to_csv('/Users/waf/Desktop/meters.csv')
return df
focus_exp_name = ''
frames, frames_names = load_data(focus_exp_name)
#print('num experiments: ', len(frames))
meters = run_flr_find(frames,frames_names).values.flatten()
#print(meters)
def run(frames, frames_names, floor_height_meters=3.55, heights=None, plot=plot_classifications):
preds = []
flr_heights = []
for i, dfa in enumerate(frames):
dfa = frames[i]
test_name = frames_names[i]
if heights is not None:
test_start = test_name.split('_')[0]
floor_height_meters = heights[test_start]
# detect peaks
# peaks_detected_by_classifier = find_io_intervals_cosine(dfa, min_similarity=0.7) #0.1
peaks_detected_by_classifier = find_io_intervals(dfa, min_similarity=0.4) #0.1
# find last point a transition happened
last_transition_dp, last_transition_index = get_last_transition_dp(dfa, peaks_detected_by_classifier)
# last dp of user loc
last_dp = dfa.iloc[[-1]]
# find point where the optimal transition happened
# this is the lowest point around a transition window
optimal_trans_dp = find_optimal_transition_point(dfa, last_transition_index)
# make floor pred
current_floor, total_meter_change, predicted_floor_delta = predict_current_floor(optimal_trans_dp, last_dp, floor_height_meters=floor_height_meters)
# determine if indoors or not
io_status = get_io_status(dfa, last_transition_index)
# print real floor info
floor_delta, real_floor_start, real_floor_end = ground_truth_floor(test_name)
flr_heights.append(total_meter_change / floor_delta)
preds.append({'pred_delta': predicted_floor_delta, 'predicted': current_floor, 'real_delta': floor_delta, 'name': test_name, 'test_start': real_floor_start, 'test_end': real_floor_end})
# plot
dataset_name = test_name.split('.')[0]
if plot:
plot_classification(dfa, peaks_detected_by_classifier, optimal_trans_dp.index, dataset_name)
plot_baro(dfa, dataset_name)
#print(np.mean(flr_heights))
df = pd.DataFrame(preds)
df.to_csv('/Users/waf/Desktop/predictions.csv')
return df, flr_heights
focus_exp_name = ''
frames, frames_names = load_data(focus_exp_name)
print('num experiments: ', len(frames))
if use_building_based_m_val:
results, flr_heights = run(frames,frames_names, heights={'gsb': 3.8, 'mudd': 3.4, 'noco': 4.2, 'ssw': 3.9, 'rock': 3.8})
else:
results, flr_heights = run(frames,frames_names, 4.02)
print(results)
results['errors'] = abs(results.pred_delta) - abs(results.real_delta)
x = results.errors.values
# predicted in out
plt.figure(figsize=(6, 4))
fig, ax = plt.subplots()
plt.ylabel('Frequency')
plt.xlabel('Floor Error')
plt.hist(x, bins=range(-1, 5), normed=True, rwidth=0.5)
savefig('./images/floor_err.png', bbox_inches='tight')
plt.show()
def plot_err_dists(flr_delta=False):
bldgs = ['mudd', 'gsb', 'noco', 'ssw', 'rock']
# measure errors across flr height ranges
accs = []
# create evenly spaced line of floats
intervals = np.linspace(start=2.4, stop=5.0, num=11)
print(intervals)
results_mtx = np.zeros(shape=(len(bldgs), len(intervals)))
for i, bldg in enumerate(bldgs):
bldg_df, frames_names = load_data(bldg)
for j in range(len(intervals)):
height = intervals[j]
results, flr_heights = run(bldg_df,frames_names, height)
if flr_delta:
results['errors'] = abs(results.pred_delta) - abs(results.real_delta)
else:
results['errors'] = abs(results.predicted) - abs(results.test_end)
counts = results['errors'].value_counts()
errs = counts.to_dict()
acc = 0
if 0 in errs:
acc = (float(errs[0])/len(results))
results_mtx[i][j] = acc
return results_mtx, intervals
#accs, intervals = plot_err_dists()
accs, intervals = plot_err_dists(flr_delta=True)
accs.shape
plt.imshow(accs, cmap='hot_r', interpolation='nearest')
plt.xticks(np.linspace(0, 10, 11), intervals)
plt.yticks([0,1,2,3, 4], ['mudd', 'gsb', 'noco', 'ssw', 'rock'])
plt.ylabel('building')
plt.xlabel('floor-ceiling height estimate (m)')
plt.colorbar(ticks=[0, 0.5, 1.00], orientation='vertical', fraction=0.018)
plt.title('Distribution of prediction accuracies\nby building')
savefig('./images/heat.png', bbox_inches='tight')
plt.show()
counts = results['errors'].value_counts()
print(counts)
print('\npercents\n', counts/len(x))
```
### Run test on cluster dataset
```
focus_exp_name = ''
frames, frames_names = load_data(focus_exp_name, data_path='/Users/waf/Developer/temp_floor/floor/data/floor_cluster_test_data/data')
#print('num experiments: ', len(frames))
meters = run_flr_find(frames,frames_names).values.flatten()
# sort meter changes
meters = sorted(meters)
print(meters, '\n')
# list of heights as n goes to inf
pred_heights = []
for j in range(1, len(meters)):
local_meters = meters[0:j]
    # cluster by grouping items within 1.7 meters of the current cluster anchor
clusters = []
active_cluster = []
active_m = local_meters[0]
for m in local_meters:
diff = abs(m - active_m)
if diff <= 1.7:
# in same cluster
active_cluster.append(m)
else:
clusters.append(active_cluster)
active_cluster = [m]
active_m = m
clusters.append(active_cluster)
# try diff bn clusters
diffs_clusters = []
for i in range(len(clusters) - 1):
cluster_a = clusters[i]
cluster_b = clusters[i+1]
diffs_clusters.append(abs(np.median(cluster_a) - np.median(cluster_b)))
pred_height = np.mean(diffs_clusters)
pred_heights.append(pred_height)
print('predicted floor height = ', pred_height)
# try diff bn each measurement
msrs = []
for i in range(len(clusters) - 1):
c_a = clusters[i]
c_b = clusters[i+1]
for a in c_a:
for b in c_b:
diff = abs(a - b)
msrs.append(diff)
print(np.mean(msrs), np.median(msrs))
sns.distplot(msrs)
# estimated
est = []
pred_heights = []
# real
real_h = [5.461, 3.6576, 3.6576, 3.5, 3.5, 3.5, 3.5, 3.5]
z = abs(np.median(clusters[0]))
est.append(z)
print('1 - 2', z, 'real: ', real_h[0])
for i in range(len(clusters)-1):
z = abs(np.median(clusters[i]) - np.median(clusters[i + 1]))
est.append(z)
print(i + 2,'-', i + 3, z, 'real: ', real_h[i+1])
pred_heights.append(np.mean(est))
print('est: ', np.mean(est))
# plot predicted meter changes for each floor
ff = pd.DataFrame([0] + meters)
plt1 = plt.figure()
ax1 = plt1.add_subplot(111)
ax1.scatter(x=[0] * len(ff), y=ff)
for i in range(1, len(est)):
ax1.axhline(sum(est[:i]), linestyle='--')
ax1.axhline(sum(est), label='Floor', linestyle='--')
plt2 = plt.figure()
ax2 = plt2.add_subplot(111)
ax2.plot(pred_heights)
x = results.pred_delta.values.astype(np.float16)
# x += np.random.rand(len(x))
y = results.real_delta.values.astype(np.float16)
# y += np.random.rand(len(y))
# predicted in out
plt.figure(figsize=(6, 4))
fig, ax = plt.subplots()
plt.ylabel('Predicted floor')
plt.xlabel('True floor')
plt.title('m=4.02')
plt.scatter(x, y)
savefig('./images/floor_scatter.png', bbox_inches='tight')
plt.show()
```
---
# Look at building data distribution
This is how we learned the heuristic m=4.02 for office buildings
### Residential buildings
```
# plot distribution of heights
d = pd.read_json('../data/building_floor_distribution_data/buildings.json')
print(len(d))
bldgs = d[['height_architecture', 'floors_above','functions']]
office_bldgs = bldgs[bldgs.functions == 'office']
resi_bldgs = bldgs[bldgs.functions == 'residential']
bldgs = resi_bldgs
bldgs = bldgs.convert_objects(convert_numeric=True).dropna()
bldgs['flr_height'] = bldgs['height_architecture'] / bldgs['floors_above']
bldgs['flr_height'].describe()
bldgs['flr_height'].median()
# bldgs['flr_height'].plot.density()
ax= sns.distplot(bldgs['flr_height'], bins=100)
ax.set(xlabel='Floor height distribution (m)')
ax.set(title='Residential Buildings (n={})'.format(len(bldgs)))
savefig('./images/res_bldg.png', bbox_inches='tight')
```
---
### Office buildings
Look at building data distribution
```
# plot distribution of heights
d = pd.read_json('../data/building_floor_distribution_data/buildings.json')
print(len(d))
bldgs = d[['height_architecture', 'floors_above','functions']]
office_bldgs = bldgs[bldgs.functions == 'office']
bldgs = office_bldgs
bldgs = bldgs.convert_objects(convert_numeric=True).dropna()
bldgs['flr_height'] = bldgs['height_architecture'] / bldgs['floors_above']
bldgs['flr_height'].describe()
bldgs['flr_height'].median()
# bldgs['flr_height'].plot.density()
ax= sns.distplot(bldgs['flr_height'], bins=100)
ax.set(xlabel='Floor height distribution (m)')
ax.set(title='Office Buildings (n={})'.format(len(bldgs)))
savefig('./images/office_bldg.png', bbox_inches='tight')
```
---
```
import calendar
from datetime import datetime as pydt
import requests
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
import matplotlib.dates as mdates
import seaborn as sns
plt.style.use('seaborn-dark')
url = "https://api.midway.tomtom.com/ranking/liveHourly/ITA_rome"
# Request the data as json
url_req = requests.get(url)
url_json = url_req.json()
# Create a pandas dataframe from the json
traffic_data = pd.DataFrame(
    [{"UpdateTime": row["UpdateTime"], "TrafficIndexLive": row["TrafficIndexLive"]}
     for row in url_json["data"]],
    columns=["UpdateTime", "TrafficIndexLive"])  # DataFrame.append was removed in pandas 2.0
traffic_data['UpdateTime'] = pd.to_datetime(traffic_data['UpdateTime'], unit="ms")
traffic_data['TrafficIndexLive'] = traffic_data['TrafficIndexLive'].astype(int)
# Resample 1H
traffic_data = traffic_data.resample('1H', on='UpdateTime')["TrafficIndexLive"].mean().reset_index()
# Get hour and day
traffic_data["hour"] = traffic_data['UpdateTime'].dt.strftime('%H:00')
traffic_data["day"] = traffic_data['UpdateTime'].dt.strftime('%x')
# Creates a figure and one subplot
fig, ax = plt.subplots(figsize=(12, 6))
ax.set_title("Hourly congestion level, Last 7 days")
ax.plot(traffic_data['UpdateTime'], traffic_data['TrafficIndexLive'], color="r")
ax.yaxis.set_major_formatter(mtick.PercentFormatter())
ax.xaxis.set_major_locator(mdates.HourLocator(interval=10))
plt.xticks(rotation=45)
ax.set_ylim(-5, 140)
ax.grid(True)
# Show the plot
plt.tight_layout()
plt.show()
# Group by day, hour
traffic_data_hm = traffic_data.groupby(["day", "hour"])["TrafficIndexLive"].mean().reset_index()
# Create the 7 days heatmap
days = [pydt.strptime(date, "%x") for date in traffic_data_hm["day"].unique()]
month_day = ["{}, {}".format(calendar.month_name[date.month],
date.day) for date in days]
heatmap_pt = pd.pivot_table(traffic_data_hm,
values="TrafficIndexLive",
index=["hour"],
columns="day")
# Plot the heatmap
fig, ax = plt.subplots(figsize=(15,8))
sns.set()
ax = sns.heatmap(heatmap_pt/100, cmap="Reds",
center=0.00,
annot=True,
fmt='.0%',
linewidth=.5,
cbar=False)
ax.set_xlabel("Day")
ax.set_ylabel("Hour of the day")
ax.set_xticklabels(month_day)
plt.xticks(rotation=45)
plt.yticks(rotation=0)
plt.show()
```
# Human Keypoint Annotation with Mask R-CNN
In the earlier [Mask R-CNN](#) case study, we introduced the overall Mask R-CNN architecture. Mask R-CNN is a flexible, open framework that can be extended on top of this base to accomplish more AI tasks. In this case study, we show how to extend the basic Mask R-CNN model to annotate human body keypoints.
## Basic structure of the Mask R-CNN model
You may recall the overall Mask R-CNN architecture introduced earlier, with its three main networks:
- the backbone network, which generates feature maps
- the RPN network, which generates instance location, classification, and segmentation (mask) information
- the head network, which trains on the location, classification, and segmentation (mask) information
The head network has three branches for classification, bounding-box, and segmentation (mask) information. We can extend it by adding a human-keypoint branch and training it, giving the model the ability to analyze keypoints. The model structure then looks like this:

> In the head network, the red <span style="color:red">keypoints</span> branch is the newly added **human keypoint branch**
For a walkthrough of the Mask R-CNN model, see [this article](https://github.com/huaweicloud/ModelArts-Lab/wiki/Mask-R-CNN%E6%A8%A1%E5%9E%8B%E8%A7%A3%E6%9E%90).
This case study runs on TensorFlow 1.8.0.
## The keypoints branch
After the RPN generates proposals, whenever a proposal is classified as "Person" we generate a one-hot mask for each keypoint. The training target is a 56×56 binary mask in which only a single pixel is labeled as the keypoint and all other pixels are background. For each keypoint location, the average cross-entropy loss is minimized, and the K keypoints are handled independently.
In human pose detection, the person itself can be detected and classified as an object instance. With one-hot encoding, however, the approach extends to the 17 human keypoints annotated in the COCO dataset (e.g. left eye, right ear), and can also handle non-continuous numeric features.
The COCO dataset annotates 17 human keypoints: nose, left eye, right eye, left ear, right ear, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle, right ankle, as shown below:

## Training the Mask R-CNN keypoint model on ModelArts
### Preparing the data and source code
Step 1: prepare the dataset and the pre-trained model
```
from modelarts.session import Session
sess = Session()
sess.download_data(bucket_path='modelarts-labs-bj4/end2end/mask_rcnn_keypoints/mask_rcnn_keypoints.data.tgz',
path='./mask_rcnn_keypoints.data.tgz')
!tar zxf ./mask_rcnn_keypoints.data.tgz
!rm ./mask_rcnn_keypoints.data.tgz
```
After extraction, you get a `data` directory with the following structure:
```bash
data/
├── mask_rcnn_coco_humanpose.h5
├── annotations
│ ├── person_keypoints_train2014.json
│ ├── ***.json
├── train2014
│ ├── COCO_train2014_***.jpg
└── val2014
├── COCO_val2014_***.jpg
```
Here `data/mask_rcnn_coco_humanpose.h5` is the pre-trained model, while `annotations`, `train2014`, and `val2014` make up a minimal dataset we prepared in advance, containing annotations for 500 images.
Step 2: prepare the source code
```
sess.download_data(bucket_path='modelarts-labs-bj4/end2end/mask_rcnn_keypoints/mask_rcnn_keypoints.src.tgz',
path='./mask_rcnn_keypoints.src.tgz')
!tar zxf ./mask_rcnn_keypoints.src.tgz
!rm ./mask_rcnn_keypoints.src.tgz
```
Step 3: install the pycocotools dependency
This example uses the COCO dataset, which requires the pycocotools library
```
!pip install pycocotools
```
### Program initialization
Step 1: import the required libraries and define global variables
```
import sys
import random
import math
import re
import time
import numpy as np
import cv2
import matplotlib
import matplotlib.pyplot as plt
# from src.mrcnn.config import Config
from src.mrcnn import coco
from src.mrcnn import utils
import src.mrcnn.model as modellib
from src.mrcnn import visualize
from src.mrcnn.model import log
%matplotlib inline
# Directory to save logs and trained model
MODEL_DIR = "logs"
# Local path to trained weights file
COCO_HUMANPOSE_MODEL_PATH = "data/mask_rcnn_coco_humanpose.h5"
```
Step 2: create the configuration
We define `DemoTrainConfig`, a subclass of `CocoConfig`, to specify the parameters. The key ones are:
- __NAME__: a unique name for the config
- __NUM_CLASSES__: the number of classes; for keypoint training we only need background and person, so 2 classes in total
- __IMAGE_MIN_DIM__ and __IMAGE_MAX_DIM__: the minimum and maximum image dimensions; images are resized to 1024 here
- __TRAIN_ROIS_PER_IMAGE__: the number of RoIs trained per image
- __STEPS_PER_EPOCH__ and __VALIDATION_STEPS__: the number of steps per epoch for training and validation; fewer steps speed up training at the cost of detection accuracy
```
class DemoTrainConfig(coco.CocoConfig):
# a recognizable name
NAME = "demo_train"
# number of GPUs and images per GPU; adjust for your hardware (reference: NVIDIA Tesla P100)
GPU_COUNT = 1
IMAGES_PER_GPU = 1
# number of object classes; for keypoint training we only need the BG and Person classes
NUM_CLASSES = 1 + 1  # background + person
# images are resized to 1024; this can be reduced if needed
IMAGE_MIN_DIM = 1024
IMAGE_MAX_DIM = 1024
# smaller anchors can be used for RoI detection when the target objects are small
# RPN_ANCHOR_SCALES = (8, 16, 32, 64, 128) # anchor side in pixels
# number of RoIs trained per image; this can be reduced
TRAIN_ROIS_PER_IMAGE = 100
# number of training steps per epoch
STEPS_PER_EPOCH = 100
# number of validation steps per epoch
VALIDATION_STEPS = 20
config = DemoTrainConfig()
config.display()
```
Step 3: create the dataset objects
We use the pre-packaged CocoDataset class to build the training and validation sets.
```
from src.mrcnn.coco import CocoDataset
COCO_DIR = 'data'
# build the training set
dataset_train = CocoDataset(task_type="person_keypoints")
dataset_train.load_coco(COCO_DIR, "train", "2014")  # load the training dataset
dataset_train.prepare()
# build the validation set
dataset_val = CocoDataset(task_type="person_keypoints")
dataset_val.load_coco(COCO_DIR, "val", "2014")  # load the validation dataset
dataset_val.prepare()
# print keypoint-related information about the datasets
print("Train Keypoints Image Count: {}".format(len(dataset_train.image_ids)))
print("Train Keypoints Class Count: {}".format(dataset_train.num_classes))
for i, info in enumerate(dataset_train.class_info):
print("{:3}. {:50}".format(i, info['name']))
print("Val Keypoints Image Count: {}".format(len(dataset_val.image_ids)))
print("Val Keypoints Class Count: {}".format(dataset_val.num_classes))
for i, info in enumerate(dataset_val.class_info):
print("{:3}. {:50}".format(i, info['name']))
```
## Creating the model
Create the model object in "training" mode and load the pre-trained weights
```
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="training", model_dir=MODEL_DIR, config=config)
# Load weights trained on MS-COCO
# model.load_weights(COCO_MODEL_PATH, by_name=True,exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
# "mrcnn_bbox", "mrcnn_mask"])
COCO_HUMANPOSE_MODEL_PATH = './data/mask_rcnn_coco_humanpose.h5'
# Load weights trained on MS-COCO
print("Loading weights from ", COCO_HUMANPOSE_MODEL_PATH)
model.load_weights(COCO_HUMANPOSE_MODEL_PATH, by_name=True)
# model.keras_model.summary()
```
## Training the model
A Keras model can be built up from specified layers; in the model's train method, the layers parameter selects which layers to train. It accepts the following preset values:
- heads: train only the classification, mask, and bbox regression branches of the head network
- all: all layers
- 3+: train ResNet stage 3 and the following stages
- 4+: train ResNet stage 4 and the following stages
- 5+: train ResNet stage 5 and the following stages
The layers parameter also accepts a regular expression that selects layers by name; call model.keras_model.summary() to list the layer names, then specify the layers you want to train.
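As a small illustration of regex-based layer selection (the layer names below are examples chosen for the sketch, not an exhaustive list from the model), selecting all RPN and head layers might look like:

```python
import re

# Train only layers whose names start with rpn_ or mrcnn_
layer_regex = r"(rpn\_.*)|(mrcnn\_.*)"
layer_names = ["conv1", "rpn_model", "mrcnn_class_logits",
               "mrcnn_keypoint_mask", "res4a_branch2a"]
trainable = [name for name in layer_names if re.fullmatch(layer_regex, name)]
print(trainable)  # -> ['rpn_model', 'mrcnn_class_logits', 'mrcnn_keypoint_mask']
```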
We train different sets of layers in turn. First, train the four branches of the head network:
```
# Training - Stage 1
print("Train heads")
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE,
epochs=1,
layers='heads')
```
Then train ResNet stage 4 and the following stages
```
# Training - Stage 2
# Finetune layers from ResNet stage 4 and up
# print("Training Resnet layer 4+")
# model.train(dataset_train, dataset_val,
# learning_rate=config.LEARNING_RATE / 10,
# epochs=1,
# layers='4+')
```
Finally, fine-tune all layers and save the trained model locally
```
# Training - Stage 3
# Finetune layers from ResNet stage 3 and up
print("Training Resnet layer 3+")
model.train(dataset_train, dataset_val,
learning_rate=config.LEARNING_RATE / 100,
epochs=2,
layers='all')
model_savepath = 'demo_mrcnn_humanpose_model.h5'
model.keras_model.save_weights(model_savepath)
```
## Detecting objects in images with the model
Step 1: create a model object in "inference" mode and load the model file we trained
```
# Recreate the model in inference mode
inference_model = modellib.MaskRCNN(mode="inference",
config=config,
model_dir=MODEL_DIR)
# load the weights from the model file we trained ourselves
print("Loading weights from ", model_savepath)
inference_model.load_weights(model_savepath, by_name=True)
```
Step 2: pick a random image from the validation set and display its ground-truth information
```
# pick a random image for testing
image_id = random.choice(dataset_val.image_ids)
original_image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset_val, config,
image_id, use_mini_mask=False)
log("original_image", original_image)
log("image_meta", image_meta)
log("gt_class_id", gt_class_id)
log("gt_bbox", gt_bbox)
log("gt_mask", gt_mask)
visualize.display_instances(original_image, gt_bbox, gt_mask, gt_class_id,
dataset_train.class_names, figsize=(8, 8))
```
Step 3: run the model on the image and display the results
```
results = inference_model.detect_keypoint([original_image], verbose=1)
r = results[0] # for one image
log("rois", r['rois'])
log("keypoints", r['keypoints'])
log("class_ids", r['class_ids'])
log("masks", r['masks'])
log("scores", r['scores'])
# helper function to set the rows and columns of matplotlib subplot areas
def get_ax(rows=1, cols=1, size=8):
_, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
return ax
visualize.display_keypoints(original_image, r['rois'], r['keypoints'], r['class_ids'],
dataset_train.class_names,skeleton=config.LIMBS, ax=get_ax())
```
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
import matplotlib
# matplotlib.use("Agg")
import matplotlib.pyplot as plt
import os
import datetime
import numpy as np
from torch.nn import MSELoss
# get data from Oscar, might need to rewrite to fit the data structure he has
class MyDataset(Dataset):
def __init__(self, data, target, transform=None):
self.data = torch.from_numpy(data).float()
self.target = torch.from_numpy(target).long()
self.transform = transform
def __getitem__(self, index):
x = self.data[index]
y = self.target[index]
if self.transform:
x = self.transform(x)
return x, y
def __len__(self):
return len(self.data)
numpy_data = np.random.randn(100,112, 8, 8) # 100 samples, image size = 112 x 8 x 8
numpy_target = np.random.randn(100, 200)
from scipy.special import softmax
numpy_target = softmax(numpy_target, axis = 1)
action_size = 200
num_resblock = 4
dataset = MyDataset(numpy_data, numpy_target)
loader = DataLoader(dataset, batch_size=5, shuffle=True, num_workers=2, pin_memory=False) # Running on CPU
# convblock for doing convolutional work
class ConvBlock(nn.Module):
def __init__(self):
super(ConvBlock, self).__init__()
self.action_size = action_size
self.conv1 = nn.Conv2d(112, 256, 3, stride=1, padding=1)
self.bn1 = nn.BatchNorm2d(256)
def forward(self, s):
s = s.view(-1, 112, 8, 8) # batch_size x channels x board_x x board_y
s = F.relu(self.bn1(self.conv1(s)))
return s
# Resblock to do residual block: x + conv output (x)
class ResBlock(nn.Module):
def __init__(self, inplanes=256, planes=256, stride=1, downsample=None):
super(ResBlock, self).__init__()
self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=3, stride=stride,
padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(planes)
self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
padding=1, bias=False)
self.bn2 = nn.BatchNorm2d(planes)
def forward(self, x):
residual = x
out = self.conv1(x)
out = F.relu(self.bn1(out))
out = self.conv2(out)
out = self.bn2(out)
out += residual
out = F.relu(out)
return out
# final FC layers
# get more layer in the last round + flatten
class OutBlock(nn.Module):
def __init__(self):
super(OutBlock, self).__init__()
self.conv = nn.Conv2d(256, 1, kernel_size=1) # value head
self.bn = nn.BatchNorm2d(1)
self.fc1 = nn.Linear(8*8, 100)
self.fc2 = nn.Linear(100, action_size)
# self.conv1 = nn.Conv2d(256, 128, kernel_size=1) # policy head
# self.bn1 = nn.BatchNorm2d(128)
# self.logsoftmax = nn.LogSoftmax(dim=1)
# self.fc = nn.Linear(8*8*128, 8*8*73)
def forward(self,s):
v = F.relu(self.bn(self.conv(s))) # value head
v = v.view(-1, 8*8) # batch_size X channel X height X width
v = F.relu(self.fc1(v))
v = F.relu(self.fc2(v))
# p = F.relu(self.bn1(self.conv1(s))) # policy head
# p = p.view(-1, 8*8*128)
# p = self.fc(p)
# p = self.logsoftmax(p).exp()
# print("haha", v.shape)
return v
# stacking conv block + a bunch of res block + out block
class ChessNet(nn.Module):
def __init__(self):
super(ChessNet, self).__init__()
self.conv = ConvBlock()
for block in range(num_resblock):
setattr(self, "res_%i" % block,ResBlock())
self.outblock = OutBlock()
def forward(self,s):
s = self.conv(s)
for block in range(num_resblock):
s = getattr(self, "res_%i" % block)(s)
s = self.outblock(s)
return s
# training
def train(net, dataset, epoch_start=0, epoch_stop=20, cpu=0):
torch.manual_seed(cpu)
cuda = torch.cuda.is_available()
net.train()
criterion = MSELoss()
optimizer = optim.Adam(net.parameters(), lr=0.003)
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100,200,300,400], gamma=0.2)
# train_set = board_data(dataset)
# train_loader = DataLoader(train_set, batch_size=30, shuffle=True, num_workers=0, pin_memory=False)
train_loader = dataset
losses_per_epoch = []
for epoch in range(epoch_start, epoch_stop):
scheduler.step()
total_loss = 0.0
losses_per_batch = []
for i,data in enumerate(train_loader,0):
state, value = data
# print("loll", value.shape)
if cuda:
state, value = state.cuda().float(), value.cuda().float()
optimizer.zero_grad()
value_pred = net(state) # policy_pred = torch.Size([batch, 4672]) value_pred = torch.Size([batch, 1])
# print(value_pred, value)
# print(value_pred.shape, value.shape)
loss = criterion(value_pred, value)
loss.backward()
optimizer.step()
total_loss += loss.item()
losses_per_batch.append(loss.item())
try:
losses_per_epoch.append(np.mean(losses_per_batch))
except:
losses_per_epoch.append(0.1)
if len(losses_per_epoch) > 100:
if abs(sum(losses_per_epoch[-4:-1])/3-sum(losses_per_epoch[-16:-13])/3) <= 0.01:
break
plt.figure(figsize=(12,8))
# ax = fig.add_subplot(222)
plt.scatter(range(1, len(losses_per_epoch) + 1), losses_per_epoch)  # x matches the epochs actually run (early stopping may break out early)
plt.xlabel("Epoch")
plt.ylabel("Loss per batch")
plt.title("Loss vs Epoch")
print('Finished Training')
plt.savefig(os.path.join("./", "Loss_vs_Epoch_%s.png" % datetime.datetime.today().strftime("%Y-%m-%d")))
# from alpha_net import ChessNet, train
import os
import pickle
import numpy as np
import torch
def train_chessnet(net_to_train=None, save_as="weights.pth.tar"):
# gather data
# data_path = "./datasets/iter1/"
# datasets = []
# for idx,file in enumerate(os.listdir(data_path)):
# filename = os.path.join(data_path,file)
# with open(filename, 'rb') as fo:
# datasets.extend(pickle.load(fo, encoding='bytes'))
# data_path = "./datasets/iter0/"
# for idx,file in enumerate(os.listdir(data_path)):
# filename = os.path.join(data_path,file)
# with open(filename, 'rb') as fo:
# datasets.extend(pickle.load(fo, encoding='bytes'))
# datasets = np.array(datasets)
# train net
datasets = loader
net = ChessNet()
cuda = torch.cuda.is_available()
if cuda:
net.cuda()
if net_to_train:
current_net_filename = os.path.join("./model_data/",\
net_to_train)
checkpoint = torch.load(current_net_filename)
net.load_state_dict(checkpoint['state_dict'])
train(net,datasets)
# save results
# torch.save({'state_dict': net.state_dict()}, os.path.join("./model/",\
# save_as))
if __name__=="__main__":
train_chessnet()
```
<a href="https://colab.research.google.com/github/abegpatel/movie-recomendation-system-using-auto-encoder/blob/master/autoencoder.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**AUTOENCODERS:**
- autoencoders
- training an autoencoder
- overcomplete hidden layers
- sparse autoencoders
- denoising autoencoders
- contractive autoencoders
- stacked autoencoders
- deep autoencoders
**Autoencoders**
Used for recommendation systems.
visible input nodes -> encoding -> hidden layer -> decoding -> visible output layer
- the network encodes its own input
- a self-supervised model
- used for feature detection
- used for powerful recommendation systems
- used for encoding
e.g. 4 movies as input -> hidden layer -> 4 visible output nodes
Softmax function:
- takes the highest value
- converts the highest value to 1 and the rest to 0
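The softmax-to-one-hot step sketched in the notes above (keep the highest value as 1, set the rest to 0) can be written in a few lines of numpy; this is illustrative and separate from the recommender code further down:

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a vector of scores
    e = np.exp(x - x.max())
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.5, 3.0])   # raw outputs for 4 movies
probs = softmax(scores)                    # probabilities summing to 1
one_hot = (probs == probs.max()).astype(int)
print(one_hot)  # -> [0 0 0 1]
```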
**Training an autoencoder**
1. We start with an array whose rows correspond to users and whose columns correspond to movies.
2. The first user's row goes into the network; the input vector contains that user's ratings for all movies.
3. The input vector x is encoded into a vector z by the mapping function z = f(Wx + b), where W are the weights and b is the bias.
4. z is decoded into an output vector y of the same dimension as x.
5. The reconstruction error d(x, y) = ||x - y|| is computed; the goal is to minimize it.
6. The error is back-propagated from right to left and the weights are updated (gradient descent).
7. Repeat steps 1-6 for every user.
8. Run more epochs.
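Steps 3-5 above can be sketched in a few lines of numpy; the random weights and the choice of a sigmoid for the mapping function f are assumptions made for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_movies, n_hidden = 6, 3
W_enc = rng.standard_normal((n_hidden, n_movies))  # encoder weights W
b_enc = np.zeros(n_hidden)                         # encoder bias b
W_dec = rng.standard_normal((n_movies, n_hidden))  # decoder weights
b_dec = np.zeros(n_movies)

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

x = rng.random(n_movies)            # one user's ratings for all movies
z = sigmoid(W_enc @ x + b_enc)      # step 3: encode, z = f(Wx + b)
y = W_dec @ z + b_dec               # step 4: decode back to movie space
error = np.linalg.norm(x - y)       # step 5: reconstruction error ||x - y||
print(y.shape == x.shape, error >= 0)  # -> True True
```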
**Overcomplete hidden layers**
If the hidden layer has more nodes than the input layer, the network can cheat: it passes the input straight through to the output and leaves the extra hidden nodes unused.
**Sparse autoencoders**
The hidden layer is larger than the input layer (so the network could cheat).
- a regularization technique is applied (prevents overfitting, stabilizes the algorithm)
- only a certain number of nodes is used at a time
**Denoising autoencoders**
Used when we have a large hidden layer.
- a regularization technique
- a modified version of the input is used, with values randomly set to 0
- the output is compared to the original values
- a stochastic autoencoder
**Contractive autoencoders**
- a regularization technique
- adds a penalty term to the loss function
**Stacked autoencoders**
- adds a second hidden layer to the autoencoder
- hidden layer -> encoding -> hidden layer
- a directed neural network
**Deep autoencoders**
input layer -> hidden layers 1, 2, 3, ... -> output layer (a stack of RBMs)
```
!unzip -uq "/content/drive/My Drive/P16-AutoEncoders.zip" -d "/content/drive/My Drive/"
!unzip -uq "/content/drive/My Drive/AutoEncoders/ml-100k.zip" -d "/content/drive/My Drive/AutoEncoders/"
!unzip -uq "/content/drive/My Drive/AutoEncoders/ml-1m.zip" -d "/content/drive/My Drive/AutoEncoders/"
# AutoEncoders
# Importing the libraries
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.optim as optim
import torch.utils.data
from torch.autograd import Variable
# Importing the dataset
movies = pd.read_csv('/content/drive/My Drive/AutoEncoders/ml-1m/movies.dat', sep = '::', header = None, engine = 'python', encoding = 'latin-1')
users = pd.read_csv('/content/drive/My Drive/AutoEncoders/ml-1m/users.dat', sep = '::', header = None, engine = 'python', encoding = 'latin-1')
ratings = pd.read_csv('/content/drive/My Drive/AutoEncoders/ml-1m/ratings.dat', sep = '::', header = None, engine = 'python', encoding = 'latin-1')
movies
# Preparing the training set and the test set
training_set = pd.read_csv('/content/drive/My Drive/AutoEncoders/ml-100k/u1.base', delimiter = '\t')
training_set = np.array(training_set, dtype = 'int')
test_set = pd.read_csv('/content/drive/My Drive/AutoEncoders/ml-100k/u1.test', delimiter = '\t')
test_set = np.array(test_set, dtype = 'int')
# Getting the number of users and movies
nb_users = int(max(max(training_set[:,0]), max(test_set[:,0])))
nb_movies = int(max(max(training_set[:,1]), max(test_set[:,1])))
# Converting the data into an array with users in lines and movies in columns
def convert(data):
new_data = []
for id_users in range(1, nb_users + 1):
id_movies = data[:,1][data[:,0] == id_users]
id_ratings = data[:,2][data[:,0] == id_users]
ratings = np.zeros(nb_movies)
ratings[id_movies - 1] = id_ratings
new_data.append(list(ratings))
return new_data
training_set = convert(training_set)
test_set = convert(test_set)
# Converting the data into Torch tensors
training_set = torch.FloatTensor(training_set)
test_set = torch.FloatTensor(test_set)
# Creating the architecture of the Neural Network
class SAE(nn.Module):
def __init__(self, ):
super(SAE, self).__init__()
self.fc1 = nn.Linear(nb_movies, 20)
self.fc2 = nn.Linear(20, 10)
self.fc3 = nn.Linear(10, 20)
self.fc4 = nn.Linear(20, nb_movies)
self.activation = nn.Sigmoid()
def forward(self, x):
x = self.activation(self.fc1(x))
x = self.activation(self.fc2(x))
x = self.activation(self.fc3(x))
x = self.fc4(x)
return x
sae = SAE()
criterion = nn.MSELoss()
optimizer = optim.RMSprop(sae.parameters(), lr = 0.01, weight_decay = 0.5)
# Training the SAE
nb_epoch = 200
for epoch in range(1, nb_epoch + 1):
train_loss = 0
s = 0.
for id_user in range(nb_users):
input = Variable(training_set[id_user]).unsqueeze(0)
target = input.clone()
if torch.sum(target.data > 0) > 0:
output = sae(input)
target.requires_grad = False
output[target == 0] = 0
loss = criterion(output, target)
mean_corrector = nb_movies/float(torch.sum(target.data > 0) + 1e-10)
loss.backward()
train_loss += np.sqrt(loss.data*mean_corrector)
s += 1.
optimizer.step()
print('epoch: '+str(epoch)+' loss: '+str(train_loss/s))
# Testing the SAE
test_loss = 0
s = 0.
for id_user in range(nb_users):
input = Variable(training_set[id_user]).unsqueeze(0)
target = Variable(test_set[id_user])
if torch.sum(target.data > 0) > 0:
output = sae(input)
target.requires_grad = False
output[(target == 0).unsqueeze(0)] = 0
loss = criterion(output, target)
mean_corrector = nb_movies/float(torch.sum(target.data > 0) + 1e-10)
test_loss += np.sqrt(loss.data*mean_corrector)
s += 1.
print('test loss: '+str(test_loss/s))
```
# Running a Federated Cycle with Synergos
In a federated learning system, there are many contributory participants, known as Worker nodes, which receive a global model to train on, with their own local dataset. The dataset does not leave the individual Worker nodes at any point, and remains private to the node.
The job of synchronizing, orchestrating, and initiating a federated learning cycle falls on a Trusted Third Party (TTP). The TTP pushes out the global model architecture and parameters for the individual nodes to train on, calling upon the required data based on tags (e.g. "training") that point to the relevant data on the individual nodes. At no point does the TTP receive, copy or access the Worker nodes' local datasets.

This tutorial aims to give you an understanding of how to use the synergos package to run a full federated learning cycle on a `Synergos Plus` grid.
In a `Synergos Plus` grid, you have access to a suite of quality-of-life addons that can help facilitate distributed operations. Such components include centralized logging via [Synergos Logger](https://github.com/aimakerspace/synergos_logger), artifact management via [Synergos MLOps](https://github.com/aimakerspace/synergos_mlops), with more component support in the works.
In this tutorial, you will go through the steps required by each participant (TTP and Worker), by simulating each of them locally with docker containers. Specifically, we will simulate a TTP and 2 Workers.
At the end of this, we will have:
- Connected the participants
- Trained the model
- Evaluated the model
## About the Dataset and Task
The dataset used in this notebook is a small subset of Federated EMNIST (FEMNIST) images, comprising 3 classes; all images are 28 x 28 pixels. The dataset is available in the same directory as this notebook. Within the dataset directory, `data1` is for Worker 1 and `data2` is for Worker 2. The task to be carried out is multi-class classification.
The dataset we have provided is a processed subset of the original FEMNIST dataset retrieved from [here](https://github.com/TalwalkarLab/leaf/tree/master/data/femnist).
## Initiating the docker containers
Before we begin, we have to start the docker containers.
### A. Initialization via `Synergos Simulator`
In `Synergos Simulator`, a sandboxed environment has been created for you!
By running:
`docker-compose -f docker-compose-synplus.yml up --build`
the following components will be started:
- TTP (Basic)
- Worker_1
- Worker_2
- Synergos UI
- Synergos Logger
- Synergos MLOps
Refer to [this](https://github.com/aimakerspace/synergos_simulator) for all the pre-allocated host & port mappings.
### B. Manual Initialization
Firstly, pull the required docker images with the following commands:
1. Synergos TTP (Basic):
`docker pull gcr.io/synergos-aisg/synergos_ttp:v0.1.0`
2. Synergos Worker:
`docker pull gcr.io/synergos-aisg/synergos_worker:v0.1.0`
3. Synergos MLOps:
`docker pull gcr.io/synergos-aisg/synergos_mlops:v0.1.0`
Next, in <u>separate</u> CLI terminals, run the following command(s):
**Note: For Windows users, it is advisable to use powershell or command prompt based interfaces**
**TTP**
```
docker run
-p 5000:5000
-p 8020:8020
-v <directory femnist/orchestrator_outputs>:/orchestrator/outputs
-v <directory femnist/orchestrator_data>:/orchestrator/data
-v <directory femnist/mlflow>:/mlflow
--name ttp
gcr.io/synergos-aisg/synergos_ttp:v0.1.0
--id ttp
--logging_variant graylog <IP Synergos Logger> <TTP port>
```
**Worker 1**
```
docker run
-p 5001:5000
-p 8021:8020
-v <directory femnist/data1>:/worker/data
-v <directory femnist/outputs_1>:/worker/outputs
--name worker_1
gcr.io/synergos-aisg/synergos_worker:v0.1.0
--id worker_1
--logging_variant graylog <IP Synergos Logger> <Worker port>
```
**Worker 2**
```
docker run
-p 5002:5000
-p 8022:8020
-v <directory femnist/data2>:/worker/data
-v <directory femnist/outputs_2>:/worker/outputs
--name worker_2
gcr.io/synergos-aisg/synergos_worker:v0.1.0
--id worker_2
--logging_variant graylog <IP Synergos Logger> <Worker port>
```
**Synergos MLOps**
```
docker run --rm
-p 5500:5500
-v /path/to/mlflow_test/:/mlflow # <-- IMPT! Same as orchestrator's
--name synmlops
gcr.io/synergos-aisg/synergos_mlops:v0.1.0
```
**Synergos UI**
- Refer to these [instructions](https://github.com/aimakerspace/synergos_ui) to deploy `Synergos UI`.
**Synergos Logger**
- Refer to these [instructions](https://github.com/aimakerspace/synergos_logger) to deploy `Synergos Logger`.
Once ready, for each terminal, you should see a REST server running on http://0.0.0.0:5000 of the container.
You are now ready for the next step.
## Configurations
### A. Configuring `Synergos Simulator`
All hosts & ports have already been pre-allocated!
Refer to [this](https://github.com/aimakerspace/synergos_simulator) for all the pre-allocated host & port mappings.
### B. Configuring your manual setup
In a new terminal, run `docker inspect bridge` and find the IPv4Address for each container. Ideally, the containers should have the following addresses:
- ttp address: `172.17.0.2`
- worker_1 address: `172.17.0.3`
- worker_2 address: `172.17.0.4`
- UI address: `172.17.0.5`
- Logger address: `172.17.0.8`
- MLOps address: `172.17.0.9`
If not, just note the relevant IP addresses for each docker container.
Run the following cells below.
**Note: For Windows users, `host` should be Docker Desktop VM's IP. Follow [this](https://stackoverflow.com/questions/58073936/how-to-get-ip-address-of-docker-desktop-vm) on instructions to find IP**
```
from synergos import Driver
host = "172.19.0.2"
port = 5000
# Initiate Driver
driver = Driver(host=host, port=port)
```
## Phase 1: Registration
Submitting Orchestrator & Participant metadata
#### 1A. Orchestrator creates a collaboration
```
collab_task = driver.collaborations
collab_task.configure_logger(
host="172.19.0.10",
port=9000,
sysmetrics_port=9100,
director_port=9200,
ttp_port=9300,
worker_port=9400,
ui_port=9000,
secure=False
)
collab_task.configure_mlops(
host="172.19.0.11",
port=5500,
ui_port=5500,
secure=False
)
collab_task.create('femnist_synplus_collaboration')
```
#### 1B. Orchestrator creates a project
```
driver.projects.create(
collab_id="femnist_synplus_collaboration",
project_id="femnist_synplus_project",
action="classify",
incentives={
'tier_1': [],
'tier_2': [],
}
)
```
#### 1C. Orchestrator creates an experiment
```
driver.experiments.create(
collab_id="femnist_synplus_collaboration",
project_id="femnist_synplus_project",
expt_id="femnist_synplus_experiment",
model=[
{
"activation": "relu",
"is_input": True,
"l_type": "Conv2d",
"structure": {
"in_channels": 1,
"out_channels": 4,
"kernel_size": 3,
"stride": 1,
"padding": 1
}
},
{
"activation": None,
"is_input": False,
"l_type": "Flatten",
"structure": {}
},
{
"activation": "softmax",
"is_input": False,
"l_type": "Linear",
"structure": {
"bias": True,
"in_features": 4 * 28 * 28,
"out_features": 3
}
}
]
)
```
#### 1D. Orchestrator creates a run
```
driver.runs.create(
collab_id="femnist_synplus_collaboration",
project_id="femnist_synplus_project",
expt_id="femnist_synplus_experiment",
run_id="femnist_synplus_run",
rounds=2,
epochs=1,
base_lr=0.0005,
max_lr=0.005,
criterion="NLLLoss"
)
```
#### 1E. Participants registers their servers' configurations and roles
```
participant_resp_1 = driver.participants.create(
participant_id="worker_1",
)
display(participant_resp_1)
participant_resp_2 = driver.participants.create(
participant_id="worker_2",
)
display(participant_resp_2)
registration_task = driver.registrations
# Add and register worker_1 node
registration_task.add_node(
host='172.19.0.3',
port=8020,
f_port=5000,
log_msgs=True,
verbose=True
)
registration_task.create(
collab_id="femnist_synplus_collaboration",
project_id="femnist_synplus_project",
participant_id="worker_1",
role="host"
)
registration_task = driver.registrations
# Add and register worker_2 node
registration_task.add_node(
host='172.19.0.4',
port=8020,
f_port=5000,
log_msgs=True,
verbose=True
)
registration_task.create(
collab_id="femnist_synplus_collaboration",
project_id="femnist_synplus_project",
participant_id="worker_2",
role="guest"
)
```
#### 1F. Participants registers their tags for a specific project
```
# Worker 1 declares their data tags
driver.tags.create(
collab_id="femnist_synplus_collaboration",
project_id="femnist_synplus_project",
participant_id="worker_1",
train=[["femnist", "dataset", "data1", "train"]],
evaluate=[["femnist", "dataset", "data1", "evaluate"]]
)
# Worker 2 declares their data tags
driver.tags.create(
collab_id="femnist_synplus_collaboration",
project_id="femnist_synplus_project",
participant_id="worker_2",
train=[["femnist", "dataset", "data2", "train"]],
evaluate=[["femnist", "dataset", "data2", "evaluate"]]
)
```
## Phase 2:
Alignment, Training & Optimisation
#### 2A. Perform multiple feature alignment to dynamically configure datasets and models for cross-grid compatibility
```
driver.alignments.create(
collab_id='femnist_synplus_collaboration',
project_id="femnist_synplus_project",
verbose=False,
log_msg=False
)
```
#### 2B. Trigger training across the federated grid
```
model_resp = driver.models.create(
collab_id="femnist_synplus_collaboration",
project_id="femnist_synplus_project",
expt_id="femnist_synplus_experiment",
run_id="femnist_synplus_run",
log_msg=False,
verbose=False
)
display(model_resp)
```
## Phase 3: EVALUATE
Validation & Predictions
#### 3A. Perform validation(s) of combination(s)
```
# Orchestrator performs post-mortem validation
driver.validations.create(
collab_id="femnist_synplus_collaboration",
project_id="femnist_synplus_project",
expt_id="femnist_synplus_experiment",
run_id="femnist_synplus_run",
log_msg=False,
verbose=False
)
```
#### 3B. Perform prediction(s) of combination(s)
```
# Worker 1 requests for inferences
driver.predictions.create(
tags={"femnist_synplus_project": [["femnist", "dataset", "data1", "predict"]]},
participant_id="worker_1",
collab_id="femnist_synplus_collaboration",
project_id="femnist_synplus_project",
expt_id="femnist_synplus_experiment",
run_id="femnist_synplus_run"
)
# Worker 2 requests for inferences
driver.predictions.create(
tags={"femnist_synplus_project": [["femnist", "dataset", "data2", "predict"]]},
participant_id="worker_2",
collab_id="femnist_synplus_collaboration",
project_id="femnist_synplus_project",
expt_id="femnist_synplus_experiment",
run_id="femnist_synplus_run"
)
```
# Applying Automatic Data Augmentation
[View source on Gitee](https://gitee.com/mindspore/docs/blob/master/docs/notebook/mindspore_enable_auto_augmentation.ipynb)
## Overview
Automatic data augmentation (AutoAugment) uses a search algorithm to find, within a search space of image-augmentation sub-policies, an augmentation scheme suited to a particular dataset. MindSpore's `c_transforms` module provides a rich set of C++ operators for implementing AutoAugment, and users can also implement it with custom functions or operators. For detailed descriptions of more MindSpore operators, see the [API documentation](https://www.mindspore.cn/docs/api/zh-CN/master/api_python/mindspore.dataset.vision.html).
The correspondence between AutoAugment operators and MindSpore operators is as follows:
| AutoAugment operator | MindSpore operator | Description |
| :------: | :------ | ------ |
| shearX | RandomAffine | horizontal shear |
| shearY | RandomAffine | vertical shear |
| translateX | RandomAffine | horizontal translation |
| translateY | RandomAffine | vertical translation |
| rotate | RandomRotation | rotation |
| color | RandomColor | color transformation |
| posterize | RandomPosterize | reduce the number of color channel bits |
| solarize | RandomSolarize | invert all pixels within the specified threshold range |
| contrast | RandomColorAdjust | adjust contrast |
| sharpness | RandomSharpness | adjust sharpness |
| brightness | RandomColorAdjust | adjust brightness |
| autocontrast | AutoContrast | maximize image contrast |
| equalize | Equalize | equalize the image histogram |
| invert | Invert | invert the image |
> This document applies to CPU, GPU, and Ascend environments.
## Overall Flow
- Preparation.
- Automatic data augmentation on CIFAR-10.
## Preparation
### Download the Dataset
The following sample code downloads the dataset and extracts it to the specified location.
```
import os
import requests
import tarfile
import zipfile
import shutil
requests.packages.urllib3.disable_warnings()
def download_dataset(url, target_path):
"""download and decompress dataset"""
if not os.path.exists(target_path):
os.makedirs(target_path)
download_file = url.split("/")[-1]
if not os.path.exists(download_file):
res = requests.get(url, stream=True, verify=False)
if download_file.split(".")[-1] not in ["tgz", "zip", "tar", "gz"]:
download_file = os.path.join(target_path, download_file)
with open(download_file, "wb") as f:
for chunk in res.iter_content(chunk_size=512):
if chunk:
f.write(chunk)
if download_file.endswith("zip"):
z = zipfile.ZipFile(download_file, "r")
z.extractall(path=target_path)
z.close()
if download_file.endswith(".tar.gz") or download_file.endswith(".tar") or download_file.endswith(".tgz"):
t = tarfile.open(download_file)
names = t.getnames()
for name in names:
t.extract(name, target_path)
t.close()
print("The {} file is downloaded and saved in the path {} after processing".format(os.path.basename(url), target_path))
download_dataset("https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/datasets/cifar-10-binary.tar.gz", "./datasets")
test_path = "./datasets/cifar-10-batches-bin/test"
train_path = "./datasets/cifar-10-batches-bin/train"
os.makedirs(test_path, exist_ok=True)
os.makedirs(train_path, exist_ok=True)
if not os.path.exists(os.path.join(test_path, "test_batch.bin")):
shutil.move("./datasets/cifar-10-batches-bin/test_batch.bin", test_path)
for i in os.listdir("./datasets/cifar-10-batches-bin/"):
    src = "./datasets/cifar-10-batches-bin/" + i
    if os.path.isfile(src) and not i.endswith(".html") and not os.path.exists(os.path.join(train_path, i)):
        shutil.move(src, train_path)
```
The directory structure of the downloaded and extracted dataset is as follows:
```text
./datasets/cifar-10-batches-bin
├── readme.html
├── test
│ └── test_batch.bin
└── train
├── batches.meta.txt
├── data_batch_1.bin
├── data_batch_2.bin
├── data_batch_3.bin
├── data_batch_4.bin
└── data_batch_5.bin
```
## Automatic Data Augmentation on CIFAR-10
This tutorial implements AutoAugment on the CIFAR-10 dataset as an example.
The augmentation policy for CIFAR-10 contains 25 sub-policies, each consisting of two transforms. For every image in a batch, one sub-policy is selected at random, and each transform within the chosen sub-policy is applied with a predefined probability.
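The sub-policy mechanics described above can be sketched in plain Python; `apply_subpolicies` and the toy transforms below are illustrative stand-ins, not MindSpore APIs:

```python
import random

def apply_subpolicies(image, policy):
    """Pick one sub-policy uniformly at random, then apply each of its
    transforms independently with that transform's probability."""
    sub_policy = random.choice(policy)
    for transform, prob in sub_policy:
        if random.random() < prob:
            image = transform(image)
    return image

# Toy policy: one sub-policy whose first transform always fires (prob 1.0)
# and whose second transform never fires (prob 0.0).
toy_policy = [[(lambda x: x + 1, 1.0), (lambda x: x * 2, 0.0)]]
print(apply_subpolicies(0, toy_policy))  # 1
```

This mirrors what the `RandomSelectSubpolicy` operator used later in this notebook does for each image in the pipeline.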
Users can implement AutoAugment with the `RandomSelectSubpolicy` interface of MindSpore's `c_transforms` module. The standard data-augmentation pipeline for CIFAR-10 classification training consists of the following steps:
- `RandomCrop`: random cropping.
- `RandomHorizontalFlip`: random horizontal flipping.
- `Normalize`: normalization.
- `HWC2CHW`: image channel reordering.
Insert the AutoAugment transform after `RandomCrop`, as follows:
1. Import the MindSpore data augmentation modules.
```
from mindspore import dtype as mstype
import mindspore.dataset as ds
import mindspore.dataset.vision.c_transforms as c_vision
import mindspore.dataset.transforms.c_transforms as c_transforms
import matplotlib.pyplot as plt
```
2. Define the AutoAugment operators in terms of MindSpore operators:
```
# define Auto Augmentation operators
PARAMETER_MAX = 10
def float_parameter(level, maxval):
return float(level) * maxval / PARAMETER_MAX
def int_parameter(level, maxval):
return int(level * maxval / PARAMETER_MAX)
def shear_x(level):
v = float_parameter(level, 0.3)
return c_transforms.RandomChoice([c_vision.RandomAffine(degrees=0, shear=(-v, -v)), c_vision.RandomAffine(degrees=0, shear=(v, v))])
def shear_y(level):
v = float_parameter(level, 0.3)
return c_transforms.RandomChoice([c_vision.RandomAffine(degrees=0, shear=(0, 0, -v, -v)), c_vision.RandomAffine(degrees=0, shear=(0, 0, v, v))])
def translate_x(level):
v = float_parameter(level, 150 / 331)
return c_transforms.RandomChoice([c_vision.RandomAffine(degrees=0, translate=(-v, -v)), c_vision.RandomAffine(degrees=0, translate=(v, v))])
def translate_y(level):
v = float_parameter(level, 150 / 331)
return c_transforms.RandomChoice([c_vision.RandomAffine(degrees=0, translate=(0, 0, -v, -v)), c_vision.RandomAffine(degrees=0, translate=(0, 0, v, v))])
def color_impl(level):
v = float_parameter(level, 1.8) + 0.1
return c_vision.RandomColor(degrees=(v, v))
def rotate_impl(level):
v = int_parameter(level, 30)
return c_transforms.RandomChoice([c_vision.RandomRotation(degrees=(-v, -v)), c_vision.RandomRotation(degrees=(v, v))])
def solarize_impl(level):
level = int_parameter(level, 256)
v = 256 - level
return c_vision.RandomSolarize(threshold=(0, v))
def posterize_impl(level):
level = int_parameter(level, 4)
v = 4 - level
return c_vision.RandomPosterize(bits=(v, v))
def contrast_impl(level):
v = float_parameter(level, 1.8) + 0.1
return c_vision.RandomColorAdjust(contrast=(v, v))
def autocontrast_impl(level):
return c_vision.AutoContrast()
def sharpness_impl(level):
v = float_parameter(level, 1.8) + 0.1
return c_vision.RandomSharpness(degrees=(v, v))
def brightness_impl(level):
v = float_parameter(level, 1.8) + 0.1
return c_vision.RandomColorAdjust(brightness=(v, v))
```
3. Define the AutoAugment policy for the CIFAR-10 dataset:
- Preset a simple policy with a single sub-policy that contains only the `RandomRotation` and `RandomColorAdjust` operations, with probabilities 1.0 and 0.0 respectively.
```
policy_list = [
[(c_vision.RandomRotation((90, 90)), 1.0), (c_vision.RandomColorAdjust(), 0.0)]
]
```
- Preset a policy with multiple sub-policies.
```
# define the Auto Augmentation policy
cifar10_policy = [
[(posterize_impl(8), 0.4), (rotate_impl(9), 0.6)],
[(solarize_impl(5), 0.6), (autocontrast_impl(5), 0.6)],
[(c_vision.Equalize(), 0.8), (c_vision.Equalize(), 0.6)],
[(posterize_impl(7), 0.6), (posterize_impl(6), 0.6)],
[(c_vision.Equalize(), 0.4), (solarize_impl(4), 0.2)],
[(c_vision.Equalize(), 0.4), (rotate_impl(8), 0.8)],
[(solarize_impl(3), 0.6), (c_vision.Equalize(), 0.6)],
[(posterize_impl(5), 0.8), (c_vision.Equalize(), 1.0)],
[(rotate_impl(3), 0.2), (solarize_impl(8), 0.6)],
[(c_vision.Equalize(), 0.6), (posterize_impl(6), 0.4)],
[(rotate_impl(8), 0.8), (color_impl(0), 0.4)],
[(rotate_impl(9), 0.4), (c_vision.Equalize(), 0.6)],
[(c_vision.Equalize(), 0.0), (c_vision.Equalize(), 0.8)],
[(c_vision.Invert(), 0.6), (c_vision.Equalize(), 1.0)],
[(color_impl(4), 0.6), (contrast_impl(8), 1.0)],
[(rotate_impl(8), 0.8), (color_impl(2), 1.0)],
[(color_impl(8), 0.8), (solarize_impl(7), 0.8)],
[(sharpness_impl(7), 0.4), (c_vision.Invert(), 0.6)],
[(shear_x(5), 0.6), (c_vision.Equalize(), 1.0)],
[(color_impl(0), 0.4), (c_vision.Equalize(), 0.6)],
[(c_vision.Equalize(), 0.4), (solarize_impl(4), 0.2)],
[(solarize_impl(5), 0.6), (autocontrast_impl(5), 0.6)],
[(c_vision.Invert(), 0.6), (c_vision.Equalize(), 1.0)],
[(color_impl(4), 0.6), (contrast_impl(8), 1.0)],
[(c_vision.Equalize(), 0.8), (c_vision.Equalize(), 0.6)],
]
```
4. Insert the AutoAugment transform after the `RandomCrop` operation.
```
def create_dataset(dataset_path, do_train, policy, repeat_num=1, batch_size=32, shuffle=True, num_samples=5):
# create a train dataset for ResNet-50
data = ds.Cifar10Dataset(dataset_path, num_parallel_workers=8,
shuffle=shuffle, num_samples=num_samples)
image_size = 224
mean = [0.485 * 255, 0.456 * 255, 0.406 * 255]
std = [0.229 * 255, 0.224 * 255, 0.225 * 255]
# define map operations
if do_train:
trans = [
c_vision.RandomCrop((32, 32), (4, 4, 4, 4)),
]
post_trans = [
c_vision.RandomHorizontalFlip(prob=0.5),
]
else:
trans = [
c_vision.Decode(),
c_vision.Resize(256),
c_vision.CenterCrop(image_size),
c_vision.Normalize(mean=mean, std=std),
c_vision.HWC2CHW()
]
data = data.map(operations=trans, input_columns="image")
if do_train:
data = data.map(operations=c_vision.RandomSelectSubpolicy(policy), input_columns=["image"])
data = data.map(operations=post_trans, input_columns="image")
type_cast_op = c_transforms.TypeCast(mstype.int32)
data = data.map(operations=type_cast_op, input_columns="label")
# apply the batch operation
data = data.batch(batch_size, drop_remainder=True)
# apply the repeat operation
data = data.repeat(repeat_num)
return data
```
5. Verify the effect of automatic data augmentation.
- With the single sub-policy, the `RandomRotation` operation has probability 1, so it is always applied, while the `RandomColorAdjust` operation has probability 0, so it is never applied.
```
DATA_DIR = "./datasets/cifar-10-batches-bin/train"
data = create_dataset(dataset_path=DATA_DIR, do_train=True, batch_size=5, shuffle=False, num_samples=5, policy=policy_list)
epochs = 5
itr = data.create_dict_iterator()
fig = plt.figure(figsize=(8, 8))
columns = 5
rows = 5
step_num = 0
for ep_num in range(epochs):
for data in itr:
step_num += 1
for index in range(rows):
fig.add_subplot(rows, columns, ep_num * rows + index + 1)
plt.imshow(data['image'].asnumpy()[index])
plt.show()
```
- With multiple sub-policies, each image first selects one sub-policy uniformly at random; each of the two operations within that sub-policy is then applied according to its own probability, randomly augmenting the data and improving its generalization.
```
DATA_DIR = "./datasets/cifar-10-batches-bin/train"
data = create_dataset(dataset_path=DATA_DIR, do_train=True, batch_size=5, shuffle=False, num_samples=5, policy=cifar10_policy)
epochs = 5
itr = data.create_dict_iterator()
fig = plt.figure(figsize=(8, 8))
columns = 5
rows = 5
step_num = 0
for ep_num in range(epochs):
for data in itr:
step_num += 1
for index in range(rows):
fig.add_subplot(rows, columns, ep_num * rows + index + 1)
plt.imshow(data['image'].asnumpy()[index])
plt.show()
```
> For a clearer demonstration, only 5 images are loaded here, `shuffle` is disabled when reading, and the `Normalize` and `HWC2CHW` operations are also skipped during automatic augmentation.
>
> In the output you can see the augmentation effect on each image in a batch: the horizontal direction shows the 5 images of one batch, and the vertical direction shows 5 batches.
```
from google.colab import drive
drive.mount('GoogleDrive')
!fusermount -u GoogleDrive
import tensorflow as tf
import numpy as np
import scipy.io as scio
import os
v_feature = scio.loadmat('./My_file_path')
v_feature
train_feature = v_feature['feature']
train_feature.shape
train_label = v_feature['label'].flatten()
train_label.shape
train_data = np.c_[train_feature, train_label.reshape(-1, 1)]
train_data.shape
num_hidden1 = 256 #@param {type: "integer"}
num_hidden2 = 128 #@param {type: "integer"}
initial_rate = 3e-3 #@param {type: "number"}
batch_size = 128 #@param {type: "number"}
num_epoch = 150 #@param {type: "number"}
decay_steps = 30
learning_rate_decay_factor = 0.95
feature_dim = train_feature.shape[1]
file_path = './My_file_path'
graph = tf.Graph()
with graph.as_default():
# learning rate decay schedule:
global_step = tf.Variable(0, name='global_step', trainable=False)
decay_steps = decay_steps
learning_rate = tf.train.exponential_decay(learning_rate=initial_rate,
global_step=global_step,
decay_steps=decay_steps,
decay_rate=learning_rate_decay_factor,
staircase=True,
name='exponential_decay')
with tf.name_scope('Input'):
x = tf.placeholder(tf.float32, shape=[None, feature_dim], name='input')
label = tf.placeholder(tf.uint8, shape=[None, ], name='label')
y = tf.one_hot(label, depth=3, dtype=tf.float32)
keep_prob = tf.placeholder(tf.float32)
with tf.name_scope('Main_Network'):
with tf.name_scope('FC1'):
w_1 = tf.get_variable(name='w_fc1', shape=[feature_dim, num_hidden1], initializer=tf.initializers.random_normal(stddev=.1))
b_1 = tf.get_variable(name='b_fc1', shape=[num_hidden1, ], initializer=tf.initializers.random_normal(stddev=.1))
layer_1 = tf.nn.relu(tf.matmul(x, w_1) + b_1)
with tf.name_scope('Dropout1'):
layer_d1 = tf.nn.dropout(layer_1, keep_prob)
with tf.name_scope('FC2'):
w_2 = tf.get_variable(name='w_fc2', shape=[num_hidden1, num_hidden2], initializer=tf.initializers.random_normal(stddev=.1))
b_2 = tf.get_variable(name='b_fc2', shape=[num_hidden2, ], initializer=tf.initializers.random_normal(stddev=.1))
layer_2 = tf.nn.relu(tf.matmul(layer_d1, w_2) + b_2)
with tf.name_scope('Dropout2'):
layer_d2 = tf.nn.dropout(layer_2, keep_prob)
with tf.name_scope('Output'):
w_o = tf.get_variable(name='w_fco', shape=[num_hidden2, 3], initializer=tf.initializers.random_normal(stddev=.1))
b_o = tf.get_variable(name='b_fco', shape=[3, ], initializer=tf.initializers.random_normal(stddev=.1))
layer_3 = tf.matmul(layer_d2, w_o) + b_o
y_out = tf.nn.softmax(layer_3)
# # -------------regularization_L2----------------
# tf.add_to_collection(tf.GraphKeys.WEIGHTS, w_1)
# tf.add_to_collection(tf.GraphKeys.WEIGHTS, w_2)
# tf.add_to_collection(tf.GraphKeys.WEIGHTS, w_o)
# regularizer = tf.contrib.layers.l2_regularizer(scale=1. / 700)
# reg_tem = tf.contrib.layers.apply_regularization(regularizer)
with tf.name_scope('Loss'):
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=layer_3))
# cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=layer_3) + reg_tem)
with tf.name_scope('Accuracy'):
prediction = tf.equal(tf.argmax(layer_3, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(prediction, "float"))
with tf.name_scope('Train'):
train_op = tf.train.AdamOptimizer(learning_rate).minimize(cross_entropy, global_step=global_step)
tf.summary.scalar('Cross_entropy', cross_entropy, collections=['train'])
tf.summary.scalar('Accuracy', accuracy, collections=['train'])
# tf.summary.scalar('global_step', global_step, collections=['train'])
tf.summary.scalar('learning_rate', learning_rate, collections=['train'])
tf.summary.histogram('Weights_fc1', w_1, collections=['train'])
tf.summary.histogram('Biases_fc1', b_1, collections=['train'])
summ_train = tf.summary.merge_all('train')
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
summ_train_dir = os.path.join(file_path, 'summaries')
summ_writer = tf.summary.FileWriter(summ_train_dir)
summ_writer.add_graph(sess.graph)
print('Training ============= (。・`ω´・) ===========\n')
for epoch in range(num_epoch):
np.random.shuffle(train_data)
max_batch = train_data.shape[0] // batch_size
for num_batch in range(max_batch):
train_x = train_data[num_batch*batch_size:(num_batch+1)*batch_size, :-1]
train_l = train_data[num_batch*batch_size:(num_batch+1)*batch_size, -1]
# for i in range(3):
# train_x = train_data[i * 30:(i + 1) * 30, :-1]
# train_l = train_data[i * 30:(i + 1) * 30, -1]
_, loss, acc, rt = sess.run([train_op, cross_entropy, accuracy, summ_train], feed_dict={x: train_x,
label: train_l,
keep_prob: .8})
# output = sess.run(y_out, feed_dict={x: train_x[0].reshape(-1, feature_dim),
# label: train_l[0].reshape(-1),
# keep_prob: 1.})
summ_writer.add_summary(rt, global_step=epoch)
print_list = [epoch + 1, loss, acc * 100]
if (epoch + 1) % 10 == 0 or epoch == 0:
print('Epoch {0[0]}, cross_entropy: {0[1]:.4f}, accuracy: {0[2]:.2f}%.'.format(print_list))
# print('Output : {}\n'.format(output))
print('\nTraining completed.\n')
loss, acc = sess.run([cross_entropy, accuracy], feed_dict={x: train_feature,
label: train_label,
keep_prob: 1.})
acc *= 100
print('Cross_entropy on the whole training set: %.4f, accuracy: %.2f%%.' %(loss, acc))
```
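As a side note, the staircase schedule configured with `tf.train.exponential_decay` above can be reproduced in a few lines of plain Python (parameter values taken from the cell above):

```python
def staircase_lr(step, initial_rate=3e-3, decay_steps=30, decay_rate=0.95):
    """Staircase exponential decay: the rate drops by a factor of
    `decay_rate` every `decay_steps` optimizer steps."""
    return initial_rate * decay_rate ** (step // decay_steps)

print(staircase_lr(0))             # 0.003
print(round(staircase_lr(30), 6))  # 0.00285
```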
<img src="../../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
# _*Qiskit Chemistry: Computing a Molecule's Dissociation Profile Using the Variational Quantum Eigensolver (VQE) Algorithm*_
The latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorials.
***
### Contributors
Antonio Mezzacapo<sup>[1]</sup>, Richard Chen<sup>[1]</sup>, Marco Pistoia<sup>[1]</sup>, Shaohan Hu<sup>[1]</sup>, Peng Liu<sup>[1]</sup>, Stephen Wood<sup>[1]</sup>, Jay Gambetta<sup>[1]</sup>
### Affiliation
- <sup>[1]</sup>IBMQ
### Introduction
One of the most compelling possibilities of quantum computation is the simulation of other quantum systems. Quantum simulation of quantum systems encompasses a wide range of tasks, including most significantly:
1. Simulation of the time evolution of quantum systems.
2. Computation of ground state properties.
These applications are especially useful when considering systems of interacting fermions, such as molecules and strongly correlated materials. The computation of ground state properties of fermionic systems is the starting point for mapping out the phase diagram of condensed matter Hamiltonians. It also gives access to the key question of electronic structure problems in quantum chemistry - namely, reaction rates. The focus of this notebook is on molecular systems, which are considered to be the ideal benchmark for early-stage quantum computers, due to their relevance in chemical applications despite relatively modest sizes. Formally, the ground state problem asks the following:
For some physical Hamiltonian *H*, find the smallest eigenvalue $E_G$, such that $H|\psi_G\rangle=E_G|\psi_G\rangle$, where $|\psi_G\rangle$ is the eigenvector corresponding to $E_G$.
It is known that in general this problem is intractable, even on a quantum computer. This means that we cannot expect an efficient quantum algorithm that prepares the ground state of general local Hamiltonians. Despite this limitation, for specific Hamiltonians of interest it might be possible, given physical constraints on the interactions, to solve the above problem efficiently. Currently, at least four different methods exist to approach this problem:
1. Quantum phase estimation: Assuming that we can approximately prepare the state $|\psi_G\rangle$, this routine uses controlled implementations of the Hamiltonian to find its smallest eigenvalue.
2. Adiabatic theorem of quantum mechanics: The quantum system is adiabatically dragged from being the ground state of a trivial Hamiltonian to the one of the target problem, via slow modulation of the Hamiltonian terms.
3. Dissipative (non-unitary) quantum operation: The ground state of the target system is a fixed point. The non-trivial assumption here is the implementation of the dissipation map on quantum hardware.
4. Variational quantum eigensolvers: Here we assume that the ground state can be represented by a parameterization containing a relatively small number of parameters.
In this notebook we focus on the last method, as this is most likely the simplest to be realized on near-term devices.
The general idea is to define a parameterization $|\psi(\boldsymbol\theta)\rangle$ of quantum states, and minimize the energy
$$E(\boldsymbol\theta) = \langle \psi(\boldsymbol\theta)| H |\psi(\boldsymbol\theta)\rangle,$$
The key ansatz is that the number of parameters $|\boldsymbol\theta^*|$ that minimizes the energy function scales polynomially with the size (e.g., number of qubits) of the target problem.
Then, any local fermionic Hamiltonian can be mapped into a sum over Pauli operators $P_i$,
$$H\rightarrow H_P = \sum_i^M w_i P_i,$$
and the energy corresponding to the state $|\psi(\boldsymbol\theta)\rangle$, $E(\boldsymbol\theta)$, can be estimated by sampling the individual Pauli terms $P_i$ (or sets of them that can be measured at the same time) on a quantum computer:
$$E(\boldsymbol\theta) = \sum_i^M w_i \langle \psi(\boldsymbol\theta)| P_i |\psi(\boldsymbol\theta)\rangle.$$
Last, some optimization technique must be devised in order to find the optimal value of parameters $\boldsymbol\theta^*$, such that $|\psi(\boldsymbol\theta^*)\rangle\equiv|\psi_G\rangle$.
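The energy estimate above is just a weighted sum of Pauli expectation values. A minimal statevector sketch of that sum, using an assumed toy one-qubit Hamiltonian with made-up weights $w_i$, looks like this:

```python
import numpy as np

# Toy Hamiltonian H = 0.5*Z + 0.25*X (assumed coefficients, not from the notebook)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
weights, paulis = [0.5, 0.25], [Z, X]

def energy(psi):
    """E = sum_i w_i <psi|P_i|psi>, evaluated exactly on a statevector."""
    return sum(w * (psi.conj() @ P @ psi).real for w, P in zip(weights, paulis))

psi = np.array([1, 0], dtype=complex)  # |0>
print(energy(psi))  # 0.5, since <0|Z|0> = 1 and <0|X|0> = 0
```

On real hardware each expectation value is estimated from measurement samples rather than computed exactly, which is what introduces the sampling step discussed below.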
### Fermionic Hamiltonians
The Hamiltonians describing systems of interacting fermions can be expressed in second quantization language, considering fermionic creation (annihilation) operators $a^\dagger_\alpha(a_\alpha)$, relative to the $\alpha$-th fermionic mode. In the case of molecules, the $\alpha$ labels stand for the different atomic or molecular orbitals. Within the second-quantization framework, a generic molecular Hamiltonian with $M$ orbitals can be written as
$$H =H_1+H_2=\sum_{\alpha, \beta=0}^{M-1} t_{\alpha \beta} \, a^\dagger_{\alpha} a_{\beta} +\frac{1}{2} \sum_{\alpha, \beta, \gamma, \delta = 0}^{M-1} u_{\alpha \beta \gamma \delta}\, a^\dagger_{\alpha} a^\dagger_{\gamma} a_{\delta} a_{\beta},$$
with the one-body terms representing the kinetic energy of the electrons and the potential energy that they experience in the presence of the nuclei,
$$ t_{\alpha\beta}=\int d\boldsymbol x_1\Psi_\alpha(\boldsymbol{x}_1) \left(-\frac{\boldsymbol\nabla_1^2}{2}+\sum_{i} \frac{Z_i}{|\boldsymbol{r}_{1i}|}\right)\Psi_\beta (\boldsymbol{x}_1),$$
and their interactions via Coulomb forces
$$ u_{\alpha\beta\gamma\delta}=\int\int d \boldsymbol{x}_1 d \boldsymbol{x}_2 \Psi_\alpha^*(\boldsymbol{x}_1)\Psi_\beta(\boldsymbol{x}_1)\frac{1}{|\boldsymbol{r}_{12}|}\Psi_\gamma^*(\boldsymbol{x}_2)\Psi_\delta(\boldsymbol{x}_2),$$
where we have defined the nuclei charges $Z_i$, the nuclei-electron and electron-electron separations $\boldsymbol{r}_{1i}$ and $\boldsymbol{r}_{12}$, the $\alpha$-th orbital wavefunction $\Psi_\alpha(\boldsymbol{x}_1)$, and we have assumed that the spin is conserved in the spin-orbital indices $\alpha,\beta$ and $\alpha,\beta,\gamma,\delta$.
### Molecules considered in this notebook and mapping to qubits
We consider in this notebook the optimization of two potential energy surfaces, for the hydrogen and lithium hydride molecules, obtained using the STO-3G basis. The molecular Hamiltonians are computed as a function of their interatomic distance, then mapped to two- (H$_2$) and four- (LiH) qubit problems, via elimination of core and high-energy orbitals and removal of $Z_2$ symmetries.
### Approximate universal quantum computing for quantum chemistry problems
In order to find the optimal parameters $\boldsymbol\theta^*$, we set up a closed optimization loop with a quantum computer, based on some stochastic optimization routine. Our choice for the variational ansatz is a deformation of the one used for the optimization of classical combinatorial problems, with the inclusion of $Z$ rotation together with the $Y$ ones. The optimization algorithm for fermionic Hamiltonians is similar to the one for combinatorial problems, and can be summarized as follows:
1. Map the fermionic Hamiltonian $H$ to a qubit Hamiltonian $H_P$.
2. Choose the maximum depth of the quantum circuit (this could be done adaptively).
3. Choose a set of controls $\boldsymbol\theta$ and make a trial function $|\psi(\boldsymbol\theta)\rangle$. The difference with the combinatorial problems is the insertion of additional parameterized $Z$ single-qubit rotations.
4. Evaluate the energy $E(\boldsymbol\theta) = \langle\psi(\boldsymbol\theta)~|H_P|~\psi(\boldsymbol\theta)\rangle$ by sampling each Pauli term individually, or sets of Pauli terms that can be measured in the same tensor product basis.
5. Use a classical optimizer to choose a new set of controls.
6. Continue until the energy has converged, hopefully close to the real solution $\boldsymbol\theta^*$, and return the last value of $E(\boldsymbol\theta)$.
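As a rough illustration of steps 3-6, here is the whole loop collapsed to a one-parameter toy problem ($H = Z$, ansatz $R_y(\theta)|0\rangle$), with a crude grid search standing in for the classical optimizer; this is a sketch of the idea, not the notebook's actual SPSA-based workflow:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ansatz(theta):
    """|psi(theta)> = Ry(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(theta):
    psi = ansatz(theta)
    return (psi.conj() @ Z @ psi).real  # equals cos(theta)

# "Optimizer": a crude grid search over the single parameter.
thetas = np.linspace(0, 2 * np.pi, 201)
best = min(thetas, key=energy)
print(round(energy(best), 3))  # -1.0, the exact ground-state energy of Z
```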
Note that, as opposed to the classical case, in the case of a quantum chemistry Hamiltonian one has to sample over non-computational states that are superpositions, and therefore take advantage of using a quantum computer in the sampling part of the algorithm. Motivated by the quantum nature of the answer, we also define a variational trial ansatz in this way:
$$|\psi(\boldsymbol\theta)\rangle = [U_\mathrm{single}(\boldsymbol\theta) U_\mathrm{entangler}]^m |+\rangle$$
where $U_\mathrm{entangler}$ is a collection of cPhase gates (fully entangling gates), $U_\mathrm{single}(\boldsymbol\theta) = \prod_{i=1}^n Y(\theta_{i})Z(\theta_{n+i})$ are single-qubit $Y$ and $Z$ rotations, $n$ is the number of qubits and $m$ is the depth of the quantum circuit.
References and additional details:
[1] A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, and J. M. Gambetta, *Hardware-efficient Variational Quantum Eigensolver for Small Molecules and Quantum Magnets*, Nature 549, 242 (2017), and references therein.
```
# useful additional packages
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
from qiskit import Aer
from qiskit_chemistry import QiskitChemistry
import warnings
warnings.filterwarnings('ignore')
# setup qiskit_chemistry logging
import logging
from qiskit_chemistry import set_qiskit_chemistry_logging
set_qiskit_chemistry_logging(logging.ERROR) # choose among DEBUG, INFO, WARNING, ERROR, CRITICAL and NOTSET
```
### [Optional] Setup token to run the experiment on a real device
If you would like to run the experiment on a real device, you need to set up your account first.
Note: If you have not stored your token yet, use `IBMQ.save_accounts()` to store it first.
```
# from qiskit import IBMQ
# IBMQ.load_accounts()
```
## Optimization of H$_2$ at bond length
In this first part of the notebook, we show the optimization of the H$_2$ Hamiltonian in the `STO-3G` basis at the bond length of 0.735 Angstrom. After mapping it to a four-qubit system with a parity transformation, two spin-parity symmetries are modded out, leading to a two-qubit Hamiltonian. The energy of the mapped Hamiltonian obtained is then minimized using the variational ansatz described in the introduction, and a simultaneous perturbation stochastic approximation (SPSA) gradient descent method. We stored the precomputed one- and two-body integrals and other molecular information in the `hdf5` file.
Here we use the [*declarative approach*](https://github.com/Qiskit/qiskit-tutorials/blob/master/qiskit/aqua/chemistry/declarative_approach.ipynb) to run our experiment, but the same is doable in a [fully programmatic way](https://github.com/Qiskit/qiskit-tutorials/blob/master/qiskit/aqua/chemistry/programmatic_approach.ipynb), especially for those users who are interested in learning the Qiskit Aqua and Qiskit Chemistry APIs as well as contributing new algorithmic components.
```
# First, we use classical eigendecomposition to get ground state energy (including nuclear repulsion energy) as reference.
qiskit_chemistry_dict = {
'driver': {'name': 'HDF5'},
'HDF5': {'hdf5_input': 'H2/H2_equilibrium_0.735_sto-3g.hdf5'},
'operator': {'name':'hamiltonian',
'qubit_mapping': 'parity',
'two_qubit_reduction': True},
'algorithm': {'name': 'ExactEigensolver'}
}
solver = QiskitChemistry()
result = solver.run(qiskit_chemistry_dict)
print('Ground state energy (classical): {:.12f}'.format(result['energy']))
# Second, we use variational quantum eigensolver (VQE)
qiskit_chemistry_dict['algorithm']['name'] = 'VQE'
qiskit_chemistry_dict['optimizer'] = {'name': 'SPSA', 'max_trials': 350}
qiskit_chemistry_dict['variational_form'] = {'name': 'RYRZ', 'depth': 3, 'entanglement':'full'}
backend = Aer.get_backend('statevector_simulator')
solver = QiskitChemistry()
result = solver.run(qiskit_chemistry_dict, backend=backend)
print('Ground state energy (quantum) : {:.12f}'.format(result['energy']))
print("====================================================")
# You can also print out other info in the field 'printable'
for line in result['printable']:
print(line)
```
## Optimizing the potential energy surface
The optimization considered previously is now performed for two molecules, H$_2$ and LiH, for different interatomic distances, and the corresponding nuclei Coulomb repulsion is added in order to obtain a potential energy surface.
```
# select H2 or LiH to experiment with
molecule='H2'
qiskit_chemistry_dict = {
'driver': {'name': 'HDF5'},
'HDF5': {'hdf5_input': ''},
'operator': {'name':'hamiltonian',
'qubit_mapping': 'parity',
'two_qubit_reduction': True},
'algorithm': {'name': ''},
'optimizer': {'name': 'SPSA', 'max_trials': 350},
'variational_form': {'name': 'RYRZ', 'depth': 3, 'entanglement':'full'}
}
# choose which backend to use
# backend = Aer.get_backend('statevector_simulator')
backend = Aer.get_backend('qasm_simulator')
backend_cfg = {'shots': 1024}
algos = ['ExactEigensolver', 'VQE']
if molecule == 'LiH':
mol_distances = np.arange(0.6, 5.1, 0.1)
qiskit_chemistry_dict['operator']['freeze_core'] = True
qiskit_chemistry_dict['operator']['orbital_reduction'] = [-3, -2]
qiskit_chemistry_dict['optimizer']['max_trials'] = 2500
qiskit_chemistry_dict['variational_form']['depth'] = 5
else:
mol_distances = np.arange(0.2, 4.1, 0.1)
energy = np.zeros((len(algos), len(mol_distances)))
for j, algo in enumerate(algos):
qiskit_chemistry_dict['algorithm']['name'] = algo
if algo == 'ExactEigensolver':
qiskit_chemistry_dict.pop('backend', None)
elif algo == 'VQE':
qiskit_chemistry_dict['backend'] = backend_cfg
print("Using {}".format(algo))
for i, dis in enumerate(mol_distances):
print("Processing atomic distance: {:1.1f} Angstrom".format(dis), end='\r')
qiskit_chemistry_dict['HDF5']['hdf5_input'] = "{}/{:1.1f}_sto-3g.hdf5".format(molecule, dis)
result = solver.run(qiskit_chemistry_dict, backend=backend if algo == 'VQE' else None)
energy[j][i] = result['energy']
print("\n")
for i, algo in enumerate(algos):
plt.plot(mol_distances, energy[i], label=algo)
plt.xlabel('Atomic distance (Angstrom)')
plt.ylabel('Energy')
plt.legend()
plt.show()
```
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.linalg import expm
```
# Entanglement in the Stern-Gerlach Experiment
In this problem we consider the Stern-Gerlach experiment with a more realistic approach.
Assume the electrons are shot towards the apparatus. The Hamiltonian is as follows:
$H = \frac{P_z^2}{2m} - \mu \lambda z \sigma_z$
where the second term is from the interaction of the spin of the electron with the linear magnetic field.
The state of the system lives in the tensor product of the spin space and z-space. Let's make a concrete
z-space of 501 sites with lattice spacing $a = 0.01$, where the middle site corresponds to $z=0$. The state is then a vector with 1002
coefficients.
For the evolution matrix, let's take the time step to be $\epsilon = 0.01$ and $\hbar = 1$.
Also let's take $\mu \lambda = 0.05$ and $m = 1$.
```
# constants
mu_lambda = 0.05
n = 501
a = 0.01
m = 1
epsilon = 0.01
# the z-space operator
z_op = np.asmatrix(np.kron(np.diag(np.linspace(-2.5, 2.5, 501)), np.eye(2)))
# spin operator
z_pauli = np.kron(np.matrix([[1, 0], [0, -1]]), np.eye(n))
z_pauli
## making the momentum operator
# making shift_op
shift_op = np.eye(n, k=1)
shift_op[n-1, 0] = 1
shift_op = np.asmatrix(shift_op)
# diagonalizing the shift_op
vals, vecs = np.linalg.eig(shift_op)
# making P
P = vecs * np.diag(np.angle(vals)) * vecs.H / a
P = np.kron(P, np.eye(2))
P.shape
H = P ** 2 / (2 * m) - mu_lambda * z_op * z_pauli
H.shape
```
Now that we have the Hamiltonian, let's generate the evolution operator $U = \exp(-i H \epsilon)$, taking $\hbar=1$.
```
evolution = expm(-1j * H * epsilon)
evolution = np.asmatrix(evolution)
evolution
```
## Section A
Let's take the initial state to be at $z = 0$ with spin up, and evolve it to $t = 10$.
```
# making the z=0 state in z-space
z_0 = np.zeros((501, 1))
z_0[250, 0] = 1
z_0 = np.asmatrix(z_0)
# making the spin up state in spin space
spin_up = np.array([[1], [0]])
# making the initial state of the system.
init_state = np.kron(z_0, spin_up)
init_state.shape
# now to evolve the system to t = 10 = 1000 * epsilon
state = init_state
for _ in range(1000):
state = evolution * state
state
```
## Section B
Now for state $z=0$ and spin down. We follow the same procedure:
```
# making spin down in the spin space
spin_down = np.matrix([[0], [1]])
# making initial state
init_state = np.kron(z_0, spin_down)
init_state.shape
# now we evolve the system
state = init_state
for _ in range(1000):
state = evolution * state
state
```
## Section C
Now we try $z = 0$ with spin up along the x axis. In spin space, this state is given by:
$|x,+\rangle = \frac{1}{\sqrt{2}} (1, 1)^T$
```
# making spin +x
spin_x_up = np.matrix([[1], [1]]) / (2 ** 0.5)
# making the initial state
init_state = np.kron(z_0, spin_x_up)
init_state.shape
# evolve the state to t = 10
state = init_state
for _ in range(1000):
state = evolution * state
state
plt.plot(np.linspace(0, 1001, 1002), np.absolute(np.array(state)[:, 0]) ** 2)
plt.show()
```
Now let's check whether this final state is entangled.
To do this, we generate the reduced density operator of the state and take the trace of $\hat{\rho}^2$ (the purity).
If the trace is strictly less than 1, the state is entangled.
$\hat{\rho} = |\psi\rangle\langle\psi|, \qquad \operatorname{tr}(\hat{\rho}^2) \leq 1$
Let's take a look at our state right now. The first sub-space is the z-space, with dimension 501, and
the second is the spin sub-space, with dimension 2.
There is a convenient trick for generating the reduced density matrix: the `reshape` method of the `matrix` and `array` classes in `numpy`.
Since `np.kron` places the z index first, we reshape the state from $1002 \times 1$ to $501 \times 2$, giving a matrix $M$ whose rows are indexed by $z$ and whose columns are indexed by spin. The reduced density matrix of the spin sub-space is then
$\rho_{spin} = M^\dagger M$
```
# generating the reduced density operator of the spin sub-space
# np.kron(z_0, spin) places the z index first, so reshape to (501, 2)
altered_state = state.reshape((501, 2))
reduced_rho = altered_state.H * altered_state  # 2x2 spin density matrix
reduced_rho
```
Now we need to take the trace of the square of the reduced density operator. If this parameter is significantly
less than 1, then the state is entangled.
```
param = np.trace(reduced_rho ** 2)
param
```
As we can see, the purity is about one half, so we can conclude that the final state is indeed entangled.
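As a sanity check of the purity criterion itself, a maximally entangled Bell state gives exactly $\operatorname{tr}(\hat{\rho}^2) = 1/2$ for its reduced density matrix — a small sketch independent of the notebook's state:

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2) on a 2x2 system
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
M = bell.reshape(2, 2)                  # rows: first qubit, columns: second qubit
rho_reduced = M @ M.conj().T            # trace out the second qubit
purity = np.trace(rho_reduced @ rho_reduced).real
print(purity)  # 0.5 -> maximally entangled
```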
<img src="NotebookAddons/blackboard-banner.png" width="100%" />
<font face="Calibri">
<br>
<font size="5"> <b>Change Detection in <font color='rgba(200,0,0,0.2)'>Your Own</font> SAR Amplitude Time Series Stack </b> </font>
<br>
<font size="4"> <b> Franz J Meyer; University of Alaska Fairbanks & Josef Kellndorfer, <a href="http://earthbigdata.com/" target="_blank">Earth Big Data, LLC</a> </b> <br>
<img style="padding: 7px" src="NotebookAddons/UAFLogo_A_647.png" width="170" align="right"/>
</font>
<font size="3"> This notebook introduces you to the methods of change detection in deep multi-temporal SAR image data stacks.
<br><br>
<b>In this chapter we introduce the following data analysis concepts:</b>
- How to use your own HyP3-generated data stack in a change detection effort
- The concepts of time series slicing by month, year, and date.
- The concepts and workflow of Cumulative Sum-based change point detection.
- The identification of change dates for each identified change point.
</font>
</font>
<hr>
<font face="Calibri" size="5" color="darkred"> <b>Important Note about JupyterHub</b> </font>
<br><br>
<font face="Calibri" size="3"> <b>Your JupyterHub server will automatically shutdown when left idle for more than 1 hour. Your notebooks will not be lost but you will have to restart their kernels and re-run them from the beginning. You will not be able to seamlessly continue running a partially run notebook.</b> </font>
```
%%javascript
var kernel = Jupyter.notebook.kernel;
var command = ["notebookUrl = ",
"'", window.location, "'" ].join('')
kernel.execute(command)
from IPython.display import Markdown
from IPython.display import display
user = !echo $JUPYTERHUB_USER
env = !echo $CONDA_PREFIX
if env[0] == '':
env[0] = 'Python 3 (base)'
if env[0] != '/home/jovyan/.local/envs/rtc_analysis':
display(Markdown(f'<text style=color:red><strong>WARNING:</strong></text>'))
display(Markdown(f'<text style=color:red>This notebook should be run using the "rtc_analysis" conda environment.</text>'))
display(Markdown(f'<text style=color:red>It is currently using the "{env[0].split("/")[-1]}" environment.</text>'))
display(Markdown(f'<text style=color:red>Select the "rtc_analysis" from the "Change Kernel" submenu of the "Kernel" menu.</text>'))
display(Markdown(f'<text style=color:red>If the "rtc_analysis" environment is not present, use <a href="{notebookUrl.split("/user")[0]}/user/{user[0]}/notebooks/conda_environments/Create_OSL_Conda_Environments.ipynb"> Create_OSL_Conda_Environments.ipynb </a> to create it.</text>'))
display(Markdown(f'<text style=color:red>Note that you must restart your server after creating a new environment before it is usable by notebooks.</text>'))
```
<hr>
<font face="Calibri">
<font size="5"> <b> 0. Importing Relevant Python Packages </b> </font>
<font size="3">In this notebook we will use the following scientific libraries:
<ol type="1">
<li> <b><a href="https://pandas.pydata.org/" target="_blank">Pandas</a></b> is a Python library that provides high-level data structures and a vast variety of tools for analysis. The great feature of this package is the ability to translate rather complex operations with data into one or two commands. Pandas contains many built-in methods for filtering and combining data, as well as the time-series functionality. </li>
<li> <b><a href="https://www.gdal.org/" target="_blank">GDAL</a></b> is a software library for reading and writing raster and vector geospatial data formats. It includes a collection of programs tailored for geospatial data processing. Most modern GIS systems (such as ArcGIS or QGIS) use GDAL in the background.</li>
<li> <b><a href="http://www.numpy.org/" target="_blank">NumPy</a></b> is one of the principal packages for scientific applications of Python. It is intended for processing large multidimensional arrays and matrices, and an extensive collection of high-level mathematical functions and implemented methods makes it possible to perform various operations with these objects. </li>
<li> <b><a href="https://matplotlib.org/index.html" target="_blank">Matplotlib</a></b> is a low-level library for creating two-dimensional diagrams and graphs. With its help, you can build diverse charts, from histograms and scatterplots to non-Cartesian coordinates graphs. Moreover, many popular plotting libraries are designed to work in conjunction with matplotlib. </li>
</font>
<br>
<font face="Calibri" size="3"><b>Our first step is to import them:</b> </font>
```
%%capture
import os
import glob
import json # for loads
import pandas as pd
from osgeo import gdal
import numpy as np
%matplotlib inline
import matplotlib.pylab as plt
import asf_notebook as asfn
asfn.jupytertheme_matplotlib_format()
```
<hr>
<font face="Calibri">
<font size="5"> <b> 1. Load Your Prepared Data Stack Into the Notebook </b> </font>
<font size="3"> This notebook assumes that you've prepared your own data stack of <b>RTC image products</b> over your personal area of interest. This can be done using the <b>Prepare_Data_Stack_Hyp3</b> and <b>Subset_Data_Stack notebooks</b>.
This notebook expects <a href="https://media.asf.alaska.edu/uploads/RTC/rtc_atbd_v1.2_final.pdf" target="_blank">Radiometric Terrain Corrected</a> (RTC) image products as input, so be sure to select an RTC process when creating the subscription for your input data within HyP3. Prefer a <b>unique orbit geometry</b> (ascending or descending) to keep geometric differences between images low.
<b>Begin by writing a function to retrieve the absolute paths to each of our tiffs:</b>
</font>
</font>
```
def get_tiff_paths(paths):
tiff_paths = !ls $paths | sort -t_ -k5,5
return tiff_paths
```
<font face="Calibri" size="3"><b>Enter the path to the directory holding your tiffs:</b> </font>
```
while True:
print("Enter the absolute path to the directory holding your tiffs.")
tiff_dir = input()
wildcard_path = f"{tiff_dir}/*.tif*"
if os.path.exists(tiff_dir):
tiff_paths = get_tiff_paths(wildcard_path)
if len(tiff_paths) < 1:
print(f"{tiff_dir} exists but contains no tifs.")
print("You will not be able to proceed until tifs are prepared.")
break
else:
print(f"\n{tiff_dir} does not exist.")
continue
```
<font face="Calibri" size="3"><b>Determine the path to the analysis directory containing the tiff directory:</b> </font>
```
analysis_dir = os.path.dirname(tiff_dir)
print(analysis_dir)
```
<font face="Calibri" size="3"><b>Create a wildcard path to the tiffs:</b> </font>
```
wildcard_path = f"{tiff_dir}/*.tif*"
print(wildcard_path)
```
<font face="Calibri" size="3"><b>Write a function to extract the tiff dates from a wildcard path:</b> </font>
```
def get_dates(paths):
dates = []
pths = glob.glob(paths)
for p in pths:
filename = os.path.basename(p).split('_')
for chunk in filename:
if len(chunk) == 15 and 'T' in chunk:
date = chunk.split('T')[0]
dates.append(date)
break
elif len(chunk) == 8:
try:
int(chunk)
dates.append(chunk)
break
except ValueError:
continue
dates.sort()
return dates
```
<font face="Calibri" size="3"><b>Call get_dates() to collect the product acquisition dates:</b></font>
```
dates = get_dates(wildcard_path)
print(dates)
```
<font face="Calibri" size="3"><b>Gather the upper-left and lower-right corner coordinates of the data stack:</b></font>
```
coords = [[], []]
info = (gdal.Info(tiff_paths[0], options = ['-json']))
info = json.dumps(info)
coords[0] = (json.loads(info))['cornerCoordinates']['upperLeft']
coords[1] = (json.loads(info))['cornerCoordinates']['lowerRight']
print(coords)
```
<font face="Calibri" size="3"><b>Grab the stack's UTM zone.</b> Note that any UTM zone conflicts should already have been handled in the Prepare_Data_Stack_Hyp3 notebook.</font>
```
utm = json.loads(info)['coordinateSystem']['wkt'].split('ID')[-1].split(',')[1][0:-2]
print(f"UTM Zone: {utm}")
```
<hr>
<font face="Calibri" size="3"> Now we stack up the data by creating a virtual raster table with links to all subset data files: </font>
<br><br>
<font size="3"><b>Create the virtual raster table for the subset GeoTiffs:</b></font>
```
!gdalbuildvrt -separate raster_stack.vrt $wildcard_path
```
<hr>
<font face="Calibri">
<font size="5"> <b> 3. Now You Can Work With Your Data </b> </font>
<font size="3"> Now you are ready to perform time series change detection on your data stack.
</font>
</font>
<br>
<font face="Calibri" size="4"> <b> 3.1 Define Data Directory and Path to VRT </b> </font>
<br><br>
<font face="Calibri" size="3"><b>Create a variable containing the VRT filename:</b></font>
```
image_file = "raster_stack.vrt"
```
<font face="Calibri" size="3"><b>Create an index of timedelta64 data with Pandas:</b></font>
```
# Get some indices for plotting
time_index = pd.DatetimeIndex(dates)
```
<font face="Calibri" size="3"><b>Print the bands and dates for all images in the virtual raster table (VRT):</b></font>
```
j = 1
print(f"Bands and dates for {image_file}")
for i in time_index:
print("{:4d} {}".format(j, i.date()), end=' ')
j += 1
if j%5 == 1: print()
```
<hr>
<br>
<font face="Calibri" size="4"> <b> 3.2 Open Your Data Stack with gdal </b> </font>
```
img = gdal.Open(image_file)
```
<font face="Calibri" size="3"><b>Print the bands, pixels, and lines:</b></font>
```
print(f"Number of bands: {img.RasterCount}")
print(f"Number of pixels: {img.RasterXSize}")
print(f"Number of lines: {img.RasterYSize}")
```
<hr><hr>
<font face="Calibri" size="4"> <b> 3.3 Create a masked raster stack:</b></font>
```
raster_stack = img.ReadAsArray()
raster_stack_masked = np.ma.masked_where(raster_stack==0, raster_stack)
del raster_stack
```
<br>
<hr>
<font face="Calibri" size="5"> <b> 4. Cumulative Sum-based Change Detection Across an Entire Image</b> </font>
<font face="Calibri" size="3"> Using numpy arrays, we can apply <b>cumulative sum change detection</b> to the entire image stack efficiently. We take advantage of array slicing and axis-based computation in numpy. <b>Axis 0 is the time domain</b> in our raster stacks.
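<font face="Calibri" size="3">The cumulative-sum idea can be illustrated on a single pixel: residuals of a stable series keep the cumulative sum near zero, while a step change makes it drift, so $S_{max} - S_{min}$ is large. A minimal sketch on synthetic values (not data from this stack):</font>

```python
import numpy as np

# synthetic dB series for one pixel with a step change at t=10
series = np.concatenate([np.full(10, -7.0), np.full(10, -10.0)])
residuals = series - series.mean()
S = np.cumsum(residuals)
print(S.max() - S.min())  # 15.0 -> large change magnitude

# a series of the same length without a step stays at zero
stable = np.full(20, -7.0)
S2 = np.cumsum(stable - stable.mean())
print(S2.max() - S2.min())  # 0.0
```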
<hr>
<font size="4"><b>4.1 Create our time series stack</b></font>
<br><br>
<font size="3"><b>Calculate the dB scale:</b></font>
```
db = 10.*np.ma.log10(raster_stack_masked)
```
<font face="Calibri" size="3">Sometimes it makes sense to <b>extract a reduced time span</b> from the full time series to reduce the number of different change objects in a scene. In the following, we extract a shorter time span:
</font>
```
date_picker = asfn.gui_date_picker(dates)
date_picker
subset_dates = date_picker.value
subset_dates = pd.DatetimeIndex(subset_dates)
date_index_subset = np.where((time_index>=subset_dates[0]) & (time_index<=subset_dates[1]))
db_subset = np.squeeze(db[date_index_subset, :, :])
time_index_subset = time_index[date_index_subset]
plt.figure(figsize=(12, 8))
band_number = 0
vmin = np.percentile(db_subset[band_number], 5)
vmax = np.percentile(db_subset[band_number], 95)
plt.title('Band {} {}'.format(band_number+1, time_index_subset[band_number].date()))
plt.imshow(db_subset[0], cmap='gray', vmin=vmin, vmax=vmax)
cbar = plt.colorbar()
_ = cbar.ax.set_xlabel('dB', fontsize='12')
```
<br>
<hr>
<font face="Calibri" size="4"> <b> 4.2 Calculate Mean Across Time Series to Prepare for Calculation of Cumulative Sum $S$:</b> </font>
<br><br>
<font face="Calibri" size="3"><b>Write a function to convert our plots into GeoTiffs:</b></font>
```
def geotiff_from_plot(source_image, out_filename, extent, utm, cmap=None, vmin=None, vmax=None, interpolation=None, dpi=300):
assert "." not in out_filename, 'Error: Do not include the file extension in out_filename'
assert type(extent) == list and len(extent) == 2 and len(extent[0]) == 2 and len(
extent[1]) == 2, 'Error: extent must be a list in the form [[upper_left_x, upper_left_y], [lower_right_x, lower_right_y]]'
plt.figure()
plt.axis('off')
plt.imshow(source_image, cmap=cmap, vmin=vmin, vmax=vmax, interpolation=interpolation)
temp = f"{out_filename}_temp.png"
plt.savefig(temp, dpi=dpi, transparent='true', bbox_inches='tight', pad_inches=0)
cmd = f"gdal_translate -of Gtiff -a_ullr {extent[0][0]} {extent[0][1]} {extent[1][0]} {extent[1][1]} -a_srs EPSG:{utm} {temp} {out_filename}.tiff"
!{cmd}
try:
os.remove(temp)
except FileNotFoundError:
pass
```
<font face="Calibri" size="3"><b>Create a directory in which to store our plots and animations:</b></font>
```
output_path = f"{tiff_dir}/plots_and_animations"
asfn.new_directory(output_path)
```
<font face="Calibri" size="3"><b>Plot the time-series mean and save as a png (time_series_mean.png):</b></font>
```
db_mean = np.mean(db_subset, axis=0)
plt.figure(figsize=(12, 8))
plt.imshow(db_mean, cmap='gray')
cbar = plt.colorbar()
cbar.ax.set_xlabel('dB', fontsize='12')
plt.savefig(f"{output_path}/time_series_mean.png", dpi=300, transparent='true')
```
<font face="Calibri" size="3"><b>Save the time-series mean as a GeoTiff (time_series_mean.tiff):</b></font>
```
%%capture
geotiff_from_plot(db_mean, f"{output_path}/time_series_mean", coords, utm, cmap='gray')
```
<font face="Calibri" size="3"><b>Calculate the residuals and plot residuals[0]. Save it as a png (residuals.png):</b></font>
```
residuals = db_subset - db_mean
plt.figure(figsize=(12, 8))
plt.imshow(residuals[0])
plt.title('Residuals for Band {} {}'.format(band_number+1, time_index_subset[band_number].date()))
cbar = plt.colorbar()
_ = cbar.ax.set_xlabel('dB', fontsize='12')
plt.savefig(f"{output_path}/residuals.png", dpi=300, transparent='true')
```
<font face="Calibri" size="3"><b>Save the residuals[0] as a GeoTiff (residuals.tiff):</b></font>
```
%%capture
geotiff_from_plot(residuals[0], f"{output_path}/residuals", coords, utm)
```
<br>
<hr>
<font face="Calibri" size="4"><b> 4.3 Calculate Cumulative Sum $S$ as well as Change Magnitude $S_{diff}$:</b></font>
<br><br>
<font face="Calibri" size="3"><b>Plot Smin, Smax, and the change magnitude and save a png of the plots (Smin_Smax_Sdiff.png):</b></font>
```
summation = np.cumsum(residuals, axis=0)
summation_max = np.max(summation, axis=0)
summation_min = np.min(summation, axis=0)
change_mag = summation_max - summation_min
fig, ax = plt.subplots(1, 3, figsize=(16, 4))
vmin = np.percentile(summation_min.flatten(), 3)
vmax = np.percentile(summation_max.flatten(), 97)
max_plot = ax[0].imshow(summation_max, vmin=vmin, vmax=vmax)
ax[0].set_title('$S_{max}$')
ax[1].imshow(summation_min, vmin=vmin, vmax=vmax)
ax[1].set_title('$S_{min}$')
ax[2].imshow(change_mag, vmin=vmin, vmax=vmax)
ax[2].set_title('Change Magnitude')
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.02, 0.7])
cbar = fig.colorbar(max_plot, cax=cbar_ax)
_ = cbar.ax.set_xlabel('dB', fontsize='12')
plt.savefig(f"{output_path}/Smin_Smax_Sdiff.png", dpi=300, transparent='true')
```
<font face="Calibri" size="3"><b>Save Smax as a GeoTiff (Smax.tiff):</b></font>
```
%%capture
geotiff_from_plot(summation_max, f"{output_path}/Smax", coords, utm, vmin=vmin, vmax=vmax)
```
<font face="Calibri" size="3"><b>Save Smin as a GeoTiff (Smin.tiff):</b></font>
```
%%capture
geotiff_from_plot(summation_min, f"{output_path}/Smin", coords, utm, vmin=vmin, vmax=vmax)
```
<font face="Calibri" size="3"><b>Save the change magnitude as a GeoTiff (Sdiff.tiff):</b></font>
```
%%capture
geotiff_from_plot(change_mag, f"{output_path}/Sdiff", coords, utm, vmin=vmin, vmax=vmax)
```
<br>
<hr>
<font face="Calibri" size="4"> <b> 4.4 Mask $S_{diff}$ With an a-priori Threshold To Identify Change Candidates:</b> </font>
<font face="Calibri" size="3">To identify change candidate pixels, we can threshold $S_{diff}$, which reduces the computational load of the bootstrapping. For land cover change, we would not expect more than 5-10% of the pixels in a landscape to change. So, if the test region is reasonably large, setting a threshold for expected change to 10% is appropriate. In our example, we'll start out with a very conservative threshold of 50%.
<br><br>
<b>Plot and save the histogram and CDF for the change magnitude (change_mag_histogram_CDF.png):</b></font>
```
plt.rcParams.update({'font.size': 14})
fig = plt.figure(figsize=(14, 6)) # Initialize figure with a size
ax1 = fig.add_subplot(121)  # 121: 1 row, 2 columns, first plot
ax2 = fig.add_subplot(122)
# First plot: histogram
# IMPORTANT: To get a histogram, we first need to *flatten*
# the two-dimensional image into a one-dimensional vector.
histogram = ax1.hist(change_mag.flatten(), bins=200, range=(0, np.max(change_mag)))
ax1.xaxis.set_label_text('Change Magnitude')
ax1.set_title('Change Magnitude Histogram')
plt.grid()
n, bins, patches = ax2.hist(change_mag.flatten(), bins=200, range=(0, np.max(change_mag)), cumulative='True', density='True', histtype='step', label='Empirical')
ax2.xaxis.set_label_text('Change Magnitude')
ax2.set_title('Change Magnitude CDF')
plt.grid()
plt.savefig(f"{output_path}/change_mag_histogram_CDF", dpi=72)
percentile = 0.5
out_indices = np.where(n > percentile)
threshold_index = np.min(out_indices)
threshold = bins[threshold_index]
print('At the {}% percentile, the threshold value is {:2.2f}'.format(percentile*100, threshold))
```
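<font face="Calibri" size="3">The same threshold can also be read off directly with <code>np.quantile</code> on the flattened change magnitude, without building a histogram first — a sketch on synthetic data (the names here are illustrative, not the notebook's variables):</font>

```python
import numpy as np

rng = np.random.default_rng(42)
change_mag_demo = rng.exponential(scale=2.0, size=(100, 100))

# keep the top 50% of pixels as change candidates
threshold_demo = np.quantile(change_mag_demo, 0.5)
candidates = change_mag_demo >= threshold_demo
print(round(candidates.mean(), 2))  # 0.5 -> half the pixels pass
```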
<font face="Calibri" size="3">Using this threshold, we can <b>visualize our change candidate areas and save them as a png (change_candidate.png):</b></font>
```
change_mag_mask = change_mag < threshold
plt.figure(figsize=(12, 8))
plt.title('Change Candidate Areas (black)')
_ = plt.imshow(change_mag_mask, cmap='gray')
plt.savefig(f"{output_path}/change_candidate.png", dpi=300, transparent='true')
```
<font face="Calibri" size="3"><b>Save the change candidate areas as a GeoTiff (change_candidate.tiff):</b>
</font>
```
%%capture
geotiff_from_plot(change_mag_mask, f"{output_path}/change_candidate", coords, utm, cmap='gray')
```
<br>
<hr>
<font face="Calibri" size="4"> <b> 4.5 Bootstrapping to Prepare for Change Point Selection:</b> </font>
<font face="Calibri" size="3">We can now perform bootstrapping over the candidate pixels. The workflow is as follows:
<ul>
<li>Filter our residuals to the change candidate pixels</li>
<li>Perform bootstrapping over candidate pixels</li>
</ul>
For efficient computation, we permute the index of the time axis.
</font>
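<font face="Calibri" size="3">Before running it on the full stack, the bootstrapping logic — shuffle the time axis, recompute $S_{diff}$, and count how often the observed value beats the randomized one — can be sketched for a single pixel (a simplified 1-D illustration, not the notebook's implementation):</font>

```python
import numpy as np

def s_diff(res):
    # change magnitude of a 1-D residual series
    S = np.cumsum(res)
    return S.max() - S.min()

rng = np.random.default_rng(1)
# residuals of one pixel containing a real step change
residuals_1d = np.concatenate([np.full(10, 1.5), np.full(10, -1.5)])
observed = s_diff(residuals_1d)

# permutation bootstrap: shuffling destroys the temporal order
n_boot = 200
wins = sum(observed > s_diff(rng.permutation(residuals_1d))
           for _ in range(n_boot))
confidence = wins / n_boot
print(confidence > 0.9)  # True: the change is significant
```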
```
residuals_mask = np.broadcast_to(change_mag_mask , residuals.shape)
residuals_masked = np.ma.array(residuals, mask=residuals_mask)
```
<font face="Calibri" size="3">On the masked time series stack of residuals, we can re-compute the cumulative sums:
</font>
```
summation_masked = np.ma.cumsum(residuals_masked, axis=0)
```
<font face="Calibri" size="3"><b>Plot the masked Smax, Smin, and change magnitude. Save them as a png (masked_Smax_Smin_Sdiff.png):</b>
</font>
```
summation_masked_max = np.ma.max(summation_masked, axis=0)
summation_masked_min = np.ma.min(summation_masked, axis=0)
change_mag_masked = summation_masked_max - summation_masked_min
fig, ax = plt.subplots(1, 3, figsize=(16, 4))
vmin = summation_masked_min.min()
vmax = summation_masked_max.max()
masked_sum_max_plot = ax[0].imshow(summation_masked_max, vmin=vmin, vmax=vmax)
ax[0].set_title('Masked $S_{max}$')
ax[1].imshow(summation_masked_min, vmin=vmin, vmax=vmax)
ax[1].set_title('Masked $S_{min}$')
ax[2].imshow(change_mag_masked, vmin=vmin, vmax=vmax)
ax[2].set_title('Masked Change Magnitude')
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.15, 0.02, 0.7])
cbar = fig.colorbar(masked_sum_max_plot, cax=cbar_ax)
_ = cbar.ax.set_xlabel('dB', fontsize='12')
plt.savefig(f"{output_path}/masked_Smax_Smin_Sdiff.png", dpi=300, transparent='true')
```
<font face="Calibri" size="3"><b>Save the masked Smax as a GeoTiff (masked_Smax.tiff):</b>
</font>
```
%%capture
geotiff_from_plot(summation_masked_max, f"{output_path}/masked_Smax", coords, utm, vmin=vmin, vmax=vmax)
```
<font face="Calibri" size="3"><b>Save the masked Smin as a GeoTiff (masked_Smin.tiff):</b>
</font>
```
%%capture
geotiff_from_plot(summation_masked_min, f"{output_path}/masked_Smin", coords, utm, vmin=vmin, vmax=vmax)
```
<font face="Calibri" size="3"><b>Save the masked change magnitude as a GeoTiff (masked_Sdiff.tiff):</b>
</font>
```
%%capture
geotiff_from_plot(change_mag_masked, f"{output_path}/masked_Sdiff", coords, utm, vmin=vmin, vmax=vmax)
```
<font face="Calibri" size="3">Now let's perform <b>bootstrapping</b>:
</font>
```
random_index = np.random.permutation(residuals_masked.shape[0])
residuals_random = residuals_masked[random_index,:,:]
n_bootstraps = 100 # bootstrap sample size
# to keep track of the maxium Sdiff of the bootstrapped sample:
change_mag_random_max = np.ma.copy(change_mag_masked)
change_mag_random_max[~change_mag_random_max.mask]=0
# to compute the Sdiff sums of the bootstrapped sample:
change_mag_random_sum = np.ma.copy(change_mag_masked)
change_mag_random_sum[~change_mag_random_max.mask]=0
# to keep track of the count of the bootstrapped sample
n_change_mag_gt_change_mag_random = np.ma.copy(change_mag_masked)
n_change_mag_gt_change_mag_random[~n_change_mag_gt_change_mag_random.mask]=0
print("Running bootstrapping for %d iterations ..." % n_bootstraps)
for i in range(n_bootstraps):
# For efficiency, we shuffle the time axis index and use that
#to randomize the masked array
random_index = np.random.permutation(residuals_masked.shape[0])
# Randomize the time step of the residuals
residuals_random = residuals_masked[random_index,:,:]
summation_random = np.ma.cumsum(residuals_random, axis=0)
summation_random_max = np.ma.max(summation_random, axis=0)
summation_random_min = np.ma.min(summation_random, axis=0)
change_mag_random = summation_random_max - summation_random_min
change_mag_random_sum += change_mag_random
change_mag_random_max[np.ma.greater(change_mag_random, change_mag_random_max)] = \
change_mag_random[np.ma.greater(change_mag_random, change_mag_random_max)]
n_change_mag_gt_change_mag_random[np.ma.greater(change_mag_masked, change_mag_random)] += 1
if ((i+1)/n_bootstraps*100)%10 == 0:
print("\r%4.1f%% completed" % ((i+1)/n_bootstraps*100), end='\r', flush=True)
print(f"Bootstrapping Complete")
```
<br>
<hr>
<font face="Calibri" size="4"> <b> 4.6 Extract Confidence Metrics and Select Final Change Points:</b> </font>
<font face="Calibri" size="3">We first <b>compute, for all pixels, the confidence level $CL$, the change point significance metric $CP_{significance}$, and the product of the two as our confidence metric for identified change points. Plot the results and save them as a png (confidenceLevel_CPSignificance.png):</b></font>
```
confidence_level = n_change_mag_gt_change_mag_random / n_bootstraps
change_point_significance = 1.- (change_mag_random_sum / n_bootstraps)/change_mag
#Plot
fig, ax = plt.subplots(1, 3, figsize=(16, 4))
a = ax[0].imshow(confidence_level*100)
cbar0 = fig.colorbar(a, ax=ax[0])
_ = cbar0.ax.set_xlabel('%', fontsize='12')
ax[0].set_title('Confidence Level %')
a = ax[1].imshow(change_point_significance)
_ = fig.colorbar(a, ax=ax[1])
ax[1].set_title('Significance')
a = ax[2].imshow(confidence_level*change_point_significance)
_ = fig.colorbar(a, ax=ax[2])
_ = ax[2].set_title('CL x S')
plt.savefig(f"{output_path}/confidenceLevel_CPSignificance.png", dpi=300, transparent='true')
```
<font face="Calibri" size="3"><b>Save the confidence level as a GeoTiff (confidence_level.tiff):</b>
</font>
```
%%capture
geotiff_from_plot(confidence_level*100, f"{output_path}/confidence_level", coords, utm)
```
<font face="Calibri" size="3"><b>Save the change point significance as a GeoTiff (cp_significance.tiff):</b>
</font>
```
%%capture
geotiff_from_plot(change_point_significance, f"{output_path}/cp_significance", coords, utm)
```
<font face="Calibri" size="3"><b>Save the product of the confidence level and change point significance as a GeoTiff (confidenceLevel_x_CPSignificance.tiff):</b>
</font>
```
%%capture
geotiff_from_plot(confidence_level*change_point_significance, f"{output_path}/confidenceLevel_x_CPSignificance", coords, utm)
```
<font face="Calibri" size="3">Now we can <b>set a change point threshold</b> to identify most likely change pixels in our map of change candidates:
</font>
```
change_point_threshold = 0.01
```
<font face="Calibri" size="3"><b>Plot the detected change pixels based on the change_point_threshold and save it as a png (detected_change_pixels.png):</b></font>
```
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(1, 1, 1)
plt.title('Detected Change Pixels based on Threshold %2.2f' % (change_point_threshold))
a = ax.imshow(confidence_level*change_point_significance < change_point_threshold, cmap='cool')
plt.savefig(f"{output_path}/detected_change_pixels.png", dpi=300, transparent='true')
```
<font face="Calibri" size="3"><b>Save the detected_change_pixels as a GeoTiff (detected_change_pixels.tiff):</b>
</font>
```
%%capture
geotiff_from_plot(confidence_level*change_point_significance < change_point_threshold, f"{output_path}/detected_change_pixels", coords, utm, cmap='cool')
```
<br>
<hr>
<font face="Calibri" size="4"> <b> 4.7 Derive Timing of Change for Each Change Pixel:</b> </font>
<font face="Calibri" size="3">Our last step in the identification of the change points is to extract the timing of the change. We will produce a raster layer that shows the band number of the first date at which a change was detected. We will make use of the numpy indexing scheme. First, we create a combined mask of the first threshold and the change points identified by the bootstrapping. For this we use the numpy "mask_or" operation.
</font>
```
# make a mask of our change points from the new threshold and the previous mask
change_point_mask = np.ma.mask_or(confidence_level*change_point_significance < change_point_threshold, confidence_level.mask)
# Broadcast the mask to the shape of the masked S curves
change_point_mask2 = np.broadcast_to(change_point_mask, summation_masked.shape)
# Make a numpy masked array with this mask
change_point_raster = np.ma.array(summation_masked.data, mask=change_point_mask2)
```
<font face="Calibri" size="3">To retrieve the dates of the change points we find the band indices in the time series along the time axis where the maximum of the cumulative sums was located. Numpy offers the "argmax" function for this purpose.
</font>
```
change_point_index = np.ma.argmax(change_point_raster, axis=0)
change_indices = list(np.unique(change_point_index))
print(change_indices)
change_indices.remove(0)
print(change_indices)
# Look up the dates from the indices to get the change dates
all_dates = time_index_subset
change_dates = [str(all_dates[x].date()) for x in change_indices]
```
<font face="Calibri" size="3">Lastly, we <b>plot the change dates by showing the $CP_{index}$ raster and label the change dates. Save the plot as a png (change_dates.png):</b></font>
```
ticks = change_indices
ticklabels = change_dates
cmap = plt.cm.get_cmap('tab20', ticks[-1])
fig, ax = plt.subplots(figsize=(12, 12))
cax = ax.imshow(change_point_index, interpolation='nearest', cmap=cmap)
# fig.subplots_adjust(right=0.8)
# cbar_ax = fig.add_axes([0.85, 0.15, 0.05, 0.7])
# fig.colorbar(p,cax=cbar_ax)
ax.set_title('Dates of Change')
# cbar = fig.colorbar(cax,ticks=ticks)
cbar = fig.colorbar(cax, ticks=ticks, orientation='horizontal')
_ = cbar.ax.set_xticklabels(ticklabels, size=10, rotation=45, ha='right')
plt.savefig(f"{output_path}/change_dates.png", dpi=300, transparent='true')
```
<font face="Calibri" size="3"><b>Save the change dates as a GeoTiff (change_dates.tiff):</b>
</font>
```
%%capture
geotiff_from_plot(change_point_index, f"{output_path}/change_dates", coords, utm, cmap=cmap, interpolation='nearest', dpi=600)
```
<font face="Calibri" size="2"> <i>GEOS 657 Microwave Remote Sensing - Version 1.3.0 - April 2021 </i>
<br>
<b>Version Changes</b>
<ul>
<li>namespace asf_notebook</li>
</ul>
</font>
```
import numpy as np  # used for np.arange below
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib notebook
df = pd.read_csv('BinSize_d{}.csv'.format(400))
station_locations_by_hash = df[df['hash'] == 'fb441e62df2d58994928907a91895ec62c2c42e6cd075c2700843b89']
lons = station_locations_by_hash['LONGITUDE'].tolist()
lats = station_locations_by_hash['LATITUDE'].tolist()
station_locations_by_hash
df = pd.read_csv('fb441e62df2d58994928907a91895ec62c2c42e6cd075c2700843b89.csv')
df.set_index('Date', inplace=True)
df.index = pd.DatetimeIndex(df.index)
df.sort_index(inplace=True)
dateIndex2005_2014 = pd.date_range(start='2005-01-01', end='2014-12-31', freq='D')
dateIndex2015 = pd.date_range(start='2015-01-01', end='2015-12-31', freq='D')
date2005_2014 = df.loc[dateIndex2005_2014]
date2005_2014.drop(['2012-02-29', '2008-02-29'], inplace=True, axis=0)
date2015 = df.loc[dateIndex2015]
Tmax_2005to2014 = date2005_2014[date2005_2014['Element'] == 'TMAX']
Tmin_2005to2014 = date2005_2014[date2005_2014['Element'] == 'TMIN']
Tmax2015 = date2015[date2015['Element'] == 'TMAX']
Tmin2015 = date2015[date2015['Element'] == 'TMIN']
dates = list(Tmax_2005to2014.index.astype('str'))
dates = list(map(lambda x: '-'.join(x.split('-')[1:]), dates))
Tmax_2005to2014.index = dates
dates = list(Tmin_2005to2014.index.astype('str'))
dates = list(map(lambda x: '-'.join(x.split('-')[1:]), dates))
Tmin_2005to2014.index = dates
tenYearsTmax = Tmax_2005to2014.groupby(level=0).max()
tenYearsTmin = Tmin_2005to2014.groupby(level=0).min()
Tmax2015 = Tmax2015.groupby(level=0).max()
Tmin2015 = Tmin2015.groupby(level=0).min()
xlabes = list(tenYearsTmin.index)
Tmax2015.index = tenYearsTmax.index
Tmin2015.index = tenYearsTmin.index
brokenTmin2015 = Tmin2015[Tmin2015['Data_Value'] < tenYearsTmin['Data_Value']].dropna()
brokenTmax2015 = Tmax2015[Tmax2015['Data_Value'] > tenYearsTmax['Data_Value']].dropna()
brokenTmax2015.index = pd.to_datetime(brokenTmax2015.index, format='%m-%d').dayofyear
brokenTmin2015.index = pd.to_datetime(brokenTmin2015.index, format='%m-%d').dayofyear
dateIndex2015 = pd.date_range(start='2015-01-01', end='2015-12-31', freq=pd.offsets.MonthBegin())
labelsRange = list(dateIndex2015.dayofyear)
labels = list(dateIndex2015.strftime('%b %d'))
labels
plt.figure(figsize=(12,8))
xvals = range(1, 366)
TmaxVals = list(tenYearsTmax['Data_Value'] * 0.1)
TminVals = list(tenYearsTmin['Data_Value'] * 0.1)
plt.plot(xvals, TmaxVals, 'c-', alpha=0.2)
plt.plot(xvals, TminVals, 'c-', alpha=0.2)
plt.scatter(list(brokenTmax2015.index), brokenTmax2015['Data_Value']*0.1, marker='.', c='r', alpha=0.6)
plt.scatter(list(brokenTmin2015.index), brokenTmin2015['Data_Value']*0.1, marker='.', c='b', alpha=0.6)
plt.fill_between(xvals,
TminVals, TmaxVals,
facecolor='c',
alpha=0.1)
for i, spine in enumerate(plt.gca().spines.values()):
if i in (1, 3):
spine.set_visible(False)
plt.tick_params(top='off', bottom='on', left='on', right='off', labelleft='on', labelbottom='on')
plt.title('2015 Max-Min Temperatures (per day) of Stations near Ann Arbor, Michigan, United States\nCompared to the Past Ten-Year Record', alpha=0.8)
plt.ylabel('Temperature (℃)', alpha=0.8)
plt.legend(['past ten year Tmax', 'past ten year Tmin', '2015 Tmax(broken record)', '2015 Tmin(broken record)'])
plt.xticks(labelsRange, labels, alpha=0.8)
x = plt.gca().xaxis
for item in x.get_ticklabels():
item.set_rotation(-45)
plt.subplots_adjust(bottom=0.2)
plt.savefig('text.jpg')
%%html
<img src='text.jpg' />
plt.figure()
languages =['Python', 'SQL', 'Java', 'C++', 'JavaScript']
pos = np.arange(len(languages))
popularity = [56, 39, 34, 34, 29]
# change the bar colors to be less bright blue
bars = plt.bar(pos, popularity, align='center', linewidth=0, color='lightslategrey')
# make one bar, the python bar, a contrasting color
bars[0].set_color('#1F77B4')
# soften all labels by turning grey
plt.xticks(pos, languages, alpha=0.8)
# TODO: remove the Y label since bars are directly labeled
plt.ylabel('% Popularity', alpha=0.8)
plt.title('Top 5 Languages for Math & Data \nby % popularity on Stack Overflow', alpha=0.8)
# remove all the ticks (both axes), and tick labels on the Y axis
plt.tick_params(top=False, bottom=False, left=False, right=False, labelleft=True, labelbottom=True)
# remove the frame of the chart
for spine in plt.gca().spines.values():
spine.set_visible(False)
# TODO: direct label each bar with Y axis values
plt.show()
```
# Dask Array
### What is Dask Array?
- Dask Array is composed of many NumPy or NumPy-like arrays (e.g. CuPy arrays) under the hood
- Dask Array implements a subset of the NumPy ndarray API using blocked algorithms
- These arrays may be streamed from the disk of a single computer or from multiple/distributed computers
- Dask Array uses the threaded scheduler in order to avoid data transfer costs, and because NumPy releases the GIL well
**Summary**: Dask arrays include many NumPy arrays which are loaded lazily and in parallel across parallel hardware.
Source: https://docs.dask.org/en/latest/array.html
<img src="../images/dask-array-black-text.svg" width="600" height="200" style="border-style: solid;">
```
import numpy as np
from dask.distributed import Client
import dask.dataframe as dd
```
#### Create a Dask Client connecting to the LocalCluster
```
client = Client(n_workers=10, threads_per_worker=1, memory_limit='2GB')
client
```
#### Read the data from multiple CSV files into a Dask DataFrame
```
# DATA_DIR = "../data/train"
DATA_DIR = "/opt/vssexclude/personal/kaggle/volcano/data/raw/train"
# Define the datatypes for different sensor data
data_types = {"sensor_1" : np.float32,
"sensor_2" : np.float32,
"sensor_3" : np.float32,
"sensor_4" : np.float32,
"sensor_5" : np.float32,
"sensor_6" : np.float32,
"sensor_7" : np.float32,
"sensor_8" : np.float32,
"sensor_9" : np.float32,
"sensor_10" : np.float32}
dd_sample_small = dd.read_csv(urlpath=f"{DATA_DIR}/1403*.csv", blocksize=None, dtype=data_types)
```
#### Create a Dask Array
Creating a Dask Array from a Dask DataFrame
```
da = dd_sample_small.to_dask_array(lengths=True)
da
print(f"Shape of the Dask Array: {da.shape}")
print(f"Number of chunks: {da.npartitions}")
print(f"Type of the data inside the array: {da.dtype}")
print(f"Shape of the Chunk {da.chunksize}")
print(f"Chunks of the Dask Array: {da.chunks}")
```
#### Deep dive into `chunks`
- This Dask Array is composed of 6 NumPy Arrays each having a shape (60001, 10)
- Dask Array stores the size of each block (NumPy Array) along each axis using a tuple of tuples
- Length of the outer tuple is equal to the number of dimensions of the array (In this case 2)
- Lengths of the inner tuples are equal to the number of blocks (NumPy Arrays) along each dimension
    - For dimension 1, it is 6.
- For dimension 2, it is 1.
<img src="../images/Dask_Array_Chunks.png" width="600" height="200" style="border-style: solid;">
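For a concrete view of that tuple-of-tuples bookkeeping, here is a minimal standalone sketch (the shape is chosen to mirror the six (60001, 10) blocks above):

```python
import dask.array as dask_array

# Six row-blocks of 60001 rows each, one column-block of 10 columns
x = dask_array.ones((360006, 10), chunks=(60001, 10))
print(x.chunks)  # ((60001, 60001, 60001, 60001, 60001, 60001), (10,))
```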
```
da.visualize()
```
#### Dask arrays support **most** of the NumPy interface like the following:
- Arithmetic and scalar mathematics: +, *, exp, log, ...
- Reductions along axes: sum(), mean(), std(), sum(axis=0), ...
- Tensor contractions / dot products / matrix multiply: tensordot
- Axis reordering / transpose: transpose
- Slicing: x[:100, 500:100:-2]
- Fancy indexing along single axes with lists or NumPy arrays: x[:, [10, 1, 5]]
- Array protocols like `__array__` and `__array_ufunc__`
- Some linear algebra: svd, qr, solve, solve_triangular, lstsq
#### Blocked Algorithms
- Dask Arrays are implemented using blocked algorithms
- These algorithms break up a computation on a large array into many computations on smaller pieces of the array
- As a result, data can be loaded from disk into main memory (RAM) on an as-needed basis
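As a rough illustration of the idea using plain NumPy (not Dask itself): reduce each block separately, then combine the partial results.

```python
import numpy as np

# Blocked mean: sum each block independently, then combine the partial
# sums -- the same pattern Dask applies across its chunks.
data = np.arange(12, dtype=np.float64)
blocks = np.split(data, 3)                     # three blocks of 4 elements
partial_sums = [block.sum() for block in blocks]
blocked_mean = sum(partial_sums) / data.size
print(blocked_mean)  # 5.5
```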
### Slice the Array
```
da[60000:60010, 1:3]
da[60000:60010, 1:3].visualize()
da[60000:60010, 1:3].compute()
```
#### Compute Mean of a slice
```
da[60000:60010, 1:3].mean()
da[60000:60010, 1:3].mean().visualize()
da[60000:60010, 1:3].mean().compute()
type(da[60000:60010, 1:3].mean().compute())
```
#### Close the Dask Client
```
client.close()
```
```
# General purpose libraries
import boto3
import copy
import csv
import datetime
import json
import numpy as np
import pandas as pd
import s3fs
from collections import defaultdict
import time
import re
import random
from sentence_transformers import SentenceTransformer
import sentencepiece
from scipy.spatial import distance
from json import JSONEncoder
import sys
sys.path.append("/Users/dafirebanks/Projects/policy-data-analyzer/")
sys.path.append("C:/Users/jordi/Documents/GitHub/policy-data-analyzer/")
from tasks.data_loading.src.utils import *
```
### 1. Set up AWS
```
def aws_credentials_from_file(f_name):
with open(f_name, "r") as f:
creds = json.load(f)
return creds["aws"]["id"], creds["aws"]["secret"]
def aws_credentials(path, filename):
file = path + filename
with open(file, 'r') as dict:
key_dict = json.load(dict)
for key in key_dict:
KEY = key
SECRET = key_dict[key]
return KEY, SECRET
```
### 2. Optimized full loop
```
def load_all_sentences(language, s3, bucket_name, init_doc, end_doc):
policy_dict = {}
sents_folder = f"{language}_documents/sentences"
    for i, obj in enumerate(s3.Bucket(bucket_name).objects.filter(Prefix=f"{sents_folder}/")):
if not obj.key.endswith("/") and init_doc <= i < end_doc:
serializedObject = obj.get()['Body'].read()
policy_dict = {**policy_dict, **json.loads(serializedObject)}
return labeled_sentences_from_dataset(policy_dict)
def save_results_as_separate_csv(results_dictionary, queries_dictionary, init_doc, results_limit, aws_id, aws_secret):
path = "s3://wri-nlp-policy/english_documents/assisted_labeling"
col_headers = ["sentence_id", "similarity_score", "text"]
for i, query in enumerate(results_dictionary.keys()):
filename = f"{path}/query_{queries_dictionary[query]}_{i}_results_{init_doc}.csv"
pd.DataFrame(results_dictionary[query], columns=col_headers).head(results_limit).to_csv(filename, storage_options={"key": aws_id, "secret": aws_secret})
def labeled_sentences_from_dataset(dataset):
sentence_tags_dict = {}
for document in dataset.values():
sentence_tags_dict.update(document['sentences'])
return sentence_tags_dict
# Set up AWS
credentials_file = '/Users/dafirebanks/Documents/credentials.json'
aws_id, aws_secret = aws_credentials_from_file(credentials_file)
region = 'us-east-1'
s3 = boto3.resource(
service_name = 's3',
region_name = region,
aws_access_key_id = aws_id,
aws_secret_access_key = aws_secret
)
path = "C:/Users/jordi/Documents/claus/"
filename = "AWS_S3_keys_wri.json"
aws_id, aws_secret = aws_credentials(path, filename)
region = 'us-east-1'
bucket = 'wri-nlp-policy'
s3 = boto3.resource(
service_name = 's3',
region_name = region,
aws_access_key_id = aws_id,
aws_secret_access_key = aws_secret
)
# Define params
init_at_doc = 3284
end_at_doc = 4926
similarity_threshold = 0
search_results_limit = 500
language = "english"
bucket_name = 'wri-nlp-policy'
transformer_name = 'xlm-r-bert-base-nli-stsb-mean-tokens'
model = SentenceTransformer(transformer_name)
# Get all sentence documents
sentences = load_all_sentences(language, s3, bucket_name, init_at_doc, end_at_doc )
# Define queries
path = "../../input/"
filename = "English_queries.xlsx"
file = path + filename
df = pd.read_excel(file, engine='openpyxl', sheet_name = "Hoja1", usecols = "A:C")
queries = {}
for index, row in df.iterrows():
queries[row['Query sentence']] = row['Policy instrument']
# Calculate and store query embeddings
query_embeddings = dict(zip(queries, [model.encode(query.lower(), show_progress_bar=False) for query in queries]))
# For each sentence, calculate its embedding, and store the similarity
query_similarities = defaultdict(list)
i = 0
for sentence_id, sentence in sentences.items():
sentence_embedding = model.encode(sentence['text'].lower(), show_progress_bar=False)
i += 1
if i % 100 == 0:
print(i)
for query_text, query_embedding in query_embeddings.items():
score = round(1 - distance.cosine(sentence_embedding, query_embedding), 4)
if score > similarity_threshold:
query_similarities[query_text].append([sentence_id, score, sentences[sentence_id]['text']])
# Sort results by similarity score
for query in query_similarities:
query_similarities[query] = sorted(query_similarities[query], key = lambda x : x[1], reverse=True)
# Store results
save_results_as_separate_csv(query_similarities, queries, init_at_doc, search_results_limit, aws_id, aws_secret)
```
```
from __future__ import print_function, division
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
%matplotlib inline
import pandas as pd
```
# Import data & initial guess
```
def create_filepaths(numbers, pre_path):
padded_numbers = []
file_ext = '.dat'
for n in numbers:
if n <= 9:
padded_numbers = np.append(padded_numbers, pre_path + '00' + str(n) + file_ext)
elif n <= 99:
padded_numbers = np.append(padded_numbers, pre_path + '0' + str(n) + file_ext)
else:
padded_numbers = np.append(padded_numbers, pre_path + str(n) + file_ext)
return padded_numbers
def decayingSinModel(time, freq, T_decay, amp, phase, offset, drift):
    # Exponentially decaying sinusoid with a linear drift term
return amp * np.exp(-time/T_decay) * np.sin(2*np.pi*( freq*time ) + np.radians(phase)) + offset + (drift*time)
def ramsey_fit_guess_default():
freq_guess = 5.0 # MHz
T_decay_guess = 1.0 # us
amp_guess = 0.5
phase_guess = 180
offset_guess = 0.5
drift_guess = 0.0
return [freq_guess, T_decay_guess, amp_guess, phase_guess, offset_guess, drift_guess]
#date = '120417'
#file_numbers = [1, 28]
#pressures = [0, 0.335]
def ramsey_fit_test(date, file_numbers, pressures=[], pressure_errors=[], guess=ramsey_fit_guess_default(),
eval_time=0.0, crop=[0,0], local=False, figSize=(15.0, 4.0), normalise=False):
if local:
file_path = "SR" + date + "_"
full_paths = create_filepaths(file_numbers, file_path)
else:
        file_path = "C:\\data\\" + date + "\\SR" + date + "_"
full_paths = create_filepaths(file_numbers, file_path)
if pressures == []: pressures = np.arange(1, len(full_paths)+1, 1)
matplotlib.rcParams['figure.figsize'] = figSize
min_time, max_time = 0, 0
timeScale = 1000
for i, path in enumerate(full_paths):
data = np.loadtxt(path)
time = data[:,1] * 1E6
time = time[crop[0]:len(time)-crop[1]]
p_g = data[:,4] * 1E9
p_g = p_g[crop[0]:len(p_g)-crop[1]]
if normalise:
p_g = p_g - np.min(p_g)
p_g = p_g / np.max(p_g)
min_time = np.min([min_time, np.min(time)])
max_time = np.max([max_time, np.max(time)])
plt.plot(time*timeScale, p_g, alpha=0.5, label=str(pressures[i]))
timeSteps = np.linspace(min_time, max_time, 1000)
plt.plot(timeSteps*timeScale, decayingSinModel(timeSteps, *guess), '--', lw=3, color=[1.0,0.2,0.2], label='Fit guess')
plt.xlabel('Time (ns)')
plt.ylabel('Ground state probability, $P_g$')
plt.grid()
    plt.legend(title='Pressure (mbar)')
#ramsey_fit_test(date, file_numbers, pressures, local=False, normalise=True)
```
# Fit sinusoidal waveforms
```
def ramsey_fit(date, file_numbers, pressures=[], pressure_errors=[], guess=ramsey_fit_guess_default(),
eval_time=0.0, crop=[0,0], local=False, figSize=(15.0, 4.0), normalise=False):
if local:
file_path = "SR" + date + "_"
full_paths = create_filepaths(file_numbers, file_path)
else:
        file_path = "C:\\data\\" + date + "\\SR" + date + "_"
full_paths = create_filepaths(file_numbers, file_path)
matplotlib.rcParams['figure.figsize'] = (15.0, 4.0)
colors = ['k','r','g','b','c','m','y']
params = ['Frequency', 'T decay', 'Amplitude', 'Initial phase', 'Offset', 'Drift']
if pressures == []: pressures = np.arange(1, len(full_paths)+1, 1)
if pressure_errors == []: pressure_errors = np.zeros(len(full_paths))
popts = []
perrs = []
df = pd.DataFrame(columns=['Pressure', 'Pressure error', *params, *[p + ' error' for p in params]])
min_time, max_time = 0, 0
timeScale = 1000
for i, path in enumerate(full_paths):
data = np.loadtxt(path)
time = data[:,1] * 1E6
time = time[crop[0]:len(time)-crop[1]]
p_g = data[:,4] * 1E9
p_g = p_g[crop[0]:len(p_g)-crop[1]]
if normalise:
p_g = p_g - np.min(p_g)
print('Prescale max', np.max(p_g))
p_g = p_g / np.max(p_g)
min_time = np.min([min_time, np.min(time)])
max_time = np.max([max_time, np.max(time)])
popt,pcov = curve_fit(decayingSinModel, time, p_g, p0=guess)
perr = np.sqrt(np.diag(pcov))
popts = np.concatenate((popts, popt), axis=0)
perrs = np.concatenate((perrs, perr), axis=0)
df.loc[i] = [pressures[i], pressure_errors[i], *popt, *perr]
matplotlib.rcParams['figure.figsize'] = figSize
timeSteps = np.linspace(min_time, max_time, 1000)
p_g_fit = decayingSinModel(timeSteps, *popt)
plt.plot(time*timeScale, p_g, '-', lw=2, color=colors[np.mod(i, len(colors))], alpha=0.5, label=str(pressures[i]))
plt.plot(timeSteps*timeScale, p_g_fit, '--', lw=2, color=colors[np.mod(i, len(colors))], alpha=1.0)
plt.xlabel('Time (ns)')
plt.ylabel('Ground state probability, $P_g$')
plt.grid()
plt.legend(title='Pressure (mbar)')
popts = np.reshape(popts, [len(file_numbers), len(params)])
perrs = np.reshape(perrs, [len(file_numbers), len(params)])
ref_popt = popts[0]
diff_freq = popts[:,0] - ref_popt[0]
diff_init_phase = popts[:,3] - ref_popt[3]
if eval_time != 0.0: diff_eval_phase = (360 * diff_freq * eval_time) + diff_init_phase # MHz * us
diff_phase = (360 * diff_freq)
if eval_time != 0.0: plt.axvline(x=eval_time, color='r', linestyle='--')
df['Phase shift /t'] = diff_phase
ref_error = df['Frequency error'][0]
df['Phase shift /t error'] = ((df['Frequency error']**2 + ref_error**2)**0.5)*360
if eval_time != 0.0: df['Phase shift at T'] = diff_eval_phase
columns = ['Pressure', 'Pressure error', *list(np.array([[p, p + ' error'] for p in params]).flatten()), 'Phase shift /t', 'Phase shift /t error']
if eval_time != 0.0: columns = [*columns, 'Phase shift at T']
return df[columns]
df = ramsey_fit(date, file_numbers, pressures, crop=[1,40], local=False, normalise=True)
df
```
# Comparing machine learning models in scikit-learn
*From the video series: [Introduction to machine learning with scikit-learn](https://github.com/justmarkham/scikit-learn-videos)*
```
#environment setup with watermark
%load_ext watermark
%watermark -a 'Gopala KR' -u -d -v -p watermark,numpy,pandas,matplotlib,nltk,sklearn,tensorflow,theano,mxnet,chainer
```
## Agenda
- How do I choose **which model to use** for my supervised learning task?
- How do I choose the **best tuning parameters** for that model?
- How do I estimate the **likely performance of my model** on out-of-sample data?
## Review
- Classification task: Predicting the species of an unknown iris
- Used three classification models: KNN (K=1), KNN (K=5), logistic regression
- Need a way to choose between the models
**Solution:** Model evaluation procedures
## Evaluation procedure #1: Train and test on the entire dataset
1. Train the model on the **entire dataset**.
2. Test the model on the **same dataset**, and evaluate how well we did by comparing the **predicted** response values with the **true** response values.
```
# read in the iris data
from sklearn.datasets import load_iris
iris = load_iris()
# create X (features) and y (response)
X = iris.data
y = iris.target
```
### Logistic regression
```
# import the class
from sklearn.linear_model import LogisticRegression
# instantiate the model (using the default parameters)
logreg = LogisticRegression()
# fit the model with data
logreg.fit(X, y)
# predict the response values for the observations in X
logreg.predict(X)
# store the predicted response values
y_pred = logreg.predict(X)
# check how many predictions were generated
len(y_pred)
```
Classification accuracy:
- **Proportion** of correct predictions
- Common **evaluation metric** for classification problems
```
# compute classification accuracy for the logistic regression model
from sklearn import metrics
print(metrics.accuracy_score(y, y_pred))
```
- Known as **training accuracy** when you train and test the model on the same data
### KNN (K=5)
```
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X, y)
y_pred = knn.predict(X)
print(metrics.accuracy_score(y, y_pred))
```
### KNN (K=1)
```
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X, y)
y_pred = knn.predict(X)
print(metrics.accuracy_score(y, y_pred))
```
### Problems with training and testing on the same data
- Goal is to estimate likely performance of a model on **out-of-sample data**
- But, maximizing training accuracy rewards **overly complex models** that won't necessarily generalize
- Unnecessarily complex models **overfit** the training data

*Image Credit: [Overfitting](http://commons.wikimedia.org/wiki/File:Overfitting.svg#/media/File:Overfitting.svg) by Chabacano. Licensed under GFDL via Wikimedia Commons.*
## Evaluation procedure #2: Train/test split
1. Split the dataset into two pieces: a **training set** and a **testing set**.
2. Train the model on the **training set**.
3. Test the model on the **testing set**, and evaluate how well we did.
```
# print the shapes of X and y
print(X.shape)
print(y.shape)
# STEP 1: split X and y into training and testing sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=4)
```

What did this accomplish?
- Model can be trained and tested on **different data**
- Response values are known for the testing set, and thus **predictions can be evaluated**
- **Testing accuracy** is a better estimate than training accuracy of out-of-sample performance
```
# print the shapes of the new X objects
print(X_train.shape)
print(X_test.shape)
# print the shapes of the new y objects
print(y_train.shape)
print(y_test.shape)
# STEP 2: train the model on the training set
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
# STEP 3: make predictions on the testing set
y_pred = logreg.predict(X_test)
# compare actual response values (y_test) with predicted response values (y_pred)
print(metrics.accuracy_score(y_test, y_pred))
```
Repeat for KNN with K=5:
```
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print(metrics.accuracy_score(y_test, y_pred))
```
Repeat for KNN with K=1:
```
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print(metrics.accuracy_score(y_test, y_pred))
```
Can we locate an even better value for K?
```
# try K=1 through K=25 and record testing accuracy
k_range = list(range(1, 26))
scores = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors=k)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
scores.append(metrics.accuracy_score(y_test, y_pred))
# import Matplotlib (scientific plotting library)
import matplotlib.pyplot as plt
# allow plots to appear within the notebook
%matplotlib inline
# plot the relationship between K and testing accuracy
plt.plot(k_range, scores)
plt.xlabel('Value of K for KNN')
plt.ylabel('Testing Accuracy')
```
- **Training accuracy** rises as model complexity increases
- **Testing accuracy** penalizes models that are too complex or not complex enough
- For KNN models, complexity is determined by the **value of K** (lower value = more complex)
## Making predictions on out-of-sample data
```
# instantiate the model with the best known parameters
knn = KNeighborsClassifier(n_neighbors=11)
# train the model with X and y (not X_train and y_train)
knn.fit(X, y)
# make a prediction for an out-of-sample observation
knn.predict([[3, 5, 4, 2]])
```
## Downsides of train/test split?
- Provides a **high-variance estimate** of out-of-sample accuracy
- **K-fold cross-validation** overcomes this limitation
- But, train/test split is still useful because of its **flexibility and speed**
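A minimal sketch of that K-fold idea on the same iris data, using the modern `sklearn.model_selection` API (exact scores will vary slightly by scikit-learn version):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

iris = load_iris()
knn = KNeighborsClassifier(n_neighbors=5)

# Average accuracy over 10 different train/test splits for a
# lower-variance estimate of out-of-sample performance
scores = cross_val_score(knn, iris.data, iris.target, cv=10, scoring='accuracy')
print(scores.mean())
```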
## Resources
- Quora: [What is an intuitive explanation of overfitting?](http://www.quora.com/What-is-an-intuitive-explanation-of-overfitting/answer/Jessica-Su)
- Video: [Estimating prediction error](https://www.youtube.com/watch?v=_2ij6eaaSl0&t=2m34s) (12 minutes, starting at 2:34) by Hastie and Tibshirani
- [Understanding the Bias-Variance Tradeoff](http://scott.fortmann-roe.com/docs/BiasVariance.html)
- [Guiding questions](https://github.com/justmarkham/DAT8/blob/master/homework/09_bias_variance.md) when reading this article
- Video: [Visualizing bias and variance](http://work.caltech.edu/library/081.html) (15 minutes) by Abu-Mostafa
## Comments or Questions?
- Email: <kevin@dataschool.io>
- Website: http://dataschool.io
- Twitter: [@justmarkham](https://twitter.com/justmarkham)
```
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
```
# Importing Libraries
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from imblearn.over_sampling import SMOTE
```
# Importing Dataset
```
data = pd.read_csv('MIES_Dev_Data/data.csv', sep='\t')
data.head()
for column in data.columns:
print(column, end = " ")
```
# Feature Analysis
According to the codebook provided by the following study: https://openpsychometrics.org/tests/MIES/development/,
which is present in the **./MIES_Dev_Data** directory,
- **Q(Integer)A** features represent the response of the user
- **Q(Integer)I** features stand for the position of the questions in the survey, which was randomized
- **Q(Integer)E** features stand for the time elapsed for the question to be answered

Clearly we don't require the **Q(Integer)I** features for the prediction algorithm.
**Q(Integer)E** features can contribute unstable results and are unreliable, as a user might take varying time to answer for multiple reasons.
```
# Features to be dropped
target = [column for column in data.columns if (column[0] == 'Q' and (column[-1] == 'I' or column[-1] == 'E'))]
print(target)
data.drop(target, axis = 1, inplace = True)
for column in data.columns:
print(column, end = " ")
```
##### We also don't require the following features for obvious reasons (Access the codebook mentioned above for clarification)
country, dateload, introelapse, testelapse, surveyelapse, engnat
```
data.drop(['country', 'dateload', 'introelapse', 'testelapse', 'surveyelapse', 'engnat'], axis = 1, inplace = True)
```
**Gender** & **Age** might be able to contribute some insights into this prediction
**IE** is our target output
IE == 1 -> Introvert
IE == 2 -> Extravert
IE == 3 -> Ambivert
```
data.head()
```
#### We don't need to scale the data since the values are close in range; we also don't need to one-hot encode the data since it is supposed to be ordinal
## Examining the output
```
data['IE'].unique()
```
### According to the codebook, IE == 0 should not be present
```
# rows to be deleted
target = [i for i in range(len(data)) if data['IE'][i] == 0]
print("The percentage of missing IE data is :", (len(target) * 100) / len(data))
# Deleting the rows
data.drop(target, axis = 0, inplace = True)
data = data.reset_index(drop = True)
```
# Missing values
```
plt.figure(figsize = (14,5))
sns.heatmap(data.isna(), cmap = 'viridis')
plt.show()
data_is_na = data.isna().copy()
for column in data_is_na.columns:
if True in data_is_na[column].unique():
print("Missing Data Present")
```
### Hence, there is no missing data
# Analysis of remaining data
```
data['gender'].unique()
```
### gender == 0 can be classified into "Rather not say"
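One way to apply that relabeling is sketched below. Only the `gender == 0 -> "Rather not say"` mapping comes from the text above; the other label strings are illustrative assumptions, not taken from the codebook.

```python
import pandas as pd

# Hypothetical mapping; only 0 -> 'Rather not say' is stated above
gender_labels = {0: 'Rather not say', 1: 'Male', 2: 'Female', 3: 'Other'}
demo = pd.DataFrame({'gender': [0, 1, 2, 3]})
demo['gender_label'] = demo['gender'].map(gender_labels)
print(demo['gender_label'].tolist())  # ['Rather not say', 'Male', 'Female', 'Other']
```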
```
print('Ages :', end = " ")
print(sorted(list(data['age'].unique())))
len(data[data['age'] >= 200])
```
Clearly the rows with age values of 200 or above are invalid and the corresponding responses cannot be trusted, so we need to delete these rows. Also, since there are very few such rows, it is reasonable to delete them.
```
# rows to be deleted
target = [i for i in range(len(data)) if data['age'][i] >= 200]
target
# Deleting the rows
data.drop(target, axis = 0, inplace = True)
data = data.reset_index(drop = True)
```
### Checking the validity of ordinal features
```
invalid = False
for column in data.columns:
if column[0] == 'Q' and column[-1] == 'A':
for val in data[column].unique():
# Check the codebook for the reason of this condition
if val < 1 or val > 5:
invalid = True
if invalid:
print("The ordinal features has invalid data")
else:
print("The data is valid")
```
### Hence, now the data is valid & clean
# Exporting imbalanced data
```
data.to_csv('./MIES_Dev_Data/imbalanced_data.csv')
```
# Looking for imbalance
```
data['IE'].value_counts()
```
### As we can see, there is a high imbalance present in the data
### We will apply SMOTE (Synthetic Minority Over-Sampling Technique) to deal with the same
```
X = data.drop(['IE'], axis = 1)
Y = data['IE']
sm = SMOTE()
X_over_sampled, Y_over_sampled = sm.fit_resample(X, Y)
data = pd.DataFrame(X_over_sampled, columns = X.columns)
data['IE'] = Y_over_sampled
data['IE'].value_counts()
```
### Hence, the data is now ready to be used
# Exporting cleaned data
```
data.to_csv('./MIES_Dev_Data/cleaned_data.csv')
```
```
%load_ext autoreload
%autoreload 2
from calc_footprint_FFP_adjusted01 import FFP
import matplotlib.pyplot as plt
import numpy as np
from shapely.geometry import Point
from shapely.geometry.polygon import Polygon
from matplotlib.path import Path
import rasterio
import rasterio.plot
import rasterio.mask
import geopandas as gpd
from numpy import ma
a = FFP()
output = a.output(zm=9., umean=3, h=1000, ol=-50, sigmav=0.6, ustar=0.2, wind_dir=180,rs= [0.3,0.5,0.9],crop=False, fig=False)
plt.plot(output[8][2], output[9][2])
# Tower (x, y) coordinates in SIRGAS 2000 / UTM zone 23S
x_utm=203917.07880027
y_utm=7545463.6805863
tif_file = r'..\..\iab3_site\IAB1_SIRGAS_23S.tif'
poly = [(i+x_utm, j+y_utm) for i, j in zip(output[8][2], output[9][2])]
poly_shp = Polygon(poly)
gdf = gpd.GeoDataFrame({'a':['teste01'], 'geometry':poly_shp})
raster = rasterio.open(tif_file)
fig, ax = plt.subplots(figsize=(5,5))
rasterio.plot.show(raster, ax=ax)
gdf.plot(ax=ax, edgecolor='red', facecolor='none')
mask_teste = rasterio.mask.mask(raster, [poly_shp], crop=True, invert=False)
unique, counts = np.unique(mask_teste[0], return_counts=True)
dict(zip(unique,counts))
I = plt.imread(tif_file)
plt.imshow(ma.masked_where(I==4, I))
```
### Meaning of the raster (GeoTIFF) class values
- **3**: Floresta Natural => Formação Florestal
- **4**: Floresta Natural=> Formação Savânica
- **9**: Floresta Plantada
- **12**: Formação Campestre/Outra Formação não Florestal
- **15**: Pastagem
- **19**: Agricultura => Cultivo Anual e Perene
- **20**: Agricultura => Cultivo Semi-Perene
- **24**: Infraestrutura Urbana
- **25**: Outra área não Vegetada
- **33**: Corpo d'água
- **255**: Ignorar
```
significado_pixel = {3: 'Floresta Natural => Formação Florestal',
                     4: 'Floresta Natural => Formação Savânica',
9: 'Floresta Plantada',
12: 'Formação Campestre/Outra formação não Florestal',
15: 'Pastagem',
19: 'Agricultura => Cultivo Anual e Perene',
20: 'Agricultura => Cultivo Semi-Perene',
24: 'Infraestrutura Urbana',
25: 'Outra área não Vegetada',
33: "Corpo d'água",
255: 'Fora do escopo'}
list(significado_pixel)
print('{0},{11}'.format(*significado_pixel.values(),3 ))
pixels = dict(zip(unique,counts))
def stats_pixel(pixels_dict):
    # Sum the forest classes (3, 4) vs. everything else, ignoring class 255
    floresta = sum(pixels_dict.get(k, 0) for k in (3, 4))
    return floresta, sum(v for k, v in pixels_dict.items() if k not in (3, 4, 255))
list(pixels.keys())
pixels.values()
for i in pixels.items():
print(i)
pixels[3] + pixels[4]
pixel_list = []
for i in significado_pixel:
# print(i, significado_pixel[i])
try:
# print(pixels[i])
pixel_list.append(pixels[i])
except:
# print(0)
pixel_list.append(0)
print('Floresta',pixel_list[0] + pixel_list[1])
print('Outros', sum(pixel_list[2:-1]))
pixel_list
```
# Chapter 3 Conditional Execution
```
x = 10 # assignment
x
x == 10 # does x equal to 10? True/False
```
# one-way decision
```
x = 20 # sequentional
print(x) # sequentional
if x > 10: # sequentional (condition)
print('x is big') # conditional
print('the value of x is', x) # conditional
print('done') # sequentional
# executed lines: 1, 2, 3, 4, 5, 6
x = 5 # sequentional
print(x) # sequentional
if x > 10: # sequentional (condition)
print('x is big') # conditional
print('the value of x is', x) # conditional
print('done') # sequentional
# executed lines: 1, 2, 3, 6
# one-way
x = 5 # sequentional
print(x) # sequentional
if x > 10: # sequentional (condition)
print('x is big') # conditional
print('the value of x is', x) # conditional
if x > 3:
print('x is larger than 3')
print('done') # sequentional
# executed lines: 1, 2, 3, 6, 7, 8
# one-way nested
x = 30 # sequentional
print(x) # sequentional
if x > 10: # sequentional (condition)
print('x is big') # conditional
print('the value of x is', x) # conditional
if x > 20:
print('x is larger than 20')
print('done') # sequentional
# executed lines: 2-10
# one-way nested
x = 7 # sequentional
print(x) # sequentional
if x > 10: # sequentional (condition)
print('x is big') # conditional
print('the value of x is', x) # conditional
if x > 20:
print('x is larger than 20')
print('done') # sequentional
# executed lines: 2-4, 9
# one-way nested
x = 11 # sequentional
print(x) # sequentional
if x > 10: # sequentional (condition)
print('x is big') # conditional
print('the value of x is', x) # conditional
if x > 20:
print('x is larger than 20')
print('done') # sequentional
# executed lines: 2-7, 9
# one-way nested
x = 20 # sequentional
print(x) # sequentional
if x > 10: # sequentional (condition)
print('x is big') # conditional
print('the value of x is', x) # conditional
if x > 20:
print('x is larger than 20')
print('done') # sequentional
# executed lines: 2-7, 9
```
# two-way decision
```
x = 10
print('x =', x)
if x > 5:
print('x is large')
else: # x <= 5
print('x is small')
print('done')
x = 3
print('x =', x)
if x > 5:
print('x is large')
else: # x <= 5
print('x is small')
print('done')
```
# if - else if
```
x = 3
print('x =', x)
if x > 5:
print('x is large')
else: # x <= 5
if x < 2:
print('x is too small')
else: # x >= 2 and x <=5 ; in other words between 2-5
print('x is small')
print('done')
x = 1
print('x =', x)
if x > 5:
print('x is large')
else: # x <= 5
if x < 2:
print('x is too small')
else: # x >= 2 and x <=5 ; in other words between 2-5
print('x is small')
print('done')
x = 1
print('x =', x)
if x > 5:
print('x is large')
elif x < 2: # x <= 5
print('x is too small')
else: # x >= 2 and x <=5 ; in other words between 2-5
print('x is small')
print('done')
x = 3
print('x =', x)
if x > 5:
print('x is large')
elif x < 2: # x <= 5
print('x is too small')
else: # x >= 2 and x <=5 ; in other words between 2-5
print('x is small')
print('done')
```
# multi-way decision
```
x = 7
if x < 2:
print('small')
elif x < 6: # x >=2
    print('medium')
else: # x >= 6
print('large')
x = int(input('enter x: '))
if x < 2 :
print('tiny')
elif x < 10 :
print('small')
elif x < 20 :
print('Big')
elif x < 40 :
print('so big')
elif x < 100:
print('Huge')
else :
print('so huge')
```
# Grade scale assignment
* satisfactory 1-1.66
* good 1.67-2.66
* v.good 2.67-3.49
* Excellent 3.5-4.0
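A minimal sketch of this exercise as a multi-way decision. The boundaries and grade names come from the list above; the function and variable names (`grade_scale`, `gpa`) are our own choices for illustration:

```python
def grade_scale(gpa):
    # map a GPA in [1.0, 4.0] to the grade names listed above
    if gpa >= 3.5:
        return 'Excellent'
    elif gpa >= 2.67:
        return 'v.good'
    elif gpa >= 1.67:
        return 'good'
    else:  # 1.0 - 1.66
        return 'satisfactory'

print(grade_scale(3.1))  # v.good
```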
# Try-except in python
```
number1 = int(input('enter 1st number: '))
number2 = int(input('enter 2nd number: '))
print(number1 + number2) # addition
print(number1 - number2)
print(number1 * number2)
print(number1 / number2)
```
# Handle Exception in Python
```
try:
    number1 = int(input('enter 1st number: '))
    number2 = int(input('enter 2nd number: '))
    print(number1 + number2)  # addition
    print(number1 - number2)
    print(number1 * number2)
    print(number1 / number2)
except ValueError:
    print('input must be numeric')
except ZeroDivisionError:
    print('cannot divide by zero')
```
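The `try` statement also supports `else` and `finally` clauses, which the example above does not use. A small sketch (the function name `safe_divide` is just for illustration):

```python
def safe_divide(a, b):
    # return the quotient, or None when b is zero
    try:
        result = a / b
    except ZeroDivisionError:
        print('cannot divide by zero')
        result = None
    else:
        print('division succeeded')  # runs only when no exception was raised
    finally:
        print('done')  # runs in every case, success or failure
    return result

print(safe_divide(10, 2))
print(safe_divide(10, 0))
```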
```
# ----------------------------------------------------
# Country Tally Plot
# Generate a comprehensive set of plots to visualise
# COVID-19 situation in a country.
#
# For more information, please go to:
# https://github.com/MunchDev/EpidemicSimulator
# ----------------------------------------------------
# Country -- The country to be plotted
# Multiple countries are now supported! You can put any countries in a list
# and each will be drawn.
#
# For the full list of available countries, please go to:
# https://github.com/MunchDev/EpidemicSimulator
country = ["Singapore", "Vietnam", "Malaysia", "US"]
# Date -- The latest date to be plotted (in dd-mm-yyyy format)
# Please note that there are delays between the real-time report
# and this compiled report. If there is no report available
# for today, please switch to the previous date.
date = "15-06-2020"
# Timespan -- The period of data plotted
# Please choose a suitable timespan, so that the earliest
# date is no earlier than 22-03-2020.
timespan = 85
# Scale -- Set to 'linear' for linear scale or 'log' for logarithmic scale
# Supports multiple countries. If only one scale string is provided, it
# will be used across all plots. If fewer scales than countries are given,
# the rest default to "log".
scale = "log"
# Plot type -- Show plot of confirmed cases (c), deaths (d), recovered cases (r),
# active cases (a) or any combination of these.
# Supports multiple countries. If only one plot-type string is provided, it
# will be used across all plots. If fewer plot types than countries are given,
# the rest default to "cdra".
plot_type = "cdra"
# --------------------------------------------------------------
# ---------- CAUTION: DO NOT MODIFY ANY LINE BELOW -------------
from data_miner import plot_tally
plot_tally(country, date, timespan, scale=scale, plot_type=plot_type)
# --------------------------------------------------------------
# ------------------------------------------------------
# World Tally Plot
# Generate a comprehensive set of plots to visualise
# COVID-19 situation of multiple countries in the world.
# This is basically an extension of the above, but for
# confirmed cases only.
#
# For more information, please go to:
# https://github.com/MunchDev/EpidemicSimulator
# ------------------------------------------------------
# Country -- The country to be plotted
# Multiple countries are now supported! You can put any countries in a list
# and each will be drawn.
#
# For the full list of available countries, please go to:
# https://github.com/MunchDev/EpidemicSimulator
countries = ["Singapore", "Vietnam", "Malaysia", "US"]
# Date -- The latest date to be plotted (in dd-mm-yyyy format)
# Please note that there are delays between the real-time report
# and this compiled report. If there is no report available
# for today, please switch to the previous date.
date = "15-06-2020"
# Timespan -- The period of data plotted
# Please choose a suitable timespan, so that the earliest
# date is no earlier than 22-03-2020.
timespan = 85
# Scale -- Set to 'linear' for linear scale or 'log' for logarithmic scale
# Supports multiple countries. If only one scale string is provided, it
# will be used across all plots. If fewer scales than countries are given,
# the rest default to "log".
scale = "log"
# Plot type -- Show plot of confirmed cases (c), deaths (d), recovered cases (r),
# active cases (a) or any combination of these.
plot_type = "cdra"
# --------------------------------------------------------------------
# ------------- CAUTION: DO NOT MODIFY ANY LINE BELOW ----------------
from data_miner import plot_tally
plot_tally(countries, date, timespan, scale=scale, plot_type=plot_type, transpose=True)
# --------------------------------------------------------------------
```
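The per-country option rule described in the comments above (a single value applies to all countries; a shorter list is padded with the default) can be sketched as a small helper. This helper is illustrative only; `expand_option` is not part of the EpidemicSimulator API:

```python
def expand_option(values, n_countries, default):
    # illustrative sketch of the per-country option rule described above;
    # not part of the EpidemicSimulator API
    if isinstance(values, str):
        values = [values]
    if len(values) == 1:
        return values * n_countries  # one value applies to every plot
    # a shorter list is padded with the stated default
    return values + [default] * (n_countries - len(values))

print(expand_option("linear", 3, "log"))
print(expand_option(["linear", "log"], 4, "log"))
```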
# Kats 204 Forecasting with Meta-Learning
This tutorial will introduce the meta-learning framework for forecasting in Kats. The table of contents for Kats 204 is as follows:
1. Overview of Meta-Learning Framework For Forecasting
2. Introduction to `GetMetaData`
3. Determining Predictability with `MetaLearnPredictability`
4. Model Selection with `MetaLearnModelSelect`
5. Hyperparameter Tuning with `MetaLearnHPT`
5.1. Initializing `MetaLearnHPT`
5.2. `MetaLearnHPT` with Default Neural Network Model Structure
5.3. `MetaLearnHPT` with Custom Neural Network Model Structure
**Note:** We provide two types of tutorial notebooks
- **Kats 101**, basic data structures and functionalities in Kats
- **Kats 20x**, advanced topics, including advanced forecasting techniques, advanced detection algorithms, `TsFeatures`, meta-learning, etc. (this tutorial)
## 1. Overview of Meta-Learning Framework For Forecasting
Suppose we have a time series and we are looking to build the best possible forecast (with respect to a predefined error metric such as mean absolute error) from the following list of candidate models (and possibly other forecasting models in Kats too):
* ARIMA
* SARIMA
* Holt-Winters
* Prophet
* Theta
* STLF
For a single time series, it is straightforward to do hyperparameter tuning for each of the candidate models with this time series, calculate the error metric, and choose the model that minimizes the error metric. We have discussed this methodology in detail in Kats 201. Our basic metadata object, `GetMetaData`, which we will introduce below, also does this calculation to find the best forecasting model for a single time series.
However, when we are working with a large number of time series, repeating this process quickly becomes intractable, and for that, we include a meta-learning framework for forecasting. There are two key model classes, plus one optional one, in our meta-learning framework:
1. `MetaLearnModelSelect`: Given the metadata for a time series, predict the best model family (from the candidate models of interest) to forecast the series. This model is a random forest by default.
2. `MetaLearnHPT`: Given a time series and a model type, predict the best parameters for this model. This model is a neural network.
3. `MetaLearnPredictability` (optional): Given the metadata for a time series, predict if it is "predictable", i.e. if it is possible to forecast with a threshold error. This model is a random forest by default.
For each of these models, you can use labeled training data to build a model, or you can load a pre-trained model from a file path.
We use the `GetMetaData` object to represent the metadata for a time series in `MetaLearnModelSelect` and `MetaLearnPredictability`. This tutorial begins with an introduction to the `GetMetaData` object. Since this object is heavily dependent on `TsFeatures`, if you are not familiar with `TsFeatures`, you should check out Kats 203 prior to continuing with this tutorial.
Next we will use labeled time series data from the `m3_meta_data.csv` file to show how to use the `MetaLearnPredictability`, `MetaLearnModelSelect`, and `MetaLearnHPT` classes.
The sample data in `m3_meta_data.csv` is very small, with 78 labeled examples, so the examples we provide here will not be highly accurate, but they will show you the proper workflow for using the meta-learning framework for forecasting in Kats.
## 2. Introduction to `GetMetaData`
The `GetMetaData` class generates the metadata for any time series. There are three key components to the metadata for a time series:
1. `features`: the `TsFeatures` dictionary for the time series
2. `hpt_res`: a dictionary giving the best hyperparameters for each candidate model and the corresponding error metric for the time series
3. `best_model`: the name of the model with the smallest error metric
The default error metric is mean absolute error (mae) but this can be controlled with the `error_method` argument in `GetMetaData`.
The list of candidate models that we consider is controlled by the `all_models` argument in `GetMetaData`, which is a dictionary with the string names of the candidate models as keys and the corresponding model classes as values. The keys in `hpt_res` and the value of `best_model` come from the keys of the `all_models` dictionary. The default value of `all_models` includes the following six models:
1. ARIMA
2. SARIMA
3. Holt-Winters
4. Prophet
5. Theta
6. STLF
Our first example uses the `air_passengers` data set. We show how to get the metadata for this time series. We start by loading the time series into a `TimeSeriesData` object.
```
import pandas as pd
import numpy as np
import sys
import matplotlib.pyplot as plt
import warnings
warnings.simplefilter(action='ignore')
sys.path.append("../")
from kats.consts import TimeSeriesData
air_passengers_df = pd.read_csv("../kats/data/air_passengers.csv")
air_passengers_df.columns = ["time", "value"]
air_passengers_ts = TimeSeriesData(air_passengers_df)
```
Now we can construct the `GetMetaData` object for the `air_passengers` time series as follows. We use all of the default settings except that we use mean absolute percentage error (mape) as our error metric rather than the default of mean absolute error (mae).
```
from kats.models.metalearner.get_metadata import GetMetaData
# create an object MD of class GetMetaData with error method mean absolute percentage error (mape)
MD = GetMetaData(data=air_passengers_ts, error_method='mape')
```
Let's take a look at the `all_models` dictionary that is used by default here. You are allowed to specify your own `all_models` dictionary as long as all the values are classes that extend the abstract class `kats.models.Model`.
```
MD.all_models
```
The `all_params` dictionary will have the same keys as the `all_models` dictionary, and the values are the corresponding parameter classes (i.e. classes that extend the class `kats.const.Params`).
```
MD.all_params
```
Now we can use the `get_meta_data` function to calculate all the metadata and output the result as a dictionary.
```
# get meta data as a dictionary
air_passengers_metadata = MD.get_meta_data()
```
Let's take a look at the keys of the metadata dictionary.
```
air_passengers_metadata.keys()
```
We explained what `features`, `hpt_res` and `best_model` are above. This dictionary also includes the `search_method` and `error_method`, which will just be the default values in this case. We can see these as follows.
```
print(f"search_method: {air_passengers_metadata['search_method']}")
print(f"error_method: {air_passengers_metadata['error_method']}")
```
The keys of the `hpt_res` dictionary are the names of the candidate model families; they should be the same as the keys of the `all_models` and `all_params` dictionaries.
```
air_passengers_metadata['hpt_res'].keys()
```
The values of the `hpt_res` dictionary are two-element tuples. The first element gives the hyperparameters that minimize the error metric. The second element gives the corresponding minimum error metric. Let's take a look at these values for ARIMA:
```
air_passengers_metadata['hpt_res']['arima']
```
We can sort the different methods by their error metric as follows:
```
methods = list(air_passengers_metadata['hpt_res'].keys())
sorted(methods, key = lambda m: air_passengers_metadata['hpt_res'][m][1])
```
This suggests that Prophet has the lowest error metric. Let's confirm that this is what `best_model` indicates:
```
air_passengers_metadata['best_model']
```
We constructed the `GetMetaData` object for the `air_passengers` data set with all of the default settings. Let's take a look at the full set of attributes that can be used to initialize `GetMetadata`.
This is the only required attribute:
* **data**: TimeSeriesData, the time series for which we calculate the metadata
The following attributes are all optional:
* **all_models**: `Dict[str, m.Model]`, a dictionary for the candidate model classes. The key is a string naming the model and each value is a corresponding model class (i.e. a class that extends the abstract class `kats.models.Model`).
* **all_params**: `Dict[str, Params]`, a dictionary for the candidate model parameter classes. The keys are the same as the keys for `all_models` and each value is a corresponding parameter class (i.e. a class that extends the class `kats.const.Params`).
* **min_length**: int, the minimal length of time series. We raise a value error if the length of `data` is smaller than `min_length`. The default value of `min_length` is 30.
* **scale**: bool, Whether to rescale the time series by its maximum value; default is true.
* **method**: SearchMethodEnum, Search method for hyper-parameters tuning; default is random search in the default parameter space
* **executor**: Callable, A parallel executor for parallel processing. By default, we use Python's native multiprocessing implementation.
* **error_method**: str, Type of error metric. Options are `'mape'`, `'smape'`, `'mae'`, `'mase'`, `'mse'`, `'rmse'`; default is `'mae'`.
* **num_trials**: int, Number of trials for hyperparameter search; default is 5.
* **num_arm**: int, Number of arms in hyperparameter search; default is 4.
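To make these error-metric options concrete, here is a small plain-NumPy sketch (not the Kats internals) of how three of them are computed on a toy actual/forecast pair:

```python
import numpy as np

# toy data, purely for illustration
actual = np.array([100.0, 120.0, 140.0])
forecast = np.array([110.0, 115.0, 150.0])

mae = np.mean(np.abs(actual - forecast))                    # mean absolute error
mape = np.mean(np.abs(actual - forecast) / np.abs(actual))  # mean absolute percentage error
rmse = np.sqrt(np.mean((actual - forecast) ** 2))           # root mean squared error
print(mae, mape, rmse)
```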
For the remaining examples, we use the sample data in `m3_meta_data.csv` to show how to build meta-learning models. This sample data set contains the metadata for 78 time series, meaning we need to construct 78 metadata dictionaries like the one we constructed for the `air_passengers` data set. While 78 metadata objects is certainly too few to develop an accurate meta-learning model (you should use more examples in your own meta-learning models to get high accuracy), these examples will help familiarize you with our meta-learning framework.
Loading this data is straightforward. After loading it into a `DataFrame`, we have to do some pre-processing with the `eval` function to ensure that the dictionaries are represented as dictionaries and not as strings. We demonstrate this as follows:
```
# load the metadata into a DataFrame
metadata_df = pd.read_csv("../kats/data/m3_meta_data.csv")
# We need to do a little pre-processing to make sure the dictionaries are represented as dictionaries
# rather than as strings. This function will do that pre-processing.
def change_format(tmp):
tmp['hpt_res']=eval(tmp['hpt_res'])
tmp['hpt_res']['sarima'][0]['seasonal_order'] = eval(tmp['hpt_res']['sarima'][0]['seasonal_order'])
tmp['features']=eval(tmp['features'])
return tmp
metadata_df = metadata_df.apply(change_format, axis=1)
```
Let's preview the metadata `DataFrame` we just loaded.
```
metadata_df.head()
```
Let's convert this metadata `DataFrame` into a list of metadata dictionaries.
```
metadata_list = metadata_df.to_dict(orient='records')
```
## 3. Determining Predictability with `MetaLearnPredictability`
Before using meta-learning models for model selection and hyperparameter tuning, we would like to know if our target time series is predictable. The `MetaLearnPredictability` module allows us to treat this as a binary classification problem and build a model for it. We train this model using a list of metadata and a threshold for the error metric. We use the threshold to label each metadata dictionary as predictable if and only if the error of its `best_model` is smaller than the input threshold. The arguments for `MetaLearnPredictability` are as follows:
* **metadata**: A list of dictionaries representing the meta-data of time series (e.g., the meta-data generated by GetMetaData object). Required unless `load_model=True`.
* **threshold**: Float; the threshold for the forecasting error. A time series whose forecasting error of the best forecasting model is higher than the threshold is considered as unpredictable. Default is 0.2.
* **load_model**: Boolean; whether or not to load a trained model. Default is False.
If we want to train a new predictability model from a list of metadata dictionaries, we should include that list in the `metadata` argument. If we want to load a trained model, we set `load_model=True` and omit the `metadata` argument. We will provide examples of both below.
For our example, we are going to use the sample metadata from the `m3_meta_data.csv` file to train a predictability model with `MetaLearnPredictability`. Then we will use this to predict whether or not `air_passenger` time series can be forecasted (with MAPE at most 0.2).
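The labeling rule can be sketched directly from the metadata layout described above. The helper name `label_predictable` and the toy dictionary are ours for illustration, not part of the Kats API:

```python
def label_predictable(metadata, threshold=0.2):
    # a series is labeled predictable iff the error of its best model
    # (the second element of the (params, error) tuple in hpt_res)
    # is below the threshold; illustrative helper, not Kats API
    best = metadata["best_model"]
    best_error = metadata["hpt_res"][best][1]
    return best_error < threshold

# a toy metadata dictionary with the same shape as GetMetaData output
toy = {"best_model": "prophet",
       "hpt_res": {"prophet": ({"n_changepoints": 10}, 0.05),
                   "arima": ({"p": 1, "d": 1, "q": 1}, 0.30)}}
print(label_predictable(toy))  # True
```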
We initialize model using the `metadata_list` we previously generated from `m3_meta_data.csv` as follows:
```
from kats.models.metalearner.metalearner_predictability import MetaLearnPredictability
# treat time series with MAPE >= 0.2 as unpredictable and initialize the object
mlp=MetaLearnPredictability(metadata_list, threshold=0.2)
```
When we train the model, we see a dictionary with performance metrics calculated on the test data set.
```
mlp.train()
```
Now we can use this model to predict if the `air_passenger` time series is predictable.
```
mlp.pred(air_passengers_ts)
```
This suggests that this time series can be forecast with MAPE at most 0.2.
Let's save the model we trained to a file.
```
mlp.save_model("mlp.pkl")
```
Now let's re-load our saved model into a new `MetaLearnPredictability` object.
```
#initiate a new object and load the trained model
mlp2 = MetaLearnPredictability(load_model=True)
mlp2.load_model("mlp.pkl")
```
Finally, let's use our newly loaded model to repeat the prediction we did on the `air_passenger` data set.
```
mlp2.pred(air_passengers_ts)
```
## 4. **Model Selection with `MetaLearnModelSelect`**
The `MetaLearnModelSelect` object allows you to build a predictive model to determine the best forecasting model for a time series. It is trained using a list of metadata dictionaries. The arguments for `MetaLearnModelSelect` are as follows:
* **metadata**: A list of dictionaries representing the meta-data of time series (e.g., the meta-data generated by GetMetaData object). Required unless `load_model=True`.
* **load_model**: Boolean; whether or not to load a trained model. Default is False.
If we want to train a new model selection model from a list of metadata dictionaries, we should include that list in the `metadata` argument. If we want to load a trained model, we set `load_model=True` and omit the `metadata` argument. We will provide examples of both below.
For our example, we are going to use the sample metadata from the `m3_meta_data.csv` file to train a selection model with `MetaLearnModelSelect`. Then we will use this to predict the best forecasting model for the `air_passenger` time series.
We initialize model using the `metadata_list` we previously generated from `m3_meta_data.csv` as follows:
```
from kats.models.metalearner.metalearner_modelselect import MetaLearnModelSelect
#Initialize the MetaLearnModelSelect object
mlms = MetaLearnModelSelect(metadata_list)
```
Each metadata dictionary includes a `best_model`, and we can take a look at the frequencies of these models using the `count_category` function.
```
mlms.count_category()
```
Before we visualize the data and train the model, it is helpful to do some preprocessing. We can do this with the `preprocess` function.
```
# pre-process the metadata
# don't down-sample it to balance the classes
# standardize the TsFeatures to have zero mean and unit variance
mlms.preprocess(downsample=False, scale=True)
```
We can see how the different `TsFeatures` in our metadata objects are correlated with each other by plotting a heatmap, which can be generated using the `plot_corr_heatmap` function.
```
mlms.plot_corr_heatmap()
```
Now, it is time to train our model. By default, we will be fitting a random forest model, but other model types (including GBDT, SVM, KNN, Naive Bayes) can be selected using the `method` parameter in the `train` function. When we run the `train` function, it outputs a dictionary with the training error and test error for each of the candidate models. All of these error metrics are MAPE because that is the error metric our metadata is using for this example.
```
# train a modelselect model using random forest algorithm
results=mlms.train()
# preview the dictionary
results
```
Let's view this dictionary as a `DataFrame`.
```
results_df=pd.DataFrame([results['fit_error'], results['pred_error']])
results_df['error_type']=['fit_error', 'pred_error']
results_df['error_metric']='MAPE'
results_df
```
Now, let's use our trained model to predict the best model for the `air_passengers` time series.
```
mlms.pred(air_passengers_ts)
```
Let's save the model we trained to a file.
```
mlms.save_model("mlms.pkl")
```
Now let's re-load our saved model into a new `MetaLearnModelSelect` object.
```
mlms2 = MetaLearnModelSelect(load_model=True)
mlms2.load_model("mlms.pkl")
```
Finally, let's use our newly loaded model to repeat the prediction we did on the `air_passenger` data set.
```
mlms2.pred(air_passengers_ts)
```
## 5. **Hyperparameter Tuning with `MetaLearnHPT`**
The `MetaLearnHPT` object allows you to build a model to predict the best hyperparameters for a time series given a designated forecasting model. Specifically, `MetaLearnHPT` builds a neural network model that takes the `TsFeatures` for a time series as inputs and predicts the best hyperparameters for the forecasting model.
Since a metadata dictionary contains both the `TsFeatures` and the best parameters (with keys `features` and `hpt_res`, respectively), we can use a list of metadata dictionaries to build this predictive model.
For our example, we use `metadata_list`, which contains the metadata from the `m3_meta_data.csv` file, to build a model for the Holt-Winters parameters for a time series. We then use this model to predict the best Holt-Winters parameters for the `air_passengers` time series. While this example is using the Holt-Winters model as the designated model, the same process can be used for any forecasting model supported by Kats as long as it is included in our metadata objects.
### 5.1 Initializing `MetaLearnHPT`
To initialize the `MetaLearnHPT` model, we need to input the `TsFeatures` and hyperparameters for the Holt-Winters model as `DataFrame` objects. To extract these from the metadata from `m3_meta_data.csv`, it is easiest to use the `DataFrame` we loaded with this data, `metadata_df`.
First, let's load the `TsFeatures` from `metadata_df` to a new `DataFrame` and preview it.
```
metadata_features_df = pd.DataFrame(metadata_df['features'].tolist())
metadata_features_df.head()
```
Now, let's do the same for the Holt-Winters hyperparameters.
```
metadata_hpt_df = pd.DataFrame(metadata_df['hpt_res'].map(lambda x: x['holtwinters'][0]).tolist())
metadata_hpt_df.head()
```
The arguments for `MetaLearnHPT` are:
* **data_x**: pd.DataFrame; A DataFrame with the TsFeatures. Required unless `load_model=True`.
* **data_y**: pd.DataFrame; A DataFrame with the best hyperparameters. Required unless `load_model=True`.
* **default_model**: string; The name of the forecast model whose default settings will be used. Supported options are 'arima', 'sarima', 'theta', 'prophet', 'holtwinters', 'stlf' and None. Default is None, in which case we instantiate a custom model and use `categorical_idx` and `numerical_idx` to get the names of the hyperparameters.
* **categorical_idx**: A list of strings of the names of the categorical hyper-parameters. Required only when `default_model` is `None` and there are categorical hyper-parameters.
* **numerical_idx**: Optional; A list of strings of the names of the numerical hyper-parameters. Required only when `default_model` is `None` and there are numerical hyper-parameters.
* **load_model**: Boolean; whether or not to load a trained model. Default is False.
We can initialize the `MetaLearnHPT` model using a `default_model` as follows.
```
from kats.models.metalearner.metalearner_hpt import MetaLearnHPT
mlhpt_holtwinters = MetaLearnHPT(
data_x=metadata_features_df,
data_y=metadata_hpt_df,
default_model='holtwinters'
)
```
Alternatively, we can initialize a custom model by passing the hyperparameter names explicitly instead of a `default_model`:
```
mlhpt_holtwinters2=MetaLearnHPT(
data_x=metadata_features_df,
data_y=metadata_hpt_df,
categorical_idx = ["trend","damped","seasonal"],
numerical_idx = ["seasonal_periods"]
)
```
### 5.2 `MetaLearnHPT` with Default Neural Network Model Structure
When using a default model like we did when initializing `mlhpt_holtwinters`, `MetaLearnHPT` builds a neural network with the default neural network model structure. This means we call the `build_network` function with no parameters.
```
mlhpt_holtwinters.build_network()
```
We use the `train` function to train the neural network.
```
mlhpt_holtwinters.train(lr=0.001, batch_size=20)
```
Let's look at the training curves for this model.
```
mlhpt_holtwinters.plot()
```
Now let's use our trained model to predict the best Holt-Winters parameters for the `air_passengers` time series. The `pred` function returns a `DataFrame` and the predicted parameters are in the `parameters` column.
```
pred=mlhpt_holtwinters.pred(air_passengers_ts)
pred['parameters'].iloc[0]
```
Let's save the model we trained to a file.
```
mlhpt_holtwinters.save_model("mlhpt_hw.pkl")
```
Now let's re-load our saved model into a new `MetaLearnHPT` object.
```
mlhpt_holtwinters3=MetaLearnHPT(load_model=True)
mlhpt_holtwinters3.load_model("mlhpt_hw.pkl")
```
Let's use our newly loaded model to repeat the prediction we did on the `air_passenger` data set.
```
pred=mlhpt_holtwinters3.pred(air_passengers_ts)
pred['parameters'].iloc[0]
```
### 5.3 `MetaLearnHPT` with Custom Neural Network Model Structure
When using a custom model like we did when initializing `mlhpt_holtwinters2`, you need to specify the model structure by providing the parameters for the neural network to the `build_network` function.
Here's how we can do that.
```
mlhpt_holtwinters2.build_network(
#One shared one-layer NN with 50 neurons.
n_hidden_shared=[50],
#Each classification task has its own task-specific NN. In this example, "trend" and "damped" each have a two-layer NN,
#and "seasonal" has a one-layer NN.
n_hidden_cat_combo=[[20, 10], [20, 10], [20]],
#One task-specific one-layer NN with 30 neurons for regression task.
n_hidden_num=[30]
)
```
Now let's use the `train` function to train the model. We include some of the extra parameters here to specify how to train the neural network model.
```
#train the customized NN
mlhpt_holtwinters2.train(
#loss_scale is used to balance 2 types of losses: cross-entropy for classification tasks and MSE for regression tasks
loss_scale=30,
#learning rate
lr=0.005,
n_epochs=2000,
batch_size=16,
#supports ADAM and SGD
method='SGD',
#momentum in SGD.
momentum=0,
#early stop option.
n_epochs_stop=50,)
```
Let's look at the training curves for this model.
```
mlhpt_holtwinters2.plot()
```
Let's use our trained model to predict the best parameters for the `air_passengers` time series.
```
pred=mlhpt_holtwinters2.pred(air_passengers_ts)
pred['parameters'].iloc[0]
```
# PageRank
In this notebook, you'll build on your knowledge of eigenvectors and eigenvalues by exploring the PageRank algorithm.
The notebook is in two parts, the first is a worksheet to get you up to speed with how the algorithm works - here we will look at a micro-internet with fewer than 10 websites and see what it does and what can go wrong.
The second is an assessment which will test your application of eigentheory to this problem by writing code and calculating the PageRank of a large network representing a sub-section of the internet.
## Part 1 - Worksheet
### Introduction
PageRank (developed by Larry Page and Sergey Brin) revolutionized web search by generating a
ranked list of web pages based on the underlying connectivity of the web. The PageRank algorithm is
based on an ideal random web surfer who, when reaching a page, goes to the next page by clicking on a
link. The surfer has equal probability of clicking any link on the page and, when reaching a page with no
links, has equal probability of moving to any other page by typing in its URL. In addition, the surfer may
occasionally choose to type in a random URL instead of following the links on a page. The PageRank is
the ranked order of the pages from the most to the least probable page the surfer will be viewing.
```
# Before we begin, let's load the libraries.
%pylab notebook
import numpy as np
import numpy.linalg as la
from readonly.PageRankFunctions import *
np.set_printoptions(suppress=True)
```
### PageRank as a linear algebra problem
Let's imagine a micro-internet, with just 6 websites (**A**vocado, **B**ullseye, **C**atBabel, **D**romeda, **e**Tings, and **F**aceSpace).
Each website links to some of the others, and this forms a network as shown,

The design principle of PageRank is that important websites will be linked to by important websites.
This somewhat recursive principle will form the basis of our thinking.
Imagine we have 100 *Procrastinating Pat*s on our micro-internet, each viewing a single website at a time.
Each minute the Pats follow a link on their website to another site on the micro-internet.
After a while, the websites that are most linked to will have more Pats visiting them, and in the long run, each minute for every Pat that leaves a website, another will enter keeping the total numbers of Pats on each website constant.
The PageRank is simply the ranking of websites by how many Pats they have on them at the end of this process.
We represent the number of Pats on each website with the vector,
$$\mathbf{r} = \begin{bmatrix} r_A \\ r_B \\ r_C \\ r_D \\ r_E \\ r_F \end{bmatrix}$$
And say that the number of Pats on each website in minute $i+1$ is related to those at minute $i$ by the matrix transformation
$$ \mathbf{r}^{(i+1)} = L \,\mathbf{r}^{(i)}$$
with the matrix $L$ taking the form,
$$ L = \begin{bmatrix}
L_{A→A} & L_{B→A} & L_{C→A} & L_{D→A} & L_{E→A} & L_{F→A} \\
L_{A→B} & L_{B→B} & L_{C→B} & L_{D→B} & L_{E→B} & L_{F→B} \\
L_{A→C} & L_{B→C} & L_{C→C} & L_{D→C} & L_{E→C} & L_{F→C} \\
L_{A→D} & L_{B→D} & L_{C→D} & L_{D→D} & L_{E→D} & L_{F→D} \\
L_{A→E} & L_{B→E} & L_{C→E} & L_{D→E} & L_{E→E} & L_{F→E} \\
L_{A→F} & L_{B→F} & L_{C→F} & L_{D→F} & L_{E→F} & L_{F→F} \\
\end{bmatrix}
$$
where the columns represent the probability of leaving a website for any other website, and sum to one.
The rows determine how likely you are to enter a website from any other, though these need not add to one.
The long time behaviour of this system is when $ \mathbf{r}^{(i+1)} = \mathbf{r}^{(i)}$, so we'll drop the superscripts here, and that allows us to write,
$$ L \,\mathbf{r} = \mathbf{r}$$
which is an eigenvalue equation for the matrix $L$, with eigenvalue 1 (this is guaranteed by the probabilistic structure of the matrix $L$).
Complete the matrix $L$ below. In the original exercise the column for the *FaceSpace* website (F) was left out; here it has been filled in with the solution.
Remember, this is the probability to click on another website from this one, so each column should add to one (by scaling by the number of links).
```
# The column for Website F (FaceSpace) is filled in: F links to C and D, so each gets probability 1/2.
L = np.array([[0,   1/2, 1/3, 0, 0,   0  ],
              [1/3, 0,   0,   0, 1/2, 0  ],
              [1/3, 1/2, 0,   1, 0,   1/2],
              [1/3, 0,   1/3, 0, 1/2, 1/2],
              [0,   0,   0,   0, 0,   0  ],
              [0,   0,   1/3, 0, 0,   0  ]])
```
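As a quick sanity check (not part of the original worksheet), we can verify that every column of $L$ is a probability distribution, which is what guarantees the eigenvalue of 1:

```python
import numpy as np

# The completed link matrix from the worksheet.
L = np.array([[0,   1/2, 1/3, 0, 0,   0  ],
              [1/3, 0,   0,   0, 1/2, 0  ],
              [1/3, 1/2, 0,   1, 0,   1/2],
              [1/3, 0,   1/3, 0, 1/2, 1/2],
              [0,   0,   0,   0, 0,   0  ],
              [0,   0,   1/3, 0, 0,   0  ]])

# Each column is a probability distribution over outgoing clicks,
# so every column should sum to one.
assert np.allclose(L.sum(axis=0), np.ones(6))

# Because L is column-stochastic, its largest eigenvalue magnitude is 1.
eigenvalues = np.linalg.eigvals(L)
assert np.isclose(np.max(np.abs(eigenvalues)), 1.0)
```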
In principle, we could use a linear algebra library, as below, to calculate the eigenvalues and vectors, and this would work for a small system. But it becomes unmanageable for large systems.
And since we only care about the principal eigenvector (the one with the largest eigenvalue, which will be 1 in this case), we can use the *power iteration method*, which scales better and is faster for large systems.
Use the code below to peek at the PageRank for this micro-internet.
```
eVals, eVecs = la.eig(L) # Gets the eigenvalues and vectors
order = np.absolute(eVals).argsort()[::-1] # Orders them by eigenvalue magnitude, largest first
eVals = eVals[order]
eVecs = eVecs[:,order]
r = eVecs[:, 0] # Sets r to be the principal eigenvector
100 * np.real(r / np.sum(r)) # Make this eigenvector sum to one, then multiply by 100 Procrastinating Pats
```
We can see from this list, the number of Procrastinating Pats that we expect to find on each website after long times.
Putting them in order of *popularity* (based on this metric), the PageRank of this micro-internet is:
**C**atBabel, **D**romeda, **A**vocado, **F**aceSpace, **B**ullseye, **e**Tings
Referring back to the micro-internet diagram, is this what you would have expected?
Convince yourself that, based on which pages seem important given which others link to them, this is a sensible ranking.
Let's now try to get the same result using the Power-Iteration method that was covered in the video.
This method will be much better at dealing with large systems.
First let's set up our initial vector, $\mathbf{r}^{(0)}$, so that we have our 100 Procrastinating Pats equally distributed on each of our 6 websites.
```
r = 100 * np.ones(6) / 6 # Sets up this vector (6 entries of 1/6 × 100 each)
r # Shows its value
```
Next, let's update the vector to the next minute, with the matrix $L$.
Run the following cell multiple times, until the answer stabilises.
```
r = L @ r # Apply matrix L to r
r # Show its value
# Re-run this cell multiple times to converge to the correct answer.
```
We can automate applying this matrix multiple times as follows,
```
r = 100 * np.ones(6) / 6 # Sets up this vector (6 entries of 1/6 × 100 each)
for i in np.arange(100):  # Repeat 100 times
    r = L @ r
r
```
Or even better, we can keep running until we get to the required tolerance.
```
r = 100 * np.ones(6) / 6 # Sets up this vector (6 entries of 1/6 × 100 each)
lastR = r
r = L @ r
i = 0
while la.norm(lastR - r) > 0.01:
    lastR = r
    r = L @ r
    i += 1
print(str(i) + " iterations to convergence.")
r
```
See how the PageRank order is established fairly quickly, and the vector converges on the value we calculated earlier after a few tens of repeats.
Congratulations! You've just calculated your first PageRank!
### Damping Parameter
The system we just studied converged fairly quickly to the correct answer.
Let's consider an extension to our micro-internet where things start to go wrong.
Say a new website is added to the micro-internet: *Geoff's* Website.
This website is linked to by *FaceSpace* and only links to itself.

Intuitively, only *FaceSpace*, which sits in the bottom half of the PageRank, links to this website (alongside the two other sites it links to),
so we might expect *Geoff's* site to have a correspondingly low PageRank score.
Build the new $L$ matrix for the expanded micro-internet, and use Power-Iteration on the Procrastinating Pat vector.
See what happens…
```
# We'll call this one L2, to distinguish it from the previous L.
# FaceSpace (column F) now splits its links three ways: C, D, and Geoff's site (G).
L2 = np.array([[0,   1/2, 1/3, 0, 0,   0,   0 ],
               [1/3, 0,   0,   0, 1/2, 0,   0 ],
               [1/3, 1/2, 0,   1, 0,   1/3, 0 ],
               [1/3, 0,   1/3, 0, 1/2, 1/3, 0 ],
               [0,   0,   0,   0, 0,   0,   0 ],
               [0,   0,   1/3, 0, 0,   0,   0 ],
               [0,   0,   0,   0, 0,   1/3, 1 ]])
r = 100 * np.ones(7) / 7 # Sets up this vector (7 entries of 100/7 each)
lastR = r
r = L2 @ r
i = 0
while la.norm(lastR - r) > 0.01:
    lastR = r
    r = L2 @ r
    i += 1
print(str(i) + " iterations to convergence.")
r
```
That's no good! *Geoff* seems to be taking all the traffic on the micro-internet, and somehow coming at the top of the PageRank.
This behaviour can be understood, because once a Pat gets to *Geoff's* Website, they can't leave, as all links head back to Geoff.
To combat this, we can add a small probability that the Procrastinating Pats don't follow any link on a webpage, but instead visit a website on the micro-internet at random.
We'll say the probability of them following a link is $d$ and the probability of choosing a random website is therefore $1-d$.
We can use a new matrix to work out where the Pats visit each minute.
$$ M = d \, L + \frac{1-d}{n} \, J $$
where $J$ is an $n\times n$ matrix where every element is one.
If $d$ is one, we have the case we had previously, whereas if $d$ is zero, we will always visit a random webpage and therefore all webpages will be equally likely and equally ranked.
For this extension to work best, $1-d$ should be somewhat small - though we won't go into a discussion about exactly how small.
Let's retry this PageRank with this extension.
```
d = 0.5 # Feel free to play with this parameter after running the code once.
M = d * L2 + (1-d)/7 * np.ones([7, 7]) # np.ones() is the J matrix, with ones for each entry.
r = 100 * np.ones(7) / 7 # Sets up this vector (7 entries of 100/7 each)
lastR = r
r = M @ r
i = 0
while la.norm(lastR - r) > 0.01:
    lastR = r
    r = M @ r
    i += 1
print(str(i) + " iterations to convergence.")
r
```
This is certainly better: the PageRank now gives sensible numbers for the Procrastinating Pats that end up on each webpage.
This method still predicts Geoff has a highly ranked webpage, however.
This could be seen as a consequence of using a small network. We could also get around the problem by not counting self-links when producing the $L$ matrix (and if a website has no outgoing links, making it link to all websites equally).
We won't look further down this route, as this is in the realm of improvements to PageRank, rather than eigenproblems.
You are now in a good position, having gained an understanding of PageRank, to produce your own code to calculate the PageRank of a website with thousands of entries.
Good Luck!
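For the "thousands of entries" case, one practical observation helps: you never need to form the dense $J$ matrix, because $\mathbf{r}$ always sums to 100 Pats, so $\frac{1-d}{n} J \mathbf{r}$ is just the constant $\frac{(1-d)\cdot 100}{n}$ in every entry. A hedged sketch of this trick (the function name and tolerance are illustrative, not part of the assessment):

```python
import numpy as np

def pagerank_large(link_matrix, d=0.85, tol=0.01):
    """Power iteration with the damped update applied as a scalar,
    avoiding the n-by-n matrix of ones entirely."""
    n = link_matrix.shape[0]
    r = 100 * np.ones(n) / n
    last_r = np.zeros(n)
    while np.linalg.norm(last_r - r) > tol:
        last_r = r
        # Since r sums to 100, (1-d)/n * J @ r equals (1-d)*100/n everywhere.
        r = d * (link_matrix @ r) + (1 - d) * 100 / n
    return r

# The 6-site micro-internet from earlier; at real scale, link_matrix
# would be stored sparsely (e.g. scipy.sparse) so each step costs
# proportional to the number of links rather than n squared.
L = np.array([[0,   1/2, 1/3, 0, 0,   0  ],
              [1/3, 0,   0,   0, 1/2, 0  ],
              [1/3, 1/2, 0,   1, 0,   1/2],
              [1/3, 0,   1/3, 0, 1/2, 1/2],
              [0,   0,   0,   0, 0,   0  ],
              [0,   0,   1/3, 0, 0,   0  ]])
r = pagerank_large(L)
print(np.round(r, 1))
```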
## Part 2 - Assessment
In this assessment, you will be asked to produce a function that can calculate the PageRank for an arbitrarily large probability matrix.
This, the final assignment of the course, will give less guidance than previous assessments.
You will be expected to utilise code from earlier in the worksheet and re-purpose it to your needs.
### How to submit
Edit the code in the cell below to complete the assignment.
Once you are finished and happy with it, press the *Submit Assignment* button at the top of this notebook.
Please don't change any of the function names, as these will be checked by the grading script.
If you have further questions about submissions or programming assignments, here is a [list](https://www.coursera.org/learn/linear-algebra-machine-learning/discussions/weeks/1/threads/jB4klkn5EeibtBIQyzFmQg) of Q&A. You can also raise an issue on the discussion forum. Good luck!
```
# PACKAGE
# Here are the imports again, just in case you need them.
# There is no need to edit or submit this cell.
import numpy as np
import numpy.linalg as la
from readonly.PageRankFunctions import *
np.set_printoptions(suppress=True)
# GRADED FUNCTION
# Complete this function to provide the PageRank for an arbitrarily sized internet.
# I.e. the principal eigenvector of the damped system, using the power iteration method.
# (Normalisation doesn't matter here)
# The functions inputs are the linkMatrix, and d the damping parameter - as defined in this worksheet.
# (The damping parameter, d, will be set by the function - no need to set this yourself.)
def pageRank(linkMatrix, d):
    n = linkMatrix.shape[0]
    # Adding the scalar (1-d)/n broadcasts over every entry,
    # which is equivalent to adding (1-d)/n * J.
    M = d * linkMatrix + (1 - d) / n
    r = 100 * np.ones(n) / n
    lastR = r
    r = M @ r
    while la.norm(lastR - r) > 0.01:
        lastR = r
        r = M @ r
    return r
```
## Test your code before submission
To test the code you've written above, run the cell (select the cell above, then press the play button [ ▶| ] or press shift-enter).
You can then use the code below to test out your function.
You don't need to submit this cell; you can edit and run it as much as you like.
```
# Use the following function to generate internets of different sizes.
generate_internet(5)
# Test your PageRank method against the built in "eig" method.
# You should see yours is a lot faster for large internets
L = generate_internet(10)
pageRank(L, 1)
# Do note, this is calculating the eigenvalues of the link matrix, L,
# without any damping. It may give different results than your pageRank function.
# If you wish, you could modify this cell to include damping.
# (There is no credit for this though)
eVals, eVecs = la.eig(L) # Gets the eigenvalues and vectors
order = np.absolute(eVals).argsort()[::-1] # Orders them by eigenvalue magnitude, largest first
eVals = eVals[order]
eVecs = eVecs[:,order]
r = eVecs[:, 0]
100 * np.real(r / np.sum(r))
# You may wish to view the PageRank graphically.
# This code will draw a bar chart, for each (numbered) website on the generated internet,
# The height of each bar will be the score in the PageRank.
# Run this code to see the PageRank for each internet you generate.
# Hopefully you should see what you might expect
# - there are a few clusters of important websites, but most on the internet are rubbish!
%pylab notebook
r = pageRank(generate_internet(100), 0.9)
plt.bar(arange(r.shape[0]), r);
```
# Four Qubit Chip Design
Creates a complete quantum chip and exports it
### Preparations
The next cell enables [module automatic reload](https://ipython.readthedocs.io/en/stable/config/extensions/autoreload.html?highlight=autoreload). Your notebook will be able to pick up code updates made to the qiskit-metal (or other) module code.
```
%reload_ext autoreload
%autoreload 2
```
Import key libraries and open the Metal GUI. We also configure the notebook to allow overwriting of existing components.
```
import numpy as np
from collections import OrderedDict
from qiskit_metal import designs, draw
from qiskit_metal import MetalGUI, Dict, Headings
design = designs.DesignPlanar()
gui = MetalGUI(design)
# if you disable the next line, then you will need to delete a component [<component>.delete()] before recreating it
design.overwrite_enabled = True
```
Import components that will be necessary for the design
```
from qiskit_metal.qlibrary.qubits.transmon_pocket_cl import TransmonPocketCL
from qiskit_metal.qlibrary.tlines.meandered import RouteMeander
from qiskit_metal.qlibrary.tlines.anchored_path import RouteAnchors
from qiskit_metal.qlibrary.tlines.pathfinder import RoutePathfinder
from qiskit_metal.qlibrary.terminations.open_to_ground import OpenToGround
from qiskit_metal.qlibrary.terminations.launchpad_wb import LaunchpadWirebond
from qiskit_metal.qlibrary.terminations.launchpad_wb_coupled import LaunchpadWirebondCoupled
```
## Let's design the core of the chip
Setup the design-wide default settings for trace width and trace gap. These can be customized later for individual transmission lines.
```
design.variables['cpw_width'] = '10 um'
design.variables['cpw_gap'] = '6 um'
design._chips['main']['size']['size_x'] = '9mm'
design._chips['main']['size']['size_y'] = '6.5mm'
```
We need 4 transmons with 3 connection pads each and a chargeline. Let's explore the options of one transmon
```
TransmonPocketCL.get_template_options(design)
```
We want to change the `pad_width` for these transmons, as well as define the 3 connection pads and chargeline.
To apply the same modifications to all 4 transmons, we define a single option-dictionary to pass to all transmons at the moment of creation.
```
transmon_options = dict(
    connection_pads=dict(
        a=dict(loc_W=+1, loc_H=-1, pad_width='70um', cpw_extend='50um'),
        b=dict(loc_W=-1, loc_H=-1, pad_width='125um', cpw_extend='50um'),
        c=dict(loc_W=-1, loc_H=+1, pad_width='110um', cpw_extend='50um')
    ),
    gds_cell_name='FakeJunction_01',
    cl_off_center='-50um',
    cl_pocket_edge='180'
)
```
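The reason for collecting the shared settings in one dictionary is plain Python: `**transmon_options` unpacks the shared entries into each per-qubit `dict(...)` call, so position and orientation can vary while the pads stay identical. A minimal illustration with ordinary dicts (no Metal objects involved; the keys here are just examples):

```python
shared_options = dict(pad_width='455um', gds_cell_name='FakeJunction_01')

# Per-instance options: position varies, shared settings are unpacked in.
q1_options = dict(pos_x='+2420um', **shared_options)
q3_options = dict(pos_x='-2420um', orientation='180', **shared_options)

assert q1_options['gds_cell_name'] == q3_options['gds_cell_name']
assert q1_options['pos_x'] != q3_options['pos_x']
```

Note that Python raises a `TypeError` if a keyword appears both explicitly and inside the unpacked dict, which guards against accidentally overriding a shared setting.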
We can now create the 4 transmons by specifying the desired coordinates and rotations.
```
offset_tm = 69  # we offset the transmons slightly off the center-line
q1 = TransmonPocketCL(design, 'Q1', options=dict(
    pos_x='+2420um', pos_y=f'{offset_tm}um', **transmon_options))
q2 = TransmonPocketCL(design, 'Q2', options=dict(
    pos_x='0um', pos_y='-857.6um', orientation='270', **transmon_options))
q3 = TransmonPocketCL(design, 'Q3', options=dict(
    pos_x='-2420um', pos_y=f'{offset_tm}um', orientation='180', **transmon_options))
q4 = TransmonPocketCL(design, 'Q4', options=dict(
    pos_x='0um', pos_y='+857.6um', orientation='90', **transmon_options))
gui.rebuild()
gui.autoscale()
```
Let's now connect the transmons with transmission lines. We want an "exact length" transmission line, so we will use the `RouteMeander`. Let's first observe the default options.
```
RouteMeander.get_template_options(design)
```
We want to globally override the default lead (straight initial segment leaving the transmon) and the default fillet (corner rounding radius). Let's collect this information in one dictionary
```
fillet='99.99um'
cpw_options = Dict(
    lead=Dict(
        start_straight='100um',
        end_straight='250um'),
    fillet=fillet
)
```
We then want each transmission line to be connected to different pins and to have different lengths and asymmetry w.r.t their centerline. Let's collect this information in other dictionaries. Before doing that, to manage the dictionaries in a simpler way, we redefine the `RouteMeander` signature by wrapping it into a convenience method named `connect`
```
def connect(cpw_name: str, pin1_comp_name: str, pin1_comp_pin: str, pin2_comp_name: str, pin2_comp_pin: str,
            length: str, asymmetry='0 um'):
    """Connect two pins with a CPW."""
    myoptions = Dict(
        pin_inputs=Dict(
            start_pin=Dict(
                component=pin1_comp_name,
                pin=pin1_comp_pin),
            end_pin=Dict(
                component=pin2_comp_name,
                pin=pin2_comp_pin)),
        total_length=length)
    myoptions.update(cpw_options)
    myoptions.meander.asymmetry = asymmetry
    return RouteMeander(design, cpw_name, myoptions)
```
We can now proceed and define the meanders following the signature: `connect(cpw_name, pin1_comp_name, pin1_comp_pin, pin2_comp_name, pin2_comp_pin, length, asymmetry)`
```
asym = 500
cpw1 = connect('cpw1', 'Q1', 'c', 'Q4', 'b', '9000um', f'-{asym-1.25*offset_tm}um')
cpw2 = connect('cpw2', 'Q3', 'b', 'Q4', 'c', '9000um', f'+{asym-1.25*offset_tm}um')
cpw3 = connect('cpw3', 'Q3', 'c', 'Q2', 'b', '9000um', f'-{asym+0.75*offset_tm}um')
cpw4 = connect('cpw4', 'Q1', 'b', 'Q2', 'c', '9000um', f'+{asym+0.75*offset_tm}um')
gui.rebuild()
gui.autoscale()
```
## Let's now connect the core elements to the launchpads
First we setup the launchpad location and orientation
```
# V1 - Corners
p1_c = LaunchpadWirebond(design, 'P1_C', options = dict(pos_x='3545um', pos_y='2812um', orientation='270', lead_length='0um'))
p2_c = LaunchpadWirebond(design, 'P2_C', options = dict(pos_x='3545um', pos_y='-2812um', orientation='90', lead_length='0um'))
p3_c = LaunchpadWirebond(design, 'P3_C', options = dict(pos_x='-3545um', pos_y='-2812um', orientation='90', lead_length='0um'))
p4_c = LaunchpadWirebond(design, 'P4_C', options = dict(pos_x='-3545um', pos_y='2812um', orientation='270', lead_length='0um'))
# V2
p1_q = LaunchpadWirebondCoupled(design, 'P1_Q', options = dict(pos_x='4020um', pos_y='0', orientation='180', lead_length='30um'))
p2_q = LaunchpadWirebondCoupled(design, 'P2_Q', options = dict(pos_x='-990um', pos_y='-2812um', orientation='90', lead_length='30um'))
p3_q = LaunchpadWirebondCoupled(design, 'P3_Q', options = dict(pos_x='-4020um', pos_y='0', orientation='0', lead_length='30um'))
p4_q = LaunchpadWirebondCoupled(design, 'P4_Q', options = dict(pos_x='990um', pos_y='2812um', orientation='270', lead_length='30um'))
gui.rebuild()
gui.autoscale()
```
Then we route. First the V2 launchpads - Exchange Coupler Lines to Edges
```
asym = 150
cpw_options = Dict(
    lead=Dict(
        start_straight='430um',
        end_straight='0um'),
    fillet=fillet
)
ol1 = connect('ol1', 'Q1', 'a', 'P1_Q', 'tie', '8.6 mm', f'+{asym}um')
ol3 = connect('ol3', 'Q3', 'a', 'P3_Q', 'tie', '8.6 mm', f'+{asym}um')
asym = 200
cpw_options = Dict(
    lead=Dict(
        start_straight='535um',
        end_straight='0um'),
    fillet=fillet
)
ol2 = connect('ol2', 'Q2', 'a', 'P2_Q', 'tie', '8.6 mm', f'+{asym}um')
ol4 = connect('ol4', 'Q4', 'a', 'P4_Q', 'tie', '8.6 mm', f'+{asym}um')
gui.rebuild()
gui.autoscale()
```
Finally we route the V1 launchpads - Charge Lines to Corners
We create the transmission lines between the corner launchpads and the qubit charge lines.
```
from collections import OrderedDict
jogsA_in = OrderedDict()
jogsA_in[0] = ["L", '200um']

options_line_cl1 = {
    'pin_inputs': {
        'start_pin': {'component': 'Q1', 'pin': 'Charge_Line'},
        'end_pin': {'component': 'P1_C', 'pin': 'tie'}},
    'lead': {'start_straight': '120um', 'end_straight': '225um',
             'start_jogged_extension': jogsA_in},
    'fillet': fillet
}
cl1 = RouteAnchors(design, 'line_cl1', options_line_cl1)

options_line_cl3 = {
    'pin_inputs': {
        'start_pin': {'component': 'Q3', 'pin': 'Charge_Line'},
        'end_pin': {'component': 'P3_C', 'pin': 'tie'}},
    'lead': {'start_straight': '120um', 'end_straight': '225um',
             'start_jogged_extension': jogsA_in},
    'fillet': fillet
}
cl3 = RouteAnchors(design, 'line_cl3', options_line_cl3)
gui.rebuild()
gui.autoscale()
jogsB_in = OrderedDict()
jogsB_in[0] = ["L", '300um']

anchors2c = OrderedDict()
anchors2c[0] = np.array([2, -2.5])

options_line_cl2 = {
    'pin_inputs': {
        'start_pin': {'component': 'Q2', 'pin': 'Charge_Line'},
        'end_pin': {'component': 'P2_C', 'pin': 'tie'}},
    'lead': {'start_straight': '200um', 'end_straight': '225um',
             'start_jogged_extension': jogsB_in},
    'anchors': anchors2c,
    'fillet': fillet
}
cl2 = RouteAnchors(design, 'line_cl2', options_line_cl2)

anchors4c = OrderedDict()
anchors4c[0] = np.array([-2, 2.5])

options_line_cl4 = {
    'pin_inputs': {
        'start_pin': {'component': 'Q4', 'pin': 'Charge_Line'},
        'end_pin': {'component': 'P4_C', 'pin': 'tie'}},
    'lead': {'start_straight': '200um', 'end_straight': '225um',
             'start_jogged_extension': jogsB_in},
    'anchors': anchors4c,
    'fillet': fillet
}
cl4 = RouteAnchors(design, 'line_cl4', options_line_cl4)
gui.rebuild()
gui.autoscale()
gui.rebuild() # rebuild the design and plot
gui.autoscale() #resize GUI to see QComponent
# Get a list of all the qcomponents in QDesign and then zoom on them.
all_component_names = design.components.keys()
gui.zoom_on_components(all_component_names)
#Save screenshot as a .png formatted file.
gui.screenshot()
# Screenshot the canvas only as a .png formatted file.
gui.figure.savefig('shot.png')
from IPython.display import Image, display
_disp_ops = dict(width=500)
display(Image('shot.png', **_disp_ops))
# Closing the Qiskit Metal GUI
gui.main_window.close()
```

# Quantum Process Tomography
* **Last Updated:** June 17, 2019
* **Requires:** qiskit-terra 0.8, qiskit-ignis 0.1.1, qiskit-aer 0.2
This notebook contains examples for using the ``ignis.verification.tomography`` process tomography module.
```
# Needed for functions
import numpy as np
import time
# Import QISKit classes
import qiskit
from qiskit import QuantumRegister, QuantumCircuit, Aer
from qiskit.quantum_info import state_fidelity, process_fidelity
from qiskit.tools.qi.qi import outer
# Tomography functions
from qiskit.ignis.verification.tomography import process_tomography_circuits, ProcessTomographyFitter
```
## 1-qubit process tomography example
```
# Process tomography of a Hadamard gate
q = QuantumRegister(1)
circ = QuantumCircuit(q)
circ.h(q[0])
# Run circuit on unitary simulator to find ideal unitary
job = qiskit.execute(circ, Aer.get_backend('unitary_simulator'))
ideal_unitary = job.result().get_unitary(circ)
# convert to Choi-matrix in column-major convention
choi_ideal = outer(ideal_unitary.ravel(order='F'))
# Generate process tomography circuits and run on qasm simulator
qpt_circs = process_tomography_circuits(circ, q)
job = qiskit.execute(qpt_circs, Aer.get_backend('qasm_simulator'), shots=4000)
# Extract tomography data so that counts are indexed by measurement configuration
qpt_tomo = ProcessTomographyFitter(job.result(), qpt_circs)
qpt_tomo.data
# MLE Least-Squares tomographic reconstruction
t = time.time()
choi_lstsq = qpt_tomo.fit(method='lstsq')
print('Least-Sq Fitter')
print('fit time:', time.time() - t)
print('fit fidelity (state):', state_fidelity(choi_ideal / 2, choi_lstsq.data / 2))
print('fit fidelity (process):', np.real(process_fidelity(choi_ideal, choi_lstsq.data, require_cptp=False)))
# CVXOPT Semidefinite-Program tomographic reconstruction
t = time.time()
choi_cvx = qpt_tomo.fit(method='cvx')
print('\nCVXOPT Fitter')
print('fit time:', time.time() - t)
print('fit fidelity (state):', state_fidelity(choi_ideal / 2, choi_cvx.data / 2))
print('fit fidelity (process):', np.real(process_fidelity(choi_ideal, choi_cvx.data, require_cptp=False)))
```
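The `outer(ideal_unitary.ravel(order='F'))` step above builds the (unnormalised) Choi matrix as the outer product of the column-vectorised unitary. A small stand-alone sketch with plain NumPy, assuming nothing beyond the column-major convention used here:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard unitary

# Column-major (Fortran-order) vectorisation, then outer product.
vec = H.ravel(order='F').reshape(-1, 1)
choi = vec @ vec.conj().T

# For a unitary channel on a d-dimensional system, this Choi matrix is
# rank one with trace d (here d = 2) -- hence the /2 normalisation used
# when computing state fidelities above.
assert np.isclose(np.trace(choi).real, 2)
assert np.linalg.matrix_rank(choi) == 1
```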
## 1-qubit process tomography of two-qubit swap gate
We will prepare qubit-0 and measure qubit-1 so the reconstructed channel should be an identity.
```
# Process tomography of a two-qubit swap gate
q = QuantumRegister(2)
circ = QuantumCircuit(q)
circ.swap(q[0], q[1])
# Ideal channel is a unitary
ideal_unitary = np.eye(2)
choi_ideal = outer(ideal_unitary.ravel(order='F'))
# Generate process tomography circuits and run on qasm simulator
# We use the optional prepared_qubits kwarg to specify that the prepared qubit was different to measured qubit
qpt_circs = process_tomography_circuits(circ, q[1], prepared_qubits=q[0])
job = qiskit.execute(qpt_circs, Aer.get_backend('qasm_simulator'), shots=2000)
# Extract tomography data so that counts are indexed by measurement configuration
qpt_tomo = ProcessTomographyFitter(job.result(), qpt_circs)
qpt_tomo.data
# Least-Squares tomographic reconstruction
t = time.time()
choi_lstsq = qpt_tomo.fit(method='lstsq')
print('Least-Sq Fitter')
print('fit time:', time.time() - t)
print('fit fidelity (state):', state_fidelity(choi_ideal / 2, choi_lstsq.data / 2))
print('fit fidelity (process):', np.real(process_fidelity(choi_ideal, choi_lstsq.data, require_cptp=False)))
# CVXOPT Semidefinite-Program tomographic reconstruction
t = time.time()
choi_cvx = qpt_tomo.fit(method='cvx')
print('\nCVXOPT Fitter')
print('fit time:', time.time() - t)
print('fit fidelity (state):', state_fidelity(choi_ideal / 2, choi_cvx.data / 2))
print('fit fidelity (process):', np.real(process_fidelity(choi_ideal, choi_cvx.data, require_cptp=False)))
```
## 2-Qubit entangling circuit
```
# Bell-state entangling circuit
q = QuantumRegister(2)
circ = QuantumCircuit(q)
circ.h(q[0])
circ.cx(q[0], q[1])
# Run circuit on unitary simulator to find ideal unitary
job = qiskit.execute(circ, Aer.get_backend('unitary_simulator'))
ideal_unitary = job.result().get_unitary(circ)
# convert to Choi-matrix in column-major convention
choi_ideal = outer(ideal_unitary.ravel(order='F'))
# Generate process tomography circuits and run on qasm simulator
qpt_circs = process_tomography_circuits(circ, q)
job = qiskit.execute(qpt_circs, Aer.get_backend('qasm_simulator'), shots=2000)
# Extract tomography data so that counts are indexed by measurement configuration
qpt_tomo = ProcessTomographyFitter(job.result(), qpt_circs)
t = time.time()
choi_lstsq = qpt_tomo.fit(method='lstsq')
print('Least-Sq Fitter')
print('fit time:', time.time() - t)
print('fit fidelity (state):', state_fidelity(choi_ideal / 4, choi_lstsq.data / 4))
print('fit fidelity (process):', np.real(process_fidelity(choi_ideal, choi_lstsq.data, require_cptp=False)))
t = time.time()
choi_cvx = qpt_tomo.fit(method='cvx')
print('\nCVXOPT Fitter')
print('fit time:', time.time() - t)
print('fit fidelity (state):', state_fidelity(choi_ideal / 4, choi_cvx.data / 4))
print('fit fidelity (process):', np.real(process_fidelity(choi_ideal, choi_cvx.data, require_cptp=False)))
```
## Using SIC-POVM preparation basis
```
# Process tomography of a Hadamard gate
q = QuantumRegister(1)
circ = QuantumCircuit(q)
circ.h(q[0])
# Run circuit on unitary simulator to find ideal unitary
job = qiskit.execute(circ, Aer.get_backend('unitary_simulator'))
ideal_unitary = job.result().get_unitary(circ)
# convert to Choi-matrix in column-major convention
choi_ideal = outer(ideal_unitary.ravel(order='F'))
# Generate process tomography circuits and run on qasm simulator
qpt_circs = process_tomography_circuits(circ, q, prep_labels='SIC', prep_basis='SIC')
job = qiskit.execute(qpt_circs, Aer.get_backend('qasm_simulator'), shots=2000)
# Extract tomography data so that counts are indexed by measurement configuration
qpt_tomo = ProcessTomographyFitter(job.result(), qpt_circs, prep_basis='SIC')
qpt_tomo.data
# MLE Least-Squares tomographic reconstruction
t = time.time()
choi_lstsq = qpt_tomo.fit(method='lstsq')
print('Least-Sq Fitter')
print('fit time:', time.time() - t)
print('fit fidelity (state):', state_fidelity(choi_ideal / 2, choi_lstsq.data / 2))
print('fit fidelity (process):', np.real(process_fidelity(choi_ideal, choi_lstsq.data, require_cptp=False)))
# CVXOPT Semidefinite-Program tomographic reconstruction
t = time.time()
choi_cvx = qpt_tomo.fit(method='cvx')
print('\nCVXOPT Fitter')
print('fit time:', time.time() - t)
print('fit fidelity (state):', state_fidelity(choi_ideal / 2, choi_cvx.data / 2))
print('fit fidelity (process):', np.real(process_fidelity(choi_ideal, choi_cvx.data, require_cptp=False)))
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
---
title: "Card fraud model training"
date: 2021-06-04
type: technical_note
draft: false
---
## Create experiment
```
def experiment_wrapper():
    import os
    import sys
    import uuid
    import random
    import tensorflow as tf
    from tensorflow.keras.callbacks import TensorBoard
    from hops import model as hops_model
    from hops import hdfs
    from hops import tensorboard  # needed for tensorboard.logdir() below (missing in the original)
    import hsfs

    # Create a connection
    connection = hsfs.connection(engine='training')
    # Get the feature store handle for the project's feature store
    fs = connection.get_feature_store()
    # Get training dataset
    td_meta = fs.get_training_dataset("card_fraud_model", 1)

    input_dim = 9
    encoding_dim = 4
    BATCH_SIZE = 32
    EPOCHS = 5

    train_input = td_meta.tf_data(target_name=None, is_training=True)
    train_input_not_processed = train_input.tf_record_dataset()

    def custom_impl(example):
        feature_names = [td_feature.name for td_feature in td_meta.schema]
        x = [tf.cast(example[feature_name], tf.float32) for feature_name in feature_names]
        return x, x

    train_input_custom_processed = train_input_not_processed.map(lambda value: custom_impl(value))\
        .shuffle(EPOCHS * BATCH_SIZE)\
        .repeat(EPOCHS * BATCH_SIZE)\
        .cache()\
        .batch(BATCH_SIZE, drop_remainder=True)\
        .prefetch(tf.data.experimental.AUTOTUNE)

    autoencoder = tf.keras.Sequential()
    autoencoder.add(tf.keras.layers.Dense(16, activation='selu', input_shape=(input_dim,)))
    autoencoder.add(tf.keras.layers.Dense(8, activation='selu'))
    autoencoder.add(tf.keras.layers.Dense(4, activation='linear', name="bottleneck"))
    autoencoder.add(tf.keras.layers.Dense(8, activation='selu'))
    autoencoder.add(tf.keras.layers.Dense(16, activation='selu'))
    autoencoder.add(tf.keras.layers.Dense(input_dim, activation='selu'))

    # Compile the model.
    #autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
    autoencoder.compile(loss=tf.keras.losses.MeanSquaredError(),
                        optimizer=tf.keras.optimizers.Adam(0.001),
                        metrics=[tf.keras.metrics.MeanSquaredError()])

    callbacks = [
        tf.keras.callbacks.TensorBoard(log_dir=tensorboard.logdir()),
        tf.keras.callbacks.ModelCheckpoint(filepath=tensorboard.logdir()),
    ]

    history = autoencoder.fit(
        train_input_custom_processed,
        verbose=0,
        epochs=EPOCHS,
        steps_per_epoch=1,
        validation_data=train_input_custom_processed,
        validation_steps=1,
        callbacks=callbacks
    )
    metrics = {'loss': history.history['loss'][0]}

    # Export model
    # WARNING(break-tutorial-inline-code): The following code snippet is
    # in-lined in tutorials, please update tutorial documents accordingly
    # whenever code changes.
    export_path = os.getcwd() + '/model-' + str(uuid.uuid4())
    print('Exporting trained model to: {}'.format(export_path))
    tf.saved_model.save(autoencoder, export_path)
    print('Done exporting!')

    hops_model.export(export_path, "cardfraudmodel", metrics=metrics)

    return metrics
```
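Once exported, an autoencoder like this is typically used for fraud scoring via its reconstruction error: transactions the model reconstructs poorly are flagged as anomalous. The notebook does not show that step, so here is a hedged NumPy-only sketch of the idea (`recon` stands in for the model's output, and the threshold is illustrative — in practice it would be chosen from a validation set):

```python
import numpy as np

def anomaly_scores(x, reconstruction):
    """Per-sample mean squared reconstruction error."""
    return np.mean((x - reconstruction) ** 2, axis=1)

# Toy data: the last row is far from its "reconstruction".
x = np.array([[0.1, 0.2], [0.0, 0.1], [5.0, 5.0]])
recon = np.array([[0.1, 0.2], [0.1, 0.1], [0.0, 0.0]])

scores = anomaly_scores(x, recon)
threshold = 1.0  # illustrative; tune on held-out non-fraud data
flags = scores > threshold
print(flags)
```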
## Launch experiment
```
from hops import experiment
from hops import hdfs
experiment.launch(experiment_wrapper, name='card fraud model', local_logdir=True, metric_key='loss')
```
# Assignment 3
## Implementation: EM and Gaussian mixtures
```
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal as mv_normal
import matplotlib.mlab as mlab
from scipy.stats import chi2
from matplotlib.patches import Ellipse
```
We start off by loading the training data:
```
train_data = np.loadtxt('data/EMGaussian.train')
test_data = np.loadtxt('data/EMGaussian.test')
```
We will define a helper function that will help us compute the Gaussian pdf. This method will be used to plot the contours as well.
```
def mv_gauss(X, Y, mu, cov):
    sigma_x = np.sqrt(cov[0,0])
    sigma_y = np.sqrt(cov[1,1])
    sigma_xy = np.sqrt(cov[0,1])
    mu_x = mu[0]
    mu_y = mu[1]
    return mlab.bivariate_normal(X, Y, sigma_x, sigma_y, mu_x, mu_y, sigma_xy)
# Credit to:
# http://www.nhsilbert.net/source/2014/06/bivariate-normal-ellipse-plotting-in-python/
def plot_cov_ellipse(cov, pos, volume=.5, ax=None, fc='none', ec=[0,0,0], a=1, lw=1):
    """
    Plots an ellipse enclosing *volume* based on the specified covariance
    matrix (*cov*) and location (*pos*). Additional keyword arguments are passed on to the
    ellipse patch artist.

    Parameters
    ----------
    cov : The 2x2 covariance matrix to base the ellipse on
    pos : The location of the center of the ellipse. Expects a 2-element
        sequence of [x0, y0].
    volume : The volume inside the ellipse; defaults to 0.5
    ax : The axis that the ellipse will be plotted on. Defaults to the
        current axis.
    """
    def eigsorted(cov):
        vals, vecs = np.linalg.eigh(cov)
        order = vals.argsort()[::-1]
        return vals[order], vecs[:,order]

    if ax is None:
        ax = plt.gca()

    vals, vecs = eigsorted(cov)
    theta = np.degrees(np.arctan2(*vecs[:,0][::-1]))

    kwrg = {'facecolor':fc, 'edgecolor':ec, 'alpha':a, 'linewidth':lw}

    # Width and height are "full" widths, not radius
    width, height = 2 * np.sqrt(chi2.ppf(volume, 2)) * np.sqrt(vals)
    ellip = Ellipse(xy=pos, width=width, height=height, angle=theta, **kwrg)
    ax.add_artist(ellip)
```
### Implementation of the K-means algorithm
```
class K_means:
def __init__(self, k=4, n_dims=2):
self.k = k
self.n_dims = n_dims
def train(self, train_data):
# Initialize the cluster means
self.means = np.random.rand(self.k, self.n_dims) * np.max(train_data, axis=0)
n_iter = 0
# Matrix where each row is a z_n assignment vector associated with a data point
old_Z = np.zeros(shape=(train_data.shape[0], self.k))
self.Z = np.zeros(shape=(train_data.shape[0], self.k))
while(not self._converged(old_Z, n_iter)):
old_Z = np.array(self.Z)
self.Z = np.zeros(shape=(train_data.shape[0], self.k))
# First phase, we evaluate the value of the latent cluster assignment variables
for i, train_point in enumerate(train_data):
distances = np.linalg.norm(self.means - train_point, axis=1)**2
self.Z[i][np.argmin(distances)] = 1
# Second phase, the values of the cluster means are computed
self.means = self.Z.T.dot(train_data) / np.sum(self.Z.T, axis=1).reshape(self.k, 1)
n_iter += 1
def assign_cluster(self, data):
# Will hold the cluster that each data point belongs to
clusters = np.zeros(data.shape[0], dtype=int)
for i, x in enumerate(data):
distances = np.linalg.norm(self.means - x, axis=1)**2
clusters[i] = np.argmin(distances)
return clusters
# Helper function that checks the convergence of the K-means algorithm
def _converged(self, old_Z, n_iter):
if n_iter == 0:
return False
elif np.array_equal(old_Z, self.Z):
return True
else:
return False
kmeans = K_means()
kmeans.train(train_data)
means1 = kmeans.means
clusters1 = kmeans.assign_cluster(train_data)
kmeans.train(train_data)
means2 = kmeans.means
clusters2 = kmeans.assign_cluster(train_data)
kmeans.train(train_data)
means3 = kmeans.means
clusters3 = kmeans.assign_cluster(train_data)
```
#### Graphical representation of the data
```
plt.scatter(train_data[:,0], train_data[:,1], marker='x', c=clusters1, alpha=0.4)
plt.scatter(means1[:,0], means1[:,1], marker='v', color='red', alpha=0.8)
plt.title('K-means')
plt.show()
```
### EM algorithm for a Gaussian mixture with covariance matrix proportional to identity matrix
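Writing out the updates the class below implements: with isotropic covariances $\sigma_k^2 I$ in $d$ dimensions ($d = 2$ here, hence the factor of 2 in the code), the E-step computes responsibilities and the M-step re-estimates the parameters:

```latex
\tau_{nk} = \frac{\pi_k \,\mathcal{N}(x_n \mid \mu_k, \sigma_k^2 I)}{\sum_{j=1}^{K} \pi_j \,\mathcal{N}(x_n \mid \mu_j, \sigma_j^2 I)}, \qquad
\mu_k = \frac{\sum_n \tau_{nk}\, x_n}{\sum_n \tau_{nk}}, \qquad
\sigma_k^2 = \frac{\sum_n \tau_{nk}\, \lVert x_n - \mu_k \rVert^2}{d \sum_n \tau_{nk}}, \qquad
\pi_k = \frac{\sum_n \tau_{nk}}{N}
```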
```
class EM_GMM_isotropic:
def __init__(self, k=4, n_dims=2):
self.k = k
self.n_dims = n_dims
def train(self, train_data, means, clusters, MAX_ITER = 100):
# We start off by initializing our gaussian mixture parameters with the parameters given to us
self.means = means
self.sigmas2 = np.ones(self.k)
# posterior probabilities or the weights N x K matrix
self.taus = np.zeros(shape=(train_data.shape[0], self.k))
self.pi = np.bincount(clusters) / clusters.shape[0]
n_iter = 0
while(n_iter < MAX_ITER):
# E step
for i in xrange(self.k):
cov = self.sigmas2[i] * np.eye(self.n_dims)
self.taus[:, i] = self.pi[i] * mv_normal.pdf(train_data, self.means[i], cov, allow_singular=True)
# normalize the taus to get posterior probabilities
self.taus = (self.taus.T / np.sum(self.taus, axis=1)).T
# M step
# Compute the new means and covariance matrices
for i in xrange(self.k):
# We compute the divisor in a variable because we need it in every other computation later on
tau_sum = np.sum(self.taus[:, i])
# First the mean for cluster i
self.means[i] = np.sum(self.taus[:, i].reshape(self.taus.shape[0], 1) * train_data, axis=0)
self.means[i] /= tau_sum
# Now we compute the new sigmas^2
accum = 0
for n in xrange(train_data.shape[0]):
distance = train_data[n] - self.means[i]
accum += self.taus[n,i] * np.linalg.norm(distance)**2
self.sigmas2[i] = accum/( 2* tau_sum)
self.pi[i] = tau_sum / train_data.shape[0]
n_iter += 1
def assign_cluster(self, data):
taus = np.zeros(shape=(data.shape[0], self.k))
for i in xrange(self.k):
cov = self.sigmas2[i] * np.eye(2)
taus[:, i] = self.pi[i] * mv_normal.pdf(data, self.means[i], cov, True)
clusters = np.zeros(data.shape[0], dtype=int)
for i, x in enumerate(data):
clusters[i] = np.argmax(taus[i, :])
return clusters
def normalized_log_likelihood(self, data):
like = np.zeros(shape=(data.shape[0], self.k))
for i in xrange(self.k):
cov = self.sigmas2[i] * np.eye(2)
like[:, i] = self.pi[i] * mv_normal.pdf(data, self.means[i], cov, True)
loglike = np.log(np.sum(like, axis=1))
loglike = np.sum(loglike) / data.shape[0]
return loglike
```
#### Graphical representation of the data
```
kmeans = K_means(k=4)
kmeans.train(train_data)
means = kmeans.means
clusters = kmeans.assign_cluster(train_data)
gmm = EM_GMM_isotropic(k=4)
gmm.train(train_data, means, clusters, MAX_ITER=500)
```
We plot the training data and test data together with colors to represent their estimated class
```
gmm_clusters_train = gmm.assign_cluster(train_data)
gmm_cluster_test = gmm.assign_cluster(test_data)
plt.scatter(train_data[:,0], train_data[:,1], marker='x', c=gmm_clusters_train, alpha=0.4)
plt.scatter(test_data[:,0], test_data[:,1], marker='x', c=gmm_cluster_test, alpha=0.4)
plt.scatter(gmm.means[:,0], gmm.means[:,1], marker='v', color='red', alpha=0.8)
delta = 0.5
x = np.arange(-10.0, 10, delta)
y = np.arange(-10.0, 10, delta)
X, Y = np.meshgrid(x, y)
for (mu, sigma) in zip(gmm.means, gmm.sigmas2):
cov = sigma * np.eye(2)
plot_cov_ellipse(cov, mu, volume=0.9, a=0.9, lw=1)
plt.title('EM for GMM with Isotropic Gaussians Training Data + Test Data')
plt.show()
```
We see that the ellipses containing 90% of the mass are circles (the axes are on different scales, which is why they appear oval). This is because we assumed that the Gaussians in the mixture were **isotropic**.
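Concretely, an isotropic Gaussian's density depends on $x$ only through the distance to the mean, so its level sets are circles:

```latex
\mathcal{N}(x \mid \mu, \sigma^2 I) \propto \exp\!\left(-\frac{\lVert x - \mu \rVert^2}{2\sigma^2}\right)
\;\Longrightarrow\;
\text{contours are } \lVert x - \mu \rVert = c.
```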
```
test_loglik = gmm.normalized_log_likelihood(test_data)
train_loglik = gmm.normalized_log_likelihood(train_data)
print 'The test log likelihood: ' + str(test_loglik)
print 'The training data log likelihood: ' + str(train_loglik)
```
### EM algorithm for a Gaussian mixture with general covariance matrix
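The only change relative to the isotropic class is the M-step for the covariance, which becomes a responsibility-weighted outer product (the `distance.T.dot(...)` line in the code below):

```latex
\Sigma_k = \frac{\sum_n \tau_{nk}\,(x_n - \mu_k)(x_n - \mu_k)^{\top}}{\sum_n \tau_{nk}}
```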
```
class EM_GMM:
def __init__(self, k=4, n_dims=2):
self.k = k
self.n_dims = n_dims
def train(self, train_data, means, clusters, MAX_ITER = 100):
# We start off by initializing our gaussian mixture parameters with the parameters given to us
self.means = means
self.covs = [np.eye(self.n_dims)] * self.k
# compute the sample covariance of each cluster
for i in xrange(self.k):
self.covs[i] = np.cov(train_data[np.where(clusters==i)[0],:], rowvar=False)
# posterior probabilities or the weights N x K matrix
self.taus = np.zeros(shape=(train_data.shape[0], self.k))
self.pi = np.bincount(clusters) / clusters.shape[0]
n_iter = 0
while(n_iter < MAX_ITER):
# E step
for i in xrange(self.k):
self.taus[:, i] = self.pi[i] * mv_normal.pdf(train_data, self.means[i], self.covs[i], True)
# normalize the taus to get posterior probabilities
self.taus = (self.taus.T / np.sum(self.taus, axis=1)).T
# M step
# Compute the new means and covariance matrices
for i in xrange(self.k):
tau_sum = np.sum(self.taus[:, i])
# First the mean for cluster i
self.means[i] = (np.sum(self.taus[:, i].reshape(self.taus.shape[0], 1) * train_data, axis=0) / tau_sum)
distance = train_data - self.means[i]
self.covs[i] = (distance.T.dot(self.taus[:, i].reshape(self.taus.shape[0], 1) * distance) / tau_sum)
self.pi[i] = tau_sum / train_data.shape[0]
n_iter += 1
def assign_cluster(self, data):
taus = np.zeros(shape=(data.shape[0], self.k))
for i in xrange(self.k):
taus[:, i] = self.pi[i] * mv_normal.pdf(data, self.means[i], self.covs[i], True)
clusters = np.zeros(data.shape[0], dtype=int)
for i, x in enumerate(data):
clusters[i] = np.argmax(taus[i, :])
return clusters
def normalized_log_likelihood(self, data):
like = np.zeros(shape=(data.shape[0], self.k))
for i in xrange(self.k):
like[:, i] = self.pi[i] * mv_normal.pdf(data, self.means[i], self.covs[i], True)
loglike = np.log(np.sum(like, axis=1))
loglike = np.sum(loglike) / data.shape[0]
return loglike
```
#### Graphical representation of the data
```
gmm = EM_GMM(k=4)
gmm.train(train_data, means, clusters, MAX_ITER=2000)
```
We plot the training data and test data together with colors to represent their estimated class
```
gmm_clusters_train = gmm.assign_cluster(train_data)
gmm_cluster_test = gmm.assign_cluster(test_data)
plt.scatter(train_data[:,0], train_data[:,1], marker='x', c=gmm_clusters_train, alpha=0.4)
plt.scatter(test_data[:,0], test_data[:,1], marker='x', c=gmm_cluster_test, alpha=0.4)
delta = 0.5
x = np.arange(-10.0, 10, delta)
y = np.arange(-10.0, 10, delta)
X, Y = np.meshgrid(x, y)
for (mu, cov) in zip(gmm.means, gmm.covs):
plot_cov_ellipse(cov, mu, volume=0.8, a=0.9, lw=1)
plt.title('EM for GMM Training Data + Test Data')
plt.show()
```
We notice in this case that our model fits the data much better. This is because we removed the constraints that the Gaussians were isotropic. We assume a more general form of the covariance matrices.
```
test_loglik = gmm.normalized_log_likelihood(test_data)
train_loglik = gmm.normalized_log_likelihood(train_data)
print 'The test log likelihood: ' + str(test_loglik)
print 'The training data log likelihood: ' + str(train_loglik)
```
For EM with isotropic Gaussians we get the following log likelihoods:
`
The test log likelihood: -5.38819545252
The training data log likelihood: -5.29104864112
`
For EM with Gaussians with general covariance matrices we get:
`
The test log likelihood: -4.81795630691
The training data log likelihood: -4.65543134984
`
The log-likelihood is higher in the latter case. This is expected, because a mixture with general covariance matrices fits the data better, as we can see on the scatter plots.
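One way to quantify the extra flexibility is to count free parameters: with $K$ components in $d$ dimensions,

```latex
\underbrace{Kd + K + (K - 1)}_{\text{isotropic}} = 15
\qquad \text{vs.} \qquad
\underbrace{Kd + K\,\tfrac{d(d+1)}{2} + (K - 1)}_{\text{general}} = 23
\qquad (K = 4,\; d = 2)
```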
```
%matplotlib inline
```
Cross Compilation and RPC
=========================
**Author**: `Ziheng Jiang <https://github.com/ZihengJiang/>`_, `Lianmin Zheng <https://github.com/merrymercy/>`_
This tutorial introduces cross compilation and remote device
execution with RPC in TVM.
With cross compilation and RPC, you can **compile a program on your
local machine and then run it on the remote device**. This is useful when
the resources of the remote device are limited, as with Raspberry Pi and mobile
platforms. In this tutorial, we use a Raspberry Pi for the CPU example
and a Firefly-RK3399 for the OpenCL example.
Build TVM Runtime on Device
---------------------------
The first step is to build the TVM runtime on the remote device.
<div class="alert alert-info"><h4>Note</h4><p>All instructions in both this section and next section should be
executed on the target device, e.g. Raspberry Pi. And we assume it
has Linux running.</p></div>
Since compilation happens on the local machine, the remote device is only
used to run the generated code. We only need to build the TVM runtime on
the remote device.
.. code-block:: bash
git clone --recursive https://github.com/dmlc/tvm
cd tvm
make runtime -j2
After building runtime successfully, we need to set environment variables
in :code:`~/.bashrc` file. We can edit :code:`~/.bashrc`
using :code:`vi ~/.bashrc` and add the line below (Assuming your TVM
directory is in :code:`~/tvm`):
.. code-block:: bash
export PYTHONPATH=$PYTHONPATH:~/tvm/python
To update the environment variables, execute :code:`source ~/.bashrc`.
Set Up RPC Server on Device
---------------------------
To start an RPC server, run the following command on your remote device
(Which is Raspberry Pi in this example).
.. code-block:: bash
python -m tvm.exec.rpc_server --host 0.0.0.0 --port=9090
If you see the line below, it means the RPC server started
successfully on your device.
.. code-block:: bash
INFO:root:RPCServer: bind to 0.0.0.0:9090
Declare and Cross Compile Kernel on Local Machine
-------------------------------------------------
<div class="alert alert-info"><h4>Note</h4><p>Now we go back to the local machine, which has a full TVM installed
(with LLVM).</p></div>
Here we will declare a simple kernel on the local machine:
```
import numpy as np
import tvm
from tvm import rpc
from tvm.contrib import util
n = tvm.convert(1024)
A = tvm.placeholder((n,), name='A')
B = tvm.compute((n,), lambda i: A[i] + 1.0, name='B')
s = tvm.create_schedule(B.op)
```
Then we cross compile the kernel.
The target should be 'llvm -target=armv7l-linux-gnueabihf' for
Raspberry Pi 3B, but we use 'llvm' here to make this tutorial runnable
on our webpage building server. See the detailed note in the following block.
```
local_demo = True
if local_demo:
target = 'llvm'
else:
target = 'llvm -target=armv7l-linux-gnueabihf'
func = tvm.build(s, [A, B], target=target, name='add_one')
# save the lib at a local temp folder
temp = util.tempdir()
path = temp.relpath('lib.tar')
func.export_library(path)
```
<div class="alert alert-info"><h4>Note</h4><p>To run this tutorial with a real remote device, change :code:`local_demo`
to False and replace :code:`target` in :code:`build` with the true
target triple of your device. The target triple might differ
between devices. For example, it is
:code:`'llvm -target=armv7l-linux-gnueabihf'` for Raspberry Pi 3B and
:code:`'llvm -target=aarch64-linux-gnu'` for RK3399.
Usually, you can query the target triple by executing :code:`gcc -v` on your
device and looking for the line starting with :code:`Target:`
(though it may still be a loose configuration).
Besides :code:`-target`, you can also set other compilation options
like:
* -mcpu=<cpuname>
Specify a specific chip in the current architecture to generate code for. By default this is inferred from the target triple and autodetected to the current architecture.
* -mattr=a1,+a2,-a3,...
Override or control specific attributes of the target, such as whether SIMD operations are enabled or not. The default set of attributes is set by the current CPU.
To get the list of available attributes, you can do:
.. code-block:: bash
llc -mtriple=<your device target triple> -mattr=help
These options are consistent with `llc <http://llvm.org/docs/CommandGuide/llc.html>`_.
It is recommended to set the target triple and feature set to match the
features available on your device, so we can take full advantage of the
capabilities of the board.
You can find more details about cross compilation attributes from
`LLVM guide of cross compilation <https://clang.llvm.org/docs/CrossCompilation.html>`_.</p></div>
Run CPU Kernel Remotely by RPC
------------------------------
We show how to run the generated cpu kernel on the remote device.
First we obtain an RPC session from remote device.
```
if local_demo:
remote = rpc.LocalSession()
else:
# The following is my environment, change this to the IP address of your target device
host = '10.77.1.162'
port = 9090
remote = rpc.connect(host, port)
```
Upload the library to the remote device, then invoke the device's local
compiler to relink it. Now `func` is a remote module object.
```
remote.upload(path)
func = remote.load_module('lib.tar')
# create arrays on the remote device
ctx = remote.cpu()
a = tvm.nd.array(np.random.uniform(size=1024).astype(A.dtype), ctx)
b = tvm.nd.array(np.zeros(1024, dtype=A.dtype), ctx)
# the function will run on the remote device
func(a, b)
np.testing.assert_equal(b.asnumpy(), a.asnumpy() + 1)
```
When you want to evaluate the performance of the kernel on the remote
device, it is important to avoid network overhead.
:code:`time_evaluator` returns a remote function that runs the
function :code:`number` times, measures the cost per run on the remote
device, and returns the average cost; network overhead is excluded.
```
time_f = func.time_evaluator(func.entry_name, ctx, number=10)
cost = time_f(a, b).mean
print('%g secs/op' % cost)
```
Run OpenCL Kernel Remotely by RPC
---------------------------------
As for remote OpenCL devices, the workflow is almost the same as above.
You can define the kernel, upload files, and run by RPC.
<div class="alert alert-info"><h4>Note</h4><p>Raspberry Pi does not support OpenCL, the following code is tested on
Firefly-RK3399. You may follow this `tutorial <https://gist.github.com/mli/585aed2cec0b5178b1a510f9f236afa2>`_
to setup the OS and OpenCL driver for RK3399.
Also we need to build the runtime with OpenCL enabled on the RK3399 board. In the TVM
root directory, execute</p></div>
.. code-block:: bash
cp cmake/config.cmake .
sed -i "s/USE_OPENCL OFF/USE_OPENCL ON/" config.cmake
make runtime -j4
The following function shows how to run an OpenCL kernel remotely.
```
def run_opencl():
# NOTE: This is the setting for my rk3399 board. You need to modify
# them according to your environment.
target_host = "llvm -target=aarch64-linux-gnu"
opencl_device_host = '10.77.1.145'
opencl_device_port = 9090
# create a schedule for the above "add one" compute declaration
s = tvm.create_schedule(B.op)
xo, xi = s[B].split(B.op.axis[0], factor=32)
s[B].bind(xo, tvm.thread_axis("blockIdx.x"))
s[B].bind(xi, tvm.thread_axis("threadIdx.x"))
func = tvm.build(s, [A, B], "opencl", target_host=target_host)
remote = rpc.connect(opencl_device_host, opencl_device_port)
# export and upload
path = temp.relpath('lib_cl.tar')
func.export_library(path)
remote.upload(path)
func = remote.load_module('lib_cl.tar')
# run
ctx = remote.cl()
a = tvm.nd.array(np.random.uniform(size=1024).astype(A.dtype), ctx)
b = tvm.nd.array(np.zeros(1024, dtype=A.dtype), ctx)
func(a, b)
np.testing.assert_equal(b.asnumpy(), a.asnumpy() + 1)
print("OpenCL test passed!")
```
Summary
-------
This tutorial provides a walkthrough of the cross compilation and RPC
features in TVM.
- Set up RPC server on the remote device.
- Set up target device configuration to cross compile kernel on the
local machine.
- Upload and run the kernel remotely by RPC API.
# Data Wrangling
# Introduction
This project focused on wrangling data from the WeRateDogs Twitter account using Python, documented in a Jupyter Notebook (wrangle_act.ipynb). This Twitter account rates dogs with humorous commentary. The rating denominator is usually 10; the numerators, however, are usually greater than 10.
[They’re Good Dogs Brent](http://knowyourmeme.com/memes/theyre-good-dogs-brent)
The task is to wrangle WeRateDogs Twitter data to create interesting and trustworthy analyses and visualizations. WeRateDogs has over 4 million followers and has received international media coverage.
WeRateDogs downloaded their Twitter archive and sent it to Udacity via email exclusively for us to use in this project. This archive contains basic tweet data (tweet ID, timestamp, text, etc.) for all 5000+ of their tweets as they stood on August 1, 2017.
The goal of this project is to wrangle the WeRateDogs Twitter data to create interesting and trustworthy analyses and visualizations. The challenge lies in the fact that the Twitter archive is great, but it only contains very basic tweet information that comes in JSON format. I needed to gather, assess, and clean the Twitter data for a worthy analysis and visualization.
## The Data
### Enhanced Twitter Archive
The WeRateDogs Twitter archive contains basic tweet data for all 5000+ of their tweets, but not everything. One column the archive does contain, though, is each tweet's text, which I used to extract rating, dog name, and dog "stage" (i.e. doggo, floofer, pupper, and puppo) to make this Twitter archive "enhanced". We downloaded this file manually by clicking the following link: [twitter_archive_enhanced.csv](https://d17h27t6h515a5.cloudfront.net/topher/2017/August/59a4e958_twitter-archive-enhanced/twitter-archive-enhanced.csv)
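As a rough illustration of how these fields can be pulled out of tweet text, here is a sketch on a made-up example tweet (the actual enhanced archive was produced by Udacity, and real tweets require more careful handling of edge cases):

```python
import re

# Made-up tweet in the WeRateDogs style (not from the actual archive)
text = "This is Stella. She is a good doggo. 13/10 would pet"

# Ratings usually appear as "<numerator>/<denominator>", with denominator 10
match = re.search(r"(\d+(?:\.\d+)?)/(\d+)", text)
numerator, denominator = float(match.group(1)), int(match.group(2))

# Names typically follow the opening pattern "This is <Name>."
name_match = re.search(r"This is ([A-Z][a-z]+)", text)
name = name_match.group(1) if name_match else None

# Dog "stages" are one of four known keywords
stages = [s for s in ("doggo", "floofer", "pupper", "puppo") if s in text.lower()]

print(numerator, denominator, name, stages)
```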
### Additional Data via the Twitter API
Back to the basic-ness of Twitter archives: retweet count and favorite count are two of the notable column omissions. Fortunately, this additional data can be gathered by anyone from Twitter's API. Well, "anyone" who has access to data for the 3000 most recent tweets, at least. But we, because we have the WeRateDogs Twitter archive and specifically the tweet IDs within it, can gather this data for all 5000+. And guess what? We're going to query Twitter's API to gather this valuable data.
### Image Predictions File
The tweet image predictions, i.e., what breed of dog (or other object, animal, etc.) is present in each tweet according to a neural network. This file (image_predictions.tsv) is hosted on Udacity's servers, and we downloaded it programmatically using the Python Requests library from the following URL: https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv
## Key Points
Key points to keep in mind when data wrangling for this project:
* We only want original ratings (no retweets) that have images. Though there are 5000+ tweets in the dataset, not all are dog ratings and some are retweets.
* Fully assessing and cleaning the entire dataset requires exceptional effort so only a subset of its issues (eight (8) quality issues and two (2) tidiness issues at minimum) need to be assessed and cleaned.
* Cleaning includes merging individual pieces of data according to the rules of tidy data.
* The fact that the rating numerators are greater than the denominators does not need to be cleaned. This unique rating system is a big part of the popularity of WeRateDogs.
* We do not need to gather the tweets beyond August 1st, 2017. We can, but note that we won't be able to gather the image predictions for these tweets since we don't have access to the algorithm used.
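The first key point, keeping only original ratings that have images, can be sketched in pandas on a toy frame (the column names mirror the archive; the values are illustrative):

```python
import pandas as pd

# Toy frame mirroring the archive's relevant columns (illustrative values)
df = pd.DataFrame({
    'tweet_id': [1, 2, 3],
    'retweeted_status_id': [None, 99.0, None],  # non-null means the row is a retweet
    'jpg_url': ['a.jpg', 'b.jpg', None],        # null means no image prediction exists
})

# Keep original tweets (not retweets) that have an image
originals = df[df['retweeted_status_id'].isnull() & df['jpg_url'].notnull()]
print(originals['tweet_id'].tolist())
```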
# Project Details
Fully assessing and cleaning the entire dataset would require exceptional effort so only a subset of its issues (eight quality issues and two tidiness issues at minimum) needed to be assessed and cleaned.
The tasks for this project were:
* Data wrangling, which consists of:
* Gathering data
* Assessing data
* Cleaning data
* Storing, analyzing, and visualizing our wrangled data
* Reporting on 1) our data wrangling efforts and 2) our data analyses and visualizations
```
import pandas as pd
import numpy as np
import requests
import tweepy
import os
import json
import time
import re
import matplotlib.pyplot as plt
import warnings
```
# Gather
```
# read csv as a Pandas DataFrame
twitter_archive = pd.read_csv('./Data/twitter-archive-enhanced.csv')
twitter_archive.head()
twitter_archive.info()
# Use requests library to download tsv file
url="https://d17h27t6h515a5.cloudfront.net/topher/2017/August/599fd2ad_image-predictions/image-predictions.tsv"
response = requests.get(url)
with open('./Data/image_predictions.tsv', 'wb') as file:
file.write(response.content)
image_predictions = pd.read_csv('./Data/image_predictions.tsv', sep='\t')
image_predictions.info()
```
**Query the Twitter API for each tweet's JSON data using Python's Tweepy library and store each tweet's entire set of JSON data in a file.**
```
CONSUMER_KEY = ""
CONSUMER_SECRET = ""
OAUTH_TOKEN = ""
OAUTH_TOKEN_SECRET = ""
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(OAUTH_TOKEN, OAUTH_TOKEN_SECRET)
# wait_on_rate_limit and wait_on_rate_limit_notify are tweepy.API constructor
# options, not get_status arguments
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
# List of the error tweets
error_list = []
# List of tweets
df_list = []
# Calculate the time of execution
start = time.time()
# For loop which will add each available tweet json to df_list
for tweet_id in twitter_archive['tweet_id']:
try:
tweet = api.get_status(tweet_id, tweet_mode='extended')._json
favorites = tweet['favorite_count'] # How many favorites the tweet had
retweets = tweet['retweet_count'] # Count of the retweet
user_followers = tweet['user']['followers_count'] # How many followers the user had
user_favourites = tweet['user']['favourites_count'] # How many favorites the user had
date_time = tweet['created_at'] # The date and time of the creation
df_list.append({'tweet_id': int(tweet_id),
'favorites': int(favorites),
'retweets': int(retweets),
'user_followers': int(user_followers),
'user_favourites': int(user_favourites),
'date_time': pd.to_datetime(date_time)})
except Exception as e:
print(str(tweet_id)+ " _ " + str(e))
error_list.append(tweet_id)
# Calculate the time of execution
end = time.time()
print(end - start)
# length of the result
print("The length of the result", len(df_list))
# The tweet_id of the errors
print("The length of the errors", len(error_list))
```
From the above results:
- We reached the limit of the Tweepy API three times, but wait_on_rate_limit automatically waits for the rate limit to reset, and wait_on_rate_limit_notify prints a notification while Tweepy is waiting.
- We retrieved 2344 tweet_ids correctly, with 12 errors
- The total time was about 3023 seconds (~ 50.5 min)
```
print("The length of the result", len(df_list))
# Create DataFrames from list of dictionaries
json_tweets = pd.DataFrame(df_list, columns = ['tweet_id', 'favorites', 'retweets',
'user_followers', 'user_favourites', 'date_time'])
# Save the dataFrame in file
json_tweets.to_csv('tweet_json.txt', encoding = 'utf-8', index=False)
# Read the saved tweet_json.txt file into a dataframe
tweet_data = pd.read_csv('tweet_json.txt', encoding = 'utf-8')
tweet_data
tweet_data.info()
```
## Gather: Summary
Gathering is the first step in the data wrangling process.
- Obtaining data
- Getting data from an existing file (twitter-archive-enhanced.csv) Reading from csv file using pandas
- Downloading a file from the internet (image-predictions.tsv) Downloading file using requests
- Querying an API (tweet_json.txt) Get JSON object of all the tweet_ids using Tweepy
- Importing that data into our programming environment (Jupyter Notebook)
## Assessing
```
# Print some random examples
twitter_archive.sample(10)
# Assessing the data programmatically
twitter_archive.info()
twitter_archive.describe()
twitter_archive['rating_numerator'].value_counts()
twitter_archive['rating_denominator'].value_counts()
twitter_archive['name'].value_counts()
# View descriptive statistics of twitter_archive
twitter_archive.describe()
image_predictions
image_predictions.info()
image_predictions['jpg_url'].value_counts()
image_predictions[image_predictions['jpg_url'] == 'https://pbs.twimg.com/media/DF6hr6BUMAAzZgT.jpg']
# View number of entries for each source
twitter_archive.source.value_counts()
# Ratings that don't follow the usual pattern
twitter_archive[twitter_archive['rating_numerator'] > 20]
# Unusual names
twitter_archive[twitter_archive['name'].apply(len) < 3]
# Original tweets
twitter_archive[twitter_archive['retweeted_status_id'].isnull()]
```
## Quality
*Completeness, Validity, Accuracy, Consistency => a.k.a content issues*
**twitter_archive dataset**
- in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id should be integers/strings instead of float.
- retweeted_status_timestamp, timestamp should be datetime instead of object (string).
- The numerator and denominator columns have invalid values.
- In several columns, missing values are stored as the string 'None' instead of NaN.
- The name column has invalid values, e.g. 'None', 'a', 'an', and other entries of fewer than 3 characters.
- We only want original ratings (no retweets) that have images.
- We may want to change the types of these columns (in_reply_to_status_id, in_reply_to_user_id, retweeted_status_id, retweeted_status_user_id and tweet_id) to string, because we don't perform any operations on them.
- The source column is difficult to read.
**image_predictions dataset**
- Missing values from images dataset (2075 rows instead of 2356)
- Some tweet_ids have the same jpg_url
- Some tweets have two different tweet_ids, one redirecting to the other (the dataset contains retweets)
**tweet_data dataset**
- This tweet_id (666020888022790149) is duplicated 8 times
## Tidiness
Untidy data => a.k.a structural issues
- Not all the information in the images dataset is needed (tweet_id and jpg_url are what matter)
- Dog "stage" variable in four columns: doggo, floofer, pupper, puppo
- Join 'tweet_info' and 'image_predictions' to 'twitter_archive'
## Cleaning
Cleaning our data is the third step in data wrangling. It is where we will fix the quality and tidiness issues that we identified in the assess step.
```
#copy dataframes
tweet_data_clean = tweet_data.copy()
twitter_archive_clean = twitter_archive.copy()
image_predictions_clean= image_predictions.copy()
```
#### Define
Add tweet_info and image_predictions to twitter_archive table.
#### Code
```
twitter_archive_clean = pd.merge(left=twitter_archive_clean,
right=tweet_data_clean, left_on='tweet_id', right_on='tweet_id', how='inner')
twitter_archive_clean = twitter_archive_clean.merge(image_predictions_clean, on='tweet_id', how='inner')
```
#### Test
```
twitter_archive_clean.info()
```
#### Define
Melt the 'doggo', 'floofer', 'pupper' and 'puppo' columns into one column 'dog_stage'.
#### Code
```
# Select the columns to melt and to remain
MELTS_COLUMNS = ['doggo', 'floofer', 'pupper', 'puppo']
STAY_COLUMNS = [x for x in twitter_archive_clean.columns.tolist() if x not in MELTS_COLUMNS]
# Melt the columns into values
twitter_archive_clean = pd.melt(twitter_archive_clean, id_vars = STAY_COLUMNS, value_vars = MELTS_COLUMNS,
var_name = 'stages', value_name = 'dog_stage')
# Delete column 'stages'
twitter_archive_clean = twitter_archive_clean.drop('stages', 1)
```
#### Test
```
print(twitter_archive_clean.dog_stage.value_counts())
print(len(twitter_archive_clean))
```
#### Define
Clean rows and columns that we will not need
#### Code
```
# Delete the retweets
twitter_archive_clean = twitter_archive_clean[pd.isnull(twitter_archive_clean.retweeted_status_id)]
# Delete duplicated tweet_id
twitter_archive_clean = twitter_archive_clean.drop_duplicates()
# Delete tweets with no pictures
twitter_archive_clean = twitter_archive_clean.dropna(subset = ['jpg_url'])
# small test
len(twitter_archive_clean)
# Delete columns related to retweet we don't need anymore
twitter_archive_clean = twitter_archive_clean.drop('retweeted_status_id', 1)
twitter_archive_clean = twitter_archive_clean.drop('retweeted_status_user_id', 1)
twitter_archive_clean = twitter_archive_clean.drop('retweeted_status_timestamp', 1)
# Delete column date_time we imported from the API, it has the same values as timestamp column
twitter_archive_clean = twitter_archive_clean.drop('date_time', 1)
# small test
list(twitter_archive_clean)
#Delete dog_stage duplicates
twitter_archive_clean = twitter_archive_clean.sort_values('dog_stage').drop_duplicates('tweet_id', keep = 'last')
```
#### Test
```
print(twitter_archive_clean.dog_stage.value_counts())
print(len(twitter_archive_clean))
```
#### Define
Consolidate the image prediction columns into a single prediction and confidence level, then drop the originals.
#### Code
```
# We will store the first true prediction with its confidence level
prediction_algorithm = []
confidence_level = []
# get_prediction_confidence function:
# finds the first true prediction and appends it to a list with its confidence level;
# if all predictions are false, prediction_algorithm gets a value of NaN
def get_prediction_confidence(dataframe):
if dataframe['p1_dog'] == True:
prediction_algorithm.append(dataframe['p1'])
confidence_level.append(dataframe['p1_conf'])
elif dataframe['p2_dog'] == True:
prediction_algorithm.append(dataframe['p2'])
confidence_level.append(dataframe['p2_conf'])
elif dataframe['p3_dog'] == True:
prediction_algorithm.append(dataframe['p3'])
confidence_level.append(dataframe['p3_conf'])
else:
prediction_algorithm.append('NaN')
confidence_level.append(0)
twitter_archive_clean.apply(get_prediction_confidence, axis=1)
twitter_archive_clean['prediction_algorithm'] = prediction_algorithm
twitter_archive_clean['confidence_level'] = confidence_level
```
#### Test
```
list(twitter_archive_clean)
# Delete the columns of image prediction information
twitter_archive_clean = twitter_archive_clean.drop(['img_num', 'p1', 'p1_conf', 'p1_dog', 'p2', 'p2_conf', 'p2_dog', 'p3', 'p3_conf', 'p3_dog'], 1)
list(twitter_archive_clean)
# Let's concentrate on columns with few distinct values and dig deeper
twitter_archive_clean.info()
print('in_reply_to_user_id ')
print(twitter_archive_clean['in_reply_to_user_id'].value_counts())
print('source ')
print(twitter_archive_clean['source'].value_counts())
print('user_favourites ')
print(twitter_archive_clean['user_favourites'].value_counts())
```
#### Notes
- Only one value appears in **in_reply_to_user_id**, so we will delete the reply columns; all of them reply to @dog_rates.
- **source** has 3 types; we will clean up that column.
- **user_favourites** has only 2 values, and they are close.
```
# drop the following columns 'in_reply_to_status_id', 'in_reply_to_user_id', 'user_favourites'
twitter_archive_clean = twitter_archive_clean.drop(['in_reply_to_status_id', 'in_reply_to_user_id', 'user_favourites'], axis=1)
# Clean the content of source column
twitter_archive_clean['source'] = twitter_archive_clean['source'].apply(lambda x: re.findall(r'>(.*)<', x)[0])
# Test
twitter_archive_clean
```
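The `re.findall(r'>(.*)<', x)[0]` call keeps only the link text between the anchor tags. A quick check on a hypothetical raw `source` value:

```python
import re

# Example of the raw HTML anchor stored in the 'source' column (made up here)
raw_source = '<a href="http://twitter.com/download/iphone" rel="nofollow">Twitter for iPhone</a>'
clean = re.findall(r'>(.*)<', raw_source)[0]
print(clean)
```

The greedy `.*` spans from the first `>` to the last `<`, which is exactly the visible label of the anchor.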
#### Define
Fix rating numerator and denominators that are not actually ratings
#### Code
```
# View all occurences where there are more than one #/# in 'text' column
text_ratings_to_fix = twitter_archive_clean[twitter_archive_clean.text.str.contains( r"(\d+\.?\d*\/\d+\.?\d*\D+\d+\.?\d*\/\d+\.?\d*)")].text
text_ratings_to_fix
for entry in text_ratings_to_fix:
mask = twitter_archive_clean.text == entry
column_name1 = 'rating_numerator'
column_name2 = 'rating_denominator'
twitter_archive_clean.loc[mask, column_name1] = re.findall(r"\d+\.?\d*\/\d+\.?\d*\D+(\d+\.?\d*)\/\d+\.?\d*", entry)
twitter_archive_clean.loc[mask, column_name2] = 10
twitter_archive_clean[twitter_archive_clean.text.isin(text_ratings_to_fix)]
```
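To see what the capture group in the fix above picks out, here is the pattern applied to a made-up tweet text that contains a date-like fraction before the real rating:

```python
import re

# Hypothetical tweet text: "9/11" is not a rating, "14/10" is
text = "She was the last surviving 9/11 search dog, and our second ever 14/10."
pattern = r"\d+\.?\d*\/\d+\.?\d*\D+(\d+\.?\d*)\/\d+\.?\d*"
matches = re.findall(pattern, text)
print(matches)
```

The pattern skips the first `#/#` and captures only the numerator of the second one, which is why the loop can overwrite `rating_numerator` with it.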
#### Define
Fix rating numerator that have decimals.
#### Code
```
# View tweets with decimals in rating in 'text' column
twitter_archive_clean[twitter_archive_clean.text.str.contains(r"(\d+\.\d*\/\d+)")]
# Set correct numerators for specific tweets
twitter_archive_clean.loc[(twitter_archive_clean['tweet_id'] == 883482846933004288) & (twitter_archive_clean['rating_numerator'] == 5), ['rating_numerator']] = 13.5
twitter_archive_clean.loc[(twitter_archive_clean['tweet_id'] == 786709082849828864) & (twitter_archive_clean['rating_numerator'] == 75), ['rating_numerator']] = 9.75
twitter_archive_clean.loc[(twitter_archive_clean['tweet_id'] == 778027034220126208) & (twitter_archive_clean['rating_numerator'] == 27), ['rating_numerator']] = 11.27
twitter_archive_clean.loc[(twitter_archive_clean['tweet_id'] == 680494726643068929) & (twitter_archive_clean['rating_numerator'] == 26), ['rating_numerator']] = 11.26
```
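The decimal pattern used here only matches numerators that actually contain a period. A quick illustration on a made-up text:

```python
import re

# Hypothetical tweet with a decimal rating
text = "This is Bella. She wants to know... 13.5/10"
matches = re.findall(r"\d+\.\d*\/\d+", text)
print(matches)
```

Plain integer ratings like `12/10` are not matched, so the view above isolates exactly the tweets that need the manual corrections.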
#### Test
```
twitter_archive_clean[twitter_archive_clean.text.str.contains(r"(\d+\.\d*\/\d+)")]
```
#### Define
Get Dogs gender column from text column
#### Code
```
# Loop on all the texts and check if it has one of pronouns of male or female
# and append the result in a list
male = ['He', 'he', 'him', 'his', "he's", 'himself']
female = ['She', 'she', 'her', 'hers', 'herself', "she's"]
dog_gender = []
for text in twitter_archive_clean['text']:
# Male
if any(map(lambda v:v in male, text.split())):
dog_gender.append('male')
# Female
elif any(map(lambda v:v in female, text.split())):
dog_gender.append('female')
# If group or not specified
else:
dog_gender.append('NaN')
# Test
len(dog_gender)
# Save the result in a new column 'dog_gender'
twitter_archive_clean['dog_gender'] = dog_gender
```
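Note that the membership test runs on whitespace tokens, so a pronoun glued to punctuation (e.g. "him,") will not match — a limitation worth keeping in mind. A small self-contained sketch of the check, wrapped in a hypothetical `guess_gender` helper:

```python
male = ['He', 'he', 'him', 'his', "he's", 'himself']
female = ['She', 'she', 'her', 'hers', 'herself', "she's"]

def guess_gender(text):
    # Same logic as the loop above: first matching pronoun list wins
    tokens = text.split()
    if any(t in male for t in tokens):
        return 'male'
    if any(t in female for t in tokens):
        return 'female'
    return 'NaN'

print(guess_gender("He is a good boy. 12/10"))   # matches 'He'
print(guess_gender("Meet Daisy. 13/10 would pet"))  # no pronoun found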
#### Test
```
print("dog_gender count \n", twitter_archive_clean.dog_gender.value_counts())
```
#### Define
Convert the null values to None type
#### Code
```
twitter_archive_clean.loc[twitter_archive_clean['prediction_algorithm'] == 'NaN', 'prediction_algorithm'] = None
twitter_archive_clean.loc[twitter_archive_clean['dog_gender'] == 'NaN', 'dog_gender'] = None
twitter_archive_clean.loc[twitter_archive_clean['rating_numerator'] == 'NaN', 'rating_numerator'] = 0
#twitter_archive_clean.loc[twitter_archive_clean['rating_denominator'] == 'NaN', 'rating_denominator'] = 0
```
#### Test
```
twitter_archive_clean.info()
```
#### Define
Change datatypes .
#### Code
```
twitter_archive_clean['tweet_id'] = twitter_archive_clean['tweet_id'].astype(str)
twitter_archive_clean['timestamp'] = pd.to_datetime(twitter_archive_clean.timestamp)
twitter_archive_clean['source'] = twitter_archive_clean['source'].astype('category')
twitter_archive_clean['favorites'] = twitter_archive_clean['favorites'].astype(int)
twitter_archive_clean['retweets'] = twitter_archive_clean['retweets'].astype(int)
twitter_archive_clean['user_followers'] = twitter_archive_clean['user_followers'].astype(int)
twitter_archive_clean['dog_stage'] = twitter_archive_clean['dog_stage'].astype('category')
twitter_archive_clean['rating_numerator'] = twitter_archive_clean['rating_numerator'].astype(float)
twitter_archive_clean['rating_denominator'] = twitter_archive_clean['rating_denominator'].astype(float)
twitter_archive_clean['dog_gender'] = twitter_archive_clean['dog_gender'].astype('category')
```
#### Test
```
twitter_archive_clean.dtypes
```
#### Store
```
# Save clean DataFrame to csv file
twitter_archive_clean = twitter_archive_clean.drop(twitter_archive_clean.columns[twitter_archive_clean.columns.str.contains('Unnamed', case=False)], axis=1)
twitter_archive_clean.to_csv('./Data/twitter_archive_master.csv', encoding = 'utf-8', index=False)
twitter_archive_clean = pd.read_csv('./Data/twitter_archive_master.csv')
twitter_archive_clean.info()
```
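Since `drop` returns a new DataFrame, its result must be assigned back for the 'Unnamed' filter to take effect. A toy check of that filter:

```python
import pandas as pd

# Toy frame with a stray index column, as produced by a round-trip through to_csv/read_csv
df = pd.DataFrame({'Unnamed: 0': [0, 1], 'tweet_id': ['a', 'b']})
df = df.drop(df.columns[df.columns.str.contains('Unnamed', case=False)], axis=1)
print(list(df.columns))
```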
*** visualizations are in the act_report.ipynb notebook ***
```
import tensorflow as tf
import numpy as np
import keras
import pandas as pd
x_train = pd.read_csv('trainingfeatures.csv').drop(columns=['Unnamed: 0'])
y_train = pd.read_csv('traininglabels.csv').drop(columns=['Unnamed: 0'])
x_test = pd.read_csv('testingfeatures.csv').drop(columns=['Unnamed: 0'])
y_test = pd.read_csv('testinglabels.csv').drop(columns=['Unnamed: 0'])
temp_x_train=[]
for row in x_train.iterrows():
index, data = row
temp_x_train.append(data.tolist())
temp_y_train=[]
for row in y_train.iterrows():
index, data = row
temp_y_train.append(data.tolist())
temp_x_test=[]
for row in x_test.iterrows():
index, data = row
temp_x_test.append(data.tolist())
temp_y_test=[]
for row in y_test.iterrows():
index, data = row
temp_y_test.append(data.tolist())
x= np.array(temp_x_train)
y=np.array(keras.utils.to_categorical(y_train))
num_input = 3  # number of input features
num_classes = 4  # number of output classes (digits 0-3)
```
## Part D Change Epochs
Activation = tanh + relu + softmax
Loss = cross_entropy
Epochs = **10000->5000**
```
EPOCHS = 5000
BATCH_SIZE = 1000
display_step = 500
with tf.name_scope('Inputs_D'):
X = tf.placeholder("float", [None, num_input],name='Features_D')
Y = tf.placeholder("float", [None, num_classes],name='Label_D')
# using two numpy arrays
features, labels = (X, Y)
# make a simple model
def Neuron(x):
with tf.name_scope('layer1_D'):
net = tf.layers.dense(x, 100, activation=tf.nn.relu)
with tf.name_scope('layer2_D'):
net = tf.layers.dense(net, 50, activation=tf.tanh)
with tf.name_scope('layer3_D'):
net = tf.layers.dense(net, 20, activation=tf.nn.softmax)
with tf.name_scope('out_layer_D'):
prediction = tf.layers.dense(net, 4)
return prediction
prediction = Neuron(X)
#loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=Y))
loss = tf.losses.mean_squared_error(labels=Y, predictions=prediction)
tf.summary.scalar('loss_D',loss)
#tf.losses.mean_squared_error(prediction, y) # pass the second value
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
tf.summary.scalar('accuracy_D', accuracy)
#from iter.get_net() as label
train_op = tf.train.AdamOptimizer().minimize(loss)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
sess.run(tf.local_variables_initializer())
merge_summary= tf.summary.merge_all()
writer = tf.summary.FileWriter('C:/Users/BoyangWei.LAPTOP-SRSNTDRH/7390/TensorFlow/files/D')
writer.add_graph(sess.graph)
for i in range(EPOCHS):
_, loss_value,acc_value = sess.run([train_op, loss,accuracy],feed_dict={X: x, Y: y})
if i% display_step == 0:
print("Iter: {}, Loss: {:.4f}".format(i+1, loss_value))
print("Accuracy: " + str(acc_value))
summary=sess.run(merge_summary,feed_dict={X: x,Y: y})
writer.add_summary(summary,i)
correct_pred = tf.equal(tf.argmax(prediction, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
print("Test accuracy: "+ str(accuracy.eval({X: np.array(temp_x_test), Y: np.array(keras.utils.to_categorical(y_test))})))
```
* Change the number of epochs initialization. How does it affect the accuracy?
**Answer:** The accuracy decreases.
* How does it affect how quickly the network plateaus?
**Answer:** The network takes longer to plateau.
# Environment and tooling setup
This short document will teach you the basics of the working environments required for this course.
# Installing Python
What is Python?
https://docs.python.org/3/tutorial/index.html
Python is a powerful, easy-to-learn programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented programming. Python's elegant syntax and dynamic typing, together with its interpreted nature, make it an ideal language for scripting and rapid application development in many areas on most platforms.
The Python interpreter and the extensive standard library are freely available in source or binary form for all major platforms from the Python website, https://www.python.org/, and may be freely distributed. The same site also hosts distributions of, and pointers to, many free third-party Python modules, programs, tools, and additional documentation.
The Python interpreter is easily extended with new functions and data types implemented in C or C++ (or other languages callable from C). Python is also well suited as an extension language for customizable applications.
What can you do with Python? Almost anything, and most of it fairly easily. Data analysis, machine learning, web development, desktop applications, robotics, and more are all things you can start doing with Python right away without much effort. Personally, I have used Python to train AIs, to help companies detect diseases, to help detect fraud and abuse against servers, to create games, to trade stocks, and I have built and helped build multiple businesses with Python.
Python — and programming more generally — is the tool of our century that lets you work with computers on any application and at any level! \
https://pythonprogramming.net/introduction-learn-python-3-tutorials/
If you want to install Python on your machine, you can download it from here: https://www.python.org/downloads/
The Python version used in this course is Python 3.6, because it is the most compatible with all the data science libraries used in this course.
Some machines (mac, linux) ship with Python 2.7 installed by default; please do not use the 2.x versions due to incompatibility issues.
Install the base version from the website if you want to start playing with Python!
## Anaconda
- What Anaconda is
- Installation
- GUI vs command line
Anaconda is an open-source distribution of the Python and R programming languages used in data science, machine learning, and deep learning applications, aimed at simplifying package management and deployment.
The Anaconda distribution is used by over 7 million users and includes more than 300 data science packages suitable for Windows, Linux, and MacOS.
It contains all the packages you need to start developing with Python, and it is the distribution we recommend because it is very easy to learn and use.
If you want to install Anaconda, download the 3.X version from here: https://www.anaconda.com/distribution/
Anaconda offers two kinds of interaction:
- A graphical approach
- A terminal-based approach
The graphical approach is Anaconda Navigator, a GUI that helps you use the tools
<img src="resources/anaconda_navigator.png">
In the image above you can see a few different parts:
- The blue part: where you can manage different python-conda environments (more on this in the next chapter)
- The red part: where you can edit the installed environments and their apps
- The yellow part: the apps installed in a specific environment that you can use
If you open the blue part (environments) you can find all the environments and all the packages, and you can create new environments, packages, and libraries, or uninstall and manage the existing ones
<img src="resources/anaconda_environments.png">
The best way to use Anaconda is through the terminal: after installation, open CMD (or your terminal app) and interact with Anaconda using the command: conda
<img src="resources/conda-terminal.png">
Here are some useful commands:
- Show information about your Conda installation
- List your environments
- List the packages in your environments
- Update Anaconda
### Virtual Environments
- What a virtual environment is
- Create a new virtual environment
- Install python packages and libraries (conda vs pip)
- Switch between and use different environments
The main purpose of Python virtual environments (also called venvs) is to create an isolated environment for Python projects.
This means each project can have its own dependencies, independent of every other project's dependencies.
In our small example above, we would just need to create a separate virtual environment for both ProjectA and ProjectB, and we would be good to go.
Each environment, in turn, would be able to depend on whatever version of ProjectC it chooses, independently of the other.
The great thing about this is that there is no limit to the number of environments you can have, since they are just directories containing a few scripts.
Also, they can easily be created using the virtualenv or pyenv command-line tools.
You can create a virtual environment with plain Python, but we use environments with Anaconda.
For standard Python information about virtual environments, see this link below: \
https://realpython.com/python-virtual-environments-a-primer/
Here are some useful commands to create, check, validate, and update a Conda venv with Anaconda
WARNING: if you are on Windows, use CMD (as administrator if possible) and try to avoid Powershell until you are confident with this technology
__Show Conda information about the installation__
```Bash
conda info
```
__Check that Anaconda is up to date__
```bash
conda update conda
```
__Create a new virtual environment (venv) with a specific Python version__ \
Remember to replace x.x with your Python version (we mostly use version 3.6) and "yourenvname" with the name of your environment
```Bash
conda create -n yourenvname python=x.x anaconda
```
If you want to create an empty environment without the default conda libraries, you can do:
```Bash
conda create -n yourenvname python=x.x
```
without the anaconda label
__Activate the Anaconda environment__
```bash
conda activate yourenvname
```
__To install a new package in your new environment you can ...__
```Bash
conda install -n yourenvname [package]
```
but if you are already inside your conda environment you can simply do:
```Bash
conda install [package]
```
* always without the brackets []
__To exit your virtual environment__
```Bash
conda deactivate
```
__If you want to delete the anaconda virtual environment__
```Bash
conda remove -n yourenvname --all
```
__If you want to see your installed anaconda virtual environments__
```Bash
conda env list
```
__If you want to remove your conda environment__
```Bash
conda remove --name yourenvname --all
```
There are 2 kinds of scenarios you can follow to install new Python packages or libraries in Conda:
- Using pip
- Using conda
Both are package managers: the first is the default Python manager, the second is Anaconda's default manager.
The libraries available from the two managers can differ, so we suggest using both managers, but giving priority to Conda.
WARNING: if you use pip, the environment must be activated and you must be inside it.
If you want more information, see this article (especially if you want to use a custom requirements.yml file for your Python libraries)
https://towardsdatascience.com/getting-started-with-python-environments-using-conda-32e9f2779307
## Jupyter
Jupyter Notebook is an open-source web application that lets you create and share documents containing live code, equations, visualizations, and narrative text.
Uses include: data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more.
It is the default tool for this lab and one of the most common data science tools used worldwide.
Jupyter is installed by default inside the base conda environment, but if you want to use it inside a new conda virtual environment, you need to install it there.
To install jupyter inside a conda environment you have to:
1. activate the virtual environment you created (venv)
2. run `conda install jupyter`
3. launch jupyter by typing `jupyter notebook`
Every time you want to launch Jupyter Notebook with your custom conda virtual environment you have to:
1. activate your conda env
2. run `jupyter notebook` inside the terminal
then a new browser window will appear and you can use Jupyter from there with your venv.
If you want to shut down Jupyter
1. save your work
2. close the browser tabs
3. press CTRL + C inside the console window to terminate all the kernels and the jupyter server
#### Set Jupyter's default project folder
You can set Jupyter's default home folder with this simple guide
Use the jupyter notebook config file:
After installing Anaconda ..
1. Open cmd (or Anaconda Prompt) and run jupyter notebook --generate-config.
2. This writes a file to C:\Users\username\.jupyter\jupyter_notebook_config.py.
3. Browse to the file location and open it in an editor
4. Search for the following line in the file: # c.NotebookApp.notebook_dir = ''
5. Replace it with c.NotebookApp.notebook_dir = '/the/path/to/home/folder/'
6. Make sure to use forward slashes in your path and use /home/user/ instead of ~/ for your home directory; backslashes may be used if wrapped in double quotes, even if the folder name contains spaces, like so: "D:\yourUserName\Any Folder\More Folders\"
7. Remove the # at the beginning of the line to allow the line to run
If you want to extend and upgrade Jupyter with new features, you can follow this guide:
https://ndres.me/post/best-jupyter-notebook-extensions/
### Jupyter Lab
JupyterLab is a web-based interactive development environment for Jupyter notebooks, code, and data.
JupyterLab is flexible: configure and arrange the user interface to support a wide range of workflows in data science, scientific computing, and machine learning.
JupyterLab is extensible and modular: write plugins that add new components and integrate with existing ones.
Compared to Jupyter Notebook, Jupyter Lab is a single web page with many more features and an extended interface; it is almost a full-fledged IDE.
To install jupyter lab in conda you have to:
1. activate your conda venv
2. run `conda install jupyterlab`
3. launch it by typing `jupyter lab`
Every time you want to launch Jupyter Lab with your custom conda virtual environment you have to:
1. activate your conda env
2. run `jupyter lab` inside the terminal
#### Known issues
When using jupyter in a hand-made environment, it may not find the installed packages; this happens because you are using the jupyter installed in the default environment (or another one), rather than in the environment where the library was installed.
When this happens, remember to install jupyter inside your new working environment.
## Visual Studio Code
https://code.visualstudio.com/
Visual Studio Code is a source-code editor developed by Microsoft for Windows, Linux, and macOS. It includes support for debugging, embedded Git and GitHub control, syntax highlighting, intelligent code completion, snippets, and code refactoring.
It is a useful IDE for developing powerful, complex applications with Python, and it is recommended when you want to design, architect, and build large applications or production code.
Visual Studio Code is compatible with Python, and you can follow this guide to use it:
https://code.visualstudio.com/docs/python/python-tutorial
With Visual Studio Code you can also use code cells like in a Jupyter notebook.
They are not the same, but the usage is quite similar thanks to IPython, the base package Jupyter was built on.
To use notebooks, follow this guide:
https://code.visualstudio.com/docs/python/jupyter-support
Here are some useful extensions you can install on Visual Studio Code:
- Anaconda extension pack
- Code Runner
- Git History
- Git Lens
- Live share
- Powershell
- Python
- Project manager
- Shell launcher
- vscode-icons
# Git
- What is Git?
- Why Git?
- How to use it
- Suggested course for GIT
- Using Github
Git is a distributed version control system
Created by Linus Torvalds in 2005 to manage the Linux codebase
It can be used from the command line
Also available on Windows
It may have one "central" repository that is more important than the others
It is the fundamental tool for cooperating in a group, sharing code and "programming stuff" with each other.
The purpose of Git is to manage a project, or a set of files, as they change over time.
Git stores this information in a data structure called a repository. A git repository contains, among other things, the following: a set of commit objects.
There are also companies that extend and build on git for many purposes; two examples are Github and GitLab.
GitHub is a Git repository hosting service, but it adds many features of its own. While Git is a command-line tool, GitHub provides a web-based graphical interface.
It also provides access control and several collaboration features, such as wikis and basic task management tools for every project.
Version control repository management services like Github and GitLab are a key component in the software development workflow. In recent years, GitHub and GitLab have positioned themselves as helpful assistants for developers, particularly when working in large teams.
Both GitLab and GitHub are web-based Git repositories.
The purpose of Git is to manage software development projects and their files as they change over time. Git stores this information in a data structure called a repository.
Such a git repository contains a set of commit objects and a set of references to commit objects.
A git repository is a central place where developers store, share, test, and collaborate on web projects.
There are some differences between Gitlab and Github, but the key points are the same.
#### Install Git
Download git from here: https://git-scm.com/downloads using the POSIX emulation on Windows.
Or, for the geeks, you can follow this guide for Windows with the Linux subsystem:
https://docs.microsoft.com/en-us/windows/wsl/about
#### A simple guide to Git
http://rogerdudler.github.io/git-guide/
#### Interactive Git tutorial
https://learngitbranching.js.org/
#### Using Github
Using GitHub is strongly recommended to become familiar with these tools for this course.
We suggest you create an account on GitHub and use it for the projects and code you will create in this lab and track.
Use this tutorial to understand how to use GitHub
https://product.hubspot.com/blog/git-and-github-tutorial-for-beginners
# Theano, Lasagne
and why they matter
### got no lasagne?
Install the __bleeding edge__ version from here: http://lasagne.readthedocs.org/en/latest/user/installation.html
# Warming up
* Implement a function that computes the sum of squares of numbers from 0 to N
* Use numpy or python
* An array of numbers 0 to N - numpy.arange(N)
```
import numpy as np
def sum_squares(N):
return < student.Implement_me() >
%%time
sum_squares(10**8)
```
# theano teaser
Doing the very same thing
```
import theano
import theano.tensor as T
# N is going to be a function parameter
N = T.scalar("a dimension", dtype='int32')
# i am a recipe on how to produce sum of squares of arange of N given N
result = (T.arange(N)**2).sum()
# Compiling the recipe of computing "result" given N
sum_function = theano.function(inputs=[N], outputs=result)
%%time
sum_function(10**8)
```
# How does it work?
* 1 You define the inputs of your future function;
* 2 You write a recipe for some transformation of inputs;
* 3 You compile it;
* You have just got a function!
* The gobbledygook version: you define a function as a symbolic computation graph.
* There are two main kinds of entities: "Inputs" and "Transformations"
* Both can be numbers, vectors, matrices, tensors, etc.
* Both can be integers, floats, or booleans (uint8) of various sizes.
* An input is a placeholder for function parameters.
* N from example above
* Transformations are the recipes for computing something given inputs and transformation
* (T.arange(N)^2).sum() are 3 sequential transformations of N
* Mirrors numpy's vector syntax for most functions
* You can almost always go with replacing "np.function" with "T.function" aka "theano.tensor.function"
* np.mean -> T.mean
* np.arange -> T.arange
* np.cumsum -> T.cumsum
* and so on.
* builtin operations also work that way
* np.arange(10).mean() -> T.arange(10).mean()
* Once upon a blue moon the functions have different names or locations (e.g. T.extra_ops)
* Ask us or google it
Still confused? We're gonna fix that.
```
# Inputs
example_input_integer = T.scalar("scalar input", dtype='float32')
# dtype = theano.config.floatX by default
example_input_tensor = T.tensor4("four dimensional tensor input")
# don't worry, we won't need the tensor
input_vector = T.vector("my vector", dtype='int32') # vector of integers
# Transformations
# transformation: elementwise multiplication
double_the_vector = input_vector*2
# elementwise cosine
elementwise_cosine = T.cos(input_vector)
# difference between squared vector and vector itself
vector_squares = input_vector**2 - input_vector
# Practice time:
# create two vectors of size float32
my_vector = student.init_float32_vector()
my_vector2 = student.init_one_more_such_vector()
# Write a transformation(recipe):
#(vec1)*(vec2) / (sin(vec1) +1)
my_transformation = student.implementwhatwaswrittenabove()
print(my_transformation)
# it's okay it aint a number
# What's inside the transformation
theano.printing.debugprint(my_transformation)
```
# Compiling
* So far we were using "symbolic" variables and transformations
* Defining the recipe for computation, but not computing anything
* To use the recipe, one should compile it
```
inputs = [ < two vectors that my_transformation depends on > ]
outputs = [ < What do we compute (can be a list of several transformation) > ]
# The next lines compile a function that takes two vectors and computes your transformation
my_function = theano.function(
inputs, outputs,
# automatic type casting for input parameters (e.g. float64 -> float32)
allow_input_downcast=True
)
# using the function with python lists:
print("using python lists:")
print(my_function([1, 2, 3], [4, 5, 6]))
print()
# Or using numpy arrays:
# btw, that 'float' dtype is cast to the second parameter's dtype, which is float32
print("using numpy arrays:")
print(my_function(np.arange(10),
                  np.linspace(5, 6, 10, dtype='float')))
```
# Debugging
* Compilation can take a while for big functions
* To avoid waiting, one can evaluate transformations without compiling
* Without compilation, the code runs slower, so consider reducing input size
```
# a dictionary of inputs
my_function_inputs = {
my_vector: [1, 2, 3],
my_vector2: [4, 5, 6]
}
# evaluate my_transformation
# has to match with compiled function output
print(my_transformation.eval(my_function_inputs))
# can compute transformations on the fly
print("add 2 vectors", (my_vector + my_vector2).eval(my_function_inputs))
#!WARNING! if your transformation only depends on some inputs,
# do not provide the rest of them
print("vector's shape:", my_vector.shape.eval({
my_vector: [1, 2, 3]
}))
```
* When debugging, it's usually a good idea to reduce the scale of your computation. E.g. if you train on batches of 128 objects, debug on 2-3.
* If it's imperative that you run a large batch of data, consider compiling with mode='DebugMode' instead
# Your turn: Mean Squared Error (2 pts)
```
# Quest #1 - implement a function that computes a mean squared error of two input vectors
# Your function has to take 2 vectors and return a single number
<student.define_inputs_and_transformations() >
compute_mse = <student.compile_function() >
# Tests
from sklearn.metrics import mean_squared_error
for n in [1, 5, 10, 10**3]:
elems = [np.arange(n), np.arange(n, 0, -1), np.zeros(n),
np.ones(n), np.random.random(n), np.random.randint(100, size=n)]
for el in elems:
for el_2 in elems:
true_mse = np.array(mean_squared_error(el, el_2))
my_mse = compute_mse(el, el_2)
if not np.allclose(true_mse, my_mse):
print('Wrong result:')
print('mse(%s,%s)' % (el, el_2))
print("should be: %f, but your function returned %f" %
(true_mse, my_mse))
                raise ValueError("Something is wrong")
print("All tests passed")
```
# Shared variables
* The inputs and transformations only exist when function is called
* Shared variables always stay in memory like global variables
* Shared variables can be included into a symbolic graph
* They can be set and evaluated using special methods
* but they can't change value arbitrarily during symbolic graph computation
* we'll cover that later;
* Hint: such variables are a perfect place to store network parameters
* e.g. weights or some metadata
```
# creating shared variable
shared_vector_1 = theano.shared(np.ones(10, dtype='float64'))
# evaluating a shared variable (outside the symbolic graph)
print("initial value", shared_vector_1.get_value())
# within a symbolic graph you use them just like any other input or transformation; no get_value needed
# setting new value
shared_vector_1.set_value(np.arange(5))
# getting that new value
print("new value", shared_vector_1.get_value())
# Note that the vector changed shape
# This is entirely allowed... unless your graph is hard-wired to work with some fixed shape
```
# Your turn
```
# Write a recipe (transformation) that computes an elementwise transformation of shared_vector and input_scalar
# Compile as a function of input_scalar
input_scalar = T.scalar('coefficient', dtype='float32')
scalar_times_shared = <student.write_recipe() >
shared_times_n = <student.compile_function() >
print("shared:", shared_vector_1.get_value())
print("shared_times_n(5)", shared_times_n(5))
print("shared_times_n(-0.5)", shared_times_n(-0.5))
# Changing value of vector 1 (output should change)
shared_vector_1.set_value([-1, 0, 1])
print("shared:", shared_vector_1.get_value())
print("shared_times_n(5)", shared_times_n(5))
print("shared_times_n(-0.5)", shared_times_n(-0.5))
```
# T.grad - why theano matters
* Theano can compute derivatives and gradients automatically
* Derivatives are computed symbolically, not numerically
Limitations:
* You can only compute a gradient of a __scalar__ transformation over one or several scalar or vector (or tensor) transformations or inputs.
* A transformation has to have float32 or float64 dtype throughout the whole computation graph
* a derivative with respect to an integer makes no mathematical sense
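As a framework-free sanity check (plain NumPy, not Theano), a symbolic derivative can always be compared against a central finite difference; the test function below mirrors the `x**2` example that follows:

```python
import numpy as np

def numeric_grad(f, x, eps=1e-6):
    """Central finite-difference approximation of df/dx at scalar x."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

f = lambda x: x ** 2          # same function as the Theano demo below
analytic = lambda x: 2 * x    # its exact derivative

for x in np.linspace(-3, 3, 7):
    assert abs(numeric_grad(f, x) - analytic(x)) < 1e-4
```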
```
my_scalar = T.scalar(name='input', dtype='float64')
scalar_squared = T.sum(my_scalar**2)
# the derivative of scalar_squared with respect to my_scalar
derivative = T.grad(scalar_squared, my_scalar)
fun = theano.function([my_scalar], scalar_squared)
grad = theano.function([my_scalar], derivative)
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(-3, 3)
x_squared = list(map(fun, x))
x_squared_der = list(map(grad, x))
plt.plot(x, x_squared, label="x^2")
plt.plot(x, x_squared_der, label="derivative")
plt.legend()
```
# Why that rocks
```
my_vector = T.vector('my_vector', dtype='float64')  # first argument is the name; dtype must be passed explicitly
# Compute the gradient of the next weird function over my_scalar and my_vector
# warning! Trying to understand the meaning of that function may result in permanent brain damage
weird_psychotic_function = ((my_vector+my_scalar)**(1+T.var(my_vector)) + 1./T.arcsinh(my_scalar)).mean()/(my_scalar**2 + 1) + 0.01*T.sin(2*my_scalar**1.5)*(
T.sum(my_vector) * my_scalar**2)*T.exp((my_scalar-4)**2)/(1+T.exp((my_scalar-4)**2))*(1.-(T.exp(-(my_scalar-4)**2))/(1+T.exp(-(my_scalar-4)**2)))**2
der_by_scalar, der_by_vector = <student.compute_grad_over_scalar_and_vector() >
compute_weird_function = theano.function(
[my_scalar, my_vector], weird_psychotic_function)
compute_der_by_scalar = theano.function([my_scalar, my_vector], der_by_scalar)
# Plotting your derivative
vector_0 = [1, 2, 3]
scalar_space = np.linspace(0, 7)
y = [compute_weird_function(x, vector_0) for x in scalar_space]
plt.plot(scalar_space, y, label='function')
y_der_by_scalar = [compute_der_by_scalar(x, vector_0) for x in scalar_space]
plt.plot(scalar_space, y_der_by_scalar, label='derivative')
plt.grid()
plt.legend()
```
# Almost done - Updates
* updates are a way of changing shared variables after each function call.
* technically, it's a dictionary {shared_variable: a recipe for its new value} that has to be provided when the function is compiled
That's how it works:
```
# Multiply shared vector by a number and save the product back into shared vector
inputs = [input_scalar]
outputs = [scalar_times_shared] # return vector times scalar
my_updates = {
# and write this same result back into shared_vector_1
shared_vector_1: scalar_times_shared
}
compute_and_save = theano.function(inputs, outputs, updates=my_updates)
shared_vector_1.set_value(np.arange(5))
# initial shared_vector_1
print("initial shared value:", shared_vector_1.get_value())
# evaluating the function (shared_vector_1 will be changed)
print("compute_and_save(2) returns", compute_and_save(2))
# evaluate new shared_vector_1
print("new shared value:", shared_vector_1.get_value())
```
# Logistic regression example (4 pts)
Implement the regular logistic regression training algorithm
Tips:
* Weights fit in as a shared variable
* X and y are potential inputs
* Compile 2 functions:
* train_function(X,y) - returns error and computes weights' new values __(through updates)__
* predict_fun(X) - just computes probabilities ("y") given data
We shall train on a two-class MNIST dataset
* please note that target y are {0,1} and not {-1,1} as in some formulae
```
from sklearn.datasets import load_digits
mnist = load_digits(n_class=2)
X, y = mnist.data, mnist.target
print("y [shape - %s]:" % (str(y.shape)), y[:10])
print("X [shape - %s]:" % (str(X.shape)))
print(X[:3])
print(y[:10])
# inputs and shareds
shared_weights = <student.code_me() >
input_X = <student.code_me() >
input_y = <student.code_me() >
predicted_y = <predicted probabilities for input_X >
loss = <logistic loss(scalar, mean over sample) >
grad = <gradient of loss over model weights >
updates = {
shared_weights: < new weights after gradient step >
}
train_function = <compile function that takes X and y, returns log loss and updates weights >
predict_function = <compile function that takes X and computes probabilities of y >
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
from sklearn.metrics import roc_auc_score
for i in range(5):
loss_i = train_function(X_train, y_train)
print("loss at iter %i:%.4f" % (i, loss_i))
print("train auc:", roc_auc_score(y_train, predict_function(X_train)))
print("test auc:", roc_auc_score(y_test, predict_function(X_test)))
print("resulting weights:")
plt.imshow(shared_weights.get_value().reshape(8, -1))
plt.colorbar()
```
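Framework aside, the training loop above boils down to repeated gradient steps on the mean logistic loss. A plain-NumPy sketch of one such step (the toy data and learning rate are illustrative assumptions, not the MNIST setup):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(w, X, y, lr=0.1):
    """One gradient-descent step on the mean logistic loss; returns (loss, new_w)."""
    p = sigmoid(X @ w)
    eps = 1e-12                                   # guard against log(0)
    loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    grad = X.T @ (p - y) / len(y)                 # gradient of the mean log loss
    return loss, w - lr * grad

# Toy linearly separable data with targets in {0, 1}, as in the assignment
X = np.array([[1.0, 0.0], [2.0, 0.0], [-1.0, 0.0], [-2.0, 0.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])
w = np.zeros(2)
losses = []
for _ in range(200):
    loss, w = train_step(w, X, y)
    losses.append(loss)
assert losses[-1] < losses[0]                      # training reduces the loss
assert ((sigmoid(X @ w) > 0.5) == (y == 1)).all()  # and separates the toy data
```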
# lasagne
* lasagne is a library for neural network building and training
* it's a low-level library with almost seamless integration with theano
For a demo we shall solve the same digit recognition problem, but at a different scale
* images are now 28x28
* 10 different digits
* 50k samples
```
from mnist import load_dataset
X_train, y_train, X_val, y_val, X_test, y_test = load_dataset()
print(X_train.shape, y_train.shape)
import lasagne
input_X = T.tensor4("X")
# input dimensions (None means "arbitrary" and only works for the first axis [samples])
input_shape = [None, 1, 28, 28]
target_y = T.vector("target Y integer", dtype='int32')
```
Defining network architecture
```
# Input layer (auxiliary)
input_layer = lasagne.layers.InputLayer(shape=input_shape, input_var=input_X)
# fully connected layer, that takes input layer and applies 50 neurons to it.
# nonlinearity here is sigmoid as in logistic regression
# you can give a name to each layer (optional)
dense_1 = lasagne.layers.DenseLayer(input_layer, num_units=50,
nonlinearity=lasagne.nonlinearities.sigmoid,
name="hidden_dense_layer")
# fully connected output layer that takes dense_1 as input and has 10 neurons (1 for each digit)
# We use softmax nonlinearity to make probabilities add up to 1
dense_output = lasagne.layers.DenseLayer(dense_1, num_units=10,
nonlinearity=lasagne.nonlinearities.softmax,
name='output')
# network prediction (theano-transformation)
y_predicted = lasagne.layers.get_output(dense_output)
# all network weights (shared variables)
all_weights = lasagne.layers.get_all_params(dense_output)
print(all_weights)
```
### Then you could simply
* define loss function manually
* compute error gradient over all weights
* define updates
* But that's a whole lot of work and life's short
* not to mention life's too short to wait for SGD to converge
Instead, we shall use Lasagne builtins
```
# Mean categorical crossentropy as a loss function - similar to logistic loss but for multiclass targets
loss = lasagne.objectives.categorical_crossentropy(
y_predicted, target_y).mean()
# prediction accuracy
accuracy = lasagne.objectives.categorical_accuracy(
y_predicted, target_y).mean()
# This function computes gradient AND composes weight updates just like you did earlier
updates_sgd = lasagne.updates.sgd(loss, all_weights, learning_rate=0.01)
# function that computes loss and updates weights
train_fun = theano.function(
[input_X, target_y], [loss, accuracy], updates=updates_sgd)
# function that just computes accuracy
accuracy_fun = theano.function([input_X, target_y], accuracy)
```
### That's all, now let's train it!
* We have a lot of data, so it's recommended that you use SGD
* So let's implement a function that splits the training sample into minibatches
```
# An auxiliary function that returns mini-batches for neural network training
# Parameters
# X - a tensor of images with shape (many, 1, 28, 28), e.g. X_train
# y - a vector of answers for the corresponding images, e.g. y_train
# batch_size - a single number - the intended size of each batch
# What you need to implement
# 1) Shuffle data
#    - Shuffle X and y the same way so as not to break the correspondence between X_i and y_i
# 2) Split data into minibatches of batch_size
#    - If the data size is not a multiple of batch_size, make the last batch smaller.
# 3) Return a list (or an iterator) of pairs
#    - (a batch of images, the corresponding answers from y)
def iterate_minibatches(X, y, batchsize):
<return an iterable of(X_batch, y_batch) batches of images and answers for them >
#
#
#
# You feel lost and wish you stayed home tonight?
# Go search for a similar function at
# https://github.com/Lasagne/Lasagne/blob/master/examples/mnist.py
```
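A minimal NumPy reference implementation of such a minibatch iterator (shuffling one index permutation so the X/y correspondence survives, with a smaller final batch):

```python
import numpy as np

def iterate_minibatches(X, y, batchsize, shuffle=True):
    """Yield (X_batch, y_batch) pairs; the last batch may be smaller."""
    idx = np.arange(len(X))
    if shuffle:
        np.random.shuffle(idx)                  # one permutation for both X and y
    for start in range(0, len(X), batchsize):
        batch = idx[start:start + batchsize]
        yield X[batch], y[batch]

X = np.arange(20).reshape(10, 2)                # X[i] == [2i, 2i+1]
y = np.arange(10)
batches = list(iterate_minibatches(X, y, 4))
assert [len(yb) for _, yb in batches] == [4, 4, 2]
assert all((xb[:, 0] == 2 * yb).all() for xb, yb in batches)  # correspondence kept
```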
# Training loop
```
import time
num_epochs = 100  # number of full passes through the data
batch_size = 50  # number of samples processed at each function call
for epoch in range(num_epochs):
# In each epoch, we do a full pass over the training data:
train_err = 0
train_acc = 0
train_batches = 0
start_time = time.time()
for batch in iterate_minibatches(X_train, y_train, batch_size):
inputs, targets = batch
train_err_batch, train_acc_batch = train_fun(inputs, targets)
train_err += train_err_batch
train_acc += train_acc_batch
train_batches += 1
# And a full pass over the validation data:
val_acc = 0
val_batches = 0
for batch in iterate_minibatches(X_val, y_val, batch_size):
inputs, targets = batch
val_acc += accuracy_fun(inputs, targets)
val_batches += 1
# Then we print the results for this epoch:
print("Epoch {} of {} took {:.3f}s".format(
epoch + 1, num_epochs, time.time() - start_time))
print(
" training loss (in-iteration):\t\t{:.6f}".format(train_err / train_batches))
print(" train accuracy:\t\t{:.2f} %".format(
train_acc / train_batches * 100))
print(" validation accuracy:\t\t{:.2f} %".format(
val_acc / val_batches * 100))
test_acc = 0
test_batches = 0
for batch in iterate_minibatches(X_test, y_test, 500):
inputs, targets = batch
acc = accuracy_fun(inputs, targets)
test_acc += acc
test_batches += 1
print("Final results:")
print(" test accuracy:\t\t{:.2f} %".format(
test_acc / test_batches * 100))
if test_acc / test_batches * 100 > 99:
print("Achievement unlocked: 80lvl Warlock!")
else:
print("We need more magic!")
```
# A better network ( 4+ pts )
* The quest is to create a network that gets at least 99% at test set
* In case you tried several architectures and have a __detailed__ report - 97.5% "is fine too".
* __+1 bonus point__ each 0.1% past 99%
* More points for creative approach
__There is a mini-report at the end that you will have to fill in. We recommend reading it first and filling it in while you iterate.__
## Tips on what can be done:
* Network size
* MOAR neurons,
* MOAR layers,
* Convolutions are almost imperative
* Ph'nglui mglw'nafh Cthulhu R'lyeh wgah'nagl fhtagn!
* Regularize to prevent overfitting
* Add some L2 weight norm to the loss function, theano will do the rest
* Can be done manually or via - http://lasagne.readthedocs.org/en/latest/modules/regularization.html
* Better optimization - rmsprop, nesterov_momentum, adadelta, adagrad and so on.
* Converge faster and sometimes reach better optima
* It might make sense to tweak learning rate, other learning parameters, batch size and number of epochs
* Dropout - to prevent overfitting
* `lasagne.layers.DropoutLayer(prev_layer, p=probability_to_zero_out)`
* Convolution layers
* `network = lasagne.layers.Conv2DLayer(prev_layer,`
` num_filters = n_neurons,`
` filter_size = (filter width, filter height),`
` nonlinearity = some_nonlinearity)`
* Warning! Training convolutional networks can take long without GPU.
* If you are CPU-only, we still recommend trying a simple convolutional architecture
* a perfect option is to set it up to run overnight and check on it in the morning.
* Plenty other layers and architectures
* http://lasagne.readthedocs.org/en/latest/modules/layers.html
* batch normalization, pooling, etc
* Nonlinearities in the hidden layers
* tanh, relu, leaky relu, etc
There is a template for your solution below that you can use, or throw away and write it your own way
```
from mnist import load_dataset
X_train, y_train, X_val, y_val, X_test, y_test = load_dataset()
print(X_train.shape, y_train.shape)
import lasagne
input_X = T.tensor4("X")
# input dimensions (None means "arbitrary" and only works for the first axis [samples])
input_shape = [None, 1, 28, 28]
target_y = T.vector("target Y integer", dtype='int32')
# Input layer (auxiliary)
input_layer = lasagne.layers.InputLayer(shape=input_shape, input_var=input_X)
<student.code_neural_network_architecture() >
dense_output = <your network output >
# Network predictions (theano-transformation)
y_predicted = lasagne.layers.get_output(dense_output)
# All weights (shared variables)
# the "trainable" flag means: do not return auxiliary params like the batch mean (for batch normalization)
all_weights = lasagne.layers.get_all_params(dense_output, trainable=True)
print(all_weights)
# loss function
loss = <loss function >
# <optionally add regularization>
accuracy = <mean accuracy score for evaluation >
# weight updates
updates = <try different update methods >
# A function that accepts X and y, returns loss functions and performs weight updates
train_fun = theano.function(
    [input_X, target_y], [loss, accuracy], updates=updates)
# A function that just computes accuracy given X and y
accuracy_fun = theano.function([input_X, target_y], accuracy)
# training iterations
num_epochs = <how many times to iterate over the entire training set >
batch_size = <how many samples are processed at a single function call >
for epoch in range(num_epochs):
# In each epoch, we do a full pass over the training data:
train_err = 0
train_acc = 0
train_batches = 0
start_time = time.time()
for batch in iterate_minibatches(X_train, y_train, batch_size):
inputs, targets = batch
train_err_batch, train_acc_batch = train_fun(inputs, targets)
train_err += train_err_batch
train_acc += train_acc_batch
train_batches += 1
# And a full pass over the validation data:
val_acc = 0
val_batches = 0
for batch in iterate_minibatches(X_val, y_val, batch_size):
inputs, targets = batch
val_acc += accuracy_fun(inputs, targets)
val_batches += 1
# Then we print the results for this epoch:
print("Epoch {} of {} took {:.3f}s".format(
epoch + 1, num_epochs, time.time() - start_time))
print(
" training loss (in-iteration):\t\t{:.6f}".format(train_err / train_batches))
print(" train accuracy:\t\t{:.2f} %".format(
train_acc / train_batches * 100))
print(" validation accuracy:\t\t{:.2f} %".format(
val_acc / val_batches * 100))
test_acc = 0
test_batches = 0
for batch in iterate_minibatches(X_test, y_test, 500):
inputs, targets = batch
acc = accuracy_fun(inputs, targets)
test_acc += acc
test_batches += 1
print("Final results:")
print(" test accuracy:\t\t{:.2f} %".format(
test_acc / test_batches * 100))
if test_acc / test_batches * 100 > 99:
print("Achievement unlocked: 80lvl Warlock!")
else:
print("We need more magic!")
```
# Report
All creative approaches are highly welcome, but at the very least it would be great to mention:
* the idea;
* brief history of tweaks and improvements;
* what is the final architecture and why?
* what is the training method and, again, why?
* Any regularizations and other techniques applied and their effects;
There is no need to write strict mathematical proofs (unless you want to).
* "I tried this, this and this, and the second one turned out to be better. And I just didn't like the name of that one" - OK, but could be better
* "I analyzed these and these articles/sources/blog posts, tried this and that to adapt them to my problem, and the conclusions are such and such" - the ideal one
* "I took that demo code without understanding it, but I'll never confess and instead I'll make up some pseudoscientific explanation" - __not OK__
### Hi, my name is `___ ___`, and here's my story
A long time ago in a galaxy far, far away, when it was still more than an hour before the deadline, I got an idea:
##### I'm gonna build a neural network that
* brief text on what was
* the original idea
* and why it was so
How could I be so naive?!
##### One day, with no signs of warning,
This thing has finally converged and
* Some explanation of what the results were,
* what worked and what didn't
* most importantly - what next steps were taken, if any
* and what were their respective outcomes
##### Finally, after __ iterations, __ mugs of [tea/coffee]
* what was the final architecture
* as well as training method and tricks
That, having wasted ____ [minutes, hours or days] of my life training, got
* accuracy on training: __
* accuracy on validation: __
* accuracy on test: __
[an optional afterword and mortal curses on assignment authors]
# Density Functional Theory: Grid
## 1. Theoretical Overview
This tutorial will cover the basics of DFT, focusing on the grid used to evaluate DFT quantities.
As with HF, DFT aims to solve the generalized eigenvalue problem:
$$\sum_{\nu} F_{\mu\nu}C_{\nu i} = \epsilon_i\sum_{\nu}S_{\mu\nu}C_{\nu i}$$
$${\bf FC} = {\bf SC\epsilon},$$
Where with HF the Fock matrix is constructed as:
$$F^{HF}_{\mu\nu} = H_{\mu\nu} + 2J[D]_{\mu\nu} - K[D]_{\mu\nu}$$
$$D_{\mu\nu} = C_{\mu i} C_{\nu i}$$
With DFT we generalize this construction slightly to:
$$F^{DFT}_{\mu\nu} = H_{\mu\nu} + 2J[D]_{\mu\nu} - \zeta K[D]_{\mu\nu} + V^{\rm{xc}}_{\mu\nu}$$
$\zeta$ is an adjustable parameter with which we can vary the amount of exact (HF) exchange, and $V^{\rm{xc}}$ is the DFT exchange-correlation potential, which typically attempts to add dynamical correlation within the self-consistent field methodology.
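Before looking at Psi4's machinery, it helps to see why a grid plus weights is enough: any density integral becomes a weighted sum over points. A naive plain-NumPy radial quadrature (the uniform grid and cutoff are illustrative assumptions; real DFT grids use Lebedev angular points and smarter radial schemes) that integrates a hydrogenic 1s density to one electron:

```python
import numpy as np

# The hydrogenic 1s density rho(r) = exp(-2r)/pi holds exactly one electron:
#   integral rho d^3r = 4*pi * integral r^2 rho(r) dr = 4 * integral r^2 exp(-2r) dr = 1
r = np.linspace(1e-6, 20.0, 4001)                  # simple uniform radial grid (illustrative)
integrand = 4.0 * np.pi * r**2 * np.exp(-2.0 * r) / np.pi
# trapezoid rule: sum of weighted point values, just like a DFT grid
n_electrons = np.sum((integrand[1:] + integrand[:-1]) / 2.0 * np.diff(r))
assert abs(n_electrons - 1.0) < 1e-3
```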
## 2. Examining the Grid
We will discuss the evaluation and manipulation of the grid.
```
import psi4
import numpy as np
%matplotlib notebook
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from pkg_resources import parse_version
if parse_version(psi4.__version__) >= parse_version('1.3a1'):
build_superfunctional = psi4.driver.dft.build_superfunctional
else:
build_superfunctional = psi4.driver.dft_funcs.build_superfunctional
# Set computation options and molecule; any single atom will do.
mol = psi4.geometry("He")
psi4.set_options({'BASIS': 'CC-PVDZ',
'DFT_SPHERICAL_POINTS': 50,
'DFT_RADIAL_POINTS': 12})
basis = psi4.core.BasisSet.build(mol, "ORBITAL", "CC-PVDZ")
sup = build_superfunctional("PBE", True)[0]
Vpot = psi4.core.VBase.build(basis, sup, "RV")
Vpot.initialize()
x, y, z, w = Vpot.get_np_xyzw()
R = np.sqrt(x **2 + y ** 2 + z **2)
fig, ax = plt.subplots()
ax.scatter(x, y, c=w)
#ax.set_xscale('log')
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
mask = R > 8
p = ax.scatter(x[mask], y[mask], z[mask], c=w[mask], marker='o')
plt.colorbar(p)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
mol = psi4.geometry("""
O
H 1 1.1
H 1 1.1 2 104
""")
mol.update_geometry()
psi4.set_options({'BASIS': 'CC-PVDZ',
'DFT_SPHERICAL_POINTS': 26,
'DFT_RADIAL_POINTS': 12})
basis = psi4.core.BasisSet.build(mol, "ORBITAL", "CC-PVDZ")
sup = build_superfunctional("PBE", True)[0]
Vpot = psi4.core.VBase.build(basis, sup, "RV")
Vpot.initialize()
x, y, z, w = Vpot.get_np_xyzw()
R = np.sqrt(x **2 + y ** 2 + z **2)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
mask = R > 0
p = ax.scatter(x[mask], y[mask], z[mask], c=w[mask], marker='o')
plt.colorbar(p)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
```
## Refs:
- Koch, W. and Holthausen, M. C., *A Chemist’s Guide to Density Functional Theory*, 2nd ed., Wiley-VCH, Weinheim, **2001**.
- Kohn, W. and Sham, L. J., *Phys. Rev.*, **1965**, *140*, A1133-A1138.
- Becke, A. D., *J. Chem. Phys.*, **1988**, *88*, 2547.
- Treutler, O. and Ahlrichs, R., *J. Chem. Phys.*, **1995**, *102*, 346.
- Gill, P. M. W., Johnson, B. G., and Pople, J. A., *Chem. Phys. Lett.*, **1993**, *209* (5), 506.
### Set GPU
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = "3"
```
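`CUDA_VISIBLE_DEVICES` must be set before the framework initializes CUDA; the masked devices are then renumbered from zero inside the process, so physical GPU 3 appears as `cuda:0`. A framework-free sketch of that renumbering (`physical_gpu` is a hypothetical helper, not a real API):

```python
import os

def physical_gpu(logical_index):
    """Hypothetical helper: map an in-process device index back to the
    physical GPU id, honoring the CUDA_VISIBLE_DEVICES mask like CUDA does."""
    mask = os.environ.get('CUDA_VISIBLE_DEVICES', '')
    visible = [v for v in mask.split(',') if v]
    return int(visible[logical_index]) if visible else logical_index

os.environ['CUDA_VISIBLE_DEVICES'] = "3"
assert physical_gpu(0) == 3   # the only visible GPU is physical device 3
```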
## Set Dataset Name
```
# dataset_name = 'CIFAR10'
# dataset_name = 'CIFAR100'
# dataset_name = 'MNIST'
# dataset_name = 'TINYIMAGENET'
dataset_name = 'IMBALANCED_CIFAR10'
```
### Run All Now
```
# from models.resnet_stl import resnet18
import torch
import numpy as np
from tqdm import tqdm
from models.resnet_cifar import resnet18
from utils.memory import MemoryBank
from utils.train_utils import simclr_train
from utils.utils import fill_memory_bank
from utils.config import create_config
from utils.common_config import get_model, get_train_dataset, get_val_transformations, get_train_dataloader
from utils.evaluate_utils import hungarian_evaluate2, scan_evaluate
output_folder = '../results/'
if dataset_name == "CIFAR10":
output_folder += 'cifar-10/'
config_exp_path = './configs/scan/scan_cifar10.yml'
cfg_path = 'configs/CIFAR10_RESNET18.yaml'
elif dataset_name == "CIFAR100":
output_folder += 'cifar-20/'
config_exp_path = './configs/scan/scan_cifar20.yml'
cfg_path = 'configs/CIFAR100_RESNET18.yaml'
elif dataset_name == "MNIST":
output_folder += 'mnist/'
config_exp_path = './configs/scan/scan_mnist.yml'
cfg_path = 'configs/MNIST_RESNET18.yaml'
elif dataset_name == "TINYIMAGENET":
output_folder += 'tinyimagenet/'
config_exp_path = './configs/scan/scan_tinyimagenet.yml'
cfg_path = 'configs/TINYIMAGENET_RESNET18.yaml'
elif dataset_name == 'IMBALANCED_CIFAR10':
output_folder += 'imbalanced-cifar-10/'
config_exp_path = './configs/scan/scan_cifar10_im.yml'
cfg_path = 'configs/CIFAR10_RESNET18.yaml'
path_to_model = output_folder + 'scan/model.pth.tar'
temp = torch.load(path_to_model)
import argparse
config_env_path = './configs/env.yml'
p = create_config(config_env_path, config_exp_path)
model = get_model(p)
model.load_state_dict(temp['model'])
model.eval()
model.cuda();
```
```
train_data = get_train_dataset(p, get_val_transformations(p),
                               split='train', to_augmented_dataset=False)
train_dataloader = get_train_dataloader(p, train_data)
```
### Change batch size if you run into an out-of-memory error
```
from pycls.datasets.data import Data
from pycls.config import cfg
cfg.merge_from_file(cfg_path)
cfg.DATASET.NAME = dataset_name
data_obj = Data(cfg)
train_data, train_size = data_obj.getDataset(save_dir='../data', isTrain=True, isDownload=True)
trainSet = [i for i in range(train_size)]
trainSet = np.array(trainSet, dtype=np.ndarray)
train_dataloader = data_obj.getSequentialDataLoader(indexes=trainSet, batch_size=256, data=train_data)
test_data, test_size = data_obj.getDataset(save_dir='../data', isTrain=False, isDownload=True)
test_dataloader = data_obj.getTestLoader(data=test_data, test_batch_size=cfg.TRAIN.BATCH_SIZE, seed_id=cfg.RNG_SEED)
import torch.nn.functional as F
@torch.no_grad()
def get_predictions(p, dataloader, model, return_features=False):
# Make predictions on a dataset with neighbors
model.eval()
predictions = [[] for _ in range(p['num_heads'])]
probs = [[] for _ in range(p['num_heads'])]
targets = []
if return_features:
ft_dim = get_feature_dimensions_backbone(p)
features = torch.zeros((len(dataloader.sampler), ft_dim)).cuda()
key_ = 'image'
ptr = 0
for row in tqdm(dataloader, desc="Extracting Self Label Predictions"):
# images = row['image']
# lbl = row['target']
images, lbl = row
images = images.cuda()
output = model(images, forward_pass='default')
for i, output_i in enumerate(output):
predictions[i].append(torch.argmax(output_i, dim=1))
targets.append(lbl)
predictions = [torch.cat(pred_, dim=0) for pred_ in predictions]
targets = torch.cat(targets, dim=0)
out = [{'predictions': pred_, 'targets': targets} for pred_ in predictions]
if return_features:
return out, features.cpu()
else:
return out
# from utils.evaluate_utils import get_predictions
predictions = get_predictions(p, train_dataloader, model)
```
#### Note: Stats are irrelevant for CIFAR100
```
clustering_stats = hungarian_evaluate2(0, predictions,
class_names=train_data.classes,
compute_confusion_matrix=False,
confusion_matrix_file=os.path.join('confusion_matrix.png'))
clustering_stats
predictions[0]['predictions'].cpu()
np.save(f'{output_folder}/{dataset_name}_SCAN_cluster_ids.npy', predictions[0]['predictions'].cpu())
```
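`hungarian_evaluate2` is a project-specific utility; the core idea — matching predicted cluster ids to ground-truth classes so that accuracy is maximized — can be sketched with SciPy's Hungarian solver (the function name and toy labels below are illustrative):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def cluster_accuracy(y_true, y_pred):
    """Best accuracy over all one-to-one assignments of cluster ids to classes."""
    k = int(max(y_true.max(), y_pred.max())) + 1
    cost = np.zeros((k, k), dtype=int)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                        # co-occurrence counts
    rows, cols = linear_sum_assignment(-cost)  # negate to maximize matches
    return cost[rows, cols].sum() / len(y_true)

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([2, 2, 0, 0, 1, 1])          # same clustering, permuted ids
assert cluster_accuracy(y_true, y_pred) == 1.0
```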
# Pyspark
Using pyspark from a Jupyter notebook is quite straightforward when using a local spark instance. This can be installed trivially using conda, i.e.,
```
conda install pyspark
```
Once this is done, a local spark instance can be launched easily from within the notebook.
```
from pyspark import SparkContext
sc = SparkContext('local', 'test')
```
## Example: counting characters
As an example, we read a file that contains a DNA sequence (unrealistically long). We first check some properties of the file, and show the first few lines. We want to count the number of nucleotides, i.e., the total number of occurrences of `A`, `C`, `G`, and `T`.
```
!wc Data/large_dna.txt
!head -3 Data/large_dna.txt
```
Read data from a text file, the resulting data is stored in an RDD.
```
data = sc.textFile('Data/large_dna.txt')
```
The RDD has as many elements as the data file has lines. The order of the elements is the same as that of the lines in the file.
```
data.count()
data.take(3)
```
Define a function that computes the number of nucleotides in a string, returning the result as a tuple. Note that this function is not the optimal implementation, but it is straightforward.
```
def count_nucl(seq):
return tuple(seq.count(nucl) for nucl in 'ACGT')
```
This function can be applied to each element of the RDD independently; in Spark terminology, it is a transformation. Note that the transformation is lazy: it will only be computed when the resulting values are required.
```
counts = data.map(count_nucl)
```
Next, we define a function that computes the sum of the elements of two tuples, and returns a new tuple.
```
def sum_nucl(t1, t2):
return tuple(x + y for x, y in zip(t1, t2))
total_count = counts.reduce(sum_nucl)
total_count
```
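Since the Spark calls above won't run without a local file and cluster, the same map/reduce pipeline can be mimicked in pure Python with `map` and `functools.reduce`, reusing the two functions defined above (the sample lines are made up):

```python
from functools import reduce

def count_nucl(seq):
    return tuple(seq.count(nucl) for nucl in 'ACGT')

def sum_nucl(t1, t2):
    return tuple(x + y for x, y in zip(t1, t2))

lines = ['ACGTAC', 'GGTTAA', 'CCCG']   # stand-in for the text file's lines
total = reduce(sum_nucl, map(count_nucl, lines))
assert total == (4, 5, 4, 3)           # counts of A, C, G, T over all lines
```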
### Alternative approach
An alternative approach is to construct an RDD with key/value pairs.
```
data = sc.textFile('Data/large_dna.txt')
```
First, we create a list of nucleotides for each element in the RDD.
```
nucleotides = data.map(list)
```
For each element in the RDD, we create a key/value pair, the key is the nucleotide, the value is 1. Using the `flatMap` method ensures that the end result is an RDD with key/value pairs as a flat structure.
```
nucl_counts = nucleotides.flatMap(lambda x: ((n, 1) for n in x))
nucl_counts.take(5)
```
The `countByKey` method will count all RDD elements that have the same key.
```
for key, value in nucl_counts.countByKey().items():
print(f'{key}: {value}')
```
## Example: counting signs
```
import numpy as np
```
RDDs can also be constructed from iterables such as numpy arrays.
```
data = sc.parallelize(np.random.uniform(-1.0, 1.0, (1000,)))
```
We want to count the number of positive and negative values, and compute the sum of all positive and negative numbers in the RDD. The first step is to transform the RDD into key/value pairs where the key is `'pos'` for numbers that are strictly positive, `'neg'` otherwise. The corresponding values are the original numbers.
```
signs = data.map(lambda x: ('pos', x) if x > 0 else ('neg', x))
signs.take(5)
```
As in the previous example, counting can be done by key.
```
counts = signs.countByKey()
for key, value in counts.items():
print(f'{key}: {value}')
```
To compute the sums, we can perform a reduction by key, using a lambda function to compute the pairwise sum.
```
sums = signs.reduceByKey(lambda x, y: x + y)
sums.take(2)
for key, value in sums.collect():
print(f'{key}: {value}')
```
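For intuition, `countByKey` and `reduceByKey` have simple dictionary analogues; a pure-Python sketch of what Spark performs per key (toy data, not the RDD above):

```python
from collections import Counter, defaultdict

data = [0.5, -0.2, 1.3, -0.7, 0.1]
pairs = [('pos', x) if x > 0 else ('neg', x) for x in data]

counts = Counter(key for key, _ in pairs)   # like countByKey
sums = defaultdict(float)                   # like reduceByKey(lambda x, y: x + y)
for key, value in pairs:
    sums[key] += value

assert counts == {'pos': 3, 'neg': 2}
assert abs(sums['pos'] - 1.9) < 1e-9 and abs(sums['neg'] + 0.9) < 1e-9
```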
```
import numpy as np
import pandas as pd
import os
import random
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from sklearn.dummy import DummyRegressor
from sklearn.metrics import r2_score
import tensorflow as tf
from tensorflow import keras
from sklearn.model_selection import train_test_split
# Print TF version, keep the version in mind when you look the documentation.
print('TensorFlow version: {}'.format(tf.__version__))
```
## Read Data
```
work_dir = r'C:\Users\tangc\OneDrive\Desktop\Files\HW\Datathon'
print(os.listdir(work_dir))
train_temp_2 = pd.read_csv(work_dir + r"\2\Training\random_samples_temp_50.csv").drop(["Unnamed: 0"], axis=1)
cv_temp_2 = pd.read_csv(work_dir + r"\2\Training\random_samples_temp_10.csv").drop(["Unnamed: 0"], axis=1)
print(train_temp_2.shape)
print(train_temp_2.columns)
train_temp_2.head()
X_2_train = train_temp_2.drop(["Temp"], axis=1)
y_2_train = train_temp_2[["Temp"]]
X_2_cv = cv_temp_2.drop(["Temp"], axis=1)
y_2_cv = cv_temp_2[["Temp"]]
# Tensor Board
root_logdir = os.path.join(os.curdir, "my_logs")
def get_run_logdir():
import time
run_id = time.strftime("run_%Y_%m_%d-%H_%M_%S")
return os.path.join(root_logdir, run_id)
run_logdir = get_run_logdir()
run_logdir
tensorboard_cb = keras.callbacks.TensorBoard(run_logdir)
def r_square(y_true, y_pred):
from tensorflow.keras import backend as K
SS_res = K.sum(K.square(y_true - y_pred))
SS_tot = K.sum(K.square(y_true - K.mean(y_true)))
return -( 1 - SS_res/(SS_tot + K.epsilon()) )
keras.backend.clear_session()
# drop any previous model so a re-run starts fresh (plain `del` would raise NameError on the first run)
if 'model_temp_2' in globals():
    del model_temp_2
inputs = keras.Input(shape=[8])
model_temp_2 = inputs
for i in range(4):
    # chain each Dense+BatchNorm block onto the previous one, not onto `inputs`
    model_temp_2 = keras.layers.Dense(10 + i, activation="relu", kernel_initializer="he_normal")(model_temp_2)
    model_temp_2 = keras.layers.BatchNormalization(momentum=0.15)(model_temp_2)
for i in range(2):
model_temp_2 = keras.layers.Dense(20 + i, activation="relu", kernel_initializer="he_normal")(model_temp_2)
model_temp_2 = keras.layers.BatchNormalization(momentum=0.15)(model_temp_2)
model_temp_2 = keras.layers.Dropout(rate=0.15)(model_temp_2)
for i in range(2):
model_temp_2 = keras.layers.Dense(40 + i, activation="relu", kernel_initializer="he_normal")(model_temp_2)
model_temp_2 = keras.layers.BatchNormalization(momentum=0.15)(model_temp_2)
for i in range(2):
model_temp_2 = keras.layers.Dense(80 + i, activation="elu", kernel_initializer="he_normal")(model_temp_2)
model_temp_2 = keras.layers.BatchNormalization(momentum=0.15)(model_temp_2)
model_temp_2 = keras.layers.Dropout(rate=0.15)(model_temp_2)
for i in range(2):
model_temp_2 = keras.layers.Dense(160 + i, activation="elu", kernel_initializer="he_normal")(model_temp_2)
model_temp_2 = keras.layers.BatchNormalization(momentum=0.15)(model_temp_2)
model_temp_2 = keras.layers.Dropout(rate=0.15)(model_temp_2)
for i in range(2):
model_temp_2 = keras.layers.Dense(320 + i, activation="elu", kernel_initializer="he_normal")(model_temp_2)
model_temp_2 = keras.layers.BatchNormalization(momentum=0.15)(model_temp_2)
model_temp_2 = keras.layers.Dropout(rate=0.15)(model_temp_2)
outputs = keras.layers.Dense(1)(model_temp_2)
model_temp_2 = keras.models.Model(inputs=inputs, outputs=outputs)
model_temp_2.compile(optimizer='adam', loss=r_square)  # r_square returns -R^2, so minimizing it maximizes R^2
batch_size = 32
history = model_temp_2.fit(X_2_train, y_2_train, batch_size=batch_size,
epochs = 2, validation_data = (X_2_cv, y_2_cv),
callbacks=[tensorboard_cb])
y_2_cv_pred = model_temp_2.predict(X_2_cv)
r2_score(y_2_cv, y_2_cv_pred)
plt.scatter(y_2_cv, y_2_cv_pred)
```
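As an aside, the `r_square` loss above returns the *negative* R² so that minimizing the loss maximizes R². A plain-NumPy check of that sign convention (the targets are illustrative):

```python
import numpy as np

def neg_r_square(y_true, y_pred, eps=1e-7):
    """NumPy analogue of the Keras r_square loss: returns -(R^2)."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return -(1.0 - ss_res / (ss_tot + eps))

y_true = np.array([1.0, 2.0, 3.0, 4.0])
assert abs(neg_r_square(y_true, y_true) + 1.0) < 1e-6            # perfect fit -> -1
assert neg_r_square(y_true, np.full(4, y_true.mean())) > -1e-6   # mean predictor -> ~0
```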