```
# Import most generic modules
import importlib
import pathlib
import os
import sys
from datetime import datetime, timedelta
import pandas as pd
from IPython.display import display, Markdown
import warnings
warnings.filterwarnings("ignore")
module_path = os.path.abspath(os.path.join("../.."))
if module_path not in sys.path:
    sys.path.append(module_path)
report_name = f"{datetime.now().strftime('%Y%m%d_%H%M%S')}_crypto_market"
display(Markdown(f"# Crypto Market - {datetime.now().strftime('%Y/%m/%d %H:%M:%S')}"))
```
## Bitcoin Chart
```
from gamestonk_terminal.cryptocurrency import cryptocurrency_helpers
cryptocurrency_helpers.plot_chart(
coin="btc-bitcoin", source="cp", currency="USD", days=60
)
```
## Global Market Information
```
from gamestonk_terminal.cryptocurrency.overview import coinpaprika_view
coinpaprika_view.display_global_market(export="")
```
## Hot news
```
from gamestonk_terminal.cryptocurrency.overview import pycoingecko_view
pycoingecko_view.display_news(
top=10, sortby="Posted", descend=True, links=False, export=""
)
```
## Top coins market information
```
from gamestonk_terminal.cryptocurrency.overview import coinpaprika_view
coinpaprika_view.display_all_coins_market_info(
currency="USD", sortby="volume_24h", descend=True, top=10, export=""
)
```
## Top gainers in last 24h
```
from gamestonk_terminal.cryptocurrency.discovery import pycoingecko_view
pycoingecko_view.display_gainers(
period="24h", top=10, sortby="%Change_24h", descend=False, links=False, export=""
)
```
## Top losers in last 24h
```
from gamestonk_terminal.cryptocurrency.discovery import pycoingecko_view
pycoingecko_view.display_losers(
period="24h", top=10, sortby="%Change_24h", descend=True, links=False, export=""
)
```
## Top coins with most positive sentiment
```
from gamestonk_terminal.cryptocurrency.discovery import pycoingecko_view
pycoingecko_view.display_discover(
category="positive_sentiment",
top=10,
sortby="Price_USD",
descend=False,
links=False,
export="",
)
```
## Decentralized Finance Global Market Info
```
from gamestonk_terminal.cryptocurrency.overview import pycoingecko_view
pycoingecko_view.display_global_defi_info(export="")
```
## Top DeFi Coins
```
from gamestonk_terminal.cryptocurrency.discovery import pycoingecko_view
pycoingecko_view.display_top_defi_coins(
top=10, sortby="Change_24h", descend=False, links=False, export=""
)
```
## Top Decentralized Exchanges
```
from gamestonk_terminal.cryptocurrency.discovery import pycoingecko_view
pycoingecko_view.display_top_dex(top=10, sortby="Volume_24h", descend=False, export="")
```
## Top Yield Farms
```
from gamestonk_terminal.cryptocurrency.discovery import pycoingecko_view
pycoingecko_view.display_yieldfarms(
top=10, sortby="Value_Locked", descend=False, export=""
)
!jupyter nbconvert {report_name + ".ipynb"} --to html --no-input
```
# Use custom software_spec to create statsmodels function describing data with `ibm-watson-machine-learning`
This notebook demonstrates how to deploy a Python function that uses `statsmodels` to the Watson Machine Learning service. Doing so requires creating a custom software specification from a conda YAML file listing all required libraries.
Some familiarity with bash is helpful. This notebook uses Python 3.8 with statsmodels.
## Learning goals
The learning goals of this notebook are:
- Working with the Watson Machine Learning instance
- Creating custom software specification
- Online deployment of python function
- Scoring data using deployed function
## Contents
This notebook contains the following parts:
1. [Setup](#setup)
2. [Function creation](#create)
3. [Function upload](#upload)
4. [Web service creation](#deploy)
5. [Scoring](#score)
6. [Clean up](#cleanup)
7. [Summary and next steps](#summary)
<a id="setup"></a>
## 1. Set up the environment
Before you use the sample code in this notebook, you must perform the following setup tasks:
- Create a <a href="https://console.ng.bluemix.net/catalog/services/ibm-watson-machine-learning/" target="_blank" rel="noopener no referrer">Watson Machine Learning (WML) Service</a> instance (a free plan is offered and information about how to create the instance can be found <a href=" https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html?context=analytics" target="_blank" rel="noopener no referrer">here</a>).
### Connection to WML
Authenticate the Watson Machine Learning service on IBM Cloud. You need to provide platform `api_key` and instance `location`.
You can use [IBM Cloud CLI](https://cloud.ibm.com/docs/cli/index.html) to retrieve platform API Key and instance location.
API Key can be generated in the following way:
```
ibmcloud login
ibmcloud iam api-key-create API_KEY_NAME
```
From the output, copy the value of `api_key`.
Location of your WML instance can be retrieved in the following way:
```
ibmcloud login --apikey API_KEY -a https://cloud.ibm.com
ibmcloud resource service-instance WML_INSTANCE_NAME
```
From the output, copy the value of `location`.
**Tip**: Your `Cloud API key` can be generated by going to the [**Users** section of the Cloud console](https://cloud.ibm.com/iam#/users). From that page, click your name, scroll down to the **API Keys** section, and click **Create an IBM Cloud API key**. Give your key a name and click **Create**, then copy the created key and paste it below. You can also get a service specific url by going to the [**Endpoint URLs** section of the Watson Machine Learning docs](https://cloud.ibm.com/apidocs/machine-learning). You can check your instance location in your <a href="https://console.ng.bluemix.net/catalog/services/ibm-watson-machine-learning/" target="_blank" rel="noopener no referrer">Watson Machine Learning (WML) Service</a> instance details.
You can also get service specific apikey by going to the [**Service IDs** section of the Cloud Console](https://cloud.ibm.com/iam/serviceids). From that page, click **Create**, then copy the created key and paste it below.
**Action**: Enter your `api_key` and `location` in the following cell.
```
api_key = 'PASTE YOUR PLATFORM API KEY HERE'
location = 'PASTE YOUR INSTANCE LOCATION HERE'
wml_credentials = {
"apikey": api_key,
"url": 'https://' + location + '.ml.cloud.ibm.com'
}
```
### Install and import the `ibm-watson-machine-learning` package
**Note:** `ibm-watson-machine-learning` documentation can be found <a href="http://ibm-wml-api-pyclient.mybluemix.net/" target="_blank" rel="noopener no referrer">here</a>.
```
!pip install -U ibm-watson-machine-learning
from ibm_watson_machine_learning import APIClient
client = APIClient(wml_credentials)
```
### Working with spaces
First, create a space that will be used for your work. If you do not have a space already created, you can use the [Deployment Spaces Dashboard](https://dataplatform.cloud.ibm.com/ml-runtime/spaces?context=cpdaas) to create one.
- Click New Deployment Space
- Create an empty space
- Select Cloud Object Storage
- Select Watson Machine Learning instance and press Create
- Copy `space_id` and paste it below
**Tip**: You can also use SDK to prepare the space for your work. More information can be found [here](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/instance-management/Space%20management.ipynb).
**Action**: Assign space ID below
```
space_id = 'PASTE YOUR SPACE ID HERE'
```
You can use the `list` method to print all existing spaces.
```
client.spaces.list(limit=10)
```
To be able to interact with all resources available in Watson Machine Learning, you need to set the **space** that you will be using.
```
client.set.default_space(space_id)
```
<a id="create"></a>
## 2. Create function
In this section you will learn how to create a deployable function
that uses the statsmodels module to compute descriptive statistics for given data.
**Hint**: To install statsmodels execute `!pip install statsmodels`.
#### Create a deployable callable that uses the statsmodels library
```
def deployable_callable():
    """
    Deployable python function with score
    function implemented.
    """
    try:
        from statsmodels.stats.descriptivestats import describe
    except ModuleNotFoundError as e:
        print(f"statsmodels not installed: {str(e)}")

    def score(payload):
        """
        Score method.
        """
        try:
            data = payload['input_data'][0]['values']
            return {
                'predictions': [
                    {'values': str(describe(data))}
                ]
            }
        except Exception as e:
            return {'predictions': [{'values': [repr(e)]}]}

    return score
```
#### Test callable locally
**Hint**: To install numpy execute `!pip install numpy`.
```
import numpy as np
data = np.random.randn(10, 10)
data_description = deployable_callable()({
"input_data": [{
"values" : data
}]
})
print(data_description["predictions"][0]["values"])
```
<a id="upload"></a>
## 3. Upload python function
In this section you will learn how to upload the python function to the Cloud.
#### Custom software_specification
Create a new software specification based on the default Python 3.8 environment, extended with the statsmodels package.
```
config_yml =\
"""name: python38
channels:
  - empty
  - nodefaults
dependencies:
  - pip:
      - statsmodels
prefix: /opt/anaconda3/envs/python38
"""
with open("config.yaml", "w", encoding="utf-8") as f:
    f.write(config_yml)
base_sw_spec_uid = client.software_specifications.get_uid_by_name("default_py3.8")
!cat config.yaml
```
The `config.yaml` file describes the details of the package extension. Now you need to store the new package extension with the APIClient.
```
meta_prop_pkg_extn = {
client.package_extensions.ConfigurationMetaNames.NAME: "statsmodels env",
client.package_extensions.ConfigurationMetaNames.DESCRIPTION: "Environment with statsmodels",
client.package_extensions.ConfigurationMetaNames.TYPE: "conda_yml"
}
pkg_extn_details = client.package_extensions.store(meta_props=meta_prop_pkg_extn, file_path="config.yaml")
pkg_extn_uid = client.package_extensions.get_uid(pkg_extn_details)
pkg_extn_url = client.package_extensions.get_href(pkg_extn_details)
```
#### Create a new software specification and add the created package extension to it.
```
meta_prop_sw_spec = {
client.software_specifications.ConfigurationMetaNames.NAME: "statsmodels software_spec",
client.software_specifications.ConfigurationMetaNames.DESCRIPTION: "Software specification for statsmodels",
client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION: {"guid": base_sw_spec_uid}
}
sw_spec_details = client.software_specifications.store(meta_props=meta_prop_sw_spec)
sw_spec_uid = client.software_specifications.get_uid(sw_spec_details)
client.software_specifications.add_package_extension(sw_spec_uid, pkg_extn_uid)
```
#### Get the details of the created software specification
```
client.software_specifications.get_details(sw_spec_uid)
```
#### Store the function
```
meta_props = {
client.repository.FunctionMetaNames.NAME: "statsmodels function",
client.repository.FunctionMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid
}
function_details = client.repository.store_function(meta_props=meta_props, function=deployable_callable)
function_uid = client.repository.get_function_uid(function_details)
```
#### Get function details
```
client.repository.get_details(function_uid)
```
**Note:** You can see that the function is successfully stored in the Watson Machine Learning Service.
```
client.repository.list_functions()
```
<a id="deploy"></a>
## 4. Create online deployment
You can use the commands below to create an online deployment for the stored function (web service).
#### Create online deployment of a python function
```
metadata = {
client.deployments.ConfigurationMetaNames.NAME: "Deployment of statsmodels function",
client.deployments.ConfigurationMetaNames.ONLINE: {}
}
function_deployment = client.deployments.create(function_uid, meta_props=metadata)
client.deployments.list()
```
Get deployment id.
```
deployment_id = client.deployments.get_uid(function_deployment)
print(deployment_id)
```
<a id="score"></a>
## 5. Scoring
You can send new scoring records to the web service deployment using the `score` method.
```
scoring_payload = {
    "input_data": [{
        # convert the numpy array to a JSON-serializable list
        'values': data.tolist()
    }]
}
predictions = client.deployments.score(deployment_id, scoring_payload)
print(predictions["predictions"][0]["values"])
```
<a id="cleanup"></a>
## 6. Clean up
If you want to clean up all created assets:
- experiments
- trainings
- pipelines
- model definitions
- models
- functions
- deployments
see the steps in this sample [notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/instance-management/Machine%20Learning%20artifacts%20management.ipynb).
<a id="summary"></a>
## 7. Summary and next steps
You successfully completed this notebook! You learned how to use Watson Machine Learning for function deployment and scoring with a custom software_spec.
Check out our [Online Documentation](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/welcome-main.html?context=analytics) for more samples, tutorials, documentation, how-tos, and blog posts.
### Author
**Jan Sołtysik**, Intern in Watson Machine Learning.
Copyright © 2020, 2021 IBM. This notebook and its source code are released under the terms of the MIT License.
# Using threads with networking
> Client/server communication using threads
- toc: true
- badges: true
- comments: false
- categories: [python, ISN]
For this notebook, copy each part (client and server) into a separate Python file and run them, if need be on different machines. The programs are grouped here for convenience but must not be run from the ***Jupyter*** environment, since it does not cope well with threads.
## The server
```
import socket
# in addition to socket, we use threads
from threading import Thread

# function that handles one client (a loop that exits when 'exit' is received)
# this function is called in a new thread for each connection
def gereClient(sockclient, addr):
    sock = sockclient
    while True:
        data = sock.recv(BUFSIZ).decode("Utf8")
        if data == 'exit':
            break
        else:
            msg = 'echo : ' + data  # our server always does the same thing
            sock.send(msg.encode("Utf8"))
    sock.send("FIN".encode("Utf8"))
    sock.close()
BUFSIZ = 1024
#HOST = socket.gethostname()
HOST = '0.0.0.0'  # listen on all of the machine's addresses
PORT = 4567
ADDR = (HOST, PORT)
sockserveur = socket.socket()
sockserveur.bind(ADDR)
# listen can optionally be given a larger backlog parameter
# if we want the server not to refuse a connection
# while it is handling another one
# (the time needed to hand the connection off to a new thread)
sockserveur.listen(1)
# endless loop accepting client connections
while True:
    print("Server listening…")
    sockclient, addr = sockserveur.accept()
    print('...connection from: ', addr)
    # when a client connects, we create a thread "for it"
    # running the client-handling function
    th = Thread(target=gereClient, args=(sockclient, addr))
    th.start()
```
## Explanation
The command for receiving data from the network is
<PRE>sock.recv(BUFSIZ).decode("Utf8")</PRE>
This command is ***blocking***, which means that while the server is waiting for a message from a client, it cannot do anything else; in particular, it cannot handle requests from other clients.
This situation is of course untenable for any normal use. To work around this difficulty, we wrap this blocking command in a function that runs in ***parallel*** with the main program, which then remains available to handle other client connections.
To run a function in parallel, we use Python's ***threading*** library: a thread is a piece of a program that runs in parallel with the program that starts it. With the command
<PRE>Thread(target=gereClient,args=(sockclient,addr))</PRE>
we therefore create a non-blocking call to the gereClient function, once for each client that connects.
We can thus handle connections from several clients to the same server simultaneously.
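The pattern can be illustrated with a minimal, self-contained sketch (no sockets needed, so it can safely be run anywhere, including Jupyter): a blocking task runs inside worker threads while the main thread stays free. The function and thread names below are purely illustrative.

```python
from threading import Thread
import time

results = []

def blocking_task(name, delay):
    # stands in for a blocking call such as sock.recv()
    time.sleep(delay)
    results.append(name)

# start two "clients" in parallel; Thread(...).start() returns immediately
threads = [Thread(target=blocking_task, args=("client-%d" % i, 0.1)) for i in range(2)]
for th in threads:
    th.start()
# the main thread could keep accepting new connections here
for th in threads:
    th.join()
print(sorted(results))  # ['client-0', 'client-1']
```

Each `start()` returns immediately; only `join()` waits, which is exactly why the server above can keep accepting connections while `gereClient` blocks in `recv`.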
## The client
```
from tkinter import *
from socket import *
from threading import Thread
liaison = socket(AF_INET, SOCK_STREAM)  # client socket

def gestionClient():
    # client communication, run in parallel in a thread
    message = ""
    while message.upper() != "FIN":
        message = liaison.recv(1024).decode("utf8")  # blocking command
        listeMsg.insert(END, message)  # display the received message
    etatStr.set("Connection closed.")
    liaison.close()

def connect():
    SERVEUR = ipAddr.get()
    PORT = int(ipPort.get())
    try:
        liaison.connect((SERVEUR, PORT))
        etatStr.set("Connection established")
        th = Thread(target=gestionClient)
        th.start()
    except error:
        etatStr.set("Connection failed.")
        liaison.close()

def envoi():
    liaison.send(msgStr.get().encode("utf8"))
    msgStr.set("")

fenetre = Tk()
fenetre.title("Network client")
## variable strings
ipAddr = StringVar()
ipAddr.set('127.0.0.1')
ipPort = StringVar()
ipPort.set("4567")
etatStr = StringVar()
etatStr.set("Connection status...")
msgStr = StringVar()
## graphical interface
connFrame = Frame(fenetre, bd=1, relief=SUNKEN)  # connection frame
msgFrame = Frame(fenetre, bd=1, relief=SUNKEN)  # send frame
ipEntry = Entry(connFrame, textvariable=ipAddr)
portEntry = Entry(connFrame, textvariable=ipPort)
btnConnect = Button(connFrame, text="Connect", command=connect)
etatLbl = Label(fenetre, textvariable=etatStr)
listeMsg = Listbox(fenetre)
msgLbl = Label(msgFrame, text="Message")
msgEntry = Entry(msgFrame, textvariable=msgStr)
msgSend = Button(msgFrame, text="Send", command=envoi)
## widget placement
connFrame.pack(padx=5, pady=5)
ipEntry.pack(side=LEFT, padx=5, pady=5)
portEntry.pack(side=LEFT, padx=5, pady=5)
btnConnect.pack(side=LEFT, padx=5, pady=5)
etatLbl.pack(padx=5, pady=5)
listeMsg.pack(fill=BOTH, expand=1, padx=5, pady=5)
msgFrame.pack(fill=BOTH, expand=1, padx=5, pady=5)
msgLbl.pack(side=LEFT, padx=5, pady=5)
msgEntry.pack(fill=BOTH, expand=1, side=LEFT, padx=5, pady=5)
msgSend.pack(side=LEFT, padx=5, pady=5)
fenetre.mainloop()
```
## Explanation
In this client program, most of the code builds the graphical interface. We use the ***Frame*** widget from Tkinter to create areas of the interface in which the layout differs:
<PRE>pack(side=LEFT,padx=5,pady=5)</PRE>
places the widgets side by side inside each frame. The frames themselves use the default layout, i.e. they are stacked vertically.
The main challenge in this program is to handle both listening for messages from the server and keeping the graphical interface responsive. Indeed, the command
<PRE>message = liaison.recv(1024).decode("utf8")</PRE>
is blocking, which means that while the client is waiting for a message from the server, it cannot do anything else. In particular, it cannot react to user events in the graphical interface, and the application freezes.
To guard against this problem, as with the server, we wrap the message-receiving command in a dedicated ***Thread*** that runs in parallel with our program, which is then able to manage the graphical interface.
To do this, as soon as the client connects to the server, we create a ***thread*** with the command
<PRE>Thread(target=gestionClient)</PRE>
which is in charge of receiving the server's messages and displaying them in the text area (***Listbox***).
# Housing data extraction and aggregation
This notebook consists of two steps:
1. Extraction of relevant sales price values of homes in the ZTRAX database[<sup>1</sup>](#fn1).
2. Filtering of the data with some QA/QC algorithms and aggregation of the remaining building-level sales prices to an average "price per sq ft." value for each grid cell in our POP sample.
<span id="fn1"> <sup>1</sup>Data provided by Zillow through the Zillow Transaction and Assessment Dataset (ZTRAX). More information on accessing the data can be found at http://www.zillow.com/ztrax. The results and opinions are those of the author(s) and do not reflect the position of Zillow Group. </span>
## Setup
```
import os
from datetime import datetime as dt
from glob import glob
from os.path import basename, dirname, isfile, join
import geopandas as gpd
import numpy as np
import pandas as pd
import shapely as shp
import shapely.geometry
import shapely.vectorized
from dask.distributed import Client, LocalCluster, progress
from mosaiks import config as cfg
from mosaiks.utils import io
from scipy.stats.mstats import gmean
app = "housing"
cfg = io.get_filepaths(cfg, app)
chousing = cfg.housing
c_ext = chousing["data"]["ztrax"]
grid_str, _ = io.get_suffix(cfg, app)
grid_path = join(cfg.grid_dir, "grid_" + grid_str + ".npz")
sample_path = join(cfg.grid_dir, "{}.npz".format(cfg.data_suffix))
# location of raw ztrax data
ztrax_dir_raw = join(cfg.data_dir, "raw", "path", "to", "ztrax", "database")
# location of extracted ztrax data from step 1
ztrax_dir_int = join(dirname(cfg.outcomes_fpath), "states")
os.makedirs(ztrax_dir_int, exist_ok=True)
# location of state shapefile
state_path = join(
cfg.data_dir, "raw", "shapefiles", "USA", "gadm_USA_shp", "gadm_USA_1.shp"
)
# location of the 2 different databases in ZTRAX
zillow_as_path = join(ztrax_dir_raw, "Zillow_Assessor")
zillow_trans_path = join(ztrax_dir_raw, "Zillow_Transaction")
state_codes = {
"WA": "53",
"DE": "10",
"DC": "11",
"WI": "55",
"WV": "54",
"HI": "15",
"FL": "12",
"WY": "56",
"PR": "72",
"NJ": "34",
"NM": "35",
"TX": "48",
"LA": "22",
"NC": "37",
"ND": "38",
"NE": "31",
"TN": "47",
"NY": "36",
"PA": "42",
"AK": "02",
"NV": "32",
"NH": "33",
"VA": "51",
"CO": "08",
"CA": "06",
"AL": "01",
"AR": "05",
"VT": "50",
"IL": "17",
"GA": "13",
"IN": "18",
"IA": "19",
"MA": "25",
"AZ": "04",
"ID": "16",
"CT": "09",
"ME": "23",
"MD": "24",
"OK": "40",
"OH": "39",
"UT": "49",
"MO": "29",
"MN": "27",
"MI": "26",
"RI": "44",
"KS": "20",
"MT": "30",
"MS": "28",
"SC": "45",
"KY": "21",
"OR": "41",
"SD": "46",
}
```
Data is extracted by state, and uses the [dask](https://docs.dask.org/en/latest/) package to process each state in parallel where possible. Dask is configurable such that, by default, this code should run on a single shared-memory machine, using as many cores as are available. If you are running this on a multi-node cluster, you will need to do some additional setup to configure dask with whatever job scheduler your cluster uses.
Here you will want to configure several things relevant to the machine you are working on and instantiate the dask scheduler that will run each process extracting data from the database.
- `chunksize` is the number of rows of each ZTRAX csv to process at a time. Setting it too high will cause memory issues; setting it too low will require too much I/O time.
- `cluster_kwargs` consists of any `dask` kwargs you would like to pass to your Cluster instance
```
chunksize = 1000000
cluster_kwargs = {"n_workers": 1, "threads_per_worker": 4, "memory_limit": "100G"}
cluster = LocalCluster(**cluster_kwargs)
client = Client(cluster)
client
```
## Housing Price/sq ft. extraction and aggregation
This process occurs in two steps. In the first, we filter to the sales we are interested in, using the following sub-steps:
1. Selecting residential buildings only
2. Selecting only buildings with valid lat/lon coordinates
3. Selecting sales occurring on or after Jan 1 2010
4. Selecting only sales with non-null prices
In the second step, we apply additional QA/QC filters and average the remaining sales up to our sampled grid cells, using the following steps:
1. Dropping all sales under \$10k
2. Dropping all buildings under 100 sq ft
3. Dropping all sales under \$10/sq ft
4. By state, dropping all sales above the 99th percentile of remaining sales in that state
5. Keeping the latest sale if multiple sales of a given property remain.
6. Taking the mean of remaining sales within each grid cell for the POP sample
Performing this extraction, QA/QC, and aggregation in two separate steps allows one to test various QA/QC approaches without having to re-extract from the raw Zillow data.
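As a rough sketch of the second (QA/QC and aggregation) step, the filters listed above can be written with pandas. The toy table and column names below are illustrative only — they are not the ZTRAX schema — and the final step of averaging within POP grid cells is omitted:

```python
import pandas as pd

# toy sales table; column names are illustrative, not the ZTRAX schema
sales = pd.DataFrame({
    "parcelID": ["a", "a", "b", "c", "d", "e", "f", "g", "h"],
    "state":    ["CA"] * 9,
    "price":    [100_000, 120_000, 200_000, 300_000, 400_000,
                 10_000_000, 5_000, 250_000, 12_000],
    "sqft":     [1_000, 1_000, 1_500, 2_000, 2_500, 3_000, 800, 50, 2_000],
    "date": pd.to_datetime(["2015-01-01", "2016-01-01"] + ["2015-06-01"] * 7),
})

# 1 & 2: drop sales under $10k and buildings under 100 sq ft
sales = sales[(sales.price >= 10_000) & (sales.sqft >= 100)]
# 3: drop sales under $10/sq ft
sales = sales[sales.price / sales.sqft >= 10]
# 4: by state, drop sales above the 99th percentile of remaining sales
p99 = sales.groupby("state")["price"].transform(lambda s: s.quantile(0.99))
sales = sales[sales.price <= p99]
# 5: keep only the latest remaining sale per property
sales = sales.sort_values("date").groupby("parcelID").last()
print(sorted(sales.price))  # [120000, 200000, 300000, 400000]
```

The per-state percentile cut uses `groupby(...).transform` so the threshold stays row-aligned with the sales table, letting all filters remain vectorized boolean masks.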
### Load grid, sample, and shapefiles
```
grid = dict(np.load(grid_path))
sample = dict(np.load(sample_path))
state_gdf = gpd.read_file(state_path)
state_gdf["HASC_1"] = state_gdf.HASC_1.str[-2:]
state_gdf = state_gdf[~state_gdf.HASC_1.isin(["PR", "HI", "AK"])]
## turn lats and lons from full grid into dataframe with lat/lon as index and i/j as value
lons = (
pd.DataFrame(grid["lon"], columns=["lon"])
.reset_index()
.set_index("lon")
.rename(columns={"index": "j"})
)
lats = (
pd.DataFrame(grid["lat"], columns=["lat"])
.reset_index()
.set_index("lat")
.rename(columns={"index": "i"})
)
## adjust for 1-indexed i and j
lons += 1
lats += 1
# need to flip lats b/c it's monotonic decreasing
lats = lats.sort_index()
# get list of sampled data points
this_id = sample["ID"].astype(str)
sample_ix = pd.MultiIndex.from_tuples(
[tuple([int(j) for j in i.split(",")]) for i in this_id]
)
```
### Extract data
```
def get_df(func, it, print_progress=False):
    result = []
    for ix, i in enumerate(it):
        if print_progress and (ix % 5 == 0):
            print("Line {}".format(ix))
        this_res = func(i)
        if this_res.shape[0] > 0:
            result.append(this_res)
    result = pd.concat(result)
    return result


def hash_rowid(df, index_cols="rowID"):
    dup_ids = df["rowID"].duplicated(keep=False).sum()
    df["rowID"] = df["rowID"].apply(lambda x: hash(x))
    # check we haven't created hash collisions
    assert df["rowID"].duplicated(keep=False).sum() == dup_ids
    df = df.set_index(index_cols, drop=True)
    return df
def get_sampled_val(this_st, this_geom, chunksize=1000000, settings=c_ext):
# improves performance to not check this
pd.options.mode.chained_assignment = None
## GET RELEVANT FILES AND POINTS
# find points in this state
points_in_st = shp.vectorized.contains(this_geom, sample["lon"], sample["lat"])
this_id = sample["ID"][points_in_st].astype(str)
this_st_fips = state_codes[this_st]
trans_trans_main_path = join(zillow_trans_path, this_st_fips, "ZTrans", "Main.txt")
trans_trans_prop_path = join(
zillow_trans_path, this_st_fips, "ZTrans", "PropertyInfo.txt"
)
trans_asmt_main_path = join(zillow_trans_path, this_st_fips, "ZAsmt", "Main.txt")
asmt_asmt_main_path = join(zillow_as_path, this_st_fips, "ZAsmt", "Main.txt")
trans_asmt_bldg_path = join(
zillow_trans_path, this_st_fips, "ZAsmt", "Building.txt"
)
asmt_asmt_bldg_path = join(zillow_as_path, this_st_fips, "ZAsmt", "Building.txt")
trans_asmt_bldgArea_path = join(
zillow_trans_path, this_st_fips, "ZAsmt", "BuildingAreas.txt"
)
asmt_asmt_bldgArea_path = join(
zillow_as_path, this_st_fips, "ZAsmt", "BuildingAreas.txt"
)
out_path = join(ztrax_dir_int, "outcomes_all_{}_{}.pickle".format(app, this_st))
out_path_agg = out_path.replace(".pickle", "_aggregated.pickle")
if isfile(out_path_agg):
return pd.read_pickle(out_path_agg)
## ##################
## TRANSACTION/ZTRANS
## ##################
print("Loading trans/trans/main", this_st)
## Main
it = pd.read_csv(
trans_trans_main_path,
delimiter="|",
usecols=[0, 6, 24, 25, 30],
names=["rowID", "date", "price", "sale_code", "if_xfer"],
index_col=0,
chunksize=chunksize,
dtype={
"rowID": int,
"price": float,
"sale_code": str,
"if_xfer": str,
"date": str,
},
quoting=3,
)
def get_trans_trans_main(trans_info):
# drop missing dates, 0-priced sales, and intra-family transfers
trans_info = trans_info[
(trans_info["date"] != "")
& (trans_info["date"].notnull())
& (trans_info["price"] > 0)
& (trans_info["if_xfer"] != "Y")
]
trans_info["date"] = pd.to_datetime(
trans_info["date"], format="%Y-%m-%d", errors="coerce"
)
# keep only sales after date
trans_info = trans_info[
trans_info.date
> pd.to_datetime("{}-01-01".format(settings["cutoff_yr"]), errors="coerce")
]
# drop if_xfer column
trans_info = trans_info.drop(columns="if_xfer")
return trans_info
trans_trans_main = get_df(get_trans_trans_main, it)
print("Loading trans/trans/propertyInfo", this_st)
## Prop
it = pd.read_csv(
trans_trans_prop_path,
delimiter="|",
usecols=[0, 52, 53, 64],
names=["rowID", "lat", "lon", "parcelID"],
index_col=0,
dtype={"rowID": int, "lat": str, "lon": str, "parcelID": str},
chunksize=chunksize,
quoting=3,
)
def get_trans_trans_prop(prop_info):
prop_info[["lon", "lat"]] = prop_info[["lon", "lat"]].apply(
pd.to_numeric, errors="coerce"
)
return prop_info.dropna(how="all")
trans_trans_prop = get_df(get_trans_trans_prop, it)
## Merge prop and main
merged = trans_trans_main.join(trans_trans_prop, how="left")
del trans_trans_main, trans_trans_prop
## Keep only if it has parcel ID in order
# to match to land use code
merged = merged.dropna(subset=["parcelID"])
## set index to parcelID b/c no longer need rowID
merged = merged.set_index("parcelID", drop=True)
## ##################
## TRANSACTION/ZASMT
## ##################
## Main
print("Loading trans/asmt/main", this_st)
it = pd.read_csv(
trans_asmt_main_path,
delimiter="|",
usecols=[0, 1, 81, 82],
names=["rowID", "parcelID", "lat_asmt", "lon_asmt"],
quoting=3,
dtype={"rowID": str, "parcelID": str, "lat_asmt": str, "lon_asmt": str},
chunksize=chunksize,
)
def get_asmt_main(asmt_info):
asmt_info = hash_rowid(asmt_info)
asmt_info[["lon_asmt", "lat_asmt"]] = asmt_info[["lon_asmt", "lat_asmt"]].apply(
pd.to_numeric, errors="coerce"
)
return asmt_info
trans_asmt_main = get_df(get_asmt_main, it)
## Building
print("Loading trans/asmt/building", this_st)
it = pd.read_csv(
trans_asmt_bldg_path,
delimiter="|",
usecols=[0, 5],
names=["rowID", "landuse"],
quoting=3,
infer_datetime_format=True,
dtype={"rowID": str, "landuse": str},
chunksize=chunksize,
)
def get_asmt_bldg(bldg_info):
bldg_info = hash_rowid(bldg_info)
bldg_info = bldg_info.dropna()
bldg_info.landuse = bldg_info.landuse.apply(lambda x: x[:2])
bldg_info = bldg_info[
(bldg_info.landuse.isin(settings["use_codes_include"]))
& ~(bldg_info.landuse.isin(settings["use_codes_not_include"]))
]
return bldg_info
trans_asmt_bldg = get_df(get_asmt_bldg, it)
res_bldgs = trans_asmt_bldg.index
## Reindex main to only have residential buildings
trans_asmt_main = trans_asmt_main.reindex(res_bldgs).dropna(subset=["parcelID"])
## Building Area
print("Loading trans/asmt/buildingArea", this_st)
it = pd.read_csv(
trans_asmt_bldgArea_path,
delimiter="|",
usecols=[0, 1, 4],
names=["rowID", "imp", "sqft"],
dtype={"rowID": str, "imp": int, "sqft": float},
chunksize=chunksize,
quoting=3,
)
def get_asmt_bldgArea(asmt_bldgArea):
asmt_bldgArea = hash_rowid(asmt_bldgArea, index_cols=["rowID", "imp"])
# take whatever is the largest building area code as measure of sqfootage
asmt_bldgArea = asmt_bldgArea.groupby(level=[0, 1]).max()
# sum over multiple improvements on land
asmt_bldgArea = asmt_bldgArea.groupby(level=0).sum()
return asmt_bldgArea
trans_asmt_bldgArea = get_df(get_asmt_bldgArea, it)
# Merge sqft data into main
trans_asmt_main = trans_asmt_main.join(trans_asmt_bldgArea, how="left")
## Set index to parcelID
trans_asmt_main = trans_asmt_main.set_index("parcelID", drop=True)
## Inner merge with trans/trans to get properties with price data
# and residential classification
merged_trans = merged.join(trans_asmt_main, how="inner")
del trans_asmt_main, trans_asmt_bldg, trans_asmt_bldgArea, res_bldgs
# fill in lat/lon from trans/asmt/main data
for l in ["lat", "lon"]:
merged_trans[l] = merged_trans[l].fillna(merged_trans[l + "_asmt"])
del merged_trans[l + "_asmt"]
## ##################
## ASSESSOR/ZASMT
## ##################
## Main
print("Loading asessor/asmt/main", this_st)
it = pd.read_csv(
asmt_asmt_main_path,
delimiter="|",
usecols=[0, 1, 6, 81, 82],
names=["rowID", "parcelID", "extract_date", "lat_asmt", "lon_asmt"],
quoting=3,
chunksize=chunksize,
dtype={
"rowID": str,
"parcelID": str,
"extract_date": str,
"lat_asmt": str,
"lon_asmt": str,
},
)
def get_asmt_asmt_main(asmt2_info):
asmt2_info = hash_rowid(asmt2_info)
asmt2_info[["lat_asmt", "lon_asmt"]] = asmt2_info[
["lat_asmt", "lon_asmt"]
].apply(pd.to_numeric, errors="coerce")
asmt2_info["extract_date"] = pd.to_datetime(
asmt2_info["extract_date"], format="%m%Y", errors="coerce"
)
# drop if rowID, parcelID or lat/lon missing
asmt2_info = asmt2_info[
(asmt2_info.index.notnull())
& (asmt2_info[["lat_asmt", "lon_asmt", "parcelID"]].notnull().all(axis=1))
]
return asmt2_info
asmt_asmt_main = get_df(get_asmt_asmt_main, it, print_progress=True)
print("Completed assessor/asmt/main", this_st)
## Building
print("Loading assessor/asmt/building", this_st)
it = pd.read_csv(
asmt_asmt_bldg_path,
delimiter="|",
usecols=[0, 5],
names=["rowID", "landuse"],
quoting=3,
infer_datetime_format=True,
dtype={"rowID": str, "landuse": str},
chunksize=chunksize,
)
asmt_asmt_bldg = get_df(get_asmt_bldg, it)
res_bldgs_asmt = asmt_asmt_bldg.index
# Reindex main to only have residential buildings
asmt_asmt_main = asmt_asmt_main.reindex(res_bldgs_asmt).dropna()
# Take only the latest extraction date by parcel ID
# (theoretically most up-to-date location)
# IB 1/23/19: More thorough approach would be to match to sales
# first and THEN take extraction date closest to sale date
asmt_asmt_main = (
asmt_asmt_main.sort_values("extract_date")
.reset_index(drop=False)
.groupby("parcelID")
.last()
.reset_index(drop=False)
.set_index("rowID", drop=True)
)
## Building Area
print("Loading assessor/asmt/buildingArea", this_st)
it = pd.read_csv(
asmt_asmt_bldgArea_path,
delimiter="|",
usecols=[0, 1, 4],
names=["rowID", "imp", "sqft"],
dtype={"rowID": str, "imp": int, "sqft": float},
chunksize=chunksize,
quoting=3,
)
asmt_asmt_bldgArea = get_df(get_asmt_bldgArea, it)
# Merge sqft data into main
asmt_asmt_main = asmt_asmt_main.join(asmt_asmt_bldgArea, how="left").set_index(
"parcelID", drop=True
)
## Inner merge with trans/trans to get properties with price data
# and residential classification
merged_asmt = merged.join(asmt_asmt_main, how="inner")
del asmt_asmt_main, asmt_asmt_bldg, res_bldgs_asmt, asmt_asmt_bldgArea
# fill in lat/lon from trans/asmt/main data
for l in ["lat", "lon"]:
merged_asmt[l] = merged_asmt[l].fillna(merged_asmt[l + "_asmt"])
del merged_asmt[l + "_asmt"]
## ##################
## COMBINE ALL FILES
## ##################
merged_all = merged_trans.join(merged_asmt, how="outer", rsuffix="_asmt")
del merged_trans, merged_asmt
# merge data from asmt/asmt and trans/asmt
cols = ["date", "price", "sale_code", "lat", "lon", "sqft"]
cols_asmt = [c + "_asmt" for c in cols]
change_dict = dict([(cols_asmt[cx], c) for cx, c in enumerate(cols)])
merged_all = merged_all.loc[:, cols].fillna(
merged_all.loc[:, cols_asmt].rename(columns=change_dict)
)
# drop any missing that remained
merged_all = merged_all.dropna(subset=["price", "lat", "lon"])
# get column for price only if sqft is not null
merged_all["price_hasSize"] = merged_all["price"].where(
merged_all["sqft"].notnull()
)
# add state column and drop unneeded sale code
merged_all["state"] = this_st
merged_all = merged_all.drop(columns=["sale_code"])
## aggregating ##
merged_all = merged_all.sort_values(by="lon")
merged_all["j"] = pd.merge_asof(
merged_all[["lon"]], lons, left_on="lon", direction="nearest", right_index=True
).loc[:, "j"]
merged_all = merged_all.sort_values(by="lat")
merged_all["i"] = pd.merge_asof(
merged_all[["lat"]], lats, left_on="lat", direction="nearest", right_index=True
).loc[:, "i"]
# save merged but not aggregated/sampled data
merged_all.to_pickle(out_path)
## ##################
## QA/QC + Aggregation
## ##################
price_cols = ["price", "price_per_sqft"]
### QA/QC
# Drop if <= $10k
merged_all = merged_all[merged_all["price"] > 10000]
# drop price/sqft if sqft <= 100
merged_all["sqft"] = merged_all["sqft"].where(merged_all["sqft"] > 100)
merged_all["price_per_sqft"] = merged_all["price_hasSize"] / merged_all["sqft"]
# drop price/sqft if price/sqft <= $10
merged_all["price_per_sqft"] = merged_all["price_per_sqft"].where(
merged_all["price_per_sqft"] > 10
)
# Drop obs over Kth percentile by state
tile = cfg.housing["data"]["ztrax"]["pctile_clip"]
maxs = merged_all[price_cols].quantile(tile)
merged_all.loc[:, price_cols] = merged_all.loc[:, price_cols].where(
merged_all.loc[:, price_cols] <= maxs
)
merged_all = merged_all.dropna(subset=price_cols, how="all")
# Keep only latest sale
merged_all = merged_all.sort_values(by="date")
merged_all = merged_all[~merged_all.index.duplicated(keep="last")]
### Aggregate over images
# Keep just the columns you need
merged_all = merged_all.drop(columns=["date", "sqft", "price_hasSize", "state"])
## aggregating ##
grouped = merged_all[price_cols + ["i", "j"]].groupby(["i", "j"])
# collapse to grid cell using mean and geometric mean
agg_val = grouped.mean()
agg_val.columns = price_cols
agg_val_geom = grouped.agg(lambda x: gmean(x.dropna()))
agg_val_geom.columns = [p + "_geomMean" for p in price_cols]
agg_count = grouped.count()
count_cols = ["n_obs_" + p for p in price_cols]
agg_count.columns = count_cols
out_all = pd.concat([agg_val, agg_val_geom, agg_count], axis=1)
# take just sample cells (sample_ix has sampled id's)
out_df = out_all.reindex(index=sample_ix)
out_df = out_df.dropna(subset=price_cols, how="all")
# If the number of obs col is missing, there are no houses in that image tile
out_df.loc[:, count_cols] = out_df.loc[:, count_cols].fillna(0)
# combine multiindex
out_df.index = (
out_df.index.get_level_values(0).astype(str)
+ ","
+ out_df.index.get_level_values(1).astype(str)
)
out_df.index.name = "ID"
for col in count_cols:
out_df[col] = out_df[col].astype(int)
# save version with just clipped values
out_df.to_pickle(out_path.replace(".pickle", "_aggregated.pickle"))
return out_df
ftrs = client.map(
get_sampled_val,
state_gdf.HASC_1.values,
state_gdf.geometry.values,
chunksize=chunksize,
)
progress(ftrs)
data_df = pd.concat(client.gather(ftrs))
cluster.close()
client.close()
# Deal with grid cells that overlapped states
data_df.loc[:, ["price", "price_per_sqft"]] = (
data_df.loc[:, ["price", "price_per_sqft"]]
* data_df.loc[:, ["n_obs_price", "n_obs_price_per_sqft"]].values
)
data_df.loc[:, ["price_geomMean", "price_per_sqft_geomMean"]] = (
np.log(data_df.loc[:, ["price_geomMean", "price_per_sqft_geomMean"]])
* data_df.loc[:, ["n_obs_price", "n_obs_price_per_sqft"]].values
)
data_df = data_df.groupby(level=0).sum()
data_df.loc[:, ["price", "price_per_sqft"]] = (
data_df.loc[:, ["price", "price_per_sqft"]]
/ data_df.loc[:, ["n_obs_price", "n_obs_price_per_sqft"]].values
)
data_df.loc[:, ["price_geomMean", "price_per_sqft_geomMean"]] = np.exp(
data_df.loc[:, ["price_geomMean", "price_per_sqft_geomMean"]]
/ data_df.loc[:, ["n_obs_price", "n_obs_price_per_sqft"]].values
)
# save
data_df.to_csv(cfg.outcomes_fpath, index=True)
```
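After gathering all states, the cell above recombines grid cells that straddle state borders: each arithmetic mean is turned back into a count-weighted sum, and each geometric mean into a count-weighted sum of logs, before re-normalizing by the summed counts. This rests on a standard identity, sketched here with toy numbers (a self-contained illustration, not part of the pipeline):

```python
import numpy as np

def combine_means(means, counts):
    # arithmetic mean of pooled groups = count-weighted mean of group means
    means, counts = np.asarray(means, float), np.asarray(counts, float)
    return (means * counts).sum() / counts.sum()

def combine_geomeans(gmeans, counts):
    # geometric means pool the same way, but in log space
    gmeans, counts = np.asarray(gmeans, float), np.asarray(counts, float)
    return np.exp((np.log(gmeans) * counts).sum() / counts.sum())

gmean = lambda x: np.exp(np.log(np.asarray(x, float)).mean())

# toy check: two "states" sharing a grid cell
a = [100.0, 200.0, 400.0]
b = [50.0, 800.0]
combined = combine_geomeans([gmean(a), gmean(b)], [len(a), len(b)])
# equals the geometric mean of the pooled observations
assert np.isclose(combined, gmean(a + b))
```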
### Coverage stats
```
# Give a sense of coverage
total = data_df.shape[0]
coverage_price = data_df.count()["price"] / total
coverage_sqft = data_df.count()["price_per_sqft"] / total
print(total, coverage_price, coverage_sqft)
data_df.n_obs_price_per_sqft.hist(bins=range(0, 1000, 10))
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
dataset = pd.read_csv('breastdata.csv',names=['id','thickness','size_uniformity',
'shape_uniformity','adhesion','cellsize',
'nuclei','chromatin','nucleoli','mitoses',
'type'])
dataset = dataset.drop('id',axis=1)
#data cleaning
#nuclei attribute has some data which contains '?'
dataset.loc[dataset['nuclei']=='?','nuclei'] = np.nan
dataset = dataset.dropna()
dataset['nuclei'] = dataset['nuclei'].astype('int')
dataset.head()
dataset.info()
dataset.describe()
#sns.pairplot(dataset)
def test_train_split(dataset, test_size = 0.25):
train_size = 1-test_size
#Separation of values for stratified dataset
truedf = dataset[dataset.iloc[:,-1] == 2]
falsedf = dataset[dataset.iloc[:,-1] == 4]
#concatenating 75% of the true and false rows for the train set and the remainder for the test set
train_set = pd.concat([truedf[0:int(truedf.count()[0]*train_size)],falsedf[0:int(falsedf.count()[0]*train_size)]])
test_set = pd.concat([truedf[int(truedf.count()[0]*train_size):],falsedf[int(falsedf.count()[0]*train_size):]])
#X_train = train.drop(train.columns[-1], axis=1)
#y_train = train.drop(train.columns[:len(df.columns)-1], axis=1)
#X_test = test.drop(test.columns[-1], axis=1)
#y_test = test.drop(test.columns[:len(df.columns)-1], axis=1)
#return X_train,y_train,X_test,y_test
return train_set,test_set
train,test = test_train_split(dataset)
train_true_mean = train[train.iloc[:,-1] == 2].mean().values[0:-1]
train_true_var = train[train.iloc[:,-1] == 2].var().values[0:-1]
train_false_mean = train[train.iloc[:,-1] == 4].mean().values[0:-1]
train_false_var = train[train.iloc[:,-1] == 4].var().values[0:-1]
#(train_true_mean,train_true_var,train_false_mean,train_false_var )
p_true = (train[train.iloc[:,-1] == 2].count()/train.count())[0]
p_false = 1 - p_true
def probability(x,mean,var):
p = 1/(np.sqrt(2*np.pi*var)) * np.exp((-(x-mean)**2)/(2*var))
return np.prod(p)
def argmax_probability(data):
#for true
y_new_true = probability(data,train_true_mean,train_true_var)* p_true
#for false
y_new_false = probability(data,train_false_mean,train_false_var)* p_false
if (y_new_true>y_new_false):
return 2
else:
return 4
predictions = []
for index, row in test.iterrows():
data = row.values[:-1]
predictions.append(argmax_probability(data))
#print(len(predictions))
predicted = np.array(predictions)
actual = test['type'].values
from sklearn.metrics import confusion_matrix,classification_report
print(confusion_matrix(actual,predicted))
print(classification_report(actual,predicted))
```
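The hand-rolled Gaussian naive Bayes above boils down to two pieces: a per-feature normal density multiplied across features, and an argmax over class posteriors. The same math on toy data (a self-contained sketch assuming nothing from the notebook):

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    # per-feature normal density, multiplied across features (naive independence)
    p = 1.0 / np.sqrt(2 * np.pi * var) * np.exp(-((x - mean) ** 2) / (2 * var))
    return np.prod(p)

def predict(x, class_stats, priors):
    # class_stats: {label: (mean_vector, var_vector)}; pick the highest posterior
    scores = {c: gaussian_pdf(x, m, v) * priors[c] for c, (m, v) in class_stats.items()}
    return max(scores, key=scores.get)

# toy 1-feature problem: class 0 centered at 0, class 1 centered at 10
stats = {0: (np.array([0.0]), np.array([1.0])),
         1: (np.array([10.0]), np.array([1.0]))}
priors = {0: 0.5, 1: 0.5}
print(predict(np.array([1.0]), stats, priors))  # 0
print(predict(np.array([9.0]), stats, priors))  # 1
```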
# C - Loading, Saving and Freezing Embeddings
This notebook will cover: how to load custom word embeddings in TorchText, how to save all the embeddings we learn during training and how to freeze/unfreeze embeddings during training.
## Loading Custom Embeddings
First, lets look at loading a custom set of embeddings.
Your embeddings need to be formatted so each line starts with the word followed by the values of the embedding vector, all space separated. All vectors need to have the same number of elements.
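As a quick illustration of that format, here is a sketch that writes a tiny embeddings file (the filename and words are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
words = ["good", "bad", "movie"]  # hypothetical vocabulary
dim = 20

with open("tiny_embeddings.txt", "w") as f:
    for w in words:
        vec = " ".join(f"{x:.4f}" for x in rng.normal(size=dim))
        f.write(f"{w} {vec}\n")  # the word, then `dim` space-separated values

# every line: the word followed by exactly `dim` numbers
for line in open("tiny_embeddings.txt"):
    assert len(line.split()) == dim + 1
```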
Let's look at the custom embeddings provided by these tutorials. These are 20-dimensional embeddings for 7 words.
```
with open('custom_embeddings/embeddings.txt', 'r') as f:
print(f.read())
```
Now, let's setup the fields.
```
import torch
from torchtext import data
SEED = 1234
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
TEXT = data.Field(tokenize = 'spacy')
LABEL = data.LabelField(dtype = torch.float)
```
Then, we'll load our dataset and create the validation set.
```
from torchtext import datasets
import random
train_data, test_data = datasets.IMDB.splits(TEXT, LABEL)
train_data, valid_data = train_data.split(random_state = random.seed(SEED))
```
We can only load our custom embeddings after they have been turned into a `Vectors` object.
We create a `Vectors` object by passing it the location of the embeddings (`name`), a location for the cached embeddings (`cache`) and a function that will later initialize tokens in our dataset that aren't within our embeddings (`unk_init`). As we have done in previous notebooks, we have initialized these to $\mathcal{N}(0,1)$.
```
import torchtext.vocab as vocab
custom_embeddings = vocab.Vectors(name = 'custom_embeddings/embeddings.txt',
cache = 'custom_embeddings',
unk_init = torch.Tensor.normal_)
```
To check the embeddings have loaded correctly we can print out the words loaded from our custom embedding.
```
print(custom_embeddings.stoi)
```
We can also directly print out the embedding values.
```
print(custom_embeddings.vectors)
```
We then build our vocabulary, passing our `Vectors` object.
Note that the `unk_init` should be declared when creating our `Vectors`, and not here!
```
MAX_VOCAB_SIZE = 25_000
TEXT.build_vocab(train_data,
max_size = MAX_VOCAB_SIZE,
vectors = custom_embeddings)
LABEL.build_vocab(train_data)
```
Now our vocabulary vectors for the words in our custom embeddings should match what we loaded.
```
TEXT.vocab.vectors[TEXT.vocab.stoi['good']]
TEXT.vocab.vectors[TEXT.vocab.stoi['bad']]
```
Words that are in our dataset's vocabulary but not in our custom embeddings are initialized by the `unk_init` function we passed earlier, $\mathcal{N}(0,1)$. They are also the same size as our custom embeddings (20-dimensional).
```
TEXT.vocab.vectors[TEXT.vocab.stoi['kwjibo']]
```
The rest of the set-up is the same as it is when using the GloVe vectors, with the next step being to set-up the iterators.
```
BATCH_SIZE = 64
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size = BATCH_SIZE,
device = device)
```
Then, we define our model.
```
import torch.nn as nn
import torch.nn.functional as F
class CNN(nn.Module):
def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim,
dropout, pad_idx):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx = pad_idx)
self.convs = nn.ModuleList([
nn.Conv2d(in_channels = 1,
out_channels = n_filters,
kernel_size = (fs, embedding_dim))
for fs in filter_sizes
])
self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text):
#text = [sent len, batch size]
text = text.permute(1, 0)
#text = [batch size, sent len]
embedded = self.embedding(text)
#embedded = [batch size, sent len, emb dim]
embedded = embedded.unsqueeze(1)
#embedded = [batch size, 1, sent len, emb dim]
conved = [F.relu(conv(embedded)).squeeze(3) for conv in self.convs]
#conv_n = [batch size, n_filters, sent len - filter_sizes[n]]
pooled = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in conved]
#pooled_n = [batch size, n_filters]
cat = self.dropout(torch.cat(pooled, dim = 1))
#cat = [batch size, n_filters * len(filter_sizes)]
return self.fc(cat)
```
We then initialize our model, making sure `EMBEDDING_DIM` is the same as our custom embedding dimensionality, i.e. 20.
```
INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 20
N_FILTERS = 100
FILTER_SIZES = [3,4,5]
OUTPUT_DIM = 1
DROPOUT = 0.5
PAD_IDX = TEXT.vocab.stoi[TEXT.pad_token]
model = CNN(INPUT_DIM, EMBEDDING_DIM, N_FILTERS, FILTER_SIZES, OUTPUT_DIM, DROPOUT, PAD_IDX)
```
We have far fewer parameters in this model due to the smaller embedding size used.
```
def count_parameters(model):
return sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f'The model has {count_parameters(model):,} trainable parameters')
```
Next, we initialize our embedding layer to use our vocabulary vectors.
```
embeddings = TEXT.vocab.vectors
model.embedding.weight.data.copy_(embeddings)
```
Then, we initialize the unknown and padding token embeddings to all zeros.
```
UNK_IDX = TEXT.vocab.stoi[TEXT.unk_token]
model.embedding.weight.data[UNK_IDX] = torch.zeros(EMBEDDING_DIM)
model.embedding.weight.data[PAD_IDX] = torch.zeros(EMBEDDING_DIM)
```
Following standard procedure, we create our optimizer.
```
import torch.optim as optim
optimizer = optim.Adam(model.parameters())
```
Define our loss function (criterion).
```
criterion = nn.BCEWithLogitsLoss()
```
Then place the loss function and the model on the GPU.
```
model = model.to(device)
criterion = criterion.to(device)
```
Create the function to calculate accuracy.
```
def binary_accuracy(preds, y):
rounded_preds = torch.round(torch.sigmoid(preds))
correct = (rounded_preds == y).float()
acc = correct.sum() / len(correct)
return acc
```
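The accuracy calculation is just thresholding at 0.5 after a sigmoid. A plain-Python restatement of the same arithmetic (renamed so it doesn't shadow the tensor version above):

```python
import math

def binary_accuracy_plain(logits, labels):
    # round(sigmoid(logit)) gives the predicted class; compare to the label
    preds = [round(1 / (1 + math.exp(-z))) for z in logits]
    correct = sum(p == y for p, y in zip(preds, labels))
    return correct / len(labels)

print(binary_accuracy_plain([2.0, -1.0, 0.5, -3.0], [1, 0, 0, 0]))  # 0.75
```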
Then implement our training function...
```
def train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
optimizer.zero_grad()
predictions = model(batch.text).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
```
...evaluation function...
```
def evaluate(model, iterator, criterion):
epoch_loss = 0
epoch_acc = 0
model.eval()
with torch.no_grad():
for batch in iterator:
predictions = model(batch.text).squeeze(1)
loss = criterion(predictions, batch.label)
acc = binary_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
```
...and our helpful function that tells us how long an epoch takes.
```
import time
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
```
We've finally reached training our model!
## Freezing and Unfreezing Embeddings
We're going to train our model for 10 epochs. During the first 5 epochs we are going to freeze the weights (parameters) of our embedding layer. For the last 5 epochs we'll allow our embeddings to be trained.
Why would we ever want to do this? Sometimes the pre-trained word embeddings we use will already be good enough and won't need to be fine-tuned with our model. If we keep the embeddings frozen then we don't have to calculate the gradients and update the weights for these parameters, giving us faster training times. This doesn't really apply for the model used here, but we're mainly covering it to show how it's done. Another reason is that if our model has a large amount of parameters it may make training difficult, so by freezing our pre-trained embeddings we reduce the amount of parameters needing to be learned.
To freeze the embedding weights, we set `model.embedding.weight.requires_grad` to `False`. This will cause no gradients to be calculated for the weights in the embedding layer, and thus no parameters will be updated when `optimizer.step()` is called.
Then, during training we check if `FREEZE_FOR` (which we set to 5) epochs have passed. If they have then we set `model.embedding.weight.requires_grad` to `True`, telling PyTorch that we should calculate gradients in the embedding layer and update them with our optimizer.
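The effect of toggling `requires_grad` can be mimicked with a toy optimizer that simply skips frozen parameters (a plain-Python sketch of the idea, not PyTorch itself — in real PyTorch the gradient is never computed for frozen parameters in the first place):

```python
class Param:
    def __init__(self, value):
        self.value = value
        self.grad = 0.0
        self.requires_grad = True  # mirrors torch's flag

def sgd_step(params, lr=0.1):
    for p in params:
        if p.requires_grad:  # frozen params are simply not updated
            p.value -= lr * p.grad

embedding = Param(1.0)
fc = Param(1.0)
embedding.requires_grad = False  # "freeze" the embedding

embedding.grad = fc.grad = 1.0
sgd_step([embedding, fc])
print(embedding.value, fc.value)  # 1.0 0.9
```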
```
N_EPOCHS = 10
FREEZE_FOR = 5
best_valid_loss = float('inf')
#freeze embeddings
model.embedding.weight.requires_grad = unfrozen = False
for epoch in range(N_EPOCHS):
start_time = time.time()
train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
print(f'Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s | Frozen? {not unfrozen}')
print(f'\tTrain Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}%')
print(f'\t Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}%')
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tutC-model.pt')
if (epoch + 1) >= FREEZE_FOR:
#unfreeze embeddings
model.embedding.weight.requires_grad = unfrozen = True
```
Another option would be to unfreeze the embeddings whenever the validation loss stops decreasing, using the following code snippet instead of the `FREEZE_FOR` condition:
```python
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'tutC-model.pt')
else:
#unfreeze embeddings
model.embedding.weight.requires_grad = unfrozen = True
```
```
model.load_state_dict(torch.load('tutC-model.pt'))
test_loss, test_acc = evaluate(model, test_iterator, criterion)
print(f'Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}%')
```
## Saving Embeddings
We might want to re-use the embeddings we have trained here with another model. To do this, we'll write a function that will loop through our vocabulary, getting the word and embedding for each word, writing them to a text file in the same format as our custom embeddings so they can be used with TorchText again.
Currently, TorchText Vectors seem to have issues with loading certain unicode words, so we skip these by only writing words without unicode symbols. **If you know a better solution to this then let me know**
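The skip below relies on the fact that UTF-8 encodes ASCII characters as one byte each, so `len(word) != len(word.encode())` flags any word containing a multi-byte (non-ASCII) character. A quick illustration:

```python
def has_non_ascii(word):
    # any multi-byte UTF-8 character makes the byte string longer
    # than the character string
    return len(word) != len(word.encode())

print(has_non_ascii("good"))   # False
print(has_non_ascii("café"))   # True
```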
```
from tqdm import tqdm
def write_embeddings(path, embeddings, vocab):
with open(path, 'w') as f:
for i, embedding in enumerate(tqdm(embeddings)):
word = vocab.itos[i]
#skip words with unicode symbols
if len(word) != len(word.encode()):
continue
vector = ' '.join([str(i) for i in embedding.tolist()])
f.write(f'{word} {vector}\n')
```
We'll write our embeddings to `trained_embeddings.txt`.
```
write_embeddings('custom_embeddings/trained_embeddings.txt',
model.embedding.weight.data,
TEXT.vocab)
```
To double check they've written correctly, we can load them as `Vectors`.
```
trained_embeddings = vocab.Vectors(name = 'custom_embeddings/trained_embeddings.txt',
cache = 'custom_embeddings',
unk_init = torch.Tensor.normal_)
```
Finally, let's print out the first 5 rows of our loaded vectors and the same from our model's embeddings weights, checking they are the same values.
```
print(trained_embeddings.vectors[:5])
print(model.embedding.weight.data[:5])
```
All looks good! The only difference between the two is the removal of the ~50 words in the vocabulary that contain unicode symbols.
```
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/My Drive/Đồ án 2 (Sentiment Analysis Vietnamese)
!pip install flask_ngrok
!pip install gevent
!pip install pyvi
from warnings import simplefilter
simplefilter(action='ignore', category=FutureWarning)
from sklearn import metrics
from sklearn import preprocessing
from pyvi import ViTokenizer, ViPosTagger # Vietnamese NLP library, used here for preprocessing
from gensim.utils import decode_htmlentities
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
import gensim
import os
import re
import string
import codecs
path_nag = 'sentiment_dicts/nag.txt'
path_pos = 'sentiment_dicts/pos.txt'
path_not = 'sentiment_dicts/not.txt'
with codecs.open(path_nag, 'r', encoding='UTF-8') as f:
nag = f.readlines()
nag_list = [n.replace('\n', '') for n in nag]
with codecs.open(path_pos, 'r', encoding='UTF-8') as f:
pos = f.readlines()
pos_list = [n.replace('\n', '') for n in pos]
with codecs.open(path_not, 'r', encoding='UTF-8') as f:
not_ = f.readlines()
not_list = [n.replace('\n', '') for n in not_]
def normalize_text(text):
#Collapse elongated (repeated) characters, e.g. đẹppppppp
text = re.sub(r'([A-Z])\1+', lambda m: m.group(1).upper(), text, flags=re.IGNORECASE)
# Convert to lowercase
text = text.lower()
#Normalize Vietnamese spelling; handle emoji; normalize English words and slang
replace_list = {
'òa': 'oà', 'óa': 'oá', 'ỏa': 'oả', 'õa': 'oã', 'ọa': 'oạ', 'òe': 'oè', 'óe': 'oé','ỏe': 'oẻ',
'õe': 'oẽ', 'ọe': 'oẹ', 'ùy': 'uỳ', 'úy': 'uý', 'ủy': 'uỷ', 'ũy': 'uỹ','ụy': 'uỵ', 'uả': 'ủa',
'ả': 'ả', 'ố': 'ố', 'u´': 'ố','ỗ': 'ỗ', 'ồ': 'ồ', 'ổ': 'ổ', 'ấ': 'ấ', 'ẫ': 'ẫ', 'ẩ': 'ẩ',
'ầ': 'ầ', 'ỏ': 'ỏ', 'ề': 'ề','ễ': 'ễ', 'ắ': 'ắ', 'ủ': 'ủ', 'ế': 'ế', 'ở': 'ở', 'ỉ': 'ỉ',
'ẻ': 'ẻ', 'àk': u' à ','aˋ': 'à', 'iˋ': 'ì', 'ă´': 'ắ','ử': 'ử', 'e˜': 'ẽ', 'y˜': 'ỹ', 'a´': 'á',
#Map emoji icons to two classes: positive or negative
"👹": "nagative", "👻": "positive", "💃": "positive",'🤙': ' positive ', '👍': ' positive ',
"💄": "positive", "💎": "positive", "💩": "positive","😕": "nagative", "😱": "nagative", "😸": "positive",
"😾": "nagative", "🚫": "nagative", "🤬": "nagative","🧚": "positive", "🧡": "positive",'🐶':' positive ',
'👎': ' nagative ', '😣': ' nagative ','✨': ' positive ', '❣': ' positive ','☀': ' positive ',
'♥': ' positive ', '🤩': ' positive ', 'like': ' positive ', '💌': ' positive ',
'🤣': ' positive ',"😅": 'positive', '🖤': ' positive ', '🤤': ' positive ', ':(': ' nagative ', '😢': ' nagative ',
'❤': ' positive ', '😍': ' positive ', '😘': ' positive ', '😪': ' nagative ', '😊': ' positive ',
'?': ' ? ', '😁': ' positive ', '💖': ' positive ', '😟': ' nagative ', '😭': ' nagative ',
'💯': ' positive ', '💗': ' positive ', '♡': ' positive ', '💜': ' positive ', '🤗': ' positive ',
'^^': ' positive ', '😨': ' nagative ', '☺': ' positive ', '💋': ' positive ', '👌': ' positive ',
'😖': ' nagative ', '😀': ' positive ', ':((': ' nagative ', '😡': ' nagative ', '😠': ' nagative ',
'😒': ' nagative ', '🙂': ' positive ', '😏': ' nagative ', '😝': ' positive ', '😄': ' positive ',
'😙': ' positive ', '😤': ' nagative ', '😎': ' positive ', '😆': ' positive ', '💚': ' positive ',
'✌': ' positive ', '💕': ' positive ', '😞': ' nagative ', '😓': ' nagative ', '️🆗️': ' positive ',
'😉': ' positive ', '😂': ' positive ', ':v': ' positive ', '=))': ' positive ', '😋': ' positive ',
'💓': ' positive ', '😐': ' nagative ', ':3': ' positive ', '😫': ' nagative ', '😥': ' nagative ',
'😃': ' positive ', '😬': 'positive', '😌':'positive', '💛': ' positive ', '🤝': ' positive ', '🎈': ' positive ',
'😗': ' positive ', '🤔': ' nagative ', '😑': ' nagative ', '🔥': ' nagative ', '🙏': ' nagative ',
'🆗': ' positive ', '😻': ' positive ', '💙': ' positive ', '💟': ' positive ',
'😚': ' positive ', '❌': ' nagative ', '👏': ' positive ', ';)': ' positive ', '<3': ' positive ',
'🌝': ' positive ', '🌷': ' positive ', '🌸': ' positive ', '🌺': ' positive ',
'🌼': ' positive ', '🍓': ' positive ', '🐅': ' positive ', '🐾': ' positive ', '👉': ' positive ',
'💐': ' positive ', '💞': ' positive ', '💥': ' positive ', '💪': ' positive ',
'💰': ' positive ', '😇': ' positive ', '😛': ' positive ', '😜': ' positive ',
'🙃': ' positive ', '🤑': ' positive ', '🤪': ' positive ','☹': ' nagative ', '💀': ' nagative ',
'😔': ' nagative ', '😧': ' nagative ', '😩': ' nagative ', '😰': ' nagative ', '😳': ' nagative ',
'😵': ' nagative ', '😶': ' nagative ', '🙁': ' nagative ',
#Normalize some sentiment words / English words
':))': ' positive ', ':)': ' positive ', 'ô kêi': ' ok ', 'okie': ' ok ', ' o kê ': ' ok ',
'okey': ' ok ', 'ôkê': ' ok ', 'oki': ' ok ', ' oke ': ' ok ',' okay':' ok ','okê':' ok ',
' tks ': u' cám ơn ', 'thks': u' cám ơn ', 'thanks': u' cám ơn ', 'ths': u' cám ơn ', 'thank': u' cám ơn ',
'⭐': 'star ', '*': 'star ', '🌟': 'star ', '🎉': u' positive ',
'kg ': u' không ','not': u' không ', u' kg ': u' không ', '"k ': u' không ',' kh ':u' không ','kô':u' không ','hok':u' không ',' kp ': u' không phải ',u' kô ': u' không ', '"ko ': u' không ', u' ko ': u' không ', u' k ': u' không ', 'khong': u' không ', u' hok ': u' không ',
'he he': ' positive ','hehe': ' positive ','hihi': ' positive ', 'haha': ' positive ', 'hjhj': ' positive ',
' lol ': ' nagative ',' cc ': ' nagative ','cute': u' dễ thương ','huhu': ' nagative ', ' vs ': u' với ', 'wa': ' quá ', 'wá': u' quá', 'j': u' gì ', '“': ' ',
' sz ': u' cỡ ', 'size': u' cỡ ', u' đx ': u' được ', 'dk': u' được ', 'dc': u' được ', 'đk': u' được ',
'đc': u' được ','authentic': u' chuẩn chính hãng ',u' aut ': u' chuẩn chính hãng ', u' auth ': u' chuẩn chính hãng ', 'thick': u' positive ', 'store': u' cửa hàng ',
'shop': u' cửa hàng ', 'sp': u' sản phẩm ', 'gud': u' tốt ','god': u' tốt ','wel done':' tốt ', 'good': u' tốt ', 'gút': u' tốt ',
'sấu': u' xấu ','gut': u' tốt ', u' tot ': u' tốt ', u' nice ': u' tốt ', 'perfect': 'rất tốt', 'bt': u' bình thường ',
'time': u' thời gian ', 'qá': u' quá ', u' ship ': u' giao hàng ', u' m ': u' mình ', u' mik ': u' mình ',
'ể': 'ể', 'product': 'sản phẩm', 'quality': 'chất lượng','chat':' chất ', 'excelent': 'hoàn hảo', 'bad': 'tệ','fresh': ' tươi ','sad': ' tệ ',
'date': u' hạn sử dụng ', 'hsd': u' hạn sử dụng ','quickly': u' nhanh ', 'quick': u' nhanh ','fast': u' nhanh ','delivery': u' giao hàng ',u' síp ': u' giao hàng ',
'beautiful': u' đẹp tuyệt vời ', u' tl ': u' trả lời ', u' r ': u' rồi ', u' shopE ': u' cửa hàng ',u' order ': u' đặt hàng ',
'chất lg': u' chất lượng ',u' sd ': u' sử dụng ',u' dt ': u' điện thoại ',u' nt ': u' nhắn tin ',u' tl ': u' trả lời ',u' sài ': u' xài ',u'bjo':u' bao giờ ',
'thik': u' thích ',u' sop ': u' cửa hàng ', ' fb ': ' facebook ', ' face ': ' facebook ', ' very ': u' rất ',u'quả ng ':u' quảng ',
'dep': u' đẹp ',u' xau ': u' xấu ','delicious': u' ngon ', u'hàg': u' hàng ', u'qủa': u' quả ',
'iu': u' yêu ','fake': u' giả mạo ', 'trl': 'trả lời', '><': u' positive ',
' por ': u' tệ ',' poor ': u' tệ ', 'ib':u' nhắn tin ', 'rep':u' trả lời ',u'fback':' feedback ','fedback':' feedback ',
#Map ratings below 3 stars to 1 star, and above 3 stars to 5 stars
'6 sao': ' 5star ','6 star': ' 5star ', '5star': ' 5star ','5 sao': ' 5star ','5sao': ' 5star ',
'starstarstarstarstar': ' 5star ', '1 sao': ' 1star ', '1sao': ' 1star ','2 sao':' 1star ','2sao':' 1star ',
'2 starstar':' 1star ','1star': ' 1star ', '0 sao': ' 1star ', '0star': ' 1star ',}
for k, v in replace_list.items():
text = text.replace(k, v)
# Remove punctuation (replace punctuation characters with spaces)
translator = str.maketrans(string.punctuation, ' ' * len(string.punctuation))
text = text.translate(translator)
text = ViTokenizer.tokenize(text)
texts = text.split()
len_text = len(texts)
texts = [t.replace('_', ' ') for t in texts]
for i in range(len_text):
cp_text = texts[i]
if cp_text in not_list: # Handle negation (e.g. "áo này chẳng đẹp" --> "áo này notpos")
numb_word = 2 if len_text - i - 1 >= 4 else len_text - i - 1
for j in range(numb_word):
if texts[i + j + 1] in pos_list:
texts[i] = 'notpos'
texts[i + j + 1] = ''
if texts[i + j + 1] in nag_list:
texts[i] = 'notnag'
texts[i + j + 1] = ''
else: #Add a feature token for sentiment words (e.g. "áo này đẹp" --> "áo này đẹp positive")
if cp_text in pos_list:
texts.append('positive')
elif cp_text in nag_list:
texts.append('nagative')
text = u' '.join(texts)
#Remove remaining stray characters
text = text.replace(u'"', u' ')
text = text.replace(u'️', u'')
text = text.replace('🏻','')
return text
import pickle
X_data = pickle.load(open('X_train.pkl', 'rb'))
y_data = pickle.load(open('y_train.pkl', 'rb'))
categorize=['Tích cực', 'Tiêu cực']
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
def BigClassifier(contents):
input_ = []
contents = normalize_text(contents)
input_.append(contents)
X_data.append(input_[0])
tfidf_vect = TfidfVectorizer(analyzer='word',max_features=30000)
tfidf_vect.fit(X_data)
X_data_tfidf = tfidf_vect.transform(X_data)
X_test_tfidf=X_data_tfidf[-1]
X_data_tfidf=X_data_tfidf[0:22520]
feature=tfidf_vect.get_feature_names()
classifier=LogisticRegression()
classifier.fit(X_data_tfidf, y_data)
test_predictions = classifier.predict(X_test_tfidf)[0]
return (categorize[test_predictions])
print(BigClassifier("Sản phẩm khong tot cho lắm"))
from flask import Flask, render_template, url_for, request
from flask_ngrok import run_with_ngrok
import re
app = Flask(__name__, template_folder = '/content/drive/My Drive/Đồ án 2 (Sentiment Analysis Vietnamese)/templates')
run_with_ngrok(app)
@app.route('/')
def index():
return render_template("classifier.html")
@app.route('/classifier')
def classify():
return render_template("classifier.html")
@app.route('/classifier', methods=['POST'])
def classify2():
if request.method == 'POST':
raw_text = request.form['rawtext']
results = BigClassifier(raw_text)
return render_template("classifier.html", results=results,raw_text=raw_text)
app.run()
```
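The negation handling inside `normalize_text` can be sketched in isolation: when a negation word is found, look ahead a couple of tokens and fold a following sentiment word into a `notpos`/`notnag` token. A simplified, self-contained version with toy word lists (the real lists come from the sentiment dictionaries loaded above):

```python
NOT_WORDS = {"không", "chẳng"}  # toy negation list
POS_WORDS = {"đẹp", "tốt"}      # toy positive list
NAG_WORDS = {"xấu", "tệ"}       # toy negative list (the notebook spells it "nag")

def mark_negation(tokens, window=2):
    tokens = list(tokens)
    for i, tok in enumerate(tokens):
        if tok in NOT_WORDS:
            # look at the next `window` tokens for a sentiment word
            for j in range(i + 1, min(i + 1 + window, len(tokens))):
                if tokens[j] in POS_WORDS:
                    tokens[i], tokens[j] = "notpos", ""
                elif tokens[j] in NAG_WORDS:
                    tokens[i], tokens[j] = "notnag", ""
    return [t for t in tokens if t]

print(mark_negation(["áo", "này", "chẳng", "đẹp"]))  # ['áo', 'này', 'notpos']
```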
<center><img src='https://www.intel.com/content/dam/develop/external/us/en/images/infosim-logo-746616.png' style="width:300px"></center>
# StableNet<sup>®</sup> Weather Map Statistics
## Introduction
This script adds statistics to Weather Maps, given certain parameters as input via a CSV file. We describe the required format of the input file and illustrate the script's workflow with an example. It is important to note that the column titles in the input CSV file may not differ from those in the example file; however, the order of the columns may vary.
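Because columns are identified by title rather than position, the input can be read with `csv.DictReader`. A minimal sketch (the column titles are taken from `normalized_entry` below; the file contents are invented for illustration):

```python
import csv, io

# stand-in for the real input file; the column order may vary
csv_text = """title,node or link,domain
CPU load,Node,Device
WAN traffic,l,interface
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
for row in rows:
    # mirror the normalization rule: a leading "n" means node, otherwise link
    kind = "node" if row["node or link"].lower().startswith("n") else "link"
    print(row["title"], kind)
```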
## Example
## Program
### Imports and code definitions
#### Import necessary python modules
```
import warnings
import requests
from requests.auth import HTTPBasicAuth
import getpass
from xml.etree import ElementTree
import xml.dom.minidom # for pretty printing XML
import re # for regular expressions
import json
import csv
import sys
```
#### Function to normalize CSV entries
```
def normalized_entry(entry, i):
    if cols[i] == 'lastvalue or measurementstat':
        return 'lastvalue' if entry.lower().startswith('l') else 'measurementstat'
    if cols[i] == 'statistic default state':
        return entry if entry != '' else '0'
    if cols[i] == 'showaslabel':
        return 'true' if entry.lower().startswith('t') else 'false'
    if cols[i] == 'metricscale add':
        return entry if entry != '' else '0'
    if cols[i] == 'metricscale multiply':
        return entry if entry != '' else '1'
    if cols[i] == 'time multiplier':
        return entry if entry != '' else '1'
    if cols[i] == 'time type':
        return entry if entry != '' else 'lastmonths'
    if cols[i] == 'offset multiplier':
        return entry if entry != '' else '0'
    if cols[i] == 'offset type':
        return entry if entry != '' else 'lastmonths'
    if cols[i] == 'node or link':
        return 'node' if entry.lower().startswith('n') else 'link'
    if cols[i] == 'source or destination':
        if entry == '':
            return ''
        return 'source' if entry.lower().startswith('s') else 'destination'
    if cols[i] == 'domain':
        if entry.lower().startswith('d'):
            return 'device'
        if entry.lower().startswith('i'):
            return 'interface'
        if entry.lower().startswith('me'):
            return 'measurement'
        if entry.lower().startswith('mon'):
            return 'monitor'
        if entry.lower().startswith('mod'):
            return 'module'
        return 'service'
    return entry
```
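To make the defaulting behaviour concrete, here is a self-contained sketch of the same idea (`normalize` and the column subset are condensed, hypothetical stand-ins, not the script's actual function): empty cells fall back to per-column defaults, and keyword columns are normalized by matching their first letters.

```python
# Hypothetical, condensed stand-in for normalized_entry: empty CSV cells
# fall back to per-column defaults, keyword columns match on prefixes.
DEFAULTS = {
    'statistic default state': '0',
    'metricscale add': '0',
    'metricscale multiply': '1',
    'time multiplier': '1',
    'time type': 'lastmonths',
    'offset multiplier': '0',
    'offset type': 'lastmonths',
}

def normalize(column, entry):
    if column == 'lastvalue or measurementstat':
        return 'lastvalue' if entry.lower().startswith('l') else 'measurementstat'
    if column == 'node or link':
        return 'node' if entry.lower().startswith('n') else 'link'
    return entry if entry != '' else DEFAULTS.get(column, '')

print(normalize('metricscale multiply', ''))   # -> '1' (default fills the empty cell)
print(normalize('lastvalue or measurementstat', 'L'))  # -> 'lastvalue'
```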
#### Function to create and append statistic tag to Weather Map object (using \<statistics\>)
```
def append_stat_tag():
    stat_attrs = {'metrickey': metric_key, 'type': stat_props['lastvalue or measurementstat'],
                  'title': title, 'ranges': stat_props['statistic ranges'],
                  'defaultstate': stat_props['statistic default state'],
                  'showaslabel': stat_props['showaslabel']}
    statistic = ElementTree.SubElement(
        el.find('statistics'), 'statistic', stat_attrs
    )
    ElementTree.SubElement(
        statistic, 'reference', {'obid': meas_id, 'domain': 'measurement'}
    )
    ElementTree.SubElement(
        statistic, 'metricscale',
        {
            'add': stat_props['metricscale add'],
            'multiply': stat_props['metricscale multiply']
        }
    )
    if stat_props['lastvalue or measurementstat'] == 'measurementstat':
        ElementTree.SubElement(
            statistic, 'time',
            {
                'multiplier': stat_props['time multiplier'],
                'type': stat_props['time type'],
                'average': stat_props['time average'],
                'offsetmultiplier': stat_props['offset multiplier'],
                'offsettype': stat_props['offset type']
            }
        )
```
#### Functions to obtain measurement for given Weather Map object (using /rest/tag/query)
```
def get_andtagfilter_with_two_valuetagfilters(cat1, val1, cat2, val2):
    return '<andtagfilter>\
        <valuetagfilter filtervalue="{}">\
            <tagcategory key="{}"/>\
        </valuetagfilter>\
        <valuetagfilter filtervalue="{}">\
            <tagcategory key="{}"/>\
        </valuetagfilter>\
    </andtagfilter>'.format(val1, cat1, val2, cat2)

def get_andtagfilter_with_valuetagfilter_and_patterntagfilter(cat1, val, cat2, pat):
    return '<andtagfilter>\
        <valuetagfilter filtervalue="{}">\
            <tagcategory key="{}"/>\
        </valuetagfilter>\
        <patterntagfilter filterpattern="{}">\
            <tagcategory key="{}"/>\
        </patterntagfilter>\
    </andtagfilter>'.format(val, cat1, pat, cat2)

def compute_measurement():
    url = 'https://{}:{}/rest/tag/query'.format(server_ip, server_port)
    filter = ''
    if obj_domain == 'device':
        # Ping measurements are considered separately because the name of
        # the measurement typically is the device name
        if stat_props['measurement pattern'] == 'Ping measurement':
            filter += get_andtagfilter_with_two_valuetagfilters(
                'Device ID', obj_id, 'Measurement Type', 'Ping')
        else:
            filter += get_andtagfilter_with_valuetagfilter_and_patterntagfilter(
                'Device ID', obj_id, 'Measurement Name', stat_props['measurement pattern'])
    if obj_domain == 'interface':
        # SNMP Interface measurements are considered separately because the
        # name of the measurement typically is the device name
        if stat_props['measurement pattern'] == 'Interface measurement':
            filter += get_andtagfilter_with_two_valuetagfilters(
                'Interface ID', obj_id, 'Measurement Type', 'SNMP Interface')
        else:
            filter += get_andtagfilter_with_valuetagfilter_and_patterntagfilter(
                'Interface ID', obj_id, 'Measurement Name', stat_props['measurement pattern'])
    if obj_domain == 'measurement':
        filter += get_andtagfilter_with_valuetagfilter_and_patterntagfilter(
            'Measurement ID', obj_id, 'Measurement Name', stat_props['measurement pattern'])
    if obj_domain == 'monitor':
        filter += get_andtagfilter_with_valuetagfilter_and_patterntagfilter(
            'Monitor ID', obj_id, 'Measurement Name', stat_props['measurement pattern'])
    if obj_domain == 'module':
        filter += get_andtagfilter_with_valuetagfilter_and_patterntagfilter(
            'Device Module ID', obj_id, 'Measurement Name', stat_props['measurement pattern'])
    if obj_domain == 'service':
        filter += get_andtagfilter_with_valuetagfilter_and_patterntagfilter(
            'Service ID', obj_id, 'Measurement Name', stat_props['measurement pattern'])
    query = '<taggablelistqueryinput domain="Measurement">\
        <tagcategories>\
            <tagcategory key="Measurement ID"/>\
            <tagcategory key="Measurement Name"/>\
        </tagcategories>' +\
        filter +\
        '</taggablelistqueryinput>'
    resp = requests.post(
        url,
        data=query,
        verify=False,
        auth=HTTPBasicAuth(username, pw),
        headers={'Content-Type': 'application/xml'}
    )
    meas = ElementTree.fromstring(resp.content)
    meas_id = ''
    meas_name = ''
    for element in meas.iter():
        if element.tag == 'tag':
            if element.get('key') == 'Measurement ID':
                meas_id = element.get('value')
            elif element.get('key') == 'Measurement Name':
                meas_name = element.get('value')
        if meas_id != '' and meas_name != '':
            break
    return (meas_id, meas_name)
```
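To see what these helpers emit, here is a standalone sketch that reproduces `get_andtagfilter_with_two_valuetagfilters` and pretty-prints its output; the category/value pairs are invented examples:

```python
import xml.dom.minidom

def get_andtagfilter_with_two_valuetagfilters(cat1, val1, cat2, val2):
    # Same string-template approach as above, condensed onto adjacent literals.
    return '<andtagfilter>' \
           '<valuetagfilter filtervalue="{}"><tagcategory key="{}"/></valuetagfilter>' \
           '<valuetagfilter filtervalue="{}"><tagcategory key="{}"/></valuetagfilter>' \
           '</andtagfilter>'.format(val1, cat1, val2, cat2)

xml_filter = get_andtagfilter_with_two_valuetagfilters('Device ID', '42', 'Measurement Type', 'Ping')
print(xml.dom.minidom.parseString(xml_filter).toprettyxml(indent='  '))
```

The `/rest/tag/query` endpoint then ANDs both value filters, restricting the measurement query to the given device and measurement type.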
#### Function to obtain metric key and metric name (making use of /rest/measurements/metric/{id})
```
def compute_metric_key_and_name():
    resp = requests.get(
        "https://" + server_ip + ":" + server_port +
        "/rest/measurements/metric/" + meas_id,
        verify=False,
        auth=HTTPBasicAuth(username, pw)
    )
    print("https://" + server_ip + ":" + server_port +
          "/rest/measurements/metric/" + meas_id)
    metrics = ElementTree.fromstring(resp.content)
    metric_key = ''
    pat1 = re.compile(stat_props['metricname'])
    pat2 = re.compile(stat_props['metricunit'])
    for metric in metrics.iter():
        if pat1.search(str(metric.get('name'))):
            if pat2.search(str(metric.get('unit'))):
                if stat_props['lastvalue or measurementstat'] == 'measurementstat':
                    metric_key = stat_props['aggregate'] + '_'
                return (metric_key + metric.get('key'), metric.get('name'))
    return ('', '')
```
#### Function to generate a standard statistic title (unless it is provided in the CSV file)
```
def compute_statistic_title(input_title, meas_name, metric_name):
    if input_title != '':
        return input_title
    title = ''
    if stat_props['node or link'] == 'link':
        title = 'Src ' if stat_props['source or destination'] == 'source' else 'Dest '
    title += meas_name
    return title + metric_name
```
#### Function to request the Weather Map object as XML (using /rest/weathermaps/get/{id})
```
def request_weathermap():
    url = "https://{}:{}/rest/weathermaps/get/{}".format(server_ip, server_port, wmap_id)
    resp = requests.get(
        url,
        verify=False,
        auth=HTTPBasicAuth(username, pw)
    )
    # print(xml.dom.minidom.parseString(resp.content.decode('utf-8')).toprettyxml())
    wmap = ElementTree.fromstring(resp.content)
    if wmap.tag == 'error':
        print('weathermap with id {} does not exist on server {}:{}'
              .format(wmap_id, server_ip, server_port))
        sys.exit()
    # if the flag is set, delete all existing statistics
    if delete_existing_stats:
        for el in wmap.iter():
            if el.tag == 'statistics':
                el.clear()
    return wmap
```
#### Function to check whether current object is relevant for the current line of the CSV file
```
def relevance_check():
    if obj_domain != stat_props['domain']:
        return False
    # test whether the name of the weathermapnode matches stat_props['pattern for node name']
    if stat_props['pattern for node name'] == '' or el.tag != 'weathermapnode':
        return True
    pattern = re.compile(stat_props['pattern for node name'])
    return pattern.search(str(el.get('name'))) is not None
```
### Actual program code to handle Weather Maps
<span style="color:red">Select this cell and run in menu: "Run > Run All Above Selected Cell"</span>
#### Provide server credentials
```
#Credentials of server
server_ip = '10.20.20.113'
server_port = '5443'
username = 'infosim'
pw = getpass.getpass('Enter password for user ' + username + ' on server:')
```
#### Check server credentials and get List of Weather Maps from Server
```
warnings.filterwarnings("ignore")
resp = requests.get(
"https://"+server_ip+":"+server_port+"/rest/weathermaps/list",
verify=False,
auth=HTTPBasicAuth(username, pw)
)
tree = ElementTree.fromstring(resp.content)
if tree.tag == 'html':
    print('wrong credentials entered')
    sys.exit()
for wmap in tree:
    wmap_name = wmap.get('name') if wmap.get('name') is not None else ''
    print('WeatherMap ' + wmap.get('obid') + ': ' + wmap_name)
```
#### Define input parameters for script
```
delete_existing_stats = True  # if True, all existing statistics are deleted from the weathermap
wmap_suffix = '_TREND'
wmap_id = '1058'
csv_file_name = 'input_node_trend.csv'
```
#### Read in statistic configuration from CSV file
```
inputs = []
cols = []
with open(csv_file_name, newline='') as csvfile:
    reader = csv.reader(csvfile, delimiter=';', quotechar='\'')
    first_line = True
    for row in reader:
        if not first_line:
            inputs += [{}]
            for i in range(0, len(row)):
                inputs[-1][cols[i]] = normalized_entry(row[i], i)
        else:
            for entry in row:
                cols += [entry]
            first_line = False
        if len(row) != len(cols):
            print('Malformed csv file: '
                  'not all lines of same length')
            break
for j in range(0, len(inputs)):
    print('Line: ' + str(j + 1))
    for i in range(0, len(cols)):
        print('\t\t' + cols[i] + ': ' + inputs[j][cols[i]])
```
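A minimal, self-contained sketch of the same header/row handling over an in-memory CSV (normalization omitted; the two columns are invented for illustration):

```python
import csv
import io

raw = "node or link;domain\nnode;device\nlink;interface\n"
inputs, cols = [], []
for row_num, row in enumerate(csv.reader(io.StringIO(raw), delimiter=';', quotechar="'")):
    if row_num == 0:
        cols = row  # the first line holds the column titles
        continue
    if len(row) != len(cols):
        raise ValueError('Malformed csv file: not all lines of same length')
    inputs.append(dict(zip(cols, row)))

print(inputs)  # -> [{'node or link': 'node', 'domain': 'device'}, {'node or link': 'link', 'domain': 'interface'}]
```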
#### Add statistics to Weather Map XML and post it to server
```
wmap = request_weathermap()  # request the Weather Map with the wmap_id defined above from the server
i = 0
for stat_props in inputs:
    i += 1
    print('Processing line {} of input file'.format(str(i)))
    for el in wmap.findall('weathermapnodes/weathermapnode') \
            + wmap.findall('weathermaplinks/weathermaplink'):
        reference_tag = 'elementreference'
        if stat_props['node or link'] == 'link':
            reference_tag = 'sourcereference' if stat_props['source or destination'] == 'source' \
                else 'destinationreference'
        obj_ref = el.find(reference_tag)
        if not hasattr(obj_ref, 'get'):
            continue
        obj_id = obj_ref.get('obid')
        obj_domain = obj_ref.get('domain')
        if not relevance_check():
            continue
        (meas_id, meas_name) = compute_measurement()
        if meas_id == '' or meas_name == '':
            print('{:15}'.format(obj_domain + ' ' + obj_id) + ': Requested measurement not found')
            continue
        (metric_key, metric_name) = compute_metric_key_and_name()
        if metric_key == '':
            print('{:15}'.format(obj_domain + ' ' + obj_id) + ': Requested metric not found')
            continue
        title = compute_statistic_title(stat_props['statistic title'], meas_name, metric_name)
        append_stat_tag()
        print('{:15}'.format(obj_domain + ' ' + obj_id) + ': Statistic added')
wmap.set('name', wmap.get('name') + wmap_suffix)
# print(xml.dom.minidom.parseString(ElementTree.tostring(wmap)).toprettyxml())
resp = requests.post(
    "https://" + server_ip + ":" + server_port + "/rest/weathermaps/add/",
    verify=False,
    auth=HTTPBasicAuth(username, pw),
    data=ElementTree.tostring(wmap),
    headers={'Content-Type': 'application/xml'}
)
# print(xml.dom.minidom.parseString(resp.content.decode('utf-8')).toprettyxml())
```
## Dependencies
```
import json, warnings, shutil, glob
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
from matplotlib import pyplot as plt
from jigsaw_utility_scripts import *
from scripts_step_lr_schedulers import *
from transformers import TFXLMRobertaModel, XLMRobertaConfig
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
```
## TPU configuration
```
strategy, tpu = set_up_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
AUTO = tf.data.experimental.AUTOTUNE
```
# Load data
```
database_base_path = '/kaggle/input/jigsaw-data-split-roberta-128-ratio-2-clean-tail/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
valid_df = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/validation.csv",
usecols=['comment_text', 'toxic', 'lang'])
print('Train samples: %d' % len(k_fold))
display(k_fold.head())
print('Validation samples: %d' % len(valid_df))
display(valid_df.head())
base_data_path = 'fold_1/'
# Unzip files
!tar -xf /kaggle/input/jigsaw-data-split-roberta-128-ratio-2-clean-tail/fold_1.tar.gz
```
# Model parameters
```
base_path = '/kaggle/input/jigsaw-transformers/XLM-RoBERTa/'
config = {
"MAX_LEN": 128,
"BATCH_SIZE": 256,
"EPOCHS": 5,
"LEARNING_RATE": 1e-5,
"ES_PATIENCE": 1,
"base_model_path": base_path + 'tf-xlm-roberta-large-tf_model.h5',
"config_path": base_path + 'xlm-roberta-large-config.json'
}
with open('config.json', 'w') as json_file:
    json.dump(config, json_file)
```
## Learning rate schedule
```
train_tail_len = 40463 # using tail
lr_min = 1e-6
lr_start = 1e-7
lr_max = config['LEARNING_RATE']
step_size = (len(k_fold[k_fold['fold_1'] == 'train']) + train_tail_len) // config['BATCH_SIZE']
total_steps = config['EPOCHS'] * step_size
hold_max_steps = 0
warmup_steps = step_size * 1
decay = .9997
rng = [i for i in range(0, total_steps, config['BATCH_SIZE'])]
y = [exponential_schedule_with_warmup(tf.cast(x, tf.float32), warmup_steps, hold_max_steps,
lr_start, lr_max, lr_min, decay) for x in rng]
sns.set(style="whitegrid")
fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
```
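`exponential_schedule_with_warmup` itself comes from `scripts_step_lr_schedulers` and is not shown here. A plausible plain-Python sketch consistent with how it is called above (linear warmup to `lr_max`, an optional hold phase, then exponential decay floored at `lr_min`) might look like this; the imported function is tensor-aware, whereas this stand-in uses plain floats:

```python
# Hypothetical sketch of exponential_schedule_with_warmup; the version
# imported above presumably operates on tensors, this one on floats.
def exponential_schedule_with_warmup(step, warmup_steps, hold_max_steps,
                                     lr_start, lr_max, lr_min, decay):
    if step < warmup_steps:  # linear warmup from lr_start to lr_max
        return lr_start + (lr_max - lr_start) * step / warmup_steps
    if step < warmup_steps + hold_max_steps:  # hold at the peak
        return lr_max
    # exponential decay, bounded below by lr_min
    decayed = lr_max * decay ** (step - warmup_steps - hold_max_steps)
    return max(decayed, lr_min)
```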
# Model
```
module_config = XLMRobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)

def model_fn(MAX_LEN):
    input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
    attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
    base_model = TFXLMRobertaModel.from_pretrained(config['base_model_path'], config=module_config)
    last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
    cls_token = last_hidden_state[:, 0, :]
    output = layers.Dense(1, activation='sigmoid', name='output')(cls_token)
    model = Model(inputs=[input_ids, attention_mask], outputs=output)
    return model
```
# Train
```
# Load data
x_train = np.load(base_data_path + 'x_train.npy')
y_train = np.load(base_data_path + 'y_train.npy').reshape(x_train.shape[1], 1).astype(np.float32)
x_valid_ml = np.load(database_base_path + 'x_valid.npy')
y_valid_ml = np.load(database_base_path + 'y_valid.npy').reshape(x_valid_ml.shape[1], 1).astype(np.float32)

#################### ADD TAIL ####################
x_train_tail = np.load(base_data_path + 'x_train_tail.npy')
y_train_tail = np.load(base_data_path + 'y_train_tail.npy').reshape(x_train_tail.shape[1], 1).astype(np.float32)
x_train = np.hstack([x_train, x_train_tail])
y_train = np.vstack([y_train, y_train_tail])

step_size = x_train.shape[1] // config['BATCH_SIZE']
valid_step_size = x_valid_ml.shape[1] // config['BATCH_SIZE']

# Build TF datasets
train_dist_ds = strategy.experimental_distribute_dataset(get_training_dataset(x_train, y_train, config['BATCH_SIZE'], AUTO, seed=SEED))
valid_dist_ds = strategy.experimental_distribute_dataset(get_validation_dataset(x_valid_ml, y_valid_ml, config['BATCH_SIZE'], AUTO, repeated=True, seed=SEED))
train_data_iter = iter(train_dist_ds)
valid_data_iter = iter(valid_dist_ds)

# Step functions
@tf.function
def train_step(data_iter):
    def train_step_fn(x, y):
        with tf.GradientTape() as tape:
            probabilities = model(x, training=True)
            loss = loss_fn(y, probabilities)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        train_auc.update_state(y, probabilities)
        train_loss.update_state(loss)
    for _ in tf.range(step_size):
        strategy.experimental_run_v2(train_step_fn, next(data_iter))

@tf.function
def valid_step(data_iter):
    def valid_step_fn(x, y):
        probabilities = model(x, training=False)
        loss = loss_fn(y, probabilities)
        valid_auc.update_state(y, probabilities)
        valid_loss.update_state(loss)
    for _ in tf.range(valid_step_size):
        strategy.experimental_run_v2(valid_step_fn, next(data_iter))

# Train model
with strategy.scope():
    model = model_fn(config['MAX_LEN'])
    optimizer = optimizers.Adam(learning_rate=lambda:
                                exponential_schedule_with_warmup(tf.cast(optimizer.iterations, tf.float32),
                                                                 warmup_steps, hold_max_steps, lr_start,
                                                                 lr_max, lr_min, decay))
    loss_fn = losses.binary_crossentropy
    train_auc = metrics.AUC()
    valid_auc = metrics.AUC()
    train_loss = metrics.Sum()
    valid_loss = metrics.Sum()
metrics_dict = {'loss': train_loss, 'auc': train_auc,
                'val_loss': valid_loss, 'val_auc': valid_auc}
history = custom_fit(model, metrics_dict, train_step, valid_step, train_data_iter, valid_data_iter,
                     step_size, valid_step_size, config['BATCH_SIZE'], config['EPOCHS'],
                     config['ES_PATIENCE'], save_last=False)
model.save_weights('model.h5')

# Make predictions
x_train = np.load(base_data_path + 'x_train.npy')
x_valid = np.load(base_data_path + 'x_valid.npy')
x_valid_ml_eval = np.load(database_base_path + 'x_valid.npy')
train_preds = model.predict(get_test_dataset(x_train, config['BATCH_SIZE'], AUTO))
valid_preds = model.predict(get_test_dataset(x_valid, config['BATCH_SIZE'], AUTO))
valid_ml_preds = model.predict(get_test_dataset(x_valid_ml_eval, config['BATCH_SIZE'], AUTO))
k_fold.loc[k_fold['fold_1'] == 'train', 'pred_1'] = np.round(train_preds)
k_fold.loc[k_fold['fold_1'] == 'validation', 'pred_1'] = np.round(valid_preds)
valid_df['pred_1'] = valid_ml_preds

#################### Fine-tune on validation set ####################
#################### ADD TAIL ####################
x_train_ml_tail = np.load(database_base_path + 'x_valid_tail.npy')
y_train_ml_tail = np.load(database_base_path + 'y_valid_tail.npy').reshape(x_train_ml_tail.shape[1], 1).astype(np.float32)
x_train_ml_tail = np.hstack([x_valid_ml, x_train_ml_tail])
y_train_ml_tail = np.vstack([y_valid_ml, y_train_ml_tail])
valid_step_size_tail = x_train_ml_tail.shape[1] // config['BATCH_SIZE']

# Build TF datasets
train_ml_dist_ds = strategy.experimental_distribute_dataset(get_training_dataset(x_train_ml_tail, y_train_ml_tail,
                                                                                 config['BATCH_SIZE'], AUTO, seed=SEED))
train_ml_data_iter = iter(train_ml_dist_ds)

# Step functions
@tf.function
def train_step(data_iter):
    def train_step_fn(x, y):
        with tf.GradientTape() as tape:
            probabilities = model(x, training=True)
            loss = loss_fn(y, probabilities)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        train_auc.update_state(y, probabilities)
        train_loss.update_state(loss)
    for _ in tf.range(valid_step_size_tail):  # iterate over the fine-tuning step count
        strategy.experimental_run_v2(train_step_fn, next(data_iter))

# New optimizer
optimizer = optimizers.Adam(learning_rate=lambda:
                            exponential_schedule_with_warmup(tf.cast(optimizer.iterations, tf.float32),
                                                             warmup_steps, hold_max_steps, lr_start,
                                                             lr_max, lr_min, decay))
history_ml = custom_fit(model, metrics_dict, train_step, valid_step, train_ml_data_iter, valid_data_iter,
                        valid_step_size_tail, valid_step_size, config['BATCH_SIZE'], 2,
                        config['ES_PATIENCE'], save_last=False)

# Join histories
for key in history_ml.keys():
    history[key] += history_ml[key]

model.save_weights('model_ml.h5')

# Make predictions
valid_ml_preds = model.predict(get_test_dataset(x_valid_ml_eval, config['BATCH_SIZE'], AUTO))
valid_df['pred_ml_1'] = valid_ml_preds

# Delete data dir
shutil.rmtree(base_data_path)
```
## Model loss graph
```
plot_metrics(history)
# ML fine-tuned preds
plot_metrics(history_ml)
```
# Model evaluation
```
display(evaluate_model(k_fold, 1, label_col='toxic_int').style.applymap(color_map))
```
# Confusion matrix
```
train_set = k_fold[k_fold['fold_1'] == 'train']
validation_set = k_fold[k_fold['fold_1'] == 'validation']
plot_confusion_matrix(train_set['toxic_int'], train_set['pred_1'],
validation_set['toxic_int'], validation_set['pred_1'])
```
# Model evaluation by language
```
display(evaluate_model_lang(valid_df, 1).style.applymap(color_map))
# ML fine-tuned preds
display(evaluate_model_lang(valid_df, 1, pred_col='pred_ml').style.applymap(color_map))
```
# Visualize predictions
```
pd.set_option('display.max_colwidth', 120)
print('English validation set')
display(k_fold[['comment_text', 'toxic'] + [c for c in k_fold.columns if c.startswith('pred')]].head(10))
print('Multilingual validation set')
display(valid_df[['comment_text', 'toxic'] + [c for c in valid_df.columns if c.startswith('pred')]].head(10))
```
# Test set predictions
```
x_test = np.load(database_base_path + 'x_test.npy')
test_preds = model.predict(get_test_dataset(x_test, config['BATCH_SIZE'], AUTO))
submission = pd.read_csv('/kaggle/input/jigsaw-multilingual-toxic-comment-classification/sample_submission.csv')
submission['toxic'] = test_preds
submission.to_csv('submission.csv', index=False)
display(submission.describe())
display(submission.head(10))
```
<a href="https://colab.research.google.com/github/jpchen/playground/blob/master/torchfx_ppl.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Useful program transformations for PPLs
*@neerajprad, @jpchen, @xiaoyan0*
This notebook contains example probabilistic inference workflows as well as a complete example written in both JAX as well as PyTorch for comparison purposes. The code snippets are from the Leapfrog integrator that is used within a very popular MCMC inference algorithm called the No-U-Turn Sampler (NUTS) [[1](#scrollTo=3URBZvuod6-3&line=1&uniqifier=1)].
The code has no dependency on any particular PPL, which makes it easier to compare differences between the two frameworks, particularly as they relate to features that will be important for any PyTorch-based PPL. The examples in Section 1 are small PyTorch snippets for illustration; Section 2 further exemplifies these points by comparing a complete implementation of a leapfrog integrator across JAX and PyTorch.
1. [Control Flow Examples](#scrollTo=0Pc-BvLTtcAM&uniqifier=1)
- [Composition with grad](#scrollTo=mfmeUyh8voEM&uniqifier=1)
- [Stochastic Control Flow](#scrollTo=TaVj14XQvDm-&uniqifier=1)
- [Composition with Looping Primitives](#scrollTo=PsV7hBZzwpdl&uniqifier=1)
- [Composition with JIT](#scrollTo=pjouiqlzutPX&uniqifier=1)
2. [JAX NUTS example: Leapfrog integrator](#scrollTo=rztOk3U_YUF8&uniqifier=1)
3. [PyTorch NUTS example: Leapfrog integrator](#scrollTo=hV_wTvKG9Xli&uniqifier=1)
4. [Helpful References](#scrollTo=3URBZvuod6-3&uniqifier=1)
---
## 1. Control Flow Examples
```
%reset -sf
#install nightly version for access to torch.vmap
!pip install --upgrade --pre torch==1.9.0.dev20210219 torchvision torchaudio -f https://download.pytorch.org/whl/nightly/cu102/torch_nightly.html
import torch
import torch.distributions as dist
from torch.autograd import grad
print('pytorch version: ', torch.__version__)
```
### a) Composition with grad
At the core of the algorithm is a symplectic integrator that requires taking iterative gradients [[1](#scrollTo=PsV7hBZzwpdl&line=1&uniqifier=1)]. `vmap` or `jit` at the outermost loop would need to be able to handle this.
```
# This is one of the operations in the leapfrog integrator in [1]. The full integrator is
# in the complete example in Sec. 2.
def grad_fn(node, params):
    # compute the gradient (of a stochastic value)
    grad = torch.autograd.grad(node, params)
    # update the node
    new_node = node + grad[0]
    return new_node

params = [torch.tensor(0., requires_grad=True), torch.tensor(1., requires_grad=True)]
node = dist.Normal(*params).log_prob(torch.tensor(0.3))
grad_fn(node, params)
```
### b) Stochastic Control Flow
In most workflows we need a way to determine control flow pseudorandomly. JAX handles this through a local functional [RNG](https://jax.readthedocs.io/en/latest/jax.random.html). This is to say that both `jit` and `vmap` need to be able to handle the stochastic `accept > 0` condition. This is easier in JAX because there is no global RNG, and the JIT traces through the RNG splitting to be able to do this. Supporting this in PyTorch will be hard without a corresponding functional random number generator.
```
# (this is the Metropolis Hastings correction step in [1])
def accept_or_reject(v):
    # sample from the prng; conditioned on the inputs in practice
    accept_prob = dist.Normal(0., 1.).log_prob(v)
    # accept with probability governed by `accept_prob`
    accept = dist.Bernoulli(logits=accept_prob).sample()
    if accept > 0:
        return torch.tensor(1.)
    return torch.tensor(0.)
```
### c) Composition with Looping Primitives
In Bean Machine we have a set of nodes we are updating at each iteration. Since the updates are conditional on the dependency graph, we do this in a for-loop.
```
# looping primitives for node updates with conditionals
# outermost loop, composing all of the functions above
def update_nodes(nodes):
    # loop through and update some nodes given a condition
    for n in nodes:
        if accept_or_reject(n.val).item() > 0:
            # update the node value (technically also the nodes in the markov blanket)
            # todo: do this in a non-mutating way
            proposed_value = dist.Normal(0., 1.).sample()
            n.val = proposed_value
```
### d) Composition with JIT
A functional `grad` that composes with JIT and actually inlines the backward operations (for optimization, as JAX does) would be really useful. The issue here is that we need to set `requires_grad` to `True` in `_potential_grad`, and the tracer complains about that. In the example below, a functional version of `grad` would keep the JIT from complaining about inserting a constant with `requires_grad` set to `True`.
```
def fn(x):
    # A functional grad variant that does not require us to set
    # `requires_grad` to True would be nice to have.
    x.requires_grad_(True)
    y = x**3
    grad = torch.autograd.grad(y, x)
    x.requires_grad_(False)
    return x + grad[0]
torch.jit.trace(fn, torch.tensor(2.))
```
## JAX NUTS example: Code for leapfrog integrator
```
import jax
import jax.numpy as jnp
import numpy as np
print('jax version: ', jax.__version__)
from collections import namedtuple

import jax
from jax import grad, jit, partial, random, value_and_grad, lax
from jax.flatten_util import ravel_pytree
import jax.numpy as np  # note: shadows the NumPy import above; `np` is jax.numpy from here on
from jax.tree_util import register_pytree_node, tree_multimap
# (q, p) -> (position (param value), momentum)
IntegratorState = namedtuple("IntegratorState", ["q", "p", "potential_energy", "q_grad"])
# a tree-like JAX primitive that allows program transformations
# to work on Python containers (https://jax.readthedocs.io/en/latest/pytrees.html)
register_pytree_node(
    IntegratorState,
    lambda xs: (tuple(xs), None),
    lambda _, xs: IntegratorState(*xs)
)

def leapfrog(potential_fn, kinetic_fn):
    r"""
    Second order symplectic integrator that uses the leapfrog algorithm
    for position `q` and momentum `p`.

    :param potential_fn: Python callable that computes the potential energy
        given input parameters. The input parameters to `potential_fn` can be
        any python collection type.
    :param kinetic_fn: Python callable that returns the kinetic energy given
        inverse mass matrix and momentum.
    :return: a pair of (`init_fn`, `update_fn`).
    """
    def init_fn(q, p):
        """
        :param q: Position of the particle.
        :param p: Momentum of the particle.
        :return: initial state for the integrator.
        """
        potential_energy, q_grad = value_and_grad(potential_fn)(q)
        return IntegratorState(q, p, potential_energy, q_grad)

    def update_fn(step_size, inverse_mass_matrix, state):
        """
        :param float step_size: Size of a single step.
        :param inverse_mass_matrix: Inverse of mass matrix, which is used to
            calculate kinetic energy.
        :param state: Current state of the integrator.
        :return: new state for the integrator.
        """
        q, p, _, q_grad = state
        # maps a function over a pytree, returning a new pytree
        p = tree_multimap(lambda p, q_grad: p - 0.5 * step_size * q_grad, p, q_grad)  # p(n+1/2)
        p_grad = grad(kinetic_fn, argnums=1)(inverse_mass_matrix, p)
        q = tree_multimap(lambda q, p_grad: q + step_size * p_grad, q, p_grad)  # q(n+1)
        potential_energy, q_grad = value_and_grad(potential_fn)(q)
        p = tree_multimap(lambda p, q_grad: p - 0.5 * step_size * q_grad, p, q_grad)  # p(n+1)
        return IntegratorState(q, p, potential_energy, q_grad)

    return init_fn, update_fn

def kinetic_fn(inverse_mass_matrix, p):
    # flattens the pytree
    p, _ = ravel_pytree(p)
    if inverse_mass_matrix.ndim == 2:
        v = np.matmul(inverse_mass_matrix, p)
    elif inverse_mass_matrix.ndim == 1:
        v = np.multiply(inverse_mass_matrix, p)
    return 0.5 * np.dot(v, p)
```
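For reference, `update_fn` above performs the standard leapfrog updates with step size $\epsilon$, potential energy $U(q)$ and kinetic energy $K(M^{-1}, p)$ — the comments `p(n+1/2)`, `q(n+1)`, `p(n+1)` mark the three half/full steps:

$$
\begin{aligned}
p^{(n+1/2)} &= p^{(n)} - \tfrac{\epsilon}{2}\,\nabla_q U\!\left(q^{(n)}\right)\\
q^{(n+1)} &= q^{(n)} + \epsilon\,\nabla_p K\!\left(M^{-1}, p^{(n+1/2)}\right)\\
p^{(n+1)} &= p^{(n+1/2)} - \tfrac{\epsilon}{2}\,\nabla_q U\!\left(q^{(n+1)}\right)
\end{aligned}
$$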
Note that JAX provides some great utilities that let us operate on [pytrees](https://jax.readthedocs.io/en/latest/jax.tree_util.html?highlight=pytree#module-jax.tree_util), which can be any python container type that supports packing/unpacking, e.g. `tree_multimap` above. This lets us write generic code without imposing any assumptions on the client code: e.g. the argument to `potential_fn` could be a dict, a list, or a simple array, and the integrator code remains the same.
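To illustrate the idea without JAX, a leaf-wise tree map over nested dicts takes only a few lines; `tree_map` below is a hypothetical, dict-only stand-in for `jax.tree_util.tree_multimap`:

```python
def tree_map(fn, *trees):
    # Dict-only stand-in for jax.tree_util.tree_multimap: applies fn
    # leaf-wise across trees with identical structure.
    first = trees[0]
    if isinstance(first, dict):
        return {k: tree_map(fn, *(t[k] for t in trees)) for k in first}
    return fn(*trees)

# Half-step momentum update from the integrator, written once and reusable
# whether the state is a dict, a nested dict, or a bare leaf.
step_size = 0.1
p = {'z': 2.0}
q_grad = {'z': 0.5}
p_half = tree_map(lambda p_, g: p_ - 0.5 * step_size * g, p, q_grad)
print(p_half)
```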
```
D = 1000
true_mean, true_std = np.ones(D), np.ones(D) * 2.

def potential_fn(q):
    """
    Negative log density (up to a constant) of a diagonal normal distribution.
    """
    return 0.5 * np.sum(((q['z'] - true_mean) / true_std) ** 2)

# U-turn termination condition
# For demonstration purposes - this won't result in a correct MCMC proposal
def is_u_turning(q_i, q_f, p_f):
    return np.less(np.dot((q_f['z'] - q_i['z']), p_f['z']), 0)

# Run leapfrog until the termination condition is met
def get_final_state(ke, pe, m_inv, step_size, q_i, p_i):
    lf_init, lf_update = leapfrog(pe, ke)
    lf_init_state = lf_init(q_i, p_i)
    q_f, p_f, _, _ = lax.while_loop(lambda x: is_u_turning(q_i, x[0], x[1]),
                                    lambda x: lf_update(step_size, m_inv, x),
                                    lf_init_state)
    return (q_f, p_f)
```
### jit and grad composition
Note that we are jit compiling the integrator which includes grad computation.
```
q_i = {'z': np.zeros(D)}
p_i = lambda i: {'z': random.normal(random.PRNGKey(i), (D,))}
inv_mass_matrix = np.eye(D)
step_size = 0.001
fn = jit(get_final_state, static_argnums=(0, 1))
timefn = lambda i: fn(kinetic_fn, potential_fn, inv_mass_matrix,
step_size, q_i, p_i(i))
# Run only once in a loop; otherwise the best number reported
# does not include compilation time.
%timeit -n 1 -r 1 [timefn(0) for i in range(10)]
```
### Let's add vmap to do this in parallel
Note that this requires:
- composition of `vmap` with `jit` for `get_final_state`.
- composition of `vmap` with control flow primitive `while` in `get_final_state`.
- also composition of `vmap` and `jit` with `grad` in leapfrog.
```
# Draw K in parallel
K = 50
q_i = {'z': random.normal(random.PRNGKey(1), (K, D))}
p_i = {'z': random.normal(random.PRNGKey(2), (K, D))}
jax.vmap(lambda z: fn(kinetic_fn, potential_fn, inv_mass_matrix,
step_size, *z))((q_i, p_i))
```
## PyTorch NUTS example: LeapFrog Integrator
```
def leapfrog(q, p, potential_fn, inverse_mass_matrix, step_size, num_steps=1, q_grads=None):
    r"""
    Second order symplectic integrator that uses the velocity leapfrog algorithm.

    :param dict q: dictionary of sample site names and their current values
        (type :class:`~torch.Tensor`).
    :param dict p: dictionary of sample site names and corresponding momenta
        (type :class:`~torch.Tensor`).
    :param callable potential_fn: function that returns potential energy given q
        for each sample site. The negative gradient of the function with respect
        to ``q`` determines the rate of change of the corresponding sites'
        momenta ``p``.
    :param torch.Tensor inverse_mass_matrix: a tensor :math:`M^{-1}` which is used
        to calculate kinetic energy: :math:`E_{kinetic} = \frac{1}{2} p^T M^{-1} p`.
        Here :math:`M` can be a 1D tensor (diagonal matrix) or a 2D tensor (dense matrix).
    :param float step_size: step size for each time step iteration.
    :param int num_steps: number of discrete time steps over which to integrate.
    :param torch.Tensor q_grads: optional gradients of potential energy at current ``q``.
    :return tuple (q_next, p_next, q_grads, potential_energy): next position and momenta,
        together with the potential energy and its gradient w.r.t. ``q_next``.
    """
    q_next = q.copy()
    p_next = p.copy()
    for _ in range(num_steps):
        q_next, p_next, q_grads, potential_energy = _single_step(q_next,
                                                                 p_next,
                                                                 potential_fn,
                                                                 inverse_mass_matrix,
                                                                 step_size,
                                                                 q_grads)
    return q_next, p_next, q_grads, potential_energy

def _single_step(q, p, potential_fn, inverse_mass_matrix, step_size, q_grads=None):
    r"""
    Single step leapfrog that modifies the `q`, `p` dicts in place.
    """
    q_grads = _potential_grad(potential_fn, q)[0] if q_grads is None else q_grads
    for site_name in p:
        p[site_name] = p[site_name] + 0.5 * step_size * (-q_grads[site_name])  # p(n+1/2)
    p_grads = _kinetic_grad(inverse_mass_matrix, p)
    for site_name in q:
        q[site_name] = q[site_name] + step_size * p_grads[site_name]  # q(n+1)
    q_grads, potential_energy = _potential_grad(potential_fn, q)
    for site_name in p:
        p[site_name] = p[site_name] + 0.5 * step_size * (-q_grads[site_name])  # p(n+1)
    return q, p, q_grads, potential_energy

def _potential_grad(potential_fn, q):
    q_keys, q_nodes = zip(*q.items())
    for node in q_nodes:
        node.requires_grad_(True)
    potential_energy = potential_fn(q)
    grads = torch.autograd.grad(potential_energy, q_nodes)
    for node in q_nodes:
        node.requires_grad_(False)
    return dict(zip(q_keys, grads)), potential_energy.detach()

def _kinetic_grad(inverse_mass_matrix, p):
    p_flat = torch.cat([p[site_name].reshape(-1) for site_name in sorted(p)])
    if inverse_mass_matrix.dim() == 1:
        grads_flat = inverse_mass_matrix * p_flat
    else:
        grads_flat = inverse_mass_matrix.matmul(p_flat)
    # unpack the flat gradient back into the per-site dict
    grads = {}
    pos = 0
    for site_name in sorted(p):
        next_pos = pos + p[site_name].numel()
        grads[site_name] = grads_flat[pos:next_pos].reshape(p[site_name].shape)
        pos = next_pos
    assert pos == grads_flat.size(0)
    return grads

D = 1000
true_mean, true_std = 1., 2.

def potential_fn(params):
    return 0.5 * torch.sum(((params['z'] - true_mean) / true_std) ** 2)
# U-turn termination condition
# For demonstration purpose - this won't result in a correct MCMC proposal
def is_u_turning(q_i, q_f, p_f):
return torch.dot((q_f['z'] - q_i['z']), p_f['z']) < 0.
# Run leapfrog until termination condition is met
def get_final_state(pe, m_inv, step_size, q_i, p_i):
q, p = q_i, p_i
q_grads = None
while not is_u_turning(q_i, q, p):
q, p, q_grads, _ = leapfrog(q, p, pe, m_inv, step_size, q_grads=q_grads)
return (q, p)
q_i = {'z': torch.zeros(D)}
p_i = {'z': torch.randn(D)}
inv_mass_matrix = torch.eye(D)
step_size = 0.001
num_steps = 10000
%timeit -n 1 -r 1 get_final_state(potential_fn, inv_mass_matrix, step_size, q_i, p_i)
```
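As a quick sanity check on the leapfrog update above, here is a dependency-free sketch on a 1-D standard-normal potential (the function names and step settings are illustrative, not part of the notebook's API): a symplectic integrator should approximately conserve the Hamiltonian H(q, p) = U(q) + p²/2.

```python
def leapfrog_1d(q, p, grad_U, step_size, num_steps):
    # Velocity leapfrog for H(q, p) = U(q) + p**2 / 2 with unit mass
    g = grad_U(q)
    for _ in range(num_steps):
        p = p - 0.5 * step_size * g  # half-step momentum
        q = q + step_size * p        # full-step position
        g = grad_U(q)
        p = p - 0.5 * step_size * g  # half-step momentum
    return q, p

U = lambda q: 0.5 * q * q  # same standard-normal potential as potential_fn above
grad_U = lambda q: q
q0, p0 = 0.0, 1.0
q1, p1 = leapfrog_1d(q0, p0, grad_U, step_size=0.01, num_steps=1000)
H0 = U(q0) + 0.5 * p0 * p0
H1 = U(q1) + 0.5 * p1 * p1
print(abs(H1 - H0))  # tiny drift: leapfrog approximately conserves the Hamiltonian
```

The same half-kick / drift / half-kick pattern is what `_single_step` implements above, just over dicts of tensors rather than scalars.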
### jit, grad, and vmap
Unlike in JAX, the PyTorch JIT cannot yet inline and optimize grad calls.
```
torch.jit.trace(lambda q, p: get_final_state(potential_fn, inv_mass_matrix, step_size, q, p), (q_i, p_i))
```
And there remain unsupported ops for vmap batching.
```
K = 50
q_i = {'z': torch.randn(K, D)}
p_i = {'z': torch.randn(K, D)}
torch.vmap(lambda q, p: get_final_state(potential_fn, inv_mass_matrix, step_size, q, p))(q_i, p_i)
```
## Helpful References
1. [NUTS paper](https://arxiv.org/abs/1111.4246)
2. [NUTS JAX implementation](https://github.com/pyro-ppl/numpyro/blob/master/numpyro/infer/hmc.py)
a. [Iterative NUTS numpyro (unrolling the recursive algorithm for JITting and vmap)](https://github.com/pyro-ppl/numpyro/wiki/Iterative-NUTS)
b. [Iterative NUTS tfp](https://github.com/tensorflow/probability/blob/master/discussion/technical_note_on_unrolled_nuts.md)
3. [NUTS PyTorch implementation (fbinternal)](https://www.internalfb.com/intern/diffusion/FBS/browse/master/fbcode/beanmachine/beanmachine/ppl/inference/proposer/single_site_no_u_turn_sampler_proposer.py?commit=68b1672d648dba714d3f7c2ce13494b01925b103&lines=18)
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.compose import ColumnTransformer, TransformedTargetRegressor
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OrdinalEncoder, LabelEncoder, OneHotEncoder, StandardScaler, MinMaxScaler, \
RobustScaler, FunctionTransformer
from sklearn.linear_model import LinearRegression, LassoCV, RidgeCV
from sklearn.experimental import enable_hist_gradient_boosting
from sklearn.ensemble import RandomForestRegressor, StackingRegressor, HistGradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error, make_scorer, mean_squared_log_error
from sklearn.model_selection import train_test_split, cross_validate, cross_val_predict, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn import set_config
def score_model(model, X, Y):
scores = cross_validate(
model, X, Y,
scoring=['r2', 'neg_mean_absolute_error', 'neg_mean_squared_error'], cv=2,
n_jobs=-1, verbose=0)
rmsle_score = cross_val_score(model, X, Y, cv=2, scoring=make_scorer(neg_rmsle))
mse_score = -1 * scores['test_neg_mean_squared_error'].mean()
mse_std = scores['test_neg_mean_squared_error'].std()
mae_score = -1 * scores['test_neg_mean_absolute_error'].mean()
mae_std = scores['test_neg_mean_absolute_error'].std()
r2_score_mean = scores['test_r2'].mean()
r2_std = scores['test_r2'].std()
print('[CV] MSE: %.4f (%.4f)' % (mse_score, mse_std))
print('[CV] MAE: %.4f (%.4f)' % (mae_score, mae_std))
print('[CV] R^2: %.4f (%.4f)' % (r2_score_mean, r2_std))
print('[CV] RMSLE: %.6f (%.4f)' % (rmsle_score.mean(), rmsle_score.std()))
np.random.seed(42)
set_config(display='diagram')
plt.rcParams['figure.figsize'] = (12, 8)
sns.set_theme(style="whitegrid")
train_df = pd.read_csv("data/train.csv")
test_df = pd.read_csv("data/test.csv")
num_features = [f for f in train_df.columns if train_df.dtypes[f] != 'object']
num_features.remove('Id')
num_features.remove('SalePrice')
cat_features = [f for f in train_df.columns if train_df.dtypes[f] == 'object']
for feature in (
'PoolQC',
'FireplaceQu',
'Alley',
'Fence',
'MiscFeature',
'BsmtQual',
'BsmtCond',
'BsmtExposure',
'BsmtFinType1',
'BsmtFinType2',
'GarageType',
'GarageFinish',
'GarageQual',
'GarageCond',
'MasVnrType',
'MSSubClass',
):
train_df[feature] = train_df[feature].fillna('None')
test_df[feature] = test_df[feature].fillna('None')
for feature in (
'BsmtFinSF1',
'BsmtFinSF2',
'BsmtUnfSF',
'TotalBsmtSF',
'BsmtFullBath',
'BsmtHalfBath',
'MasVnrArea',
'GarageCars',
'GarageArea',
'GarageYrBlt',
):
train_df[feature] = train_df[feature].fillna(0)
test_df[feature] = test_df[feature].fillna(0)
for feature in (
'Electrical',
'KitchenQual',
'Exterior1st',
'Exterior2nd',
'SaleType',
'MSZoning',
'Utilities',
):
train_df[feature] = train_df[feature].fillna(train_df[feature].mode()[0])
test_df[feature] = test_df[feature].fillna(test_df[feature].mode()[0])
train_df['Functional'] = train_df['Functional'].fillna('Typical')
test_df['Functional'] = test_df['Functional'].fillna('Typical')
# Remove outliers (drop returns a copy, so the result must be assigned back)
train_df = train_df.drop(
train_df[(train_df["GrLivArea"] > 4000) & (train_df["SalePrice"] < 700000)].index
)
ordinal_feature_mapping = {
'ExterQual': {'Po': 0, 'Fa': 1, 'TA': 2, 'Gd': 3, 'Ex': 4},
'ExterCond': {'Po': 0, 'Fa': 1, 'TA': 2, 'Gd': 3, 'Ex': 4},
'BsmtQual': {'None': 0, 'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5},
'BsmtCond': {'None': 0, 'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5},
'BsmtFinType1': {'None': 0, 'Unf': 1, 'LwQ': 2, 'Rec': 3, 'BLQ': 4, 'ALQ': 5, 'GLQ': 6},
'BsmtFinType2': {'None': 0, 'Unf': 1, 'LwQ': 2, 'Rec': 3, 'BLQ': 4, 'ALQ': 5, 'GLQ': 6},
'HeatingQC': {'Po': 0, 'Fa': 1, 'TA': 2, 'Gd': 3, 'Ex': 4},
'KitchenQual': {'Po': 0, 'Fa': 1, 'TA': 2, 'Gd': 3, 'Ex': 4},
'FireplaceQu': {'None': 0, 'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5},
'GarageFinish': {'None': 0, 'Unf': 1, 'RFn': 2, 'Fin': 3},
'GarageQual': {'None': 0, 'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5},
'GarageCond': {'None': 0, 'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5},
'PoolQC': {'None': 0, 'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5},
'Fence': {'None': 0, 'MnWw': 1, 'GdWo': 2, 'MnPrv': 3, 'GdPrv': 4},
'PavedDrive': {'N': 0, 'P': 1, 'Y': 2},
'CentralAir': {'N': 0, 'Y': 1},
'Alley': {'None': 0, 'Pave': 1, 'Grvl': 2},
'Street': {'Pave': 0, 'Grvl': 1},
}
non_ordinal_cat_features = list(set(cat_features) - set(ordinal_feature_mapping.keys()))
for cat_feature in non_ordinal_cat_features:
# Fit one encoder on the union of train and test values so the integer
# codes are consistent across both data sets (fitting separately would
# assign different codes to the same category)
encoder = LabelEncoder().fit(pd.concat([train_df[cat_feature], test_df[cat_feature]]))
train_df[cat_feature + 'Enc'] = encoder.transform(train_df[cat_feature])
test_df[cat_feature + 'Enc'] = encoder.transform(test_df[cat_feature])
for ordinal_feature, feature_mapping in ordinal_feature_mapping.items():
train_df[ordinal_feature + 'Enc'] = train_df[ordinal_feature].map(feature_mapping)
test_df[ordinal_feature + 'Enc'] = test_df[ordinal_feature].map(feature_mapping)
def neg_rmsle(y_true, y_pred):
return -1 * np.sqrt(mean_squared_log_error(y_true, y_pred))
baselineFeatures = [
'1stFlrSF',
'2ndFlrSF',
'BsmtFinSF1',
'BsmtFinSF2',
'BsmtUnfSF',
'OverallQual',
'GarageCars',
'OverallCond',
'Neighborhood',
'MSSubClass',
'LotShape',
'LandSlope',
'BsmtCondEnc',
'BsmtQualEnc',
]
X = train_df[baselineFeatures]
Y = train_df['SalePrice']
X_train, X_validation, y_train, y_validation = train_test_split(X, Y, test_size=0.3, random_state=42)
subclassCategories = [20, 30, 40, 45, 50, 60, 70, 75, 80, 85, 90, 120, 150, 160, 180, 190]
basementFinishCategories = ['None', 'Unf', 'LwQ', 'Rec', 'BLQ', 'ALQ', 'GLQ']
neighborhoodCategories = train_df['Neighborhood'].unique()
lotConfigCategories = train_df['LotConfig'].unique()
lotShapeCategories = train_df['LotShape'].unique()
landSlopeCategories = train_df['LandSlope'].unique()
# Build feature transformer
logTransformer = FunctionTransformer(func=np.log1p, inverse_func=np.expm1)
basementCondTransformer = Pipeline([
('basement_condition_impute', SimpleImputer(strategy="constant", fill_value='None')),
('basement_condition_onehot', OneHotEncoder()),
])
basementAreaTransformer = Pipeline([
('basement_area_impute', SimpleImputer(strategy="constant", fill_value=0)),
('basement_area_log', logTransformer),
])
featureTransformer = ColumnTransformer([
('basement_area_transformer', basementAreaTransformer, ['BsmtFinSF1', 'BsmtFinSF2', 'BsmtUnfSF']),
('neighborhood_onehot', OneHotEncoder(categories=[neighborhoodCategories]), ['Neighborhood']),
('subclass_onehot', OneHotEncoder(categories=[subclassCategories]), ['MSSubClass']),
('lot_shape_onehot', OneHotEncoder(categories=[lotShapeCategories]), ['LotShape']),
('land_slope_onehot', OneHotEncoder(categories=[landSlopeCategories]), ['LandSlope']),
],
remainder='passthrough'
)
linRegrPipeline = Pipeline([
("preprocessing", featureTransformer),
("regression", TransformedTargetRegressor(regressor=LinearRegression(), func=np.log1p, inverse_func=np.expm1)),
])
linRegrPipeline
linRegrPipeline.fit(X_train, y_train)
y_train_predicted = linRegrPipeline.predict(X_train)
y_validation_predicted_lr = linRegrPipeline.predict(X_validation)
print('[Train] MSE: %.2f' % mean_squared_error(y_train, y_train_predicted))
print('[Train] MAE: %.2f' % mean_absolute_error(y_train, y_train_predicted))
print('[Train] R^2: %.2f' % r2_score(y_train, y_train_predicted))
print('[Test] MSE: %.2f' % mean_squared_error(y_validation, y_validation_predicted_lr))
print('[Test] MAE: %.2f' % mean_absolute_error(y_validation, y_validation_predicted_lr))
print('[Test] R^2: %.2f' % r2_score(y_validation, y_validation_predicted_lr))
score_model(linRegrPipeline, X, Y)
# CV=6: MSE: 1417944709.38, MAE: 17956.26, R^2: 0.76
# CV=5: MSE: 1398586541.44, MAE: 17957.81, R^2: 0.78
# CV=4: MSE: 1312457856.75, MAE: 17934.16, R^2: 0.79
# CV=3: MSE: 1358452005.21, MAE: 18075.38, R^2: 0.78
# CV=2: MSE: 1174227509.88, MAE: 18218.52, R^2: 0.81
from sklearn.model_selection import learning_curve
def plot_learning_curve(estimator, X_train, y_train, cv, train_sizes=np.linspace(0.1, 1, 10)):
plt.style.use('seaborn-darkgrid')
train_sizes, train_scores, test_scores = learning_curve(
estimator, X_train, y_train,
scoring='neg_mean_squared_error',
cv=cv,
n_jobs=-1,
train_sizes=train_sizes,
shuffle=True,
random_state=42
)
train_mean_scores = np.mean(train_scores, axis=1)
test_mean_scores = np.mean(test_scores, axis=1)
plt.title('Learning curve')
plt.plot(train_sizes, train_mean_scores, 'y', label='Train Learning curve')
plt.plot(train_sizes, test_mean_scores, 'b', label='Test Learning curve')
plt.legend()
plot_learning_curve(linRegrPipeline, X, Y, cv=5)
def plot_regression_results(ax, y_true, y_pred, title):
"""Scatter plot of the predicted vs true targets."""
ax.plot([y_true.min(), y_true.max()],
[y_true.min(), y_true.max()],
'--r', linewidth=2)
ax.scatter(y_true, y_pred, alpha=0.2)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
ax.spines['left'].set_position(('outward', 10))
ax.spines['bottom'].set_position(('outward', 10))
ax.set_xlim([y_true.min(), y_true.max()])
ax.set_ylim([y_true.min(), y_true.max()])
ax.set_xlabel('True')
ax.set_ylabel('Predicted')
ax.set_title(title)
fig, ax0 = plt.subplots(1, 1, figsize=(9, 7))
plot_regression_results(
ax0,
y_true=y_validation,
y_pred=y_validation_predicted_lr,
title="Linear Regression Model"
)
# Fit the model again on the whole training set before the submission prediction
# 0.14288 is the best score this model gets
linRegrPipeline.fit(X, Y)
x_test = test_df[baselineFeatures]
y_test_predicted = linRegrPipeline.predict(x_test)
submission_df = pd.DataFrame({
'Id': test_df['Id'],
'SalePrice': y_test_predicted,
})
submission_df.to_csv('./data/submission_linear_regression.csv', index=False)
```
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from PIL import Image
import matplotlib.pyplot as plt
import torchvision.transforms as transforms
import torchvision.models as models
import numpy as np
import copy
import os
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
```
## Define the image input and output
```
def image_loader(image_name, imsize):
"""Image loading function"""
# Resize the image and convert it to a tensor
loader = transforms.Compose([
transforms.Resize(imsize), # scale imported image
transforms.ToTensor()]) # transform it into a torch tensor
image = Image.open(image_name)
# fake batch dimension required to fit network's input dimensions
image = loader(image).unsqueeze(0)
return image.to(device, torch.float)
def image_util(img_size=512,style_img="./images/picasso.jpg", content_img="./images/dancing.jpg"):
"""Return the style image and the content image.
The two images must be the same size.
"""
imsize = img_size if torch.cuda.is_available() else 128 # use small size if no gpu
# Load the images
style_img = image_loader(image_name=style_img, imsize=imsize)
content_img = image_loader(image_name=content_img, imsize=imsize)
# Check that both images loaded with the same size
print("Style Image Size:{}".format(style_img.size()))
print("Content Image Size:{}".format(content_img.size()))
assert style_img.size() == content_img.size(), \
"we need to import style and content images of the same size"
return style_img, content_img
```
## Define the Content Loss
```
class ContentLoss(nn.Module):
def __init__(self, target,):
super(ContentLoss, self).__init__()
# we 'detach' the target content from the tree used
# to dynamically compute the gradient: this is a stated value,
# not a variable. Otherwise the forward method of the criterion
# will throw an error.
self.target = target.detach()
def forward(self, input):
self.loss = F.mse_loss(input, self.target)
return input
```
## Define the Style Loss
```
# First, define the Gram matrix
def gram_matrix(input):
a, b, c, d = input.size() # a=batch size(=1)
# b=number of feature maps
# (c,d)=dimensions of a f. map (N=c*d)
features = input.view(a * b, c * d) # reshape F_XL into \hat F_XL
G = torch.mm(features, features.t()) # compute the gram product
# print(G)
# Normalize the Gram matrix by dividing by the total number of elements
return G.div(a * b * c * d)
x_input = torch.from_numpy(np.array([[[[1,2],[3,4]],[[5,6],[7,8]],[[9,10],[11,12]]]])).float()
x_input.size()
gram_matrix(x_input)
# Now we can define the Style Loss
class StyleLoss(nn.Module):
def __init__(self, target_feature):
super(StyleLoss, self).__init__()
self.target = gram_matrix(target_feature).detach()
def forward(self, input):
G = gram_matrix(input)
self.loss = F.mse_loss(G, self.target)
return input
```
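For the 1×3×2×2 `x_input` used above, `gram_matrix` can be checked by hand: the input flattens to a 3×4 feature matrix F, and the result is F·Fᵀ divided by a·b·c·d = 12. A dependency-free sketch (no torch required; offered as an illustration, not part of the original notebook):

```python
# Flattened feature maps of the 1x3x2x2 x_input demo above: shape (b=3, c*d=4)
features = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
n = 1 * 3 * 2 * 2  # a * b * c * d = 12, the normalization used by gram_matrix
G = [[sum(fi * fj for fi, fj in zip(r1, r2)) / n for r2 in features]
     for r1 in features]
print(G[0][0], G[0][1])  # 30/12 = 2.5 and 70/12 ≈ 5.83
```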
## Modifications based on the VGG19 network
```
# -------------------
# Model normalization
# The original VGG network normalizes its input images, so the Normalization
# module below must be the first layer of the new network
# -------------------
class Normalization(nn.Module):
def __init__(self, mean, std):
super(Normalization, self).__init__()
# .view the mean and std to make them [C x 1 x 1] so that they can
# directly work with image Tensor of shape [B x C x H x W].
# B is batch size. C is number of channels. H is height and W is width.
self.mean = mean.view(-1, 1, 1)
self.std = std.view(-1, 1, 1)
def forward(self, img):
# normalize img
return (img - self.mean) / self.std
# --------------------------------
# Modify the network structure to build the style-transfer network
# --------------------------------
def get_style_model_and_losses(cnn, normalization_mean, normalization_std,
style_img, content_img,
content_layers,
style_layers):
# Copy the cnn network
cnn = copy.deepcopy(cnn)
# normalization module
normalization = Normalization(normalization_mean, normalization_std).to(device)
# just in order to have iterable access to the lists of content/style
# losses
content_losses = []
style_losses = []
# assuming that cnn is a nn.Sequential, so we make a new nn.Sequential
# to put in modules that are supposed to be activated sequentially
# then add layers to the model one by one
model = nn.Sequential(normalization)
i = 0 # increment every time we see a conv
for layer in cnn.children():
if isinstance(layer, nn.Conv2d):
i += 1
name = 'conv_{}'.format(i)
elif isinstance(layer, nn.ReLU):
name = 'relu_{}'.format(i)
# The in-place version doesn't play very nicely with the ContentLoss
# and StyleLoss we insert below. So we replace with out-of-place
# ones here.
layer = nn.ReLU(inplace=False)
elif isinstance(layer, nn.MaxPool2d):
name = 'pool_{}'.format(i)
elif isinstance(layer, nn.BatchNorm2d):
name = 'bn_{}'.format(i)
else:
raise RuntimeError('Unrecognized layer: {}'.format(layer.__class__.__name__))
model.add_module(name, layer)
if name in content_layers:
# add content loss:
target = model(content_img).detach()
content_loss = ContentLoss(target)
model.add_module("content_loss_{}".format(i), content_loss)
content_losses.append(content_loss)
if name in style_layers:
# add style loss:
target_feature = model(style_img).detach()
style_loss = StyleLoss(target_feature)
model.add_module("style_loss_{}".format(i), style_loss)
style_losses.append(style_loss)
# now we trim off the layers after the last content and style losses
# we only need layers up to the last one used by a style or content loss; the rest can be dropped
for i in range(len(model) - 1, -1, -1):
if isinstance(model[i], ContentLoss) or isinstance(model[i], StyleLoss):
break
model = model[:(i + 1)]
# Return the modified model and the lists of style_losses and content_losses
return model, style_losses, content_losses
```
## Define the optimizer
```
def get_input_optimizer(input_img):
# Gradient descent is performed on the input image itself
optimizer = optim.LBFGS([input_img.requires_grad_()])
return optimizer
```
## Define the style transfer loop
```
def run_style_transfer(cnn, normalization_mean, normalization_std, content_img, style_img, input_img, content_layers,style_layers, num_steps=300, style_weight=1000000, content_weight=1):
print('Building the style transfer model..')
model, style_losses, content_losses = get_style_model_and_losses(cnn, normalization_mean, normalization_std, style_img, content_img, content_layers, style_layers)
optimizer = get_input_optimizer(input_img)
print('Optimizing..')
run = [0]
while run[0] <= num_steps:
def closure():
# correct the values of updated input image
input_img.data.clamp_(0, 1)
optimizer.zero_grad()
model(input_img) # forward pass
style_score = 0
content_score = 0
for sl in style_losses:
style_score += sl.loss
for cl in content_losses:
content_score += cl.loss
style_score *= style_weight
content_score *= content_weight
# the loss is the sum of the style loss and the content loss
loss = style_score + content_score
loss.backward() # backward pass
# print the loss progress
run[0] += 1
if run[0] % 50 == 0:
print("run {}:".format(run))
print('Style Loss : {:4f} Content Loss: {:4f}'.format(
style_score.item(), content_score.item()))
print()
return style_score + content_score
# take an optimization step
optimizer.step(closure)
# a last correction...
# clamp pixel values back to the [0, 1] range
input_img.data.clamp_(0, 1)
return input_img
```
## Start training
```
# Load the content image and the style image
style_img, content_img = image_util(img_size=444, style_img="./images/style/rose.jpg", content_img="./images/content/face.jpg")
# use the content image as the initial input image
input_img = content_img.clone()
# Load the pretrained model
cnn = models.vgg19(pretrained=True).features.to(device).eval()
# Normalization values for the model
cnn_normalization_mean = torch.tensor([0.485, 0.456, 0.406]).to(device)
cnn_normalization_std = torch.tensor([0.229, 0.224, 0.225]).to(device)
# Define the layers at which losses are computed
content_layers_default = ['conv_4']
style_layers_default = ['conv_1', 'conv_2', 'conv_3', 'conv_4', 'conv_5']
# Run the style transfer
output = run_style_transfer(cnn, cnn_normalization_mean, cnn_normalization_std, content_img, style_img, input_img, content_layers=content_layers_default, style_layers=style_layers_default, num_steps=300, style_weight=100000, content_weight=1)
```
## Display the image
```
image = output.cpu().clone()
image = image.squeeze(0)
unloader = transforms.ToPILImage()
unloader(image)
```
# Genotype PLINK file quality control
This workflow implements some preliminary data QC steps for PLINK input files. Inputs in VCF format will be converted to PLINK before QC is performed.
## Overview
This notebook includes workflow for
- Compute kinship matrix in sample and estimate related individuals
- Genotype and sample QC: by MAF, missing data and HWE
- LD pruning for follow up PCA analysis on genotype, as needed
A potential limitation is that the workflow requires all samples and chromosomes to be merged into a single file in order to perform both sample- and variant-level QC. However, in our experience this works even at scale: we have run the pipeline on a single merged PLINK file of 200K exomes with 15 million variants.
## Methods
Depending on the context of your problem, the workflow can be executed in two ways:
1. Run the `qc` command to perform genotype data QC and LD pruning, generating a subset of variants in preparation for analyses such as PCA.
2. Run `king` first, on either the original data or a subset of common variants, to identify unrelated individuals. The `king` pipeline splits the samples into related and unrelated individuals. Then run `qc` on the unrelated individuals only, and finally extract the same set of QC-ed variants for the related individuals.
## Input format
The whole genome PLINK bim/bed/fam bundle. For input in VCF format and/or per-chromosome VCF or PLINK format, please use `vcf_to_plink` and `merge_plink` in [genotype formatting](genotype_formatting.html) pipeline to convert them to PLINK file bundle.
## Default QC parameters
- Kinship coefficient for related individuals: 0.0625
- MAF default: 0
- The above default includes both common and rare variants
- Recommended MAF for common-variant analysis: 0.01
- Recommended MAF for PCA: 0.05
- Variant level missingness threshold: 0.1
- Sample level missingness threshold: 0.1
- LD pruning via PLINK for PCA analysis:
- window 50
- shift 10
- r2 0.1
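For orientation, the defaults above correspond to standard PLINK2 filtering flags. A hedged sketch (not the pipeline's actual code, which builds its commands in SoS; the file names here are placeholders):

```python
# Map the default thresholds above onto standard PLINK2 flags
defaults = {"--maf": "0", "--geno": "0.1", "--mind": "0.1"}
prune = ("50", "10", "0.1")  # window, shift, r^2 for --indep-pairwise

cmd = ["plink2", "--bfile", "genotypes"]
for flag, value in defaults.items():
    cmd += [flag, value]
cmd += ["--indep-pairwise", *prune, "--out", "genotypes.qc"]
print(" ".join(cmd))
```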
## Minimal working example
Minimal working example data-set as well as the singularity container `bioinfo.sif` can be downloaded from [Google Drive](https://drive.google.com/drive/u/0/folders/1ahIZGnmjcGwSd-BI91C9ayd_Ya8sB2ed).
The `chr1_chr6` data-set was merged from `chr1` and `chr6` data, using `merge_plink` command from [genotype formatting](genotype_formatting.html) pipeline.
### Example 1: perform QC on both rare and common variants
```
sos run GWAS_QC.ipynb qc_no_prune \
--cwd output/genotype \
--genoFile output/genotype/chr1_chr6.bed \
--container container/bioinfo.sif
```
### Example 2: QC common variants in unrelated individuals and extract those variants from related individuals
Determine and split between related and unrelated individuals,
```
sos run GWAS_QC.ipynb king \
--cwd output/genotype \
--genoFile output/genotype/chr1_chr6.bed \
--name 20220110 \
--container container/bioinfo.sif
```
Variant level and sample level QC on unrelated individuals, in preparation for PCA analysis:
```
sos run GWAS_QC.ipynb qc \
--cwd output/genotype \
--genoFile output/genotype/chr1_chr6.20220110.unrelated.bed \
--maf-filter 0.01 \
--name for_pca \
--container container/bioinfo.sif
```
Extract previously selected variants from related individuals in preparation for PCA, only applying missingness filter at sample level,
```
sos run GWAS_QC.ipynb qc_no_prune \
--cwd output/genotype \
--genoFile output/genotype/chr1_chr6.20220110.related.bed \
--keep-variants output/genotype/chr1_chr6.20220110.unrelated.for_pca.filtered.prune.in \
--maf-filter 0 --geno-filter 0 --mind-filter 0.1 --hwe-filter 0 \
--name for_pca \
--container container/bioinfo.sif
```
## Command interface
```
sos run GWAS_QC.ipynb -h
[global]
# the output directory for generated files
parameter: cwd = path
# A string to identify your analysis run
parameter: name = ""
# PLINK binary files
parameter: genoFile = paths
# The path to the file that contains the list of samples to remove (format FID, IID)
parameter: remove_samples = path('.')
# The path to the file that contains the list of samples to keep (format FID, IID)
parameter: keep_samples = path('.')
# The path to the file that contains the list of variants to keep
parameter: keep_variants = path('.')
# The path to the file that contains the list of variants to exclude
parameter: exclude_variants = path('.')
# Kinship coefficient threshold for related individuals
# (e.g first degree above 0.25, second degree above 0.125, third degree above 0.0625)
parameter: kinship = 0.0625
# For cluster jobs, number commands to run per job
parameter: job_size = 1
# Wall clock time expected
parameter: walltime = "5h"
# Memory expected
parameter: mem = "16G"
# Number of threads
parameter: numThreads = 20
# Software container option
parameter: container = ""
if not container:
container = None
# use this function to edit memory string for PLINK input
from sos.utils import expand_size
cwd = path(f"{cwd:a}")
```
## Estimate kinship in the sample
The output is a list of related individuals, as well as the kinship matrix
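Conceptually, the unrelated-individual selection implemented in R further below can be sketched in a few lines: given pairwise kinship estimates, greedily drop the individual involved in the most remaining related pairs until none remain (a simple approximation of keeping the maximum unrelated set; the toy IDs and kinship values are illustrative only):

```python
def greedy_unrelated_filter(pairs, threshold):
    # pairs: (ID1, ID2, kinship) tuples; returns the set of IDs to drop so
    # that no remaining pair has kinship above the threshold
    related = [(a, b) for a, b, k in pairs if k > threshold]
    dropped = set()
    while related:
        counts = {}
        for a, b in related:
            counts[a] = counts.get(a, 0) + 1
            counts[b] = counts.get(b, 0) + 1
        worst = max(counts, key=counts.get)  # individual in most related pairs
        dropped.add(worst)
        related = [(a, b) for a, b in related if worst not in (a, b)]
    return dropped

# Toy kinship table: A is related to both B and C; D-E fall below the threshold
pairs = [("A", "B", 0.25), ("A", "C", 0.125), ("D", "E", 0.01)]
print(greedy_unrelated_filter(pairs, threshold=0.0625))  # dropping {'A'} suffices
```

The R `relatednessFilter` below is more careful (it maximizes the independent set via `igraph::ivs`), but the intent is the same.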
```
# Inference of relationships in the sample to identify closely related individuals
[king_1]
# PLINK binary file
parameter: kin_maf = 0.01
input: genoFile
output: f'{cwd}/{_input:bn}{("."+name) if name else ""}.kin0'
task: trunk_workers = 1, trunk_size = job_size, walltime = walltime, mem = mem, cores = numThreads, tags = f'{step_name}_{_output:bn}'
bash: container=container, expand= "${ }", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
plink2 \
--bfile ${_input:n} \
--make-king-table \
--king-table-filter ${kinship} \
${('--keep %s' % keep_samples) if keep_samples.is_file() else ""} \
${('--remove %s' % remove_samples) if remove_samples.is_file() else ""} \
--min-af ${kin_maf} \
--max-af ${1-kin_maf} \
--out ${_output:n} \
--threads ${numThreads} \
--memory ${int(expand_size(mem) * 0.9 / 1e6)}
# Select a list of unrelated individual with an attempt to maximize the unrelated individuals selected from the data
[king_2]
# If set to true, the unrelated individuals in a family will be kept without being reported.
# Otherwise (use `--no-maximize-unrelated`) the entire family will be removed
# Note that attempting to maximize unrelated individuals is computationally intensive on large data.
parameter: maximize_unrelated = True
output: f'{_input:n}.related_id'
task: trunk_workers = 1, trunk_size = job_size, walltime = walltime, mem = mem, cores = numThreads, tags = f'{step_name}_{_output:bn}'
R: container=container, expand= "${ }", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout'
library(dplyr)
library(igraph)
# Remove related individuals while keeping maximum number of individuals
# this function is simplified from:
# https://rdrr.io/cran/plinkQC/src/R/utils.R
#' @param relatedness [data.frame] containing pair-wise relatedness estimates
#' (in column [relatednessRelatedness]) for individual 1 (in column
#' [relatednessIID1]) and individual 2 (in column [relatednessIID2]). Columns
#' relatednessIID1, relatednessIID2 and relatednessRelatedness have to present,
#' while additional columns such as family IDs can be present. Default column
#' names correspond to column names in output of plink --genome
#' (\url{https://www.cog-genomics.org/plink/1.9/ibd}). All original
#' columns for pair-wise highIBDTh fails will be returned in fail_IBD.
#' @param relatednessTh [double] Threshold for filtering related individuals.
#' Individuals, whose pair-wise relatedness estimates are greater than this
#' threshold are considered related.
relatednessFilter <- function(relatedness,
relatednessTh,
relatednessIID1="IID1",
relatednessIID2="IID2",
relatednessRelatedness="KINSHIP") {
# format data
if (!(relatednessIID1 %in% names(relatedness))) {
stop(paste("Column", relatednessIID1, "for relatedness not found!"))
}
if (!(relatednessIID2 %in% names(relatedness))) {
stop(paste("Column", relatednessIID2, "for relatedness not found!"))
}
if (!(relatednessRelatedness %in% names(relatedness))) {
stop(paste("Column", relatednessRelatedness,
"for relatedness not found!"))
}
iid1_index <- which(colnames(relatedness) == relatednessIID1)
iid2_index <- which(colnames(relatedness) == relatednessIID2)
relatedness[,iid1_index] <- as.character(relatedness[,iid1_index])
relatedness[,iid2_index] <- as.character(relatedness[,iid2_index])
relatedness_names <- names(relatedness)
names(relatedness)[iid1_index] <- "IID1"
names(relatedness)[iid2_index] <- "IID2"
names(relatedness)[names(relatedness) == relatednessRelatedness] <- "M"
# Remove symmetric IID rows
relatedness_original <- relatedness
relatedness <- dplyr::select_(relatedness, ~IID1, ~IID2, ~M)
sortedIDs <- data.frame(t(apply(relatedness, 1, function(pair) {
c(sort(c(pair[1], pair[2])))
})), stringsAsFactors=FALSE)
keepIndex <- which(!duplicated(sortedIDs))
relatedness_original <- relatedness_original[keepIndex,]
relatedness <- relatedness[keepIndex,]
# individuals with at least one pair-wise comparison > relatednessTh
# return NULL to failIDs if no one fails the relatedness check
highRelated <- dplyr::filter_(relatedness, ~M > relatednessTh)
if (nrow(highRelated) == 0) {
return(list(relatednessFails=NULL, failIDs=NULL))
}
# all samples with related individuals
allRelated <- c(highRelated$IID1, highRelated$IID2)
uniqueIIDs <- unique(allRelated)
# Further selection of samples with relatives in cohort
multipleRelative <- unique(allRelated[duplicated(allRelated)])
singleRelative <- uniqueIIDs[!uniqueIIDs %in% multipleRelative]
highRelatedMultiple <- highRelated[highRelated$IID1 %in% multipleRelative |
highRelated$IID2 %in% multipleRelative,]
highRelatedSingle <- highRelated[highRelated$IID1 %in% singleRelative &
highRelated$IID2 %in% singleRelative,]
# Only one related samples per individual
if(length(singleRelative) != 0) {
# randomly choose one to exclude
failIDs_single <- highRelatedSingle[,1]
} else {
failIDs_single <- NULL
}
# An individual has multiple relatives
if(length(multipleRelative) != 0) {
relatedPerID <- lapply(multipleRelative, function(x) {
tmp <- highRelatedMultiple[rowSums(
cbind(highRelatedMultiple$IID1 %in% x,
highRelatedMultiple$IID2 %in% x)) != 0,1:2]
rel <- unique(unlist(tmp))
return(rel)
})
names(relatedPerID) <- multipleRelative
keepIDs_multiple <- lapply(relatedPerID, function(x) {
pairwise <- t(combn(x, 2))
index <- (highRelatedMultiple$IID1 %in% pairwise[,1] &
highRelatedMultiple$IID2 %in% pairwise[,2]) |
(highRelatedMultiple$IID1 %in% pairwise[,2] &
highRelatedMultiple$IID2 %in% pairwise[,1])
combination <- highRelatedMultiple[index,]
combination_graph <- igraph::graph_from_data_frame(combination,
directed=FALSE)
all_iv_set <- igraph::ivs(combination_graph)
length_iv_set <- sapply(all_iv_set, function(x) length(x))
if (all(length_iv_set == 1)) {
# check how often they occur elsewhere
occurrence <- sapply(x, function(id) {
sum(sapply(relatedPerID, function(idlist) id %in% idlist))
})
# if occurrence the same everywhere, pick the first, else keep
# the one with minimum occurrence elsewhere
if (length(unique(occurrence)) == 1) {
nonRelated <- sort(x)[1]
} else {
nonRelated <- names(occurrence)[which.min(occurrence)]
}
} else {
nonRelated <- all_iv_set[which.max(length_iv_set)]
}
return(nonRelated)
})
keepIDs_multiple <- unique(unlist(keepIDs_multiple))
failIDs_multiple <- c(multipleRelative[!multipleRelative %in%
keepIDs_multiple])
} else {
failIDs_multiple <- NULL
}
allFailIIDs <- c(failIDs_single, failIDs_multiple)
relatednessFails <- lapply(allFailIIDs, function(id) {
fail_inorder <- relatedness_original$IID1 == id &
relatedness_original$M > relatednessTh
fail_inreverse <- relatedness_original$IID2 == id &
relatedness_original$M > relatednessTh
if (any(fail_inreverse)) {
inreverse <- relatedness_original[fail_inreverse, ]
id1 <- iid1_index
id2 <- iid2_index
inreverse[,c(id1, id2)] <- inreverse[,c(id2, id1)]
names(inreverse) <- relatedness_names
} else {
inreverse <- NULL
}
inorder <- relatedness_original[fail_inorder, ]
names(inorder) <- relatedness_names
return(rbind(inorder, inreverse))
})
relatednessFails <- do.call(rbind, relatednessFails)
if (nrow(relatednessFails) == 0) {
relatednessFails <- NULL
failIDs <- NULL
} else {
names(relatednessFails) <- relatedness_names
rownames(relatednessFails) <- 1:nrow(relatednessFails)
uniqueFails <- relatednessFails[!duplicated(relatednessFails[,iid1_index]),]
failIDs <- uniqueFails[,iid1_index]
}
return(list(relatednessFails=relatednessFails, failIDs=failIDs))
}
# main code
kin0 <- read.table(${_input:r}, header=FALSE, stringsAsFactors=FALSE)
colnames(kin0) <- c("FID1","ID1","FID2","ID2","NSNP","HETHET","IBS0","KINSHIP")
if (${"TRUE" if maximize_unrelated else "FALSE"}) {
rel <- relatednessFilter(kin0, ${kinship}, "ID1", "ID2", "KINSHIP")$failIDs
tmp1 <- kin0[,1:2]
tmp2 <- kin0[,3:4]
colnames(tmp1) = colnames(tmp2) = c("FID", "ID")
# Get the family ID for these rels so there are two columns FID and IID in the output
lookup <- dplyr::distinct(rbind(tmp1,tmp2))
dat <- lookup[which(lookup[,2] %in% rel),]
} else {
rel <- kin0 %>% filter(KINSHIP >= ${kinship})
IID <- sort(unique(unlist(rel[, c("ID1", "ID2")])))
dat <- data.frame(IID)
dat <- dat %>%
mutate(FID = IID) %>%
select(FID, IID)
}
cat("There are", nrow(dat),"related individuals using a kinship threshold of ${kinship}\n")
write.table(dat,${_output:r}, quote=FALSE, row.names=FALSE, col.names=FALSE)
# Split genotype data into related and unrelated samples, if related individuals are detected
[king_3]
input: output_from(2), genoFile
output: unrelated_bed = f'{cwd}/{_input[0]:bn}.unrelated.bed',
related_bed = f'{cwd}/{_input[0]:bn}.related.bed'
related_id = [x.strip() for x in open(_input[0]).readlines()]
stop_if(len(related_id) == 0)
task: trunk_workers = 1, trunk_size = job_size, walltime = walltime, mem = mem, cores = numThreads, tags = f'{step_name}_{_output[0]:bn}'
bash: expand= "${ }", stderr = f'{_output[0]:n}.stderr', stdout = f'{_output[0]:n}.stdout', container = container
plink2 \
--bfile ${_input[1]:n} \
--remove ${_input[0]} \
${('--keep %s' % keep_samples) if keep_samples.is_file() else ""} \
--make-bed \
--out ${_output[0]:n} \
--threads ${numThreads} \
--memory ${int(expand_size(mem) * 0.9)/1e6} --new-id-max-allele-len 1000 --set-all-var-ids chr@:#_\$r_\$a
plink2 \
--bfile ${_input[1]:n} \
--keep ${_input[0]} \
--make-bed \
--out ${_output[1]:n} \
--threads ${numThreads} \
--memory ${int(expand_size(mem) * 0.9)/1e6} --new-id-max-allele-len 1000 --set-all-var-ids chr@:#_\$r_\$a
```
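The `relatednessFilter` step above keeps as many samples as possible by taking a largest independent vertex set of the relatedness graph (via `igraph::ivs`). The idea can be sketched in Python with networkx on a toy set of pairs; the sample IDs below are made up, and this is an illustration of the selection logic, not the workflow's code:

```python
import networkx as nx

# Toy relatedness pairs (IID1, IID2) whose kinship exceeds the threshold.
pairs = [("A", "B"), ("B", "C"), ("C", "D")]
g = nx.Graph(pairs)

# The largest set of mutually unrelated samples is a maximum independent
# set of the relatedness graph; independent sets of g are exactly the
# cliques of its complement, so we take the largest complement clique.
keep = max(nx.find_cliques(nx.complement(g)), key=len)
fail = sorted(set(g) - set(keep))
print(sorted(keep), fail)
```

For this chain of pairs, two samples can be kept and two must be dropped; enumerating complement cliques is only feasible for small graphs, which is why the workflow restricts it to clusters of mutually related samples.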
## Genotype and sample QC
QC the genetic data based on MAF, sample and variant missingness, and Hardy-Weinberg equilibrium (HWE).
In this step you may also provide a list of samples to keep, for example when you would like to subset samples based on their ancestries and perform independent analyses on each of these groups.
The default parameters are set to reflect some suggestions in Table 1 of [this paper](https://dx.doi.org/10.1002%2Fmpr.1608).
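As a rough illustration of what these thresholds act on, MAF and an HWE statistic can be computed from per-variant genotype counts. The counts below are made up, and plink's `--hwe` actually uses an exact test rather than this chi-square approximation:

```python
import math

# Genotype counts for one biallelic variant: AA, Aa, aa (made-up numbers).
n_AA, n_Aa, n_aa = 180, 95, 25
n = n_AA + n_Aa + n_aa

# Allele frequency of A, and the minor allele frequency that --maf filters on.
p = (2 * n_AA + n_Aa) / (2 * n)
maf = min(p, 1 - p)

# 1-df chi-square HWE test against the expected p^2 : 2pq : q^2 counts.
expected = [n * p * p, 2 * n * p * (1 - p), n * (1 - p) ** 2]
observed = [n_AA, n_Aa, n_aa]
stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
p_hwe = math.erfc(math.sqrt(stat / 2))  # chi-square(1) survival function
print(round(maf, 4), round(p_hwe, 4))
```

A variant fails the default filters when `maf` falls below `maf_filter` or `p_hwe` falls below `hwe_filter`.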
```
# Filter SNPs and select individuals
[qc_no_prune, qc_1 (basic QC filters)]
# minimum MAF filter to use. 0 means do not apply this filter.
parameter: maf_filter = 0.0
# maximum MAF filter to use. 0 means do not apply this filter.
parameter: maf_max_filter = 0.0
# Maximum missingess per-variant
parameter: geno_filter = 0.1
# Maximum missingness per-sample
parameter: mind_filter = 0.1
# HWE filter
parameter: hwe_filter = 1e-06
fail_if(not (keep_samples.is_file() or keep_samples == path('.')), msg = f'Cannot find ``{keep_samples}``')
fail_if(not (keep_variants.is_file() or keep_variants == path('.')), msg = f'Cannot find ``{keep_variants}``')
fail_if(not (remove_samples.is_file() or remove_samples == path('.')), msg = f'Cannot find ``{remove_samples}``')
input: genoFile, group_by=1
output: f'{cwd}/{_input:bn}{("."+name) if name else ""}.filtered{".extracted" if keep_variants.is_file() else ""}.bed'
task: trunk_workers = 1, trunk_size = job_size, walltime = walltime, mem = mem, cores = numThreads, tags = f'{step_name}_{_output:bn}'
bash: container=container, volumes=[f'{cwd}:{cwd}'], expand= "${ }", stderr = f'{_output:n}.stderr', stdout = f'{_output:n}.stdout'
plink2 \
--bfile ${_input:n} \
${('--maf %s' % maf_filter) if maf_filter > 0 else ''} \
${('--max-maf %s' % maf_max_filter) if maf_max_filter > 0 else ''} \
${('--geno %s' % geno_filter) if geno_filter > 0 else ''} \
${('--hwe %s' % hwe_filter) if hwe_filter > 0 else ''} \
${('--mind %s' % mind_filter) if mind_filter > 0 else ''} \
${('--keep %s' % keep_samples) if keep_samples.is_file() else ""} \
${('--remove %s' % remove_samples) if remove_samples.is_file() else ""} \
${('--exclude %s' % exclude_variants) if exclude_variants.is_file() else ""} \
${('--extract %s' % keep_variants) if keep_variants.is_file() else ""} \
--make-bed \
--out ${_output:n} \
--threads ${numThreads} \
--memory ${int(expand_size(mem) * 0.9)/1e6} --new-id-max-allele-len 1000 --set-all-var-ids chr@:#_\$r_\$a
# LD pruning and removal of related individuals (both individuals of a pair)
[qc_2 (LD pruning)]
# Window size
parameter: window = 50
# Number of SNPs by which to shift the window at each step
parameter: shift = 10
# Pairwise r^2 threshold above which one SNP of each pair is pruned
parameter: r2 = 0.1
stop_if(r2==0)
output: bed=f'{cwd}/{_input:bn}.prune.bed', prune=f'{cwd}/{_input:bn}.prune.in'
task: trunk_workers = 1, trunk_size = job_size, walltime = walltime, mem = mem, cores = numThreads, tags = f'{step_name}_{_output[0]:bn}'
bash: container=container, expand= "${ }", stderr = f'{_output[0]:n}.stderr', stdout = f'{_output[0]:n}.stdout'
plink \
--bfile ${_input:n} \
--indep-pairwise ${window} ${shift} ${r2} \
--out ${_output["prune"]:nn} \
--threads ${numThreads} \
--memory ${int(expand_size(mem) * 0.9)/1e6}
plink \
--bfile ${_input:n} \
--extract ${_output['prune']} \
--make-bed \
--out ${_output['bed']:n} \
--threads ${numThreads} \
--memory ${int(expand_size(mem) * 0.9)/1e6}
```
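The `--indep-pairwise ${window} ${shift} ${r2}` step slides a window of SNPs along each chromosome and greedily removes one SNP of any pair whose genotype correlation exceeds the r² threshold. The pairwise statistic itself is just a squared Pearson correlation of dosage vectors, sketched here on toy data:

```python
import numpy as np

# Toy genotype dosages (0/1/2 minor-allele counts) for two nearby SNPs.
snp1 = np.array([0, 1, 2, 1, 0, 2, 1, 0])
snp2 = np.array([0, 1, 2, 2, 0, 2, 1, 1])

# LD r^2 is the squared Pearson correlation of the dosage vectors;
# --indep-pairwise drops one SNP of any pair with r^2 above the threshold.
r2 = np.corrcoef(snp1, snp2)[0, 1] ** 2
print(round(r2, 3))
```

With the default threshold of 0.1, this pair would be pruned; pruning leaves a set of approximately independent markers for the downstream relatedness and PCA steps.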
| github_jupyter |
## 3dgfx the math
This is pretty much a collection of notes mostly inspired by [Computer Graphics, Fall 2009](https://www.youtube.com/playlist?list=PL_w_qWAQZtAZhtzPI5pkAtcUVgmzdAP8g). Yeah, it's an old course, but it's very good and covers a lot of essentials at a fast pace.
This is by no means a substitute for watching it yourself; please do if you're trying to figure out this stuff. This is more like a companion guide where essential materials are made real by executable examples where possible and relevant.
We will focus only on the math. Any programming (besides the code in this book) will be out of scope. The target audience is experienced programmers who already have (or should have) seen this stuff and need an extended cheat sheet in the form of a refresher.
> "All the meat I've eaten. I forgive myself"
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
## vector space
A **vector space** (also called a **linear space**) is a collection of objects called **vectors** which may be added together and multiplied by numbers (called **scalars** in this context). The operations of multiplication and addition must satisfy certain requirements called *axioms*.
A vector space over a field $F$ is a set $V$ together with two operations that satisfy the eight axioms below. Elements of $V$ are commonly called vectors. Elements of $F$ are commonly called scalars.
The first operation, called *vector addition* or just *addition*, takes any two vectors $\vec{v}$ and $\vec{w}$ and assigns them to a third vector commonly written as $\vec{v} + \vec{w}$ and called the sum of these two vectors. The second operation, *scalar multiplication*, takes any scalar $a$ and any vector $\vec{v}$ and produces a new vector $a\vec{v}$.
### axioms
In the list below, let $\vec{u}$, $\vec{v}$ and $\vec{w}$ be arbitrary vectors in $V$, and $a$ and $b$ scalars in $F$.
#### associativity of addition
$\vec{u} + (\vec{v} + \vec{w}) = (\vec{u} + \vec{v}) + \vec{w}$
#### commutativity of addition
$\vec{u} + \vec{w} = \vec{w} + \vec{u}$
#### identity element of addition
There exists an element $0 \in V$, called the *zero vector* such that $\vec{v} + 0 = \vec{v}$ for all $\vec{v} \in V$.
#### inverse elements of addition
For every element $\vec{v} \in V$ there exists an element $-\vec{v} \in V$ called the *additive inverse* of $\vec{v}$ such that $\vec{v} + (-\vec{v}) = 0$.
#### compatibility of scalar multiplication with field multiplication
$a(b\vec{v}) = (ab)\vec{v}$
#### identity element of scalar multiplication
$1\vec{v} = \vec{v}$, where $1$ denotes the multiplicative identity in $F$.
#### distributivity of scalar multiplication with respect to vector addition
$a(\vec{u} + \vec{v}) = a\vec{u} + a\vec{v}$
#### distributivity of scalar multiplication with respect to field addition
$(a + b)\vec{v} = a\vec{v} + b\vec{v}$
When the scalar field $F$ is the real numbers $\mathbb{R}$, the vector space is called a *real vector space*. When the scalar field is complex numbers, the vector space is called a *complex vector space*.
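None of this is hard to believe for $\mathbb{R}^n$, and we can spot-check a few of the axioms numerically with arbitrary vectors and scalars (a sanity check, not a proof):

```python
import numpy as np

# spot-check a few axioms for R^2 with arbitrary vectors and scalars
u, v, w = np.array([1.0, 2.0]), np.array([-3.0, 0.5]), np.array([4.0, -1.0])
a, b = 2.0, -1.5

assert np.allclose(u + (v + w), (u + v) + w)    # associativity of addition
assert np.allclose(u + v, v + u)                # commutativity of addition
assert np.allclose(a * (b * v), (a * b) * v)    # compatibility of scalar mult.
assert np.allclose(a * (u + v), a * u + a * v)  # distributivity (vector add.)
assert np.allclose((a + b) * v, a * v + b * v)  # distributivity (field add.)
print("all axioms hold for these samples")
```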
## affine space
In an **affine space**, there is no distinguished point that serves as an origin. Hence, no vector has a fixed origin and no vector can be uniquely associated to a *point*. In an affine space, there are instead *displacement vectors*, also called *translation vectors* or simply *translations*, between two points of the space.
Thus it makes sense to subtract two points of the space, giving a translation vector, but it does not make sense to add two points of the space.
$\vec{v} = P_2 - P_1$
Likewise, it makes sense to add a vector to a point of an affine space, resulting in a new point translated from the starting point by that vector.
$P_2 = P_1 + \vec{v}$
Note that we can interpolate any point $P_t$ on the line through points $P_1$ and $P_2$ by scaling the translation vector with a factor ${t}$.
$P_t = P_1 + t\vec{v} = P_1 + t(P_2 - P_1)$
This is demonstrated in the plot below.
```
def plot_line(p1, p2, style='b', **kwargs):
p1x, p1y = p1
p2x, p2y = p2
plt.plot([p1x, p2x], [p1y, p2y], style, **kwargs)
P1 = np.array([1, 1])
P2 = np.array([3, 2])
t = 0.38
Pt = P1 + t * (P2 - P1)
plot_line(P1, P2, label=r'$t(P_2 - P_1) = t\vec{v}$')
plt.plot(*P1, 'ko')
plt.plot(*P2, 'ko')
plt.plot(*Pt, 'ro')
plt.legend()
ax = plt.axes()
ax.set_xlim(0, 4)
ax.set_ylim(0, 3)
ax.annotate('$P_1$', (P1[0], P1[1]), xytext=(P1[0], P1[1] - 0.20))
ax.annotate('$P_2$', (P2[0], P2[1]), xytext=(P2[0], P2[1] + 0.10))
ax.annotate('$P_{t=%.2f}$' % t, (Pt[0], Pt[1]), xytext=(Pt[0], Pt[1] - 0.20))
```
We can also write this differently:
$P_t = P_1 + t(P_2 - P_1) = (1 - t)P_1 + tP_2$.
We can see this by refactoring it:
$(1 - t)P_1 + tP_2 = P_1 - tP_1 + tP_2 = P_1 + t(P_2 - P_1)$.
The benefit of writing it like $(1 - t)P_1 + tP_2$ is that now we have something that is known as an *affine combination*.
## affine combination
An **affine combination** of vectors ${x_1}, \ldots, {x_n}$ is a vector $\underset{i=1}{\overset{n}{\sum}}{\alpha_i}\cdot{x_i} = {\alpha_1}{x_1} + {\alpha_2}{x_2} + \cdots + {\alpha_n}{x_n}$, i.e. a *linear combination* of ${x_1}, \ldots, {x_n}$ in which the sum of the coefficients is 1, thus: $\underset{i=1}{\overset{n}{\sum}}{\alpha_i} = 1$.
Here the vectors ${x_i}$ are elements of a given vector space $V$ over a field $K$ and the coefficients ${\alpha_i}$ are scalars in $K$. This concept is important, for example, in *Euclidean geometry*.
The act of taking an affine combination commutes with any *affine transformation* $T$ in the sense that $T{\underset{i=1}{\overset{n}{\sum}}}{\alpha_i}\cdot{x_i} = \underset{i=1}{\overset{n}{\sum}}{\alpha_i}\cdot{T}{x_i}$.
In particular, any affine combination of the [fixed points](https://en.wikipedia.org/wiki/Fixed_point_(mathematics)) of a given affine transformation $T$ is also a fixed point of $T$, so the set of fixed points of $T$ forms an affine subspace (in 3D: a line or a plane, and the trivial cases, a point or the whole space).
The important takeaway from the mumbo jumbo above is that, if we have some kind of thing that resembles $t_1{P_1} + t_2{P_2} + \cdots + t_n{P_n}$ and $t_1 + t_2 + \cdots + t_n = 1$ then we're dealing with an affine combination.
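The commuting property from above is easy to see numerically: apply an affine map $T(x) = Mx + t$ before and after taking the combination. The points, weights and matrix below are arbitrary; the only thing that matters is that the weights sum to 1:

```python
import numpy as np

pts = np.array([[0.0, 1.0], [2.0, 5.0], [4.0, 2.0]])  # three points in R^2
w = np.array([0.2, 0.3, 0.5])                          # weights summing to 1

M = np.array([[2.0, 1.0], [0.0, 1.5]])                 # arbitrary affine map
t = np.array([3.0, -1.0])
T = lambda x: M @ x + t

# combine then transform vs. transform then combine:
# identical precisely because sum(w) == 1
lhs = T(w @ pts)
rhs = w @ np.array([T(p) for p in pts])
assert np.allclose(lhs, rhs)
print(lhs)
```

If the weights summed to anything other than 1, the leftover factor on the translation $t$ would make the two sides differ.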
### interlude: thinking about affine space
As a thought experiment it's fun to stop and think about what it means to be in affine space.
First, we cannot just describe a point and say where it is. In affine space, a point can only be described by its position relative to the other points in the combination it is part of.
We also found that we can use triangles to define a point in two-dimensional affine space. It would make sense that we could use lines to represent points in one-dimensional affine space. If a triangle is $P_t = t_1{P_1} + t_2{P_2} + t_3{P_3}$ then it also makes sense that a line will be $P_t = t_1{P_1} + t_2{P_2}$.
Even with the small amount of computations we have done, we can already see that the affine space model is a really nice way of thinking about points and the relationships between them.
### barycentric coordinates
In the context of a triangle, **barycentric coordinates** are also known as *area coordinates* or *areal coordinates*, because the coordinates of $P$ with respect to triangle $ABC$ are equivalent to the (signed) ratios of the areas of $PBC$, $PCA$ and $PAB$ to the area of the reference triangle $ABC$.
Areal and trilinear coordinates are used for similar purposes in geometry.
In order to figure out what this all means, we'll start with a triangle ${P_1}{P_2}{P_3}$ and a point ${P}$ inside of it. The code below defines these points and a `triangle_example1` function so we can re-use it. The function just plots the triangle, the points and some annotations.
```
P1 = 0, 1
P2 = 2, 5
P3 = 4, 2
P = 1.8, 2.9
def triangle_example1():
plot_line(P1, P2)
plot_line(P2, P3)
plot_line(P3, P1)
plt.plot(*P1, 'ko')
plt.plot(*P2, 'ko')
plt.plot(*P3, 'ko')
plt.plot(*P, 'ro')
ax = plt.axes()
ax.set_xlim(-0.5, 4.5)
ax.set_ylim(0.5, 5.5)
ax.annotate('$P_1$', P1, xytext=(P1[0] - 0.2, P1[1] + 0.2))
ax.annotate('$P_2$', P2, xytext=(P2[0] + 0.2, P2[1] + 0))
ax.annotate('$P_3$', P3, xytext=(P3[0] - 0.1, P3[1] - 0.3))
ax.annotate('$P$', P, xytext=(P[0] + 0.1, P[1] + 0.2))
ax.set_aspect(1.0)
```
And now we call the `triangle_example1` function to plot it:
```
plt.figure(figsize=(6,6))
triangle_example1()
```
We have all the components to express $P$ in terms of the points of the triangle ${P_1}{P_2}{P_3}$.
We start at $P_1$ and go some amount $t_2$ in the direction of the vector $P_2 - P_1$. We'll end up at $P_1 + t_2(P_2 - P_1)$. From there we go some amount $t_3$ in the direction of the vector $P_3 - P_1$ and with the proper amounts (or *ratios*) of $t_2$ and $t_3$ we should be able to end up at $P$.
To visualize it we'll use a slightly different plot. The dashed line goes through point $P$ and runs in the direction of $P_3 - P_1$. We start at $P_1$ and go in the direction of the $P_2 - P_1$ vector until we end up at the dashed line. From there we just have to follow said dashed line until we end up at $P$.
```
plt.figure(figsize=(6,6))
triangle_example1()
f = lambda x: 1/4 * x + 1
x = np.linspace(-1, 5, 5)
# FIXME: Let's just do it engineering style for now
f_x = f(x) - f(P[0]) + P[1]
plt.plot(x, f_x, 'r--')
ax = plt.axes()
ax.annotate(r'$t_2$', (0.25, 2))
ax.annotate(r'$t_3$', (1.25, 2.45))
```
In the end we'll end up with $P = P_1 + t_2(P_2 - P_1) + t_3(P_3 - P_1)$.
What happened to $t_1$ though? Earlier we saw how to rewrite this so it's an *affine combination* and we can actually do that.
Let's try expanding everything, this usually helps:
$P = P_1 + t_2{P_2} - t_2{P_1} + t_3{P_3} - t_3{P_1}$
First thing we note is that this is just a simple addition. It's basically $A + B + C + D$, except here it's $P = A + B - C + D - E$.
Another thing we can see is that $P_1$ appears multiple times. It's important to note that this is not a characteristic of the point $P_1$ itself but just the fact that our calculation uses it as a reference point. It could be any point, but $P_1$ is just an easy reference.
There's probably lots of ways to geek this out but I like to take the pragmatic way. We got a $P_1$, $-{t_2}{P_1}$ and $-{t_3}{P_1}$ and some other stuff added to that. We can simplify this:
$1 \cdot {P_1} - {t_2}{P_1} - {t_3}{P_1} = (1 - t_2 - t_3)P_1$
Now that we managed to capture all of the instances of $P_1$ in a single factor we can incorporate this with the factors of $P_2$ and $P_3$. As we found out above they are quite easy though, they are just $t_2{P_2}$ and $t_3{P_3}$ so that:
$P = 1 \cdot P_1 + t_2{P_2} - t_2{P_1} + t_3{P_3} - t_3{P_1} = (1 - t_2 - t_3)P_1 + t_2{P_2} + t_3{P_3}$
Considering that we are looking for an *affine combination*, we ended up with:
$P = (1 - t_2 - t_3)P_1 + t_2{P_2} + t_3{P_3}$
We can now finally start to answer: "so what's $t_1$?" Looking at the combination above, it has to be $1 - t_2 - t_3$, so that we end up with:
$P = t_1{P_1} + t_2{P_2} + t_3{P_3}$ with $t_1 = 1 - t_2 - t_3$.
Now the point of this is not to find exactly where $P$ is. The point is *how* to find out how to calculate it in terms of three points in **affine space** and how it works.
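Still, it is reassuring to recover the actual weights for the triangle from the earlier example by solving the small linear system $P - P_1 = t_2(P_2 - P_1) + t_3(P_3 - P_1)$, using the same points as above:

```python
import numpy as np

P1, P2, P3 = np.array([0.0, 1.0]), np.array([2.0, 5.0]), np.array([4.0, 2.0])
P = np.array([1.8, 2.9])

# solve P - P1 = t2 (P2 - P1) + t3 (P3 - P1) for t2, t3
A = np.column_stack([P2 - P1, P3 - P1])
t2, t3 = np.linalg.solve(A, P - P1)
t1 = 1 - t2 - t3

# the weights reproduce P, sum to 1, and are all positive (P is inside)
assert np.allclose(t1 * P1 + t2 * P2 + t3 * P3, P)
print(round(t1, 4), round(t2, 4), round(t3, 4))
```

All three weights come out positive, which is exactly the barycentric-coordinate criterion for $P$ lying inside the triangle.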
The real beauty of this all lies in the fact that it doesn't really matter what the points are or what they represent, as long as they can be made to operate in affine space. Which basically means they work according to the [axioms](#axioms) we listed at the top.
| github_jupyter |
```
import tkinter as tk
import pyautogui
import matplotlib.pyplot as plt
import numpy as np
from pynput import mouse
def on_click(x, y, button, pressed):
print('{0} at {1}'.format(
'Pressed' if pressed else 'Released',
(x, y)))
if not pressed:
# Stop listener
return False
# Collect events until release
# ...or, in a non-blocking fashion:
listener = mouse.Listener(
on_click=on_click,
)
listener.start()
class MainDisplay(tk.Toplevel):
def __init__(self, master, op):
tk.Toplevel.__init__(self, master)
self.op = op
def find_text():
text = e1.get()
print(text)
# def capture_template():
# x1,x2,y1,y2 = None, None, None, None
# def on_click(x, y, button, pressed):
# global x1, x2, y1, y2
# #print('{0} at {1}'.format(
# # 'Pressed' if pressed else 'Released',
# # (x, y)))
# if pressed:
# x1,y1 = x,y
# else:
# x2,y2 = x,y
# if not pressed:
# # Stop listener
# return False
# listener = mouse.Listener(
# on_click=on_click,
# )
# listener.start()
# print(x1,y1,x2,y2)
def draw_region():
pass
x1,x2,y1,y2,scr_shot = None, None, None, None, None
def on_click(x, y, button, pressed):
global x1, x2, y1, y2
#print('{0} at {1}'.format(
# 'Pressed' if pressed else 'Released',
# (x, y)))
if pressed:
x1,y1 = x,y
else:
x2,y2 = x,y
print(x1,y1,x2,y2)
if not pressed:
# Stop listener
scr_shot = pyautogui.screenshot(region=(x1,y1, x2-x1, y2-y1))
scr_shot = np.array(scr_shot)
print(scr_shot.min(), scr_shot.max())
plt.imshow(scr_shot)
plt.show()
op.write("")
listener = mouse.Listener(
on_click=on_click,
)
self.geometry("200x100")
tk.Label(master).grid(row=0)
e1 = tk.Entry(self)
e1.grid(row=0)
e1.place(width=200)
#Get input text
text = e1.get()
tk.Button(self,
text='Find text',
command=find_text).grid(row=1,
column=1,
sticky=tk.W,
pady=4)
tk.Button(self, text='Capture Template', command=listener.start).grid(row=3,
column=1,
sticky=tk.W,
pady=4)
tk.Button(self, text='Draw Region', command=draw_region).grid(row=3,
column=0,
sticky=tk.W,
pady=4)
root = tk.Tk()
root.withdraw() #hide the root so that only the notes will be visible
op = OperationPanel()  # NOTE: OperationPanel is not defined in this notebook
display = MainDisplay(root, op)
root.mainloop()
from tkinter import *
#from threading import Thread #no longer needed
class Note(Toplevel):
nid = 0
#title = "" #this would block the method to override the current title
message = ""
def __init__(self, master, nid, title, message):
Toplevel.__init__(self,master)
self.nid = nid
self.title(title) #since toplevel widgets define a method called title you can't store it as an attribute
self.message = message
self.display_note_gui() #maybe just leave that code part of the __init__?
def display_note_gui(self):
'''Tkinter to create a note gui window with parameters '''
#no window, just self
self.geometry("200x200")
self.configure(background="#BAD0EF")
#pass self as the parent to all the child widgets instead of window
title = Entry(self,relief=FLAT, bg="#BAD0EF", bd=0)
title.pack(side=TOP)
scrollBar = Scrollbar(self, takefocus=0, width=20)
textArea = Text(self, height=4, width=1000, bg="#BAD0EF", font=("Times", "14"))
scrollBar.pack(side=RIGHT, fill=Y)
textArea.pack(side=LEFT, fill=Y)
scrollBar.config(command=textArea.yview)
textArea.config(yscrollcommand=scrollBar.set)
textArea.insert(END, self.message)
#self.mainloop() #leave this to the root window
# def run(self):
# self.display_note_gui()
root = Tk()
root.withdraw() #hide the root so that only the notes will be visible
new_note1 = Note(root, 0, "Hello", "Hi, how are you?")
#new_note1.start()
#new_note1.join()
new_note2 = Note(root, 1, "2", "How's everyone else?")
#new_note2.start()
#new_note2.join()
root.mainloop() #still call mainloop on the root
```
| github_jupyter |
# How to beat terrorism efficiently: identification of set of key players in terrorist networks.
## GROUP 27. Members:
* Abrate, Marco Pietro
* Bolón Brun, Natalie
* Kakavandy, Shahow
* Park, Jangwon
## PROJECT DESCRIPTION:
Proliferation of terrorism in recent years has led people to perceive it as a real threat to their livelihood. Vital to the success of such terrorist organizations are the cohesiveness and the ability to communicate efficiently within their respective networks. To make these networks vulnerable, identifying the sources of such properties is an imperative mission and hence becomes the focus of this report. More technically, we seek to develop an appropriate methodology to evaluate the importance of each terrorist to the effectiveness of the network as a whole, and to identify an optimal set of key terrorists that one should target in order to debilitate it.
# PART I - FRAGMENTATION OF THE NETWORK
# Initial Information:
This notebook contains initial data exploration about the network as well as analysis for the fragmentation task. For comments on the results or further details about the reasoning behind the different sections, please refer to the report.
```
%matplotlib inline
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from matplotlib import gridspec
import pygsp as pg
import networkx as nx
# import our own functions
from data_exploration_functions import get_true_labels, find_components, find_largest_component, \
give_names_tonodes_dates_based, num_nodes, connected_graph, compute_shortest_path_lengths
from fragmentation_measures import num_disconnected_components, F_measure, information_entropy, Fd_measure
from optimization_algorithms import find_key_terrorists_fragmentation, compute_objective
#import random
```
# Extracting Terrorist Names from Nodes
Load original network and get unique ID for each terrorist
```
A = np.load('adjacency.npy')
# get the largest component
A, size = find_largest_component(find_components(A))
n_nodes = size
n_edges = np.count_nonzero(A) / 2
# get terrorist names for each node
names, A, name_dict = give_names_tonodes_dates_based(A)
# get relation label for each node
labels = get_true_labels(A)
```
## Graph Inversion
In this section we seek to invert the graph, yielding a structure that represents terrorists as nodes and relations between them as edges. This structure is preferred over the original one as it eases the interpretation of results, given the purpose of the project.
```
# Number of unique terrorists
num_terrorist = len(name_dict.keys())
print("Number of unique terrorist: {n:}".format(n=num_terrorist))
# Array of terrorist names
all_names = np.array(list(name_dict.keys()))
# Initialize inverted adjacency matrix. Symmetric and unweighted by default.
A_inverted = np.zeros((num_terrorist, num_terrorist))
A_relations = np.zeros((num_terrorist, num_terrorist))
for n in range(n_nodes):
temp = []
for d, name in enumerate(list(name_dict.keys())):
if n in list(name_dict.values())[d]:
# collect all terrorist names that correspond to node n: will ALWAYS be at most length 2
temp.append(list([name]))
for k in range(len(temp)):
for j in range(k, len(temp)):
idx = np.where(all_names == temp[k])[0][0]
idx2 = np.where(all_names == temp[j])[0][0]
# create an edge between all terrorists that belonged to the same node in original graph
A_inverted[idx,idx2] = 1
A_inverted[idx2,idx] = 1
# create a matrix which stores corresponding relations between terrorists
A_relations[idx,idx2] = int(labels[n])
A_relations[idx2,idx] = int(labels[n])
plt.hist(np.sum(A_inverted,axis=1), bins=15)
plt.show()
```
In the inverted network we encounter 244 unique terrorists. The histogram shows the distribution of degrees across the inverted network. We can see that the vast majority of the nodes have a degree lower than 10, while just a few are more highly connected. The maximum degree is 21, achieved by only one node.
### Largest Component of Inverted Graph
In this section we look for the largest component on the new network. For the purpose of the project, we are not interested in working with disconnected components. For further details, refer to the report.
```
# Find the largest component
components = find_components(A_inverted)
largest_cc_inv, size = find_largest_component(components)
# Remove all-zero indices
zero_index = np.where(np.sum(largest_cc_inv, axis=0) == 0)[0]
largest_cc_inv = np.delete(largest_cc_inv, zero_index, axis=0)
largest_cc_inv = np.delete(largest_cc_inv, zero_index, axis=1)
relations_largest_cc = np.delete(A_relations, zero_index, axis=0)
relations_largest_cc = np.delete(relations_largest_cc, zero_index, axis=1)
names_largest_cc = np.delete(all_names, zero_index)
np.fill_diagonal(largest_cc_inv, 0)
print("Number of disconnected components: {d:}".format(d=len(components)))
deg_dist = []
for c in range(len(components)):
deg_dist.append(num_nodes(components[c]))
plt.hist(deg_dist, bins=len(components))
plt.xlabel('Component Size')
plt.title('Distribution of component sizes')
plt.show()
print("Size of inverted graph: {s:}".format(s=num_terrorist))
fig = plt.figure(figsize=(10,10))
ax1 = fig.add_subplot(121)
ax1.spy(A_inverted, markersize=1)
print("Size of largest component: {s:}".format(s=size))
ax2 = fig.add_subplot(122)
ax2.spy(largest_cc_inv, markersize=1)
```
The inverted network has a total of 30 components. Nevertheless, a big group of them (13) are just isolated nodes, and there is only one other component of considerable size (~20 nodes). The reasoning behind discarding it is explained in the report.
```
# Visualize correct parsing on the nodes names.
display = False
if display:
for idx, name in enumerate(names):
print(name)
for i in range(A.shape[0]):
if A[idx, i] == 1:
print("\t"+str(names[i]))
def from_matrix_to_dict(mat):
d = {}
for index, value in np.ndenumerate(mat):
d[index] = value
return d
```
## Visualization of inverted network and largest component
```
# Visualize inverted network
graph = nx.from_numpy_matrix(A_inverted)
d = from_matrix_to_dict(A_relations)
nx.set_edge_attributes(graph, d, 'relations')
plt.figure(1, figsize=(10, 10))
nx.draw(graph, pos=nx.spring_layout(graph), arrows=False, with_labels=False, node_size=30, node_color='b')
plt.show()
lcc_graph = nx.from_numpy_matrix(largest_cc_inv)
d = from_matrix_to_dict(relations_largest_cc)
nx.set_edge_attributes(lcc_graph, d, 'relations')
fig = plt.figure(1, figsize=(10, 10))
nx.draw(lcc_graph, pos=nx.spring_layout(lcc_graph), arrows=False, with_labels=False, node_size=50, node_color='r')
plt.show()
### GEPHI
nx.write_gml(lcc_graph, 'gephi/lcc_graph.gml')
nx.write_gml(graph, 'gephi/graph.gml')
```
# Identify Key Players for Fragmenting Network
In this section, we identify a set of key players that structurally destroys, or fragments, the network the most when removed. Metrics that help us evaluate this property include (all normalized except information entropy):
- count of number of disconnected components (+)
- F measure, count of number of pairs of nodes that are disconnected (+)
- Information entropy (+)
- Fd measure, modification of F measure which takes into account internal structure (+)
(+) = the higher, the more important that node is
Source: http://steveborgatti.com/papers/cmotkeyplayer.pdf
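A minimal sketch of the F measure on a toy path graph helps make the metric concrete. This illustrates the definition from Borgatti's paper, not the `fragmentation_measures` implementation used below:

```python
import networkx as nx

# toy graph: a path, where removing the middle node splits it in two
g = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4)])

def f_measure(graph):
    """Borgatti's F: fraction of node pairs that are disconnected."""
    n = graph.number_of_nodes()
    if n < 2:
        return 0.0
    reachable_pairs = sum(
        len(c) * (len(c) - 1) for c in nx.connected_components(graph)
    )
    return 1 - reachable_pairs / (n * (n - 1))

print(f_measure(g))                 # connected graph -> 0.0
h = g.copy()
h.remove_node(2)
print(round(f_measure(h), 3))       # two components of 2 nodes -> 0.667
```

Removing a node that sits on every path between two halves of the graph therefore drives F sharply upward, which is exactly the behavior the key-player search exploits.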
```
# compute value of different measures of fragmentation on the initial network
num_dis = num_disconnected_components(largest_cc_inv)
F = F_measure(largest_cc_inv)
entropy = information_entropy(largest_cc_inv)
Fd = Fd_measure(largest_cc_inv) # will display a progress bar
```
### Histogram visualization of "fragmentation" measures
```
fig = plt.figure(figsize=(15,10))
ax1 = fig.add_subplot(221)
ax1.hist(num_dis, bins=10)
ax1.set_xlabel("Number of disconnected components")
ax1.set_ylabel("Count")
ax1.set_title("Distribution of number of disconnected components (normalized)")
ax2 = fig.add_subplot(222)
ax2.hist(F, bins=15)
ax2.set_xlabel("Number of disconnected pairs of nodes")
ax2.set_ylabel("Count")
ax2.set_title("Distribution of number of disconnected pairs of nodes (normalized)")
plt.subplots_adjust(hspace=0.5)
ax3 = fig.add_subplot(223)
ax3.hist(entropy, bins=15)
ax3.set_xlabel("Information entropy")
ax3.set_ylabel("Count")
ax3.set_title("Distribution of information entropy")
ax4 = fig.add_subplot(224)
ax4.hist(Fd, bins=15)
ax4.set_xlabel("Fd measure")
ax4.set_ylabel("Count")
ax4.set_title("Distribution of Fd measure (normalized)")
plt.show()
```
All of the measures explored increase with the relevance of the node. In all cases, the vast majority of nodes are not relevant, as they achieve low scores on the different measures. Nevertheless, in every case a small set of nodes, or even a single one, stands out from the rest by achieving a high score.
### Determine if any of the above measures are positively correlated
```
fig = plt.figure(figsize=(15,4))
ax1 = fig.add_subplot(131)
ax1.scatter(F, entropy)
ax1.set_xlabel("F measure")
ax1.set_ylabel("Information entropy")
ax2 = fig.add_subplot(132)
ax2.scatter(F, Fd)
ax2.set_xlabel("F measure")
ax2.set_ylabel("Fd measure")
ax3 = fig.add_subplot(133)
ax3.scatter(Fd, entropy)
ax3.set_xlabel("Fd measure")
ax3.set_ylabel("Information entropy")
ax1.grid()
ax2.grid()
ax3.grid()
```
# Identify Key Players for Information Flow
#### ATTENTION:
This notebook covers only the **fragmentation approach**. For information flow, please refer to the notebook **information_flow.ipynb**.
In this section, centrality measures are computed only for later use when comparing results.
```
# Degree centrality
degrees = np.sum(largest_cc_inv,axis=1) / (largest_cc_inv.shape[0]-1)
# convert adjacency matrix to networkx graph object
G = nx.from_numpy_matrix(largest_cc_inv)
# Closeness centrality
closeness = np.array(list(nx.closeness_centrality(G).values()))
# Betweenness centrality
between = np.array(list(nx.betweenness_centrality(G).values()))
```
# Greedy Optimization Algorithm
In this section, an optimization algorithm is executed with the aim of finding the **best set of nodes under the FRAGMENTATION approach**
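The greedy idea is simple: repeatedly remove whichever node fragments the network the most. The notebook's `find_key_terrorists_fragmentation` lives in its own helper module, so here is an illustrative stand-alone version that greedily minimizes the number of still-connected node pairs (function names are hypothetical):

```python
def pair_connectivity(adj, removed):
    """sum_k s_k (s_k - 1) over components of adj with `removed` nodes deleted."""
    n = len(adj)
    alive = [i for i in range(n) if i not in removed]
    seen, total = set(), 0
    for start in alive:
        if start in seen:
            continue
        # Depth-first search to measure this component's size
        stack, size = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            size += 1
            for v in alive:
                if adj[u][v] and v not in seen:
                    seen.add(v)
                    stack.append(v)
        total += size * (size - 1)
    return total

def greedy_fragmentation(adj, k):
    """Pick k nodes one at a time, each minimizing residual pair connectivity."""
    removed = set()
    for _ in range(k):
        best = min((i for i in range(len(adj)) if i not in removed),
                   key=lambda i: pair_connectivity(adj, removed | {i}))
        removed.add(best)
    return sorted(removed)

# Two triangles (0-1-2 and 4-5-6) joined through bridge node 3
adj = [
    [0, 1, 1, 0, 0, 0, 0],
    [1, 0, 1, 0, 0, 0, 0],
    [1, 1, 0, 1, 0, 0, 0],
    [0, 0, 1, 0, 1, 0, 0],
    [0, 0, 0, 1, 0, 1, 1],
    [0, 0, 0, 0, 1, 0, 1],
    [0, 0, 0, 0, 1, 1, 0],
]
print(greedy_fragmentation(adj, 1))  # [3]: removing the bridge splits the graph
```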
```
# Find key terrorist for fragmenting the network.
set_kt, objective = find_key_terrorists_fragmentation(largest_cc_inv, relations_largest_cc)
print("Number of key terrorists for fragmentation: {n:}".format(n=len(set_kt)-1))
print("Indices of the key terrorists for fragmentation: ", set_kt[:-1])
print("List of the names of the key terrorists for fragmentation: \n", names_largest_cc[set_kt[:-1]])
plt.plot(objective)
plt.xlabel("Number of key terrorists")
plt.ylabel("Objective function")
plt.grid()
plt.show()
```
# Results
Evaluate the changes in the network when the optimal set is removed. Further comments on the results can be found in the report
```
# First, identify the top three terrorists based on three centrality measures for comparison purposes
k = 3
top_k_degrees = np.sort(degrees)[-k:]
top_k_between = np.sort(between)[-k:]
top_k_closeness = np.sort(closeness)[-k:]
list_degrees = []
list_between = []
list_closeness = []
for i in range(k):
    list_degrees.append(np.where(degrees == top_k_degrees[i])[0][0])
    list_between.append(np.where(between == top_k_between[i])[0][0])
    list_closeness.append(np.where(closeness == top_k_closeness[i])[0][0])
# Remove these top individuals found by centrality measures
adj_degree = np.delete(largest_cc_inv, np.array(list_degrees), axis=0)
adj_degree = np.delete(adj_degree, np.array(list_degrees), axis=1)
adj_between = np.delete(largest_cc_inv, list_between, axis=0)
adj_between = np.delete(adj_between, list_between, axis=1)
adj_closeness = np.delete(largest_cc_inv, list_closeness, axis=0)
adj_closeness = np.delete(adj_closeness, list_closeness, axis=1)
adj_frag = np.delete(largest_cc_inv, set_kt[:-1], axis=0)
adj_frag = np.delete(adj_frag, set_kt[:-1], axis=1)
def evaluate_fragmentation(original, adjacency):
    """Given the original adjacency matrix, measure how fragmented `adjacency` is in comparison."""
    # Number of disconnected components
    N = original.shape[0]
    components = find_components(adjacency)
    sum_Sk = 0
    numer = 0
    for i in range(len(components)):
        # Drop all-zero rows/columns (isolated slots) from this component
        zero_index = np.where(np.sum(components[i], axis=0) == 0)[0]
        components[i] = np.delete(components[i], zero_index, axis=0)
        components[i] = np.delete(components[i], zero_index, axis=1)
        if len(components[i]) == 0:
            continue
        # F measure: accumulate S_k * (S_k - 1) over component sizes
        Sk = num_nodes(components[i])
        sum_Sk += (Sk * (Sk - 1))
        # Information entropy term
        numer += -(Sk / N) * np.log(Sk / N)
    Fmeasure = 1 - sum_Sk / (N * (N - 1))
    # Number of edges cut
    E = np.count_nonzero(original) / 2
    edges_cut = np.count_nonzero(adjacency) / 2
    return Fmeasure, numer, len(components), (E - edges_cut)
# Compared to the original adjacency, measure how fragmented the new adjacency is
Fdegrees, ent_degrees, disc_degrees, edges_degrees = evaluate_fragmentation(largest_cc_inv, adj_degree)
Fbetween, ent_between, disc_between, edges_between = evaluate_fragmentation(largest_cc_inv, adj_between)
Fcloseness, ent_closeness, disc_closeness, edges_closeness = evaluate_fragmentation(largest_cc_inv, adj_closeness)
Ffrag, ent_frag, disc_frag, edges_frag = evaluate_fragmentation(largest_cc_inv, adj_frag)
print(' ',' F | Information Entropy | # disconnected components | Number of removed edges')
print('Degree ', round(Fdegrees, 6), ' ', round(ent_degrees,6) , ' ',\
disc_degrees,' ', edges_degrees)
print('Betweenness c.', round(Fbetween, 6), ' ', round(ent_between, 6), ' ',\
disc_between, ' ', edges_between)
print('Closeness c. ',round(Fcloseness, 6), ' ', round(ent_closeness, 6), ' ',\
disc_closeness, ' ',edges_closeness)
print('Greedy Search ',round(Ffrag, 6),' ', round(ent_frag, 6), ' ',\
disc_frag, ' ',edges_frag)
F2 = F_measure(adj_frag)
IE2 = information_entropy(adj_frag)
Fd2 = Fd_measure(adj_frag)
```
##### COMMENT ON THE RESULTS:
The previous table shows the values of the different metrics when a set of nodes is removed from the network. Four different sets of nodes are considered, each generated as optimal in a different way (highest degree, betweenness centrality, closeness centrality, or best score in the optimization task).
The set found in the optimization task yields the best results, as it fragments the network into 7 different subcomponents. Removing these nodes breaks 32 connections inside the network.
## Visual representation of results for comparison.
```
# Create pygsp graph for visualization
G_frag = pg.graphs.Graph(largest_cc_inv)
G_frag.set_coordinates('spring')
frag = F + entropy + Fd
# Highlight algorithm's findings
fig = plt.figure(figsize=(20, 20))
gs = gridspec.GridSpec(2, 2, hspace=0.3, wspace=0.05, left=0, right=0.97)
ax = plt.subplot(gs[0])
G_frag.plot_signal(frag, colorbar=True, ax=ax, highlight=set_kt[:-1])
title = r'Set found by the greedy optimization algorithm'
_ = ax.set_title(title, fontdict={'fontsize': 20})
ax.set_axis_off()
# Highlight top individuals for degree centrality
#fig = plt.figure(figsize=(18, 21))
ax = plt.subplot(gs[1])
G_frag.plot_signal(frag, colorbar=True, ax=ax, highlight=list_degrees)
title = r'Set generated with highest degree score'
_ = ax.set_title(title, fontdict={'fontsize': 20})
ax.set_axis_off()
# Highlight top individuals for betweenness centrality
#fig = plt.figure(figsize=(18, 21))
ax = plt.subplot(gs[2])
G_frag.plot_signal(frag, colorbar=True, ax=ax, highlight=list_between)
title = r'Set generated with highest betweenness centrality score'
_ = ax.set_title(title, fontdict={'fontsize': 20})
ax.set_axis_off()
# Highlight top individuals for closeness centrality
#fig = plt.figure(figsize=(18, 21))
ax = plt.subplot(gs[3])
G_frag.plot_signal(frag, colorbar=True, ax=ax, highlight=list_closeness)
title = r'Set generated with highest closeness centrality score'
_ = ax.set_title(title, fontdict={'fontsize': 20})
ax.set_axis_off()
```
#### COMMENT ON THE RESULTS:
The previous figure shows the network with the **set of nodes considered key** highlighted in orange.
The colors of the nodes represent their scores on the "fragmentation" measure defined as the objective function of the optimization algorithm.
It can be seen that nodes with high degree, although relevant to a large number of connections, are not key to fragmenting the network, possibly because their connections involve non-unique paths.
On the other hand, nodes relevant to fragmentation also have high scores on the centrality measures. Given the meaning of these two kinds of measures, it is not surprising that the values are related.
```
# Post-fragmentation network
G_frag2 = pg.graphs.Graph(adj_frag)
G_frag2.set_coordinates('spring')
frag2 = F2 + IE2 + Fd2
fig = plt.figure(figsize=(20, 10))
gs = gridspec.GridSpec(1, 1, hspace=0.3, wspace=0.05, left=0, right=0.97)
ax = plt.subplot(gs[0])
G_frag2.plot_signal(frag2, colorbar=True, ax=ax)
title = r'Sum of scores: {}'.format(round(np.sum(frag[set_kt[:-1]]), 5))
_ = ax.set_title(title)
ax.set_axis_off()
```
##### COMMENT ON THE RESULTS:
The plot shows the resulting network after the selected set of nodes has been removed. It yields 7 different subnetworks without any connections between them. The size of the components has also been reduced, producing 2 big hubs and 5 smaller networks. Nevertheless, the two big hubs are already much smaller than the initial connected network. Further analysis of the results is provided in the report.
# Turnover (Solution)
## Install packages
```
import sys
!{sys.executable} -m pip install -r requirements.txt
import cvxpy as cvx
import numpy as np
import pandas as pd
import time
import os
import quiz_helper
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (14, 8)
```
### data bundle
```
import os
import quiz_helper
from zipline.data import bundles
os.environ['ZIPLINE_ROOT'] = os.path.join(os.getcwd(), '..', '..','data','module_4_quizzes_eod')
ingest_func = bundles.csvdir.csvdir_equities(['daily'], quiz_helper.EOD_BUNDLE_NAME)
bundles.register(quiz_helper.EOD_BUNDLE_NAME, ingest_func)
print('Data Registered')
```
### Build pipeline engine
```
from zipline.pipeline import Pipeline
from zipline.pipeline.factors import AverageDollarVolume
from zipline.utils.calendars import get_calendar
universe = AverageDollarVolume(window_length=120).top(500)
trading_calendar = get_calendar('NYSE')
bundle_data = bundles.load(quiz_helper.EOD_BUNDLE_NAME)
engine = quiz_helper.build_pipeline_engine(bundle_data, trading_calendar)
```
### View Data
With the pipeline engine built, let's get the stocks at the end of the period in the universe we're using. We'll use these tickers to generate the returns data for our risk model.
```
universe_end_date = pd.Timestamp('2016-01-05', tz='UTC')
universe_tickers = engine\
.run_pipeline(
Pipeline(screen=universe),
universe_end_date,
universe_end_date)\
.index.get_level_values(1)\
.values.tolist()
universe_tickers
```
# Get Returns data
```
from zipline.data.data_portal import DataPortal
data_portal = DataPortal(
bundle_data.asset_finder,
trading_calendar=trading_calendar,
first_trading_day=bundle_data.equity_daily_bar_reader.first_trading_day,
equity_minute_reader=None,
equity_daily_reader=bundle_data.equity_daily_bar_reader,
adjustment_reader=bundle_data.adjustment_reader)
```
## Get pricing data helper function
```
def get_pricing(data_portal, trading_calendar, assets, start_date, end_date, field='close'):
    end_dt = pd.Timestamp(end_date.strftime('%Y-%m-%d'), tz='UTC', offset='C')
    start_dt = pd.Timestamp(start_date.strftime('%Y-%m-%d'), tz='UTC', offset='C')
    end_loc = trading_calendar.closes.index.get_loc(end_dt)
    start_loc = trading_calendar.closes.index.get_loc(start_dt)
    return data_portal.get_history_window(
        assets=assets,
        end_dt=end_dt,
        bar_count=end_loc - start_loc,
        frequency='1d',
        field=field,
        data_frequency='daily')
```
## get pricing data into a dataframe
```
returns_df = \
get_pricing(
data_portal,
trading_calendar,
universe_tickers,
universe_end_date - pd.DateOffset(years=5),
universe_end_date)\
.pct_change()[1:].fillna(0) #convert prices into returns
returns_df
```
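The `pct_change()[1:].fillna(0)` idiom above is worth seeing on a toy input; a tiny illustration with made-up prices (not data from the bundle):

```python
import pandas as pd

# Three hypothetical daily closes for one asset
prices = pd.DataFrame({'AAA': [100.0, 110.0, 99.0]},
                      index=pd.date_range('2016-01-04', periods=3))
# pct_change gives (p_t / p_{t-1}) - 1; the first row is NaN, so drop it
returns = prices.pct_change()[1:].fillna(0)
print(returns['AAA'].tolist())  # approximately [0.1, -0.1]
```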
## Sector data helper function
We'll create an object for you, which defines a sector for each stock. The sectors are represented by integers. We inherit from the Classifier class. [Documentation for Classifier](https://www.quantopian.com/posts/pipeline-classifiers-are-here), and the [source code for Classifier](https://github.com/quantopian/zipline/blob/master/zipline/pipeline/classifiers/classifier.py)
```
from zipline.pipeline.classifiers import Classifier
from zipline.utils.numpy_utils import int64_dtype
class Sector(Classifier):
    dtype = int64_dtype
    window_length = 0
    inputs = ()
    missing_value = -1

    def __init__(self):
        self.data = np.load('../../data/project_4_sector/data.npy')

    def _compute(self, arrays, dates, assets, mask):
        return np.where(
            mask,
            self.data[assets],
            self.missing_value,
        )
sector = Sector()
```
## We'll use 2 years of data to calculate the factor
**Note:** Going back 2 years falls on a day when the market is closed. Pipeline package doesn't handle start or end dates that don't fall on days when the market is open. To fix this, we went back 2 extra days to fall on the next day when the market is open.
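The adjustment can be sketched with a naive helper that rolls a date off a weekend; this is a hypothetical simplification (real trading calendars also exclude market holidays, which zipline's calendar handles for you):

```python
from datetime import date, timedelta

def previous_weekday(d):
    """Roll a date back to the most recent weekday (ignores market holidays)."""
    while d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        d -= timedelta(days=1)
    return d

# 2016-01-05 minus 2 years is 2014-01-05, a Sunday -> roll back to Friday
print(previous_weekday(date(2014, 1, 5)))  # 2014-01-03
```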
```
factor_start_date = universe_end_date - pd.DateOffset(years=2, days=2)
factor_start_date
```
## Create smoothed momentum factor
```
from zipline.pipeline.factors import Returns
from zipline.pipeline.factors import SimpleMovingAverage
# create a pipeline called p
p = Pipeline(screen=universe)
# create a factor of one-year returns, demean by sector, then rank
factor = (
Returns(window_length=252, mask=universe).
demean(groupby=Sector()). #we use the custom Sector class that we reviewed earlier
rank().
zscore()
)
# Use this factor as input into SimpleMovingAverage, with a window length of 5
# Also rank and zscore (no need to de-mean by sector again)
factor_smoothed = (
SimpleMovingAverage(inputs=[factor], window_length=5).
rank().
zscore()
)
# add the unsmoothed factor to the pipeline
p.add(factor, 'Momentum_Factor')
# add the smoothed factor to the pipeline too
p.add(factor_smoothed, 'Smoothed_Momentum_Factor')
```
## visualize the pipeline
Note that if the image is difficult to read in the notebook, right-click and view the image in a separate tab.
```
p.show_graph(format='png')
```
## run pipeline and view the factor data
```
df = engine.run_pipeline(p, factor_start_date, universe_end_date)
df.head()
```
## Evaluate Factors
We'll go over some tools that we can use to evaluate alpha factors. To do so, we'll use the [alphalens library](https://github.com/quantopian/alphalens)
## Import alphalens
```
import alphalens as al
```
## Get price data
Note, we already got the price data and converted it to returns, which we used to calculate a factor. We'll retrieve the price data again, but won't convert these to returns. This is because we'll use alphalens functions that take their input as prices and not returns.
## Define the list of assets
Just to make sure we get the prices for the stocks that have factor values, we'll get the list of assets, which may be a subset of the original universe
```
# get list of stocks in our portfolio (tickers that identify each stock)
assets = df.index.levels[1].values.tolist()
print(f"stock universe number of stocks {len(universe_tickers)}, and number of stocks for which we have factor values {len(assets)}")
factor_start_date
pricing = get_pricing(
data_portal,
trading_calendar,
assets, #notice that we used assets instead of universe_tickers; in this example, they're the same
factor_start_date, # notice we're using the same start and end dates for when we calculated the factor
universe_end_date)
factor_names = df.columns
print(f"The factor names are {factor_names}")
# Use a dictionary to store each dataframe, one for each factor and its associated forward returns
factor_data = {}
for factor_name in factor_names:
    print("Formatting factor data for: " + factor_name)
    # Get clean factor and forward returns for each factor
    # Choose single period returns (daily returns)
    factor_data[factor_name] = al.utils.get_clean_factor_and_forward_returns(
        factor=df[factor_name],
        prices=pricing,
        periods=[1])
```
## Turnover Analysis
One mark of a good factor is that it incurs lower transaction costs than other factors. How do some factors incur more transaction costs than others? If a factor requires us to constantly rebalance our portfolio by buying and selling every day, it will be more costly than a factor that only requires trades once per quarter. If we look at the factor ranks (sort the stocks by their factor score on each day, then give them ranks 1, 2, 3, ..., N), we can see how these ranks change from day to day. If, for instance, we have a portfolio of 3 stocks and their ranks do not change for several days (for example: stock A is always ranked 3rd, stock B is always ranked 1st, and stock C is always ranked 2nd), then we would not have to initiate trades over those days in order to maintain portfolio weights that follow the alpha factor.
A proxy for the amount of trade turnover is the autocorrelation of the ranks over time. In the context of quant finance, we call this autocorrelation "factor rank autocorrelation", or FRA for short. Alphalens has a function [alphalens.performance.factor_rank_autocorrelation](https://quantopian.github.io/alphalens/alphalens.html?highlight=factor_rank_autocorrelation#alphalens.performance.factor_rank_autocorrelation)
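Conceptually, FRA is just the serial correlation of each day's cross-sectional factor ranks with the previous day's. A hypothetical sketch (not the alphalens implementation) on a wide dates-by-assets DataFrame:

```python
import pandas as pd

def rank_autocorrelation(factor_wide):
    """Correlate today's cross-sectional factor ranks with yesterday's.

    factor_wide: DataFrame with dates as the index and assets as columns.
    Returns a Series of per-day correlations (NaN for the first day).
    """
    ranks = factor_wide.rank(axis=1)
    return ranks.corrwith(ranks.shift(1), axis=1)

dates = pd.date_range('2016-01-04', periods=3)
# Factor values whose cross-sectional ordering never changes: A < B < C daily
stable = pd.DataFrame({'A': [1.0, 2.0, 3.0],
                       'B': [2.0, 3.0, 4.0],
                       'C': [3.0, 4.0, 5.0]}, index=dates)
print(rank_autocorrelation(stable))  # first day NaN, then 1.0: no rank turnover
```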
```
# Signature of the alphalens function we'll use:
#   al.performance.factor_rank_autocorrelation(factor_data, period=1)
# factor_data:
#     the dataframe returned by alphalens.utils.get_clean_factor_and_forward_returns
# period:
#     period over which to calculate the turnover; keep the default of 1
```
## Quiz 1
Look at the error message when trying to call the factor_rank_autocorrelation function, passing in the factor data. What data type is required?
```
factor_names = df.columns
ls_fra = []
for i, factor_name in enumerate(factor_names):
    print("Calculating the FRA for: " + factor_name)
    #TODO: look at the error generated from this line of code
    fra = al.performance.factor_rank_autocorrelation(factor_data[factor_name]).to_frame()
    fra.columns = [factor_name]
    ls_fra.append(fra)
df_ls_fra = pd.concat(ls_fra, axis=1)
```
## Answer 1
An integer is required
## Convert datetime to integer
To pass in factor data that the factor_rank_autocorrelation function can use, we'll convert the datetime index into integers using Unix timestamps
```
unixt_factor_data = {}
for factor_name in factor_names:
    unixt_index_data = [(x.timestamp(), y) for x, y in factor_data[factor_name].index.values]
    unixt_factor_data[factor_name] = factor_data[factor_name].set_index(
        pd.MultiIndex.from_tuples(unixt_index_data, names=['date', 'asset']))
```
## Quiz 2:
Calculate Factor rank autocorrelation
Use the data for which the datetime index was converted to integer
```
factor_names = df.columns
ls_fra = []
for i, factor_name in enumerate(factor_names):
    print("Calculating the FRA for: " + factor_name)
    #TODO: calculate factor rank autocorrelation
    fra = al.performance.factor_rank_autocorrelation(unixt_factor_data[factor_name]).to_frame()
    fra.columns = [factor_name]
    ls_fra.append(fra)
df_ls_fra = pd.concat(ls_fra, axis=1)
```
## View the outputted FRA
```
df_ls_fra.plot(title="Factor Rank Autocorrelation");
```
## Quiz 3
How would you compare the factor rank autocorrelation of the unsmoothed and smoothed factors? How do you describe what this means in terms of potential transaction costs?
## Answer 3
The FRA of the smoothed factor is higher compared to the unsmoothed factor. This potentially means less trading and therefore lower transaction costs.
# Installation Instructions
Download and install miniconda:
https://conda.io/miniconda.html
Make sure you are using the conda-forge channel:
```bash
$ conda config --add channels conda-forge
$ conda update --yes conda python
```
Install gsshapy:
```bash
$ conda create -n gssha python=2
$ source activate gssha
(gssha)$ conda install --yes gsshapy pynio jupyter
```
Install GSSHA:
http://www.gsshawiki.com/GSSHA_Download
<div class="alert alert-warning">
This will NOT work on Windows.
</div>
```
import os
from datetime import datetime, timedelta
from gsshapy.modeling import GSSHAFramework
from gsshapy.grid import HRRRtoGSSHA
from gsshapy.grid.hrrr_to_gssha import download_hrrr_for_gssha
from gsshapy.lib import db_tools as dbt
import pangaea as pa
```
Setup environment:
```
# assuming notebook is run from examples folder
# DONT FORGET dos2unix or unix2dos
base_dir = '/Users/rdchlads/GSSHA_INPUT/'
gssha_model_name = '2017_08_16_270m'
gssha_model_directory = os.path.join(base_dir, gssha_model_name)
hrrr_output_directory = os.path.join(gssha_model_directory, 'hrrr_data')
try:
    os.mkdir(hrrr_output_directory)
except OSError:
    pass
```
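On Python 3, the try/except around `os.mkdir` can be replaced with `os.makedirs(..., exist_ok=True)`; the gsshapy environment above is Python 2, where `exist_ok` is unavailable, so this is only an aside (the directory path below is a hypothetical stand-in):

```python
import os
import tempfile

out_dir = os.path.join(tempfile.mkdtemp(), 'hrrr_data')
os.makedirs(out_dir, exist_ok=True)  # creates parents, no error if it exists
os.makedirs(out_dir, exist_ok=True)  # idempotent: safe to re-run the cell
print(os.path.isdir(out_dir))  # True
```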
Get GSSHA model bounds:
```
# load in GSSHA model files
project_manager, db_sessionmaker = \
dbt.get_project_session(gssha_model_name,
gssha_model_directory)
db_session = db_sessionmaker()
project_manager.read(directory=gssha_model_directory,
filename="{0}.prj".format(gssha_model_name),
session=db_session)
gssha_grid = project_manager.getGrid()
# reproject GSSHA grid and get bounds
min_x, max_x, min_y, max_y = gssha_grid.bounds(as_geographic=True)
min_x, max_x, min_y, max_y
```
Download HRRR Data:
```
downloaded_files = download_hrrr_for_gssha(main_directory=hrrr_output_directory,
forecast_start_date_string='20170913',
forecast_start_hour_string='00',
leftlon=min_x,
rightlon=max_x,
toplat=max_y,
bottomlat=min_y)
downloaded_files
```
Inspect the grid data:
```
with pa.open_mfdataset(downloaded_files,
                       lat_var='gridlat_0',
                       lon_var='gridlon_0',
                       time_var='time',
                       lat_dim='ygrid_0',
                       lon_dim='xgrid_0',
                       time_dim='time',
                       loader='hrrr') as hrd:
    print(hrd)
    print(hrd.PRATE_P0_L1_GLC0)
```
Map the variable in the GRIB files to the conversion function:
```
hrrr_forecast_dir = os.path.dirname(downloaded_files[0])
data_var_map_array = [
['precipitation_rate', 'PRATE_P0_L1_GLC0'],
['pressure', 'PRES_P0_L1_GLC0'],
['relative_humidity', 'RH_P0_L103_GLC0'],
['wind_speed', ['UGRD_P0_L103_GLC0', 'VGRD_P0_L103_GLC0']],
['direct_radiation_cc', ['DSWRF_P0_L1_GLC0', 'TCDC_P0_L10_GLC0']],
['diffusive_radiation_cc', ['DSWRF_P0_L1_GLC0', 'TCDC_P0_L10_GLC0']],
['temperature', 'TMP_P0_L1_GLC0'],
['cloud_cover_pc', 'TCDC_P0_L10_GLC0'],
]
```
Option 1. Convert Data:
```
h2g = HRRRtoGSSHA(gssha_project_folder=gssha_model_directory,
gssha_project_file_name="{0}.prj".format(gssha_model_name),
lsm_input_folder_path=hrrr_forecast_dir,
lsm_search_card="hrrr.*.grib2")
# hmet
h2g.lsm_data_to_arc_ascii(data_var_map_array)
# gag
out_gage_file = os.path.join(gssha_model_directory,
'gage_hrrr.gag')
h2g.lsm_precip_to_gssha_precip_gage(out_gage_file,
lsm_data_var='PRATE_P0_L1_GLC0',
precip_type='RADAR')
```
Option 2. Convert Data & Run the model:
```
grf = GSSHAFramework("gssha",
gssha_model_directory,
"{0}.prj".format(gssha_model_name),
lsm_folder=hrrr_forecast_dir,
lsm_data_var_map_array=data_var_map_array,
lsm_precip_data_var='PRATE_P0_L1_GLC0',
lsm_precip_type='RADAR',
lsm_search_card="hrrr.*.grib2",
lsm_lat_var='gridlat_0',
lsm_lon_var='gridlon_0',
lsm_time_var='time',
lsm_lat_dim='ygrid_0',
lsm_lon_dim='xgrid_0',
lsm_time_dim='time',
grid_module='hrrr')
gssha_event_directory = grf.run_forecast()
gssha_event_directory
```
The `gssha_event_directory` is where the simulation output is stored.
```
from __future__ import print_function, absolute_import
from rdkit import Chem
from rdkit.Chem import AllChem
import pandas as pd
import cPickle as pickle
import numpy as np
import re
# Load data from Schneider's 50k dataset
dataSetB = pd.read_csv('../data/from_schneider/dataSetB.csv')
dataSetB['reactantSet_NameRxn'] = [eval(x) for x in dataSetB['reactantSet_NameRxn']]
dataSetB.head()
# Class stats
dataSetB['rxn_Class'].value_counts()
# Create new df from old (minor processing)
classes = []
ids = []
rxn_smiles = []
prod_smiles = []
for row in dataSetB.itertuples():
    if row[0] % 5000 == 0:
        print('On index {:d}'.format(int(row[0])))
    all_reactants, all_products = row[3].split('>>')
    products = [Chem.MolFromSmiles(smi) for smi in all_products.split('.')]
    # Multiple products = enumerate
    for prod in products:
        # Make sure all atoms have atom mapping
        if not all([a.HasProp('molAtomMapNumber') for a in prod.GetAtoms()]):
            continue
        prod_smi = Chem.MolToSmiles(prod, True)
        # Re-parse reactants for each product so we can clear maps
        reactants = [Chem.MolFromSmiles(smi) for (i, smi) in enumerate(
            all_reactants.split('.')) if i in row[4]]
        # Get rid of reactants when they don't contribute to this prod
        prod_maps = set(re.findall(r'\:([0-9]+)\]', prod_smi))
        reactants_smi_list = []
        for mol in reactants:
            used = False
            for a in mol.GetAtoms():
                if a.HasProp('molAtomMapNumber'):
                    if a.GetProp('molAtomMapNumber') in prod_maps:
                        used = True
                    else:
                        a.ClearProp('molAtomMapNumber')
            if used:
                reactants_smi_list.append(Chem.MolToSmiles(mol, True))
        reactants_smi = '.'.join(reactants_smi_list)
        # Was this just a spectator? Some examples are HCl>>HCl
        if reactants_smi == prod_smi:
            continue
        # Append to ongoing lists
        classes.append(row[1])
        ids.append(row[2])
        rxn_smiles.append('{}>>{}'.format(reactants_smi, prod_smi))
        # Save non-mapped prod too
        [a.ClearProp('molAtomMapNumber') for a in prod.GetAtoms()]
        prod_smiles.append(Chem.MolToSmiles(prod, True))
data = pd.DataFrame({'class': classes,
'id': ids,
'rxn_smiles': rxn_smiles,
'prod_smiles': prod_smiles})
data['class'].value_counts()
# Find most popular product smiles (probably frags/salts)
from collections import Counter
prod_smi_counter = Counter(data['prod_smiles'])
print(prod_smi_counter.most_common(25))
data['prod_smiles_pop'] = [prod_smi_counter[smi] for smi in data['prod_smiles']]
data['keep'] = [x[5] < 10 and
len(x[4]) >= 5 for
x in data.itertuples()]
data.loc[data['keep']]['class'].value_counts()
data.loc[data['keep']].to_csv('../data/data_processed.csv')
```
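The atom-map bookkeeping above hinges on one regex; a stand-alone check of the pattern `r'\:([0-9]+)\]'` on a hypothetical atom-mapped SMILES, with no RDKit required:

```python
import re

# Hypothetical atom-mapped SMILES (acetic acid) for illustration only
prod_smi = '[CH3:1][C:2](=[O:3])[OH:4]'
# Capture the digits between ':' and ']' — the atom-map numbers
prod_maps = set(re.findall(r'\:([0-9]+)\]', prod_smi))
print(sorted(prod_maps, key=int))  # ['1', '2', '3', '4']
```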
# Introduction to Transmon Physics
## Contents
1. [Multi-level Quantum Systems as Qubits](#mlqsaq)
2. [Hamiltonians of Quantum Circuits](#hoqc)
3. [Quantizing the Hamiltonian](#qth)
4. [The Quantized Transmon](#tqt)
5. [Comparison of the Transmon and the Quantum Harmonic Oscillator](#cottatqho)
6. [Qubit Drive and the Rotating Wave Approximation](#qdatrwa)
## 1. Multi-level Quantum Systems as Qubits <a id='mlqsaq'></a>
Studying qubits is fundamentally about learning the physics of two-level systems. One such example of a purely two-level system is the spin of an electron (or any other spin-$1/2$ particle): it can either point up or down, and we label these states $|0\rangle$ and $|1\rangle$, respectively. Historically, the reason the $|0\rangle$ state is at the "north pole" of the Bloch sphere is that this is the lower-energy state when a magnetic field is applied in the $+\hat{z}$ direction.
Another such two-level system occurs in the first type of superconducting qubit discovered: the [Cooper Pair Box](https://arxiv.org/pdf/cond-mat/9904003v1.pdf). The reason there is no electrical resistance in superconductors is that electrons combine as Cooper pairs, which take energy to break up (and that energy is not available thermally at low temperatures) because the electrons are effectively attracted to each other. This situation is quite counterintuitive: since electrons are both negatively charged, they should repel each other! However, in many material systems effective interactions can be mediated by collective effects: one can think of each electron as being attracted to the wake another electron leaves in the lattice of positive charge. The Cooper Pair Box consists of a superconducting island that either possesses an extra Cooper pair of charge $2e$ ($|0\rangle$) or does not ($|1\rangle$). These states can be manipulated by voltages on tunnel junctions, and the behavior is periodic in the "gate" voltage control, so it is indeed a two-level system.
Qubits encoded as charge states are particularly sensitive to *charge noise*, and this is true of the Cooper Pair Box, which is why it fell out of favor with researchers. Many other quantum systems are not two-level systems, such as atoms that each feature unique spectral lines (energy transitions) that are used by astronomers to determine the composition of our universe. By effectively isolating and controlling just two levels, such as the ground and first excited states of an atom, one can treat such a system as a qubit. But what about using other types of superconducting circuits as qubits? The solution to the charge noise problem of the Cooper Pair Box hinged on designing a qubit with higher-order energy levels: the [transmon](https://arxiv.org/pdf/cond-mat/0703002.pdf). (The name is derived from *transmission-line shunted plasma oscillation* qubit.) By sacrificing anharmonicity (the difference between the $|0\rangle \to |1\rangle$ and $|1\rangle \to |2\rangle$ transition frequencies, see the section on [Accessing Higher Energy States](https://qiskit.org/textbook/ch-quantum-hardware/accessing_higher_energy_states.html)), charge noise is suppressed while still allowing the lowest two levels to be addressed as a qubit. Now the quantum states are encoded in oscillations of Cooper pairs across a tunnel junction between two superconducting islands, with the excited $|1\rangle$ state oscillating at a higher frequency than the ground state $|0\rangle$.
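To make the anharmonicity concrete, the perturbative transmon energy levels from the Koch et al. paper linked above, $E_n \simeq -E_J + \sqrt{8 E_C E_J}\,(n + \tfrac{1}{2}) - \tfrac{E_C}{12}(6n^2 + 6n + 3)$, give neighboring transition frequencies that differ by exactly $E_C$. The numbers below are hypothetical values in units of $h\cdot$GHz, chosen only for illustration:

```python
import math

def transmon_level(n, EJ, EC):
    """Perturbative transmon energy E_n, valid for EJ >> EC (Koch et al., 2007)."""
    return -EJ + math.sqrt(8 * EC * EJ) * (n + 0.5) - (EC / 12) * (6 * n**2 + 6 * n + 3)

EJ, EC = 20.0, 0.3  # hypothetical Josephson and charging energies (h * GHz)
f01 = transmon_level(1, EJ, EC) - transmon_level(0, EJ, EC)
f12 = transmon_level(2, EJ, EC) - transmon_level(1, EJ, EC)
print(round(f01, 3), round(f12, 3))  # the |0>-|1> and |1>-|2> transition frequencies
print(round(f01 - f12, 3))  # 0.3: the difference is E_C, the anharmonicity scale
```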
## 2. Hamiltonians of Quantum Circuits <a id='hoqc'></a>
The Hamiltonian is a function that equals the total energy of a system, potential plus kinetic. This is true in classical mechanics, and the quantum Hamiltonian is found by promoting the variables to operators. By comparing classical Poisson brackets to quantum commutators, one finds that the conjugate variables do not commute, meaning they cannot be observed simultaneously, as in Heisenberg's uncertainty principle.
We'll first consider a linear $LC$ circuit, where $L$ is the inductance and $C$ is the capacitance. The Hamiltonian is the sum of the kinetic energy (represented by charge variable $Q$) and potential energy (represented by flux variable $\Phi$),
$$
\mathcal{H} = \frac{Q^2}{2C} + \frac{\Phi^2}{2L}
$$
<details>
<summary>Branch-Flux Method for Linear Circuits (click here to expand)</summary>
Hamiltonians and Lagrangians are functions involving the energies of massive objects and have a rich history in the dynamics of classical systems. They still serve as a template for "quantizing" objects, including the transmon. The method consists of writing the Lagrangian in terms of a generalized coordinate: we will choose a quantity called flux that is defined by the history of voltages, whereas classically one often chooses position in 3-dimensional space. The conjugate variable to our generalized coordinate is then calculated, and will turn out to be charge in our case (it is usually momentum in the classical case). By way of a Legendre transformation, the Hamiltonian is calculated, which represents the sum of the energies of the system.
The circuit Hamiltonian can be found by considering the capacitative and inductive energies using the branch-flux method, which itself is based on classical Lagrangian mechanics. Defining the flux and charge to be time integrals of voltage and current, respectively,
$$
\Phi(t) = \int_{-\infty}^t V(t')\,dt' \quad {\rm and} \quad Q(t) = \int_{-\infty}^t I(t')\,dt'
$$
we will work with flux $\Phi$ as our generalized coordinate, where $V(t')$ and $I(t')$ are the voltage and current flowing across the transmon at time $t'$. In electric circuits, voltage functions much like potential energy and current like kinetic energy. The instantaneous energy across the transmon at time $t$ is
$$
E(t) = \int_{-\infty}^t V(t') I(t')\,dt'.
$$
The voltage across an inductor (with inductance $L$) and the current through a capacitor (with capacitance $C$) are given by $V = L\, dI/dt$ and $I = C\, dV/dt$, respectively. In circuits, capacitors store charge and inductors store flux (current). We will work with the flux as our "coordinate" of choice. Then, because inductors store flux, the potential energy is represented as
$$
U_L(t) = \int_{-\infty}^t L\frac{dI(t')}{dt'} I(t')\, dt' = \frac{1}{2} LI(t)^2 = \frac{1}{2L}\Phi^2
\quad {\rm because} \quad
\Phi(t) = \int_{-\infty}^t L \frac{dI(t')}{dt'}\,dt' = LI(t)
$$
by integration by parts. Similarly, voltage is the rate of change of flux, so it corresponds to the kinetic energy
$$
\tau_C(t) = \int_{-\infty}^t C\frac{dV(t')}{dt'} V(t')\, dt' = \frac{1}{2} CV(t)^2 = \frac{1}{2}C\dot{\Phi}^2 \quad {\rm where} \quad \dot{\Phi} = \frac{d\Phi}{dt}
$$
is the common way to denote time derivatives in Lagrangian mechanics. The Lagrangian is defined as the difference between the kinetic and potential energies and is thus
$$
\mathcal{L} = \tau_C - U_L = \frac{1}{2} C \dot{\Phi}^2 - \frac{1}{2L} \Phi^2.
$$
The dynamics are determined by the Euler-Lagrange equation
$$
0 \equiv \frac{d}{dt} \left(\frac{\partial\mathcal{L}}{\partial\dot{\Phi}}\right) - \frac{\partial\mathcal{L}}{\partial\Phi}
= C\ddot{\Phi} + \frac{\Phi}{L},
$$
which describes a harmonic oscillator in $\Phi$ with angular frequency $\omega = 1/\sqrt{LC}$ (two dots denote the second time derivative, $\ddot{\Phi} = d^2\Phi/dt^2$). However, we wish to move to the Hamiltonian framework and quantize from there. The momentum conjugate to the flux $\Phi$ is
$$
\frac{\partial\mathcal{L}}{\partial\dot{\Phi}} = C \dot{\Phi} = CV \equiv Q,
$$
which is exactly the charge defined above, by the definition of capacitance. The Hamiltonian then follows from the Legendre transformation $\mathcal{H} = Q\dot{\Phi} - \mathcal{L}$, and one arrives at the sum of capacitive and inductive energies given above.
</details>
## 3. Quantizing the Hamiltonian <a id='qth'></a>
The quantum harmonic oscillator (QHO) is what we get when we quantize the Hamiltonian of an $LC$ circuit. Promote the conjugate variables to operators, $Q \to \hat{Q}$, $\Phi \to \hat{\Phi}$, so that the quantized Hamiltonian is
$$
\hat{H} = \frac{\hat{Q}^2}{2C} + \frac{\hat{\Phi}^2}{2L},
$$
where the "hats" remind us that these are quantum mechanical operators. Then make an association between the Poisson bracket of classical mechanics and the commutator of quantum mechanics via the correspondence
$$
\{A,B\} = \frac{\delta A}{\delta \Phi} \frac{\delta B}{\delta Q} - \frac{\delta B}{\delta \Phi} \frac{\delta A}{\delta Q} \Longleftrightarrow
\frac{1}{i\hbar} [\hat{A},\hat{B}] = \frac{1}{i\hbar}\left(\hat{A}\hat{B} - \hat{B}\hat{A}\right),
$$
where the $\delta$'s here represent functional derivatives and the commutator reflects that the order of operations matters in quantum mechanics. Inserting our variables/operators, we arrive at
$$
\{\Phi,Q\} = \frac{\delta \Phi}{\delta \Phi}\frac{\delta Q}{\delta Q} - \frac{\delta Q}{\delta \Phi}\frac{\delta \Phi}{\delta Q} = 1-0=1 \Longrightarrow [\hat{\Phi}, \hat{Q}] = i\hbar.
$$
This implies that, just like position and momentum (for which $[\hat{x},\hat{p}] = i\hbar$ as well), charge and flux obey a Heisenberg uncertainty principle. They are therefore not simultaneously observable and are, in fact, conjugate variables defined in the same way and with the same properties. This result has been used over the history of superconducting qubits to inform design decisions and classify the types of superconducting qubits.
The above quantized Hamiltonian is usually written in a friendlier form using the reduced charge $\hat{n} = \hat{Q}/2e$ and phase $\hat{\phi} = 2\pi\hat{\Phi}/\Phi_0$, where $\Phi_0 = h/2e$ is the flux quantum, corresponding to the operators for the number of Cooper pairs and the phase across the Josephson junction, respectively. Then, the quantized Hamiltonian becomes
$$ \hat{H}_{\rm QHO}= 4E_c\hat{n}^2 + \frac{1}{2} E_L \hat{\phi}^2,$$
where $E_c = e^2/2C$ is the charging energy (the 4 in front corresponds to the fact we're dealing with Cooper pairs, not single electrons) and $E_L = (\Phi_0/2\pi)^2/L$ is the inductive energy.
<details>
<summary>Click to Expand: The Quantum Harmonic Oscillator</summary>
The Hamiltonian above represents a simple harmonic oscillator, and taking $\hat{\phi}$ as the position variable, then we can define creation and annihilation operators in terms of the zero-point fluctuations of the charge and phase,
$$ \hat{n} = i n_{\mathrm{zpf}}(\hat{a}^\dagger - \hat{a}) \quad \mathrm{and} \quad
\hat{\phi} = \phi_{\mathrm{zpf}}(\hat{a} + \hat{a}^\dagger), \qquad \mathrm{where} \quad
n_\mathrm{zpf} = \left( \frac{E_L}{32E_c} \right)^{1/4} \quad \mathrm{and} \quad
\phi_{\mathrm{zpf}} = \left(\frac{2E_c}{E_L}\right)^{1/4}.$$
The Hamiltonian is then that of a harmonic oscillator,
$$ H_{\mathrm{QHO}} = \hbar \omega \left( \hat{a}^\dagger \hat{a} + \frac{1}{2} \right) \qquad \mathrm{with} \qquad
\omega = \sqrt{8 E_L E_c}/\hbar = 1/\sqrt{LC}.$$
Here we see that the energy spacing of the QHO corresponds to the classical resonance frequency $\omega=1/\sqrt{LC}$ of an $LC$ oscillator.
</details>
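These operator definitions can be checked numerically. The following is a minimal sketch in plain `numpy` (with arbitrary assumed values for $E_L$ and $E_c$; only their ratio matters) that builds $\hat{n}$ and $\hat{\phi}$ from truncated ladder operators, using the Hermitian combinations $\hat{n} \propto i(\hat{a}^\dagger - \hat{a})$ and $\hat{\phi} \propto \hat{a} + \hat{a}^\dagger$, and verifies the canonical commutator $[\hat{\phi}, \hat{n}] = i$ (with $\hbar = 1$):

```python
import numpy as np

# Build a truncated annihilation operator and the reduced charge/phase operators.
# E_L and E_c are arbitrary illustrative values; only their ratio enters the
# zero-point fluctuations, and n_zpf * phi_zpf = 1/2 regardless of the values.
N = 40                                         # truncation dimension
a = np.diag(np.sqrt(np.arange(1, N)), k=1)     # annihilation operator
E_L, E_c = 1.0, 0.05                           # assumed energies (arbitrary units)
n_zpf = (E_L / (32 * E_c)) ** 0.25
phi_zpf = (2 * E_c / E_L) ** 0.25

n_op = 1j * n_zpf * (a.conj().T - a)           # Hermitian charge operator
phi_op = phi_zpf * (a + a.conj().T)            # Hermitian phase operator
comm = phi_op @ n_op - n_op @ phi_op

print(comm[0, 0])                              # approximately 1j, i.e. [phi, n] = i
```

Truncation only spoils the commutator in the last diagonal entry; all lower-lying states reproduce $[\hat{\phi},\hat{n}]=i$.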
<details>
<summary>Click to Expand: The Branch-Flux Method for Transmons</summary>
While the above concerns quantizing a linear circuit, [Vool and Devoret](https://arxiv.org/abs/1610.03438) discuss the branch-flux method for quantizing circuits in general. Basically, this gives us a systematic way of enforcing Kirchhoff's laws for circuits: the sum of the currents at a node must equal zero, and the sum of the voltages around any loop must also equal zero. These Kirchhoff laws give us the equations of motion for the circuit.
There is a very special relationship between the current and flux in Josephson junctions, given by the Josephson relation
$$
I = I_0 \sin\left(2\pi \Phi/\Phi_0\right)
$$
where $I_0$ is the maximum current (critical current) that can flow through the junction while maintaining a superconducting state, and $\Phi_0 = h/2e$ is the flux quantum. Enforcing Kirchhoff's current law, the sum of the Josephson current and the current across the total capacitance $C = C_S + C_J$, where $C_S$ is the shunt capacitor and $C_J$ is the capacitance of the Josephson junction and $C_S \gg C_J$, must vanish. This gives us an equation of motion
$$
I_0 \sin\left(2\pi \Phi/\Phi_0\right) + C\ddot{\Phi} = 0.
$$
Unlike the typical situation, where the equations of motion are calculated by placing the Lagrangian into the Euler-Lagrange equation as we did in the case of the QHO, here we already have the equation of motion for the variable $\Phi$. But since we want to quantize the Hamiltonian, we must work backward from this equation of motion to a Lagrangian and then perform a Legendre transform to find the Hamiltonian. This is achieved by "integrating" the equation of motion:
$$
0 = \frac{d}{dt}\left(\frac{\partial\mathcal{L}}{\partial\dot{\Phi}}\right) - \frac{\partial\mathcal{L}}{\partial\Phi} = C\ddot{\Phi} + I_0 \sin\left(2\pi \Phi/\Phi_0\right) \Longrightarrow
\mathcal{L} = \frac{C\dot{\Phi}^2}{2} + \frac{I_0 \Phi_0}{2\pi} \cos\left(2\pi \Phi/\Phi_0\right).
$$
Now that we have gone "backward" to find the Lagrangian, we can continue forward to find the Hamiltonian by finding the conjugate variable $Q = \partial \mathcal{L}/\partial\dot{\Phi} = C\dot{\Phi}$, which turns out to be the same as in the QHO case, and
$$
\mathcal{H} = Q\dot{\Phi} - \mathcal{L} = \frac{Q^2}{2C} - \frac{I_0 \Phi_0}{2\pi} \cos\left(2\pi \Phi/\Phi_0\right)
$$
</details>
## 4. The Quantized Transmon <a id='tqt'></a>
Making the same variable substitutions as for the QHO, we can rewrite the transmon Hamiltonian in familiar form
$$
\hat{H}_{\rm tr} = 4E_c \hat{n}^2 - E_J \cos \hat{\phi},
$$
where the Josephson energy $E_J = I_0\Phi_0/2\pi$ replaces the inductive energy from the QHO. Note that the functional form of the phase is different from the QHO due to the presence of the Josephson junction instead of a linear inductor. Often $\hat{n} \to \hat{n} - n_g$ to reflect a gate offset charge, but this is not important in the transmon regime. Now we can approach the quantization similarly to the QHO, where we define the creation and annihilation operators in terms of the zero-point fluctuations of charge and phase
$$ \hat{n} = i n_{\mathrm{zpf}}(\hat{c}^\dagger - \hat{c}) \quad \mathrm{and} \quad
\hat{\phi} = \phi_{\mathrm{zpf}}(\hat{c} + \hat{c}^\dagger), \qquad \mathrm{where} \quad
n_\mathrm{zpf} = \left( \frac{E_J}{32E_c} \right)^{1/4} \quad \mathrm{and} \quad
\phi_{\mathrm{zpf}} = \left(\frac{2E_c}{E_J}\right)^{1/4},
$$
where the Josephson energy $E_J$ has replaced the linear inductive energy $E_L$ of the QHO. Here we use $\hat{c} = \sum_j \sqrt{j+1} |j\rangle\langle j+1|$ to denote the transmon annihilation operator and distinguish it from the evenly-spaced energy modes of $\hat{a}$. Now, noting that $\phi \ll 1$ because in the transmon regime $E_J/E_c \gg 1$, we can take a Taylor expansion of $\cos \hat{\phi}$ to approximate the Hamiltonian
$$
H = -4E_c n_{zpf}^2 (\hat{c}^\dagger - \hat{c})^2 - E_J\left(1 - \frac{1}{2} \phi_{zpf}^2 (\hat{c}+\hat{c}^\dagger)^2 + \frac{1}{24} \phi_{zpf}^4(\hat{c}+\hat{c}^\dagger)^4 - \ldots \right) \\
\approx \sqrt{8 E_c E_J} \left(\hat{c}^\dagger \hat{c} + \frac{1}{2}\right) - E_J - \frac{E_c}{12}(\hat{c} + \hat{c}^\dagger)^4,
$$
where it is helpful to observe $4E_c n_{\rm zpf}^2 = (1/2)E_J\phi_{\rm zpf}^2 = \frac{1}{2}\sqrt{2E_cE_J}$. Expanding the terms of the transmon operator $\hat{c}$ and dropping the fast-rotating terms (i.e. those with an unequal number of $\hat{c}$ and $\hat{c}^\dagger$), neglecting constants that have no influence on transmon dynamics, defining $\omega_0 = \sqrt{8 E_c E_J}$, and identifying $\delta = -E_c$ as the transmon anharmonicity, we have
$$
\hat{H}_{\rm tr} = \omega_0 \hat{c}^\dagger \hat{c} + \frac{\delta}{2}((\hat{c}^\dagger \hat{c})^2 + \hat{c}^\dagger \hat{c})
= \left(\omega_0 + \frac{\delta}{2}\right) \hat{c}^\dagger \hat{c} + \frac{\delta}{2}(\hat{c}^\dagger \hat{c})^2
$$
which is the Hamiltonian of a Duffing oscillator. Defining $\omega \equiv \omega_0+\delta$, we see that the transmon levels have energy spacings that each differ by the anharmonicity, as $\omega_{j+1}-\omega_j = \omega + \delta j$, so that $\omega$ corresponds to "the frequency" of the transmon qubit (the transition $\omega_1-\omega_0$). From the definition of the transmon operator, $\hat{c}^\dagger \hat{c} = \sum_j j |j\rangle \langle j|$, we arrive at
$$
\hat{H}_{\rm tr} = \omega \hat{c}^\dagger \hat{c} + \frac{\delta}{2} \hat{c}^\dagger \hat{c} (\hat{c}^\dagger \hat{c} - 1)
= \sum_j \left(\left(\omega-\frac{\delta}{2}\right)j + \frac{\delta}{2} j^2\right) |j\rangle\langle j| \equiv \sum_j \omega_j |j\rangle \langle j|
$$
so that
$$
\omega_j = \left(\omega-\frac{\delta}{2}\right)j + \frac{\delta}{2} j^2
$$
are the energy levels of the transmon.
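As a quick numerical illustration of this level structure, using representative but assumed values $\omega/2\pi = 5\,$GHz and $\delta/2\pi = -300\,$MHz, each successive transition frequency shrinks by $|\delta|$:

```python
# Transmon level frequencies omega_j = (omega - delta/2) j + (delta/2) j^2,
# with assumed values omega = 5 GHz and delta = -0.3 GHz (2*pi factors suppressed).
omega = 5.0
delta = -0.3

def omega_j(j):
    return (omega - delta / 2) * j + (delta / 2) * j**2

spacings = [omega_j(j + 1) - omega_j(j) for j in range(4)]
print(spacings)  # successive transitions: approximately 5.0, 4.7, 4.4, 4.1 GHz
```

This non-uniform spacing is exactly what allows the lowest two levels to be addressed as a qubit without exciting higher levels.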
## 5. Comparison of the Transmon and the Quantum Harmonic Oscillator<a id='cottatqho'></a>
The QHO has evenly spaced energy levels, while the transmon does not; this anharmonicity is what lets us use the transmon as a qubit. Here we show the difference in energy levels by calculating them from their Hamiltonians using [`QuTiP`](http://www.qutip.org).
```
import numpy as np
import matplotlib.pyplot as plt
E_J = 20e9
w = 5e9
anharm = -300e6
N_phis = 101
phis = np.linspace(-np.pi,np.pi,N_phis)
mid_idx = int((N_phis+1)/2)
# potential energies of the QHO & transmon
U_QHO = 0.5*E_J*phis**2
U_QHO = U_QHO/w
U_transmon = (E_J-E_J*np.cos(phis))
U_transmon = U_transmon/w
# import QuTiP, construct Hamiltonians, and solve for energies
from qutip import destroy
N = 35
N_energies = 5
c = destroy(N)
H_QHO = w*c.dag()*c
E_QHO = H_QHO.eigenenergies()[0:N_energies]
H_transmon = w*c.dag()*c + (anharm/2)*(c.dag()*c)*(c.dag()*c - 1)
E_transmon = H_transmon.eigenenergies()[0:2*N_energies]
print(E_QHO[:4])
print(E_transmon[:8])
fig, axes = plt.subplots(1, 1, figsize=(6,6))
axes.plot(phis, U_transmon, '-', color='orange', linewidth=3.0)
axes.plot(phis, U_QHO, '--', color='blue', linewidth=3.0)
for eidx in range(1, N_energies):
    delta_E_QHO = (E_QHO[eidx] - E_QHO[0]) / w
    delta_E_transmon = (E_transmon[2*eidx] - E_transmon[0]) / w
    QHO_lim_idx = min(np.where(U_QHO[int((N_phis+1)/2):N_phis] > delta_E_QHO)[0])
    trans_lim_idx = min(np.where(U_transmon[int((N_phis+1)/2):N_phis] > delta_E_transmon)[0])
    trans_label, = axes.plot([phis[mid_idx - trans_lim_idx - 1], phis[mid_idx + trans_lim_idx - 1]],
                             [delta_E_transmon, delta_E_transmon], '-', color='orange', linewidth=3.0)
    qho_label, = axes.plot([phis[mid_idx - QHO_lim_idx - 1], phis[mid_idx + QHO_lim_idx - 1]],
                           [delta_E_QHO, delta_E_QHO], '--', color='blue', linewidth=3.0)
axes.set_xlabel('Phase $\phi$', fontsize=24)
axes.set_ylabel('Energy Levels / $\hbar\omega$', fontsize=24)
axes.set_ylim(-0.2,5)
qho_label.set_label('QHO Energies')
trans_label.set_label('Transmon Energies')
axes.legend(loc=2, fontsize=14)
```
## 6. Qubit Drive and the Rotating Wave Approximation <a id='qdatrwa'></a>
Here we will treat the transmon as a qubit for simplicity, which by definition means there are only two levels. Therefore the transmon Hamiltonian becomes
$$
\hat{H}_0 = \sum_{j=0}^1 \hbar \omega_j |j\rangle \langle j| \equiv 0 |0\rangle \langle 0| + \hbar\omega_q |1\rangle \langle 1|.
$$
Since we can add or subtract constant energy from the Hamiltonian without affecting the dynamics, we make the $|0\rangle$ and $|1\rangle$ state energies symmetric about $E=0$ by subtracting half the qubit frequency,
$$
\hat{H}_0 = - (1/2)\hbar\omega_q |0\rangle \langle 0| + (1/2)\hbar \omega_q |1\rangle \langle 1| =
-\frac{1}{2} \hbar \omega_q \sigma^z \qquad {\rm where} \qquad
\sigma^z = \begin{pmatrix}
1 & 0 \\
0 & -1 \end{pmatrix}
$$
is the Pauli-Z matrix. Now, applying an electric drive field $\vec{E}(t) = \vec{E}_0 e^{-i\omega_d t} + \vec{E}_0^* e^{i\omega_d t}$ to the transmon introduces a dipole interaction between the transmon and microwave field. The Hamiltonian is the sum of the qubit Hamiltonian $\hat{H}_0$ and drive Hamiltonian $\hat{H}_d$,
$$
\hat{H} = \hat{H}_0 + \hat{H}_d.
$$
Treating the transmon as a qubit allows us to use the Pauli raising/lowering operators $\sigma^\pm = (1/2)(\sigma^x \mp i\sigma^y)$ that have the effect $\sigma^+ |0\rangle = |1\rangle$ and $\sigma^- |1\rangle = |0\rangle$. (Note that this definition reflects that we are using *qubit* raising/lowering operators instead of those for *spin*. For the reason discussed in [Section 1](#mlqsaq), $|0\rangle \equiv |\uparrow\rangle$ and $|1\rangle \equiv |\downarrow \rangle$, so the raising and lowering operators are inverted.) Now since the field will excite and de-excite the qubit, we define the dipole operator $\vec{d} = \vec{d}_0 \sigma^+ + \vec{d}_0^* \sigma^-$. The drive Hamiltonian from the dipole interaction is then
$$
\hat{H}_d = -\vec{d} \cdot \vec{E}(t) = -\left(\vec{d}_0 \sigma^+ + \vec{d}_0^* \sigma^-\right) \cdot \left(\vec{E}_0 e^{-i\omega_d t} + \vec{E}_0^* e^{i\omega_d t}\right) \\
= -\left(\vec{d}_0 \cdot \vec{E}_0 e^{-i\omega_d t} + \vec{d}_0 \cdot \vec{E}_0^* e^{i\omega_d t}\right)\sigma^+
-\left(\vec{d}_0^* \cdot \vec{E}_0 e^{-i\omega_d t} + \vec{d}_0^* \cdot \vec{E}_0^* e^{i\omega_d t}\right)\sigma^-\\
\equiv -\hbar\left(\Omega e^{-i\omega_d t} + \tilde{\Omega} e^{i\omega_d t}\right)\sigma^+
-\hbar\left(\tilde{\Omega}^* e^{-i\omega_d t} + \Omega^* e^{i\omega_d t}\right)\sigma^-
$$
where we made the substitutions $\Omega = \vec{d}_0 \cdot \vec{E}_0$ and $\tilde{\Omega} = \vec{d}_0 \cdot \vec{E}_0^*$ to describe the strength of the field and dipole. Now we transform to the interaction picture $\hat{H}_{d,I} = U\hat{H}_dU^\dagger$ (omitting terms that cancel for simplicity) with
$$
U = e^{i\hat{H}_0t/\hbar} = e^{-i\omega_q t \sigma^z/2} = I\cos(\omega_q t/2) - i\sigma^z\sin(\omega_q t/2)
$$
which can be calculated by noting that
$$
\sigma^\pm \sigma^z = (1/2) \left(\sigma^x \sigma^z \mp i \sigma^y \sigma^z\right) = (1/2)(-i\sigma^y \pm \sigma^x) = \pm\sigma^\pm = -\sigma^z \sigma^\pm.
$$
Then
$$U\sigma^\pm U^\dagger = \left(I\cos(\omega_q t/2) - i\sigma^z\sin(\omega_q t/2)\right) \sigma^\pm \left(I\cos(\omega_q t/2) + i\sigma^z\sin(\omega_q t/2)\right) \\
= \sigma^\pm \left( \cos(\omega_q t/2) \pm i\sin(\omega_q t/2)\right) \left(\cos(\omega_q t/2) \pm i\sin(\omega_q t/2) \right) \\
= \sigma^\pm \left( \cos^2(\omega_q t/2) \pm 2i\cos(\omega_q t/2)\sin(\omega_q t/2) - \sin^2(\omega_q t/2)\right) \\
= \sigma^\pm \left( \cos(\omega_q t) \pm i\sin(\omega_q t) \right) = e^{\pm i\omega_q t} \sigma^{\pm},$$
where we have used the double-angle formula from trigonometry. The transformed Hamiltonian is then
$$
\hat{H}_{d,I} = U\hat{H}_dU^\dagger = -\hbar\left(\Omega e^{-i\omega_d t} + \tilde{\Omega} e^{i\omega_d t}\right)e^{i\omega_q t} \sigma^+ -\hbar\left(\tilde{\Omega}^* e^{-i\omega_d t} + \Omega^* e^{i\omega_d t}\right)e^{-i\omega_q t} \sigma^-\\
= -\hbar\left(\Omega e^{i\Delta_q t} + \tilde{\Omega} e^{i(\omega_q+\omega_d) t}\right) \sigma^+ -\hbar\left(\tilde{\Omega}^* e^{-i(\omega_q+\omega_d) t} + \Omega^* e^{-i\Delta_q t}\right) \sigma^-
$$
Now we make the rotating-wave approximation (RWA): since $\omega_q+\omega_d$ is much larger than $\Delta_q = \omega_q-\omega_d$, the terms with the sum frequency in the exponential oscillate much faster, so they effectively average out and we drop those terms from the Hamiltonian. The RWA interaction Hamiltonian then becomes
$$
\hat{H}_{d,I}^{\rm (RWA)} =-\hbar\Omega e^{i\Delta_q t} \sigma^+ -\hbar \Omega^* e^{-i\Delta_q t} \sigma^-
$$
Moving back to the Schrödinger picture,
$$
\hat{H}_{d}^{\rm (RWA)} = U^\dagger \hat{H}_{d,I}^{\rm (RWA)} U = -\hbar\Omega e^{-i\omega_d t} \sigma^+ -\hbar\Omega^* e^{i\omega_d t} \sigma^-
$$
so that the total qubit and drive Hamiltonian is
$$
\hat{H}^{\rm (RWA)} = -\frac{1}{2} \hbar\omega_q \sigma^z -\hbar\Omega e^{-i\omega_d t} \sigma^+ -\hbar\Omega^* e^{i\omega_d t} \sigma^-.
$$
Going into the frame of the drive, using the transformation $U_d = \exp\{-i\omega_d t\sigma^z/2\}$, the Hamiltonian becomes
$$
\hat{H}_{\rm eff} = U_d \hat{H}^{\rm (RWA)} U_d^\dagger - i\hbar U_d \dot{U}_d^\dagger
$$
where $\dot{U}_d = dU_d/dt$ is the time derivative of $U_d$. Then in the drive frame under the RWA
$$
\hat{H}_{\rm eff} = -\frac{1}{2} \hbar\omega_q \sigma^z -\hbar\Omega \sigma^+ -\hbar\Omega^* \sigma^- + \frac{1}{2} \hbar\omega_d \sigma^z = -\frac{1}{2}\hbar \Delta_q \sigma^z -\hbar\Omega \sigma^+ -\hbar\Omega^* \sigma^-
$$
assuming the drive is real so that $\Omega = \Omega^*$, this simplifies to
$$
\hat{H}_{\rm eff} = -\frac{1}{2}\hbar \Delta_q \sigma^z -\hbar\Omega \sigma^x.
$$
This shows that when the drive is resonant with the qubit (i.e., $\Delta_q = 0$), the drive causes an $x$ rotation in the Bloch sphere that is generated by $\sigma^x$ with a strength of $\Omega$. We can see the effect of this on-resonant qubit drive in the [finding the frequency of a qubit with spectroscopy](https://qiskit.org/textbook/ch-quantum-hardware/calibrating-qubits-openpulse.html#frequencysweep) section. An off-resonant drive has additional $z$ rotations generated by the $\sigma^z$ contribution, and these manifest themselves as oscillations in a [Ramsey experiment](https://qiskit.org/textbook/ch-quantum-hardware/calibrating-qubits-openpulse.html#ramsey).
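The resonant and off-resonant behavior can be verified with a short simulation. This is a minimal sketch in plain `numpy` (with $\hbar = 1$ and an arbitrary assumed drive strength $\Omega$) that evolves $|0\rangle$ under $\hat{H}_{\rm eff} = -\frac{1}{2}\Delta_q\sigma^z - \Omega\sigma^x$:

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
Omega = 1.0  # assumed drive strength (arbitrary units, hbar = 1)

def excited_population(delta_q, t):
    """Evolve |0> under H_eff = -(delta_q/2) sz - Omega sx and return P(|1>)."""
    H = -0.5 * delta_q * sz - Omega * sx
    evals, evecs = np.linalg.eigh(H)                       # diagonalize the 2x2 Hamiltonian
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    psi_t = U @ np.array([1, 0], dtype=complex)            # start in |0>
    return abs(psi_t[1]) ** 2

t_pi = np.pi / (2 * Omega)                                 # duration of a resonant "pi pulse"
print(round(excited_population(0.0, t_pi), 6))             # 1.0: full transfer on resonance
print(excited_population(4.0, t_pi) < 1.0)                 # off resonance, transfer is partial
```

On resonance the population follows $\sin^2(\Omega t)$ (Rabi oscillations), while a detuning $\Delta_q$ reduces the maximum transfer to $\Omega^2/(\Omega^2 + \Delta_q^2/4)$.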
# Train a Deep NN to predict Asset Price movements
## Setup Docker for GPU acceleration
`docker run -it -p 8889:8888 -v /path/to/machine-learning-for-trading/16_convolutions_neural_nets/cnn:/cnn --name tensorflow tensorflow/tensorflow:latest-gpu-py3 bash`
## Imports & Settings
```
import warnings
warnings.filterwarnings('ignore')
import os
from pathlib import Path
from importlib import reload
from joblib import dump, load
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split, GridSearchCV, StratifiedKFold
from sklearn.metrics import roc_auc_score
import tensorflow as tf
from keras.models import Sequential
from keras import backend as K
from keras.wrappers.scikit_learn import KerasClassifier
from keras.layers import Dense, Dropout, Activation
from keras.models import load_model
from keras.callbacks import Callback, EarlyStopping, TensorBoard, ModelCheckpoint
np.random.seed(42)
```
## Build Dataset
We load the Quandl adjusted stock price data:
```
prices = (pd.read_hdf('../data/assets.h5', 'quandl/wiki/prices')
          .adj_close
          .unstack()
          .loc['2007':])
prices.info()
```
### Resample to weekly frequency
We start by generating weekly returns for close to 2,500 stocks without missing data for the 2008-17 period, as follows:
```
returns = (prices
           .resample('W')
           .last()
           .pct_change()
           .loc['2008': '2017']
           .dropna(axis=1)
           .sort_index(ascending=False))
returns.info()
returns.head().append(returns.tail())
```
### Create & stack 52-week sequences
We'll use 52-week sequences, which we'll create in a stacked format:
```
n = len(returns)
T = 52 # weeks
tcols = list(range(T))
data = pd.DataFrame()
for i in range(n-T-1):
    if i % 50 == 0:
        print(i, end=' ', flush=True)
    df = returns.iloc[i:i+T+1]
    data = pd.concat([data, (df
                             .reset_index(drop=True)
                             .transpose()
                             .reset_index()
                             .assign(year=df.index[0].year,
                                     month=df.index[0].month))],
                     ignore_index=True)
data.info()
```
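The same windowing idea can be sketched on a toy array to make the resulting shape concrete (synthetic data; the real notebook additionally keeps ticker and date metadata):

```python
import numpy as np

# Toy version of the stacking above: turn a (n_weeks, n_stocks) return matrix
# into rows of T past returns plus one target week per stock.
rng = np.random.default_rng(0)
weekly = rng.normal(0, 0.02, size=(8, 3))   # 8 weeks of returns for 3 fake stocks
T = 4
rows = []
for start in range(weekly.shape[0] - T):
    block = weekly[start:start + T + 1]     # T input weeks + 1 target week
    rows.append(block.T)                    # one row per stock
stacked = np.vstack(rows)
print(stacked.shape)  # (12, 5): 4 windows x 3 stocks, each with 4 features + 1 target
```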
### Create categorical variables
We winsorize the returns at the 1st/99th percentiles, encode the binary label, and create dummy variables for the month and year time periods:
```
data[tcols] = (data[tcols].apply(lambda x: x.clip(lower=x.quantile(.01),
                                                  upper=x.quantile(.99))))
data.ticker = pd.factorize(data.ticker)[0]
data['label'] = (data[0] > 0).astype(int)
data['date'] = pd.to_datetime(data.assign(day=1)[['year', 'month', 'day']])
data = pd.get_dummies((data.drop(0, axis=1)
                       .set_index('date')
                       .apply(pd.to_numeric)),
                      columns=['year', 'month']).sort_index()
data.info()
data.to_hdf('data.h5', 'returns_daily')
data.shape
```
# Think Bayes
This notebook presents example code and exercise solutions for Think Bayes.
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
```
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import classes from thinkbayes2
from thinkbayes2 import Pmf, Suite
import pandas as pd
import numpy as np
import thinkplot
```
## Interpreting medical tests
Suppose you are a doctor treating a 40-year-old female patient. After she gets a routine screening mammogram, the result comes back positive (defined below).
The patient asks whether this result indicates that she has breast cancer. You interpret this question as, "What is the probability that this patient has breast cancer, given a positive test result?"
How would you respond?
The following background information from the Breast Cancer Screening Consortium (BCSC) might help:
[Cancer Rate (per 1,000 examinations) and Cancer Detection Rate (per 1,000 examinations) for 1,838,372 Screening Mammography Examinations from 2004 to 2008 by Age -- based on BCSC data through 2009](http://www.bcsc-research.org/statistics/performance/screening/2009/rate_age.html).
[Performance Measures for 1,838,372 Screening Mammography Examinations from 2004 to 2008 by Age -- based on BCSC data through 2009](http://www.bcsc-research.org/statistics/performance/screening/2009/perf_age.html).
```
class BayesTable(pd.DataFrame):
    def __init__(self, hypo, prior=1, **options):
        columns = ['prior', 'likelihood', 'unnorm', 'posterior']
        super().__init__(index=hypo, columns=columns, **options)
        self.prior = prior

    def mult(self):
        self.unnorm = self.prior * self.likelihood

    def norm(self):
        nc = np.sum(self.unnorm)
        self.posterior = self.unnorm / nc
        return nc

    def update(self):
        self.mult()
        return self.norm()

    def reset(self):
        # the hypotheses live in the index; there is no `hypo` attribute
        return BayesTable(self.index, self.posterior)
```
### Assumptions and interpretation
According to [the first table](http://www.bcsc-research.org/statistics/performance/screening/2009/rate_age.html), the cancer rate per 1000 examinations is 2.65 for women age 40-44. The notes explain that this rate is based on "the number of examinations with a tissue diagnosis of ductal carcinoma in situ or invasive cancer within 1 year following the examination and before the next screening mammography examination", so it would be more precise to say that it is the rate of diagnosis within a year of the examination, not the rate of actual cancers.
Since untreated invasive breast cancer is likely to become symptomatic, we expect a large fraction of cancers to be diagnosed eventually. But there might be a long delay between developing a cancer and diagnosis, and a patient might die of another cause before diagnosis. So we should consider this rate as a lower bound on the probability that a patient has cancer at the time of the examination.
According to [the second table](http://www.bcsc-research.org/statistics/performance/screening/2009/perf_age.html), the sensitivity of the test for women in this age group is 73.4%; the specificity is 87.7%. From these, we can get the conditional probabilities:
```
P(positive test | cancer) = sensitivity
P(positive test | no cancer) = (1 - specificity)
```
Now we can use a Bayes table to compute the probability we are interested in, `P(cancer | positive test)`
```
base_rate = 2.65 / 1000
hypo = ['cancer', 'no cancer']
prior = [base_rate, 1-base_rate]
table = BayesTable(hypo, prior)
sensitivity = 0.734
specificity = 0.877
table.likelihood = [sensitivity, 1-specificity]
table
likelihood_ratio = table.likelihood['cancer'] / table.likelihood['no cancer']
table.update()
table
table.posterior['cancer'] * 100
```
So there is a 1.56% chance that this patient has cancer, given that the initial screening mammogram was positive.
This result is called the positive predictive value (PPV) of the test, which we could have read directly from [the second table](http://www.bcsc-research.org/statistics/performance/screening/2009/perf_age.html).
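The same figure can be checked directly from Bayes' theorem, without the table machinery (pure Python, numbers taken from the tables above):

```python
# P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
base_rate = 2.65 / 1000
sensitivity = 0.734
specificity = 0.877

p_pos = base_rate * sensitivity + (1 - base_rate) * (1 - specificity)
ppv = base_rate * sensitivity / p_pos
print(f'{ppv:.2%}')  # 1.56%
```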
These data were the basis, in 2009, for the US Preventive Services Task Force recommendation against routine screening mammography for women in their 40s.
```
def compute_ppv(base_rate, sensitivity, specificity):
    pmf = Pmf()
    pmf['cancer'] = base_rate * sensitivity
    pmf['no cancer'] = (1 - base_rate) * (1 - specificity)
    pmf.Normalize()
    return pmf

pmf = compute_ppv(base_rate, sensitivity, specificity)

ages = [40, 50, 60, 70, 80]
rates = pd.Series([2.65, 4.28, 5.70, 6.76, 8.51], index=ages)
for age, rate in rates.items():
    # the rates are per 1,000 examinations, so convert to a probability
    pmf = compute_ppv(rate / 1000, sensitivity, specificity)
    print(age, pmf['cancer'])
```
# Movie Frames Embedding
```
%matplotlib inline
data_dir = 'data'
movie = 'father-and-daughter-720p.mp4'
fps = 0.6
frame_width = 320
frame_height = 240
movie_name = movie.split('.')[0]
frames_dir = f'{data_dir}/{movie_name}'
outfile = f'{data_dir}/{movie_name}.json'
```
## Extract frames
```
import subprocess
subprocess.run([f'rm -rf {frames_dir}; mkdir -p {frames_dir}'], shell=True, check=True)
subprocess.run(
[f'ffmpeg -i {data_dir}/{movie} -f image2 -vf fps={fps} -s {frame_width}*{frame_height} {frames_dir}/%03d.png'],
shell=True,
check=True
)
```
## Load images in gray scale
```
import imageio
import numpy as np
import os
frame_imgs = os.listdir(frames_dir)
frame_imgs.sort(key=str.lower)
num_frames = len(frame_imgs)
frames = []
frames_gray = []
for file_name in frame_imgs:
    name = f'{frames_dir}/{file_name}'
    frame = imageio.imread(name)                     # color frame
    frame_gray = imageio.imread(name, as_gray=True)  # grayscale frame
    frames.append(frame)
    frames_gray.append(frame_gray)
frames = np.asarray(frames)
frames_gray = np.asarray(frames_gray)
print(f'Extracted {frames.shape[0]} frames')
from skimage import io
io.imshow(frames[10])
io.imshow(frames_gray[10])
```
### Remove the text label at the top
Only for the precipitation movie.
```
# from matplotlib.pyplot import imshow
# label_size = [20, 81] # i,j
# label_pos = [11, 125] # i,j
# # We set the label to black
# frames[
# :,
# label_pos[0]:label_pos[0] + label_size[0],
# label_pos[1]:label_pos[1] + label_size[1],
# :
# ] = 0
# imshow(frames[0])
# frames_gray = np.copy(frames).mean(axis=3).astype(np.uint8).reshape((frames.shape[0], -1))
# frames = frames.reshape((frames.shape[0], -1, 3))
# imshow(frames_gray[0])
```
## Compute Structural Similarity Index
```
from skimage.metrics import structural_similarity as ssim
from itertools import combinations
import numpy as np
import time
norm_frames = frames_gray / 255.0
n = norm_frames.shape[0]
x = norm_frames.shape[1]
y = norm_frames.shape[2]
multichannel = np.ndim(norm_frames) > 3
num_combinations = n * (n - 1) // 2
print(f'Compute {num_combinations} comparisons')
s = time.time()
k = 10
t = 0
for i, c in enumerate(combinations(np.arange(n), 2)):
    if i % k == k - 1:
        t = time.time() - s
        print(f'{k} computations took {t:.2f} sec')
        break
    ssim(norm_frames[c[0]], norm_frames[c[1]], data_range=1.0, multichannel=multichannel)
total_time = num_combinations / k * t / 60
print(f'Total time will take about {total_time:.1f} mins')
ssim_dist = np.zeros((n, n))
l = 0
for k, c in enumerate(combinations(np.arange(n), 2)):
    if k % 1000 == 999:
        l += 1
        print(f'{l},', end='', flush=True)
    i, j = c
    d = ssim(norm_frames[i], norm_frames[j], data_range=1.0, multichannel=multichannel)
    ssim_dist[i, j] = d
    ssim_dist[j, i] = d
```
## Compute mean absolute difference between frames
```
import numba
from scipy.spatial.distance import pdist, squareform
@numba.njit()
def mean_absolute_dist(a, b):
    return np.sum(np.abs(a - b)) / a.shape[0]

# pdist expects a 2D observation matrix, so flatten each frame to a pixel vector
dist_gray = squareform(pdist(frames_gray.reshape(num_frames, -1), metric=mean_absolute_dist))
def hue(v):
    cmax = v.max(axis=1)
    cmin = v.min(axis=1)
    delta = (cmax - cmin).astype(np.float64)

    cmax_r = cmax == v[:, 0]
    cmax_g = cmax == v[:, 1]
    cmax_b = cmax == v[:, 2]

    x = np.zeros_like(v[:, 0]).astype(np.float64)
    x[cmax_r] = v[cmax_r, 1] - v[cmax_r, 2]
    x[cmax_g] = v[cmax_g, 2] - v[cmax_g, 0]
    x[cmax_b] = v[cmax_b, 0] - v[cmax_b, 1]
    x = np.divide(x, delta, out=np.zeros_like(x), where=delta != 0)

    x[cmax_r] %= 6
    x[cmax_g] += 2
    x[cmax_b] += 4
    return x
def lightness(v):
    cmax = v.max(axis=1)
    cmin = v.min(axis=1)
    return (cmax + cmin) / 2.0

def saturation(v):
    cmax = v.max(axis=1)
    cmin = v.min(axis=1)
    chroma = (cmax - cmin).astype(np.float64)
    light = lightness(v)
    light = 1 - np.abs(2 * light - 1)
    return np.divide(chroma, light, out=np.zeros_like(chroma), where=light != 0)
def hsl_dist(a, b):
    a_sat = saturation(a)
    b_sat = saturation(b)
    a_light = lightness(a)
    b_light = lightness(b)
    a_hue = hue(a)
    b_hue = hue(b)
    return (a_sat - b_sat)**2 + (a_light - b_light)**2 + (((a_hue - b_hue) % 6) / 6.0)
dist_hsl = np.zeros((num_frames, num_frames))
frame_dim = frame_width * frame_height
for i in np.arange(num_frames):
    print(f'{i}, ', end='', flush=True)
    for j in np.arange(i + 1, num_frames):
        # hsl_dist expects (num_pixels, 3) arrays, so flatten the spatial dimensions
        dist = np.sum(np.abs(hsl_dist(frames[i].reshape(-1, 3), frames[j].reshape(-1, 3)))) / frame_dim
        dist_hsl[i, j] = dist
        dist_hsl[j, i] = dist
print('done!')
```
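The vectorized hue computation above maps colors onto six sectors; as a quick spot check, it should agree with Python's standard-library `colorsys` (scaled from $[0,1)$ to $[0,6)$) on the primary colors:

```python
import colorsys

# Spot-check the [0, 6) hue-sector convention against the stdlib scalar version.
def hue_sectors(rgb):
    r, g, b = rgb
    h, _, _ = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    return h * 6

for rgb, expected in [((255, 0, 0), 0.0),   # pure red   -> sector 0
                      ((0, 255, 0), 2.0),   # pure green -> sector 2
                      ((0, 0, 255), 4.0)]:  # pure blue  -> sector 4
    assert abs(hue_sectors(rgb) - expected) < 1e-9
print('hue sector check passed')
```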
## Compute embedding with distance matrix
```
from umap import UMAP
from sklearn.manifold import MDS, TSNE
from sklearn.preprocessing import MinMaxScaler
# mds_gray = MDS(dissimilarity='precomputed', n_jobs=-1).fit_transform(dist_gray)
# tsne_gray = TSNE(metric='precomputed', n_jobs=-1).fit_transform(dist_gray)
# umap_gray = UMAP(n_neighbors=5, metric='precomputed').fit_transform(dist_gray)
from matplotlib import cm
import matplotlib.pyplot as plt

# SSIM is a similarity, so convert it into a distance before embedding
ssim_d = 1 - ssim_dist
np.fill_diagonal(ssim_d, 0)
umap_ssim = UMAP(n_neighbors=15, metric='precomputed').fit_transform(ssim_d)
cmap = [cm.magma(int(i)) for i in np.linspace(0, 255, umap_ssim.shape[0])]
fig, ax = plt.subplots(figsize=(12, 10))
plt.scatter(umap_ssim[:, 0], umap_ssim[:, 1], s=25, c=cmap)
# plt.plot(umap_ssim[:, 0], umap_ssim[:, 1])
plt.setp(ax, xticks=[], yticks=[])
plt.title("Embedding", fontsize=18)
plt.show()
mds_hsl = MDS(dissimilarity='precomputed', n_jobs=-1).fit_transform(dist_hsl)
tsne_hsl = TSNE(metric='precomputed', n_jobs=-1).fit_transform(dist_hsl)
umap_hsl = UMAP(n_neighbors=int(num_frames/12), metric='precomputed').fit_transform(dist_hsl)
```
## Scale embeddings
To add some padding we scale the data to `[0.1, 0.9]`.
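The padding transform is just a column-wise affine map. A tiny self-contained sketch of what `MinMaxScaler((0.1, 0.9))` does, on made-up points:

```python
import numpy as np

# Column-wise min-max scaling into [0.1, 0.9], mirroring MinMaxScaler((0.1, 0.9)).
pts = np.array([[0.0, 10.0],
                [5.0, 20.0],
                [10.0, 30.0]])              # made-up 2D embedding coordinates
lo, hi = 0.1, 0.9
mn, mx = pts.min(axis=0), pts.max(axis=0)
scaled = lo + (pts - mn) / (mx - mn) * (hi - lo)
print(scaled.min(), scaled.max())           # approximately 0.1 and 0.9
```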
```
scaler = MinMaxScaler((0.1, 0.9))
mds_scale_gray = scaler.fit_transform(mds_gray)
tsne_scale_gray = scaler.fit_transform(tsne_gray)
umap_scale_gray = scaler.fit_transform(umap_gray)
mds_scale_hsl = scaler.fit_transform(mds_hsl)
tsne_scale_hsl = scaler.fit_transform(tsne_hsl)
umap_scale_hsl = scaler.fit_transform(umap_hsl)
```
## Save data
```
data = []
for i in range(num_frames):
item = {}
item['src'] = f'{frames_dir}/{frame_imgs[i]}'
item['mds_gray'] = mds_scale_gray[i].tolist()
item['tsne_gray'] = tsne_scale_gray[i].tolist()
item['umap_gray'] = umap_scale_gray[i].tolist()
data.append(item)
for i, item in enumerate(data):
item['mds_hsl'] = mds_scale_hsl[i].tolist()
item['tsne_hsl'] = tsne_scale_hsl[i].tolist()
item['umap_hsl'] = umap_scale_hsl[i].tolist()
import json
with open(outfile, 'w') as f:
json.dump(data, f)
```
**Chapter 12 – Custom Models and Training with TensorFlow**
_This notebook contains all the sample code and solutions to the exercises in Chapter 12._
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/rickiepark/handson-ml2/blob/master/12_custom_models_and_training_with_tensorflow.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
# Setup
First, let's import a few modules. We make sure Matplotlib plots figures inline and we prepare a function to save the figures. We also check that Python 3.5 or later is installed (the code may work with Python 2.x, but it is deprecated, so we strongly recommend Python 3), as well as Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
```
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
# This notebook requires TensorFlow ≥2.4
# Most 2.x versions produce the same results, but a few have bugs.
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.4"
# Common imports
import numpy as np
import os
# To make this notebook's output stable across runs
np.random.seed(42)
tf.random.set_seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure:", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
```
## Tensors and Operations
### Tensors
```
tf.constant([[1., 2., 3.], [4., 5., 6.]]) # matrix
tf.constant(42) # scalar
t = tf.constant([[1., 2., 3.], [4., 5., 6.]])
t
t.shape
t.dtype
```
### Indexing
```
t[:, 1:]
t[..., 1, tf.newaxis]
```
### Operations
```
t + 10
tf.square(t)
t @ tf.transpose(t)
```
### Using `keras.backend`
```
from tensorflow import keras
K = keras.backend
K.square(K.transpose(t)) + 10
```
### Converting to/from NumPy
```
a = np.array([2., 4., 5.])
tf.constant(a)
t.numpy()
np.array(t)
tf.square(a)
np.square(t)
```
### Type conversions
```
try:
tf.constant(2.0) + tf.constant(40)
except tf.errors.InvalidArgumentError as ex:
print(ex)
try:
tf.constant(2.0) + tf.constant(40., dtype=tf.float64)
except tf.errors.InvalidArgumentError as ex:
print(ex)
t2 = tf.constant(40., dtype=tf.float64)
tf.constant(2.0) + tf.cast(t2, tf.float32)
```
### Strings
```
tf.constant(b"hello world")
tf.constant("café")
u = tf.constant([ord(c) for c in "café"])
u
b = tf.strings.unicode_encode(u, "UTF-8")
tf.strings.length(b, unit="UTF8_CHAR")
tf.strings.unicode_decode(b, "UTF-8")
```
### String arrays
```
p = tf.constant(["Café", "Coffee", "caffè", "咖啡"])
tf.strings.length(p, unit="UTF8_CHAR")
r = tf.strings.unicode_decode(p, "UTF8")
r
print(r)
```
### Ragged tensors
```
print(r[1])
print(r[1:3])
r2 = tf.ragged.constant([[65, 66], [], [67]])
print(tf.concat([r, r2], axis=0))
r3 = tf.ragged.constant([[68, 69, 70], [71], [], [72, 73]])
print(tf.concat([r, r3], axis=1))
tf.strings.unicode_encode(r3, "UTF-8")
r.to_tensor()
```
### Sparse tensors
```
s = tf.SparseTensor(indices=[[0, 1], [1, 0], [2, 3]],
values=[1., 2., 3.],
dense_shape=[3, 4])
print(s)
tf.sparse.to_dense(s)
s2 = s * 2.0
try:
s3 = s + 1.
except TypeError as ex:
print(ex)
s4 = tf.constant([[10., 20.], [30., 40.], [50., 60.], [70., 80.]])
tf.sparse.sparse_dense_matmul(s, s4)
s5 = tf.SparseTensor(indices=[[0, 2], [0, 1]],
values=[1., 2.],
dense_shape=[3, 4])
print(s5)
try:
tf.sparse.to_dense(s5)
except tf.errors.InvalidArgumentError as ex:
print(ex)
s6 = tf.sparse.reorder(s5)
tf.sparse.to_dense(s6)
```
### Sets
```
set1 = tf.constant([[2, 3, 5, 7], [7, 9, 0, 0]])
set2 = tf.constant([[4, 5, 6], [9, 10, 0]])
tf.sparse.to_dense(tf.sets.union(set1, set2))
tf.sparse.to_dense(tf.sets.difference(set1, set2))
tf.sparse.to_dense(tf.sets.intersection(set1, set2))
```
### Variables
```
v = tf.Variable([[1., 2., 3.], [4., 5., 6.]])
v.assign(2 * v)
v[0, 1].assign(42)
v[:, 2].assign([0., 1.])
try:
v[1] = [7., 8., 9.]
except TypeError as ex:
print(ex)
v.scatter_nd_update(indices=[[0, 0], [1, 2]],
updates=[100., 200.])
sparse_delta = tf.IndexedSlices(values=[[1., 2., 3.], [4., 5., 6.]],
indices=[1, 0])
v.scatter_update(sparse_delta)
```
### Tensor arrays
```
array = tf.TensorArray(dtype=tf.float32, size=3)
array = array.write(0, tf.constant([1., 2.]))
array = array.write(1, tf.constant([3., 10.]))
array = array.write(2, tf.constant([5., 7.]))
array.read(1)
array.stack()
mean, variance = tf.nn.moments(array.stack(), axes=0)
mean
variance
```
## Custom loss functions
Let's load and prepare the California housing dataset. We first load it, then split it into a training set, a validation set, and a test set. Finally, we scale it:
```
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
housing = fetch_california_housing()
X_train_full, X_test, y_train_full, y_test = train_test_split(
housing.data, housing.target.reshape(-1, 1), random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(
X_train_full, y_train_full, random_state=42)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_valid_scaled = scaler.transform(X_valid)
X_test_scaled = scaler.transform(X_test)
def huber_fn(y_true, y_pred):
error = y_true - y_pred
is_small_error = tf.abs(error) < 1
squared_loss = tf.square(error) / 2
linear_loss = tf.abs(error) - 0.5
return tf.where(is_small_error, squared_loss, linear_loss)
plt.figure(figsize=(8, 3.5))
z = np.linspace(-4, 4, 200)
plt.plot(z, huber_fn(0, z), "b-", linewidth=2, label="huber($z$)")
plt.plot(z, z**2 / 2, "b:", linewidth=1, label=r"$\frac{1}{2}z^2$")
plt.plot([-1, -1], [0, huber_fn(0., -1.)], "r--")
plt.plot([1, 1], [0, huber_fn(0., 1.)], "r--")
plt.gca().axhline(y=0, color='k')
plt.gca().axvline(x=0, color='k')
plt.axis([-4, 4, 0, 4])
plt.grid(True)
plt.xlabel("$z$")
plt.legend(fontsize=14)
plt.title("Huber loss", fontsize=14)
plt.show()
input_shape = X_train.shape[1:]
model = keras.models.Sequential([
keras.layers.Dense(30, activation="selu", kernel_initializer="lecun_normal",
input_shape=input_shape),
keras.layers.Dense(1),
])
model.compile(loss=huber_fn, optimizer="nadam", metrics=["mae"])
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
```
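As a quick sanity check on the formula, the same Huber loss (threshold 1) can be evaluated in plain Python. The `huber` function below is an illustrative re-implementation of `huber_fn` for a single error value:

```python
def huber(error, threshold=1.0):
    # Quadratic below the threshold, linear (with matching slope) above it
    if abs(error) < threshold:
        return error ** 2 / 2
    return threshold * abs(error) - threshold ** 2 / 2

print(huber(0.5))   # 0.125  -> quadratic region: 0.5**2 / 2
print(huber(3.0))   # 2.5    -> linear region: |3| - 0.5
```

The two branches meet smoothly at the threshold, which is what makes the Huber loss less sensitive to outliers than MSE while staying differentiable, unlike MAE at zero.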
## Saving and loading models that contain custom components
```
model.save("my_model_with_a_custom_loss.h5")
model = keras.models.load_model("my_model_with_a_custom_loss.h5",
custom_objects={"huber_fn": huber_fn})
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
def create_huber(threshold=1.0):
def huber_fn(y_true, y_pred):
error = y_true - y_pred
is_small_error = tf.abs(error) < threshold
squared_loss = tf.square(error) / 2
linear_loss = threshold * tf.abs(error) - threshold**2 / 2
return tf.where(is_small_error, squared_loss, linear_loss)
return huber_fn
model.compile(loss=create_huber(2.0), optimizer="nadam", metrics=["mae"])
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
model.save("my_model_with_a_custom_loss_threshold_2.h5")
model = keras.models.load_model("my_model_with_a_custom_loss_threshold_2.h5",
custom_objects={"huber_fn": create_huber(2.0)})
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
class HuberLoss(keras.losses.Loss):
def __init__(self, threshold=1.0, **kwargs):
self.threshold = threshold
super().__init__(**kwargs)
def call(self, y_true, y_pred):
error = y_true - y_pred
is_small_error = tf.abs(error) < self.threshold
squared_loss = tf.square(error) / 2
linear_loss = self.threshold * tf.abs(error) - self.threshold**2 / 2
return tf.where(is_small_error, squared_loss, linear_loss)
def get_config(self):
base_config = super().get_config()
return {**base_config, "threshold": self.threshold}
model = keras.models.Sequential([
keras.layers.Dense(30, activation="selu", kernel_initializer="lecun_normal",
input_shape=input_shape),
keras.layers.Dense(1),
])
model.compile(loss=HuberLoss(2.), optimizer="nadam", metrics=["mae"])
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
model.save("my_model_with_a_custom_loss_class.h5")
model = keras.models.load_model("my_model_with_a_custom_loss_class.h5",
custom_objects={"HuberLoss": HuberLoss})
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
model.loss.threshold
```
## Other custom functions
```
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
def my_softplus(z): # returns the same value as tf.nn.softplus(z)
return tf.math.log(tf.exp(z) + 1.0)
def my_glorot_initializer(shape, dtype=tf.float32):
stddev = tf.sqrt(2. / (shape[0] + shape[1]))
return tf.random.normal(shape, stddev=stddev, dtype=dtype)
def my_l1_regularizer(weights):
return tf.reduce_sum(tf.abs(0.01 * weights))
def my_positive_weights(weights): # returns the same value as tf.nn.relu(weights)
return tf.where(weights < 0., tf.zeros_like(weights), weights)
layer = keras.layers.Dense(1, activation=my_softplus,
kernel_initializer=my_glorot_initializer,
kernel_regularizer=my_l1_regularizer,
kernel_constraint=my_positive_weights)
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([
keras.layers.Dense(30, activation="selu", kernel_initializer="lecun_normal",
input_shape=input_shape),
keras.layers.Dense(1, activation=my_softplus,
kernel_regularizer=my_l1_regularizer,
kernel_constraint=my_positive_weights,
kernel_initializer=my_glorot_initializer),
])
model.compile(loss="mse", optimizer="nadam", metrics=["mae"])
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
model.save("my_model_with_many_custom_parts.h5")
model = keras.models.load_model(
"my_model_with_many_custom_parts.h5",
custom_objects={
"my_l1_regularizer": my_l1_regularizer,
"my_positive_weights": my_positive_weights,
"my_glorot_initializer": my_glorot_initializer,
"my_softplus": my_softplus,
})
class MyL1Regularizer(keras.regularizers.Regularizer):
def __init__(self, factor):
self.factor = factor
def __call__(self, weights):
return tf.reduce_sum(tf.abs(self.factor * weights))
def get_config(self):
return {"factor": self.factor}
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([
keras.layers.Dense(30, activation="selu", kernel_initializer="lecun_normal",
input_shape=input_shape),
keras.layers.Dense(1, activation=my_softplus,
kernel_regularizer=MyL1Regularizer(0.01),
kernel_constraint=my_positive_weights,
kernel_initializer=my_glorot_initializer),
])
model.compile(loss="mse", optimizer="nadam", metrics=["mae"])
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
model.save("my_model_with_many_custom_parts.h5")
model = keras.models.load_model(
"my_model_with_many_custom_parts.h5",
custom_objects={
"MyL1Regularizer": MyL1Regularizer,
"my_positive_weights": my_positive_weights,
"my_glorot_initializer": my_glorot_initializer,
"my_softplus": my_softplus,
})
```
## Custom metrics
```
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([
keras.layers.Dense(30, activation="selu", kernel_initializer="lecun_normal",
input_shape=input_shape),
keras.layers.Dense(1),
])
model.compile(loss="mse", optimizer="nadam", metrics=[create_huber(2.0)])
model.fit(X_train_scaled, y_train, epochs=2)
```
**Note**: if you use the same function as the loss and a metric, you may see different results. This is generally just due to floating-point precision errors: even though the mathematical equations are equivalent, the operations are not run in the same order, which can lead to small differences. Moreover, when using sample weights, there is more than just precision error at play:
* the loss since the start of the epoch is the mean of all batch losses seen so far, where each batch loss is the sum of the weighted instance losses divided by the _batch size_ (not by the sum of the sample weights, so the batch loss is _not_ the weighted mean of the losses).
* the metric since the start of the epoch is the sum of the weighted instance losses divided by the sum of all sample weights seen so far. In other words, it is the weighted mean of all the instance losses. So it is not the same thing.
Mathematically speaking, loss = metric * mean of sample weights (plus some floating-point precision error).
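That last identity is easy to verify on a single toy batch (made-up numbers; this mirrors how Keras averages a batch, it is not Keras code):

```python
losses = [0.5, 2.0, 1.0, 4.0]    # per-sample losses (made up)
weights = [0.2, 1.0, 0.5, 0.8]   # sample weights (made up)
n = len(losses)

weighted_sum = sum(w * l for w, l in zip(weights, losses))
batch_loss = weighted_sum / n              # loss: divided by the batch size
metric = weighted_sum / sum(weights)       # metric: a true weighted mean
mean_weight = sum(weights) / n

# loss == metric * mean(sample_weight), exactly, for a single batch
print(batch_loss, metric * mean_weight)
```

Across multiple batches the identity only holds approximately, because the loss averages per-batch means while the metric keeps global running sums.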
```
model.compile(loss=create_huber(2.0), optimizer="nadam", metrics=[create_huber(2.0)])
sample_weight = np.random.rand(len(y_train))
history = model.fit(X_train_scaled, y_train, epochs=2, sample_weight=sample_weight)
history.history["loss"][0], history.history["huber_fn"][0] * sample_weight.mean()
```
### Streaming metrics
```
precision = keras.metrics.Precision()
precision([0, 1, 1, 1, 0, 1, 0, 1], [1, 1, 0, 1, 0, 1, 0, 1])
precision([0, 1, 0, 0, 1, 0, 1, 1], [1, 0, 1, 1, 0, 0, 0, 0])
precision.result()
precision.variables
precision.reset_states()
```
Creating a streaming metric:
```
class HuberMetric(keras.metrics.Metric):
def __init__(self, threshold=1.0, **kwargs):
super().__init__(**kwargs) # handles base args (e.g., dtype)
self.threshold = threshold
self.huber_fn = create_huber(threshold)
self.total = self.add_weight("total", initializer="zeros")
self.count = self.add_weight("count", initializer="zeros")
def update_state(self, y_true, y_pred, sample_weight=None):
metric = self.huber_fn(y_true, y_pred)
self.total.assign_add(tf.reduce_sum(metric))
self.count.assign_add(tf.cast(tf.size(y_true), tf.float32))
def result(self):
return self.total / self.count
def get_config(self):
base_config = super().get_config()
return {**base_config, "threshold": self.threshold}
m = HuberMetric(2.)
# total = 2 * |10 - 2| - 2²/2 = 14
# count = 1
# result = 14 / 1 = 14
m(tf.constant([[2.]]), tf.constant([[10.]]))
# total = total + (|1 - 0|² / 2) + (2 * |9.25 - 5| - 2² / 2) = 14 + 7 = 21
# count = count + 2 = 3
# result = total / count = 21 / 3 = 7
m(tf.constant([[0.], [5.]]), tf.constant([[1.], [9.25]]))
m.result()
m.variables
m.reset_states()
m.variables
```
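The bookkeeping inside `HuberMetric` reduces to a running total and a running count. Here is a minimal pure-Python sketch of that streaming idea (not a Keras API), using the same numbers as the comments above:

```python
class StreamingMean:
    # Minimal stand-in for the total/count pair that HuberMetric maintains
    def __init__(self):
        self.total = 0.0
        self.count = 0
    def update(self, values):
        self.total += sum(values)
        self.count += len(values)
    def result(self):
        return self.total / self.count

m = StreamingMean()
m.update([14.0])        # first batch:  total=14, count=1 -> result 14
m.update([0.5, 6.5])    # second batch: total=21, count=3 -> result 7
print(m.result())       # 7.0
```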
Let's check that the `HuberMetric` class works well:
```
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([
keras.layers.Dense(30, activation="selu", kernel_initializer="lecun_normal",
input_shape=input_shape),
keras.layers.Dense(1),
])
model.compile(loss=create_huber(2.0), optimizer="nadam", metrics=[HuberMetric(2.0)])
model.fit(X_train_scaled.astype(np.float32), y_train.astype(np.float32), epochs=2)
model.save("my_model_with_a_custom_metric.h5")
model = keras.models.load_model("my_model_with_a_custom_metric.h5",
custom_objects={"huber_fn": create_huber(2.0),
"HuberMetric": HuberMetric})
model.fit(X_train_scaled.astype(np.float32), y_train.astype(np.float32), epochs=2)
```
**Warning**: in TensorFlow 2.2, tf.keras adds an extra metric at position 0 in `model.metrics` (see [TF issue #38150](https://github.com/tensorflow/tensorflow/issues/38150)). To access the `HuberMetric`, you must therefore use `model.metrics[-1]` instead of `model.metrics[0]`.
```
model.metrics[-1].threshold
```
Looks like it works fine! More simply, we could have created the class like this:
```
class HuberMetric(keras.metrics.Mean):
def __init__(self, threshold=1.0, name='HuberMetric', dtype=None):
self.threshold = threshold
self.huber_fn = create_huber(threshold)
super().__init__(name=name, dtype=dtype)
def update_state(self, y_true, y_pred, sample_weight=None):
metric = self.huber_fn(y_true, y_pred)
super(HuberMetric, self).update_state(metric, sample_weight)
def get_config(self):
base_config = super().get_config()
return {**base_config, "threshold": self.threshold}
```
This class handles shapes better, and it also supports sample weights.
```
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([
keras.layers.Dense(30, activation="selu", kernel_initializer="lecun_normal",
input_shape=input_shape),
keras.layers.Dense(1),
])
model.compile(loss=keras.losses.Huber(2.0), optimizer="nadam", weighted_metrics=[HuberMetric(2.0)])
sample_weight = np.random.rand(len(y_train))
history = model.fit(X_train_scaled.astype(np.float32), y_train.astype(np.float32),
epochs=2, sample_weight=sample_weight)
history.history["loss"][0], history.history["HuberMetric"][0] * sample_weight.mean()
model.save("my_model_with_a_custom_metric_v2.h5")
model = keras.models.load_model("my_model_with_a_custom_metric_v2.h5",
custom_objects={"HuberMetric": HuberMetric})
model.fit(X_train_scaled.astype(np.float32), y_train.astype(np.float32), epochs=2)
model.metrics[-1].threshold
```
## Custom layers
```
exponential_layer = keras.layers.Lambda(lambda x: tf.exp(x))
exponential_layer([-1., 0., 1.])
```
Adding an exponential layer at the output of a regression model can be useful when the values to predict are positive and have very different scales (e.g., 0.001, 10., 10000):
```
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([
keras.layers.Dense(30, activation="relu", input_shape=input_shape),
keras.layers.Dense(1),
exponential_layer
])
model.compile(loss="mse", optimizer="sgd")
model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
class MyDense(keras.layers.Layer):
def __init__(self, units, activation=None, **kwargs):
super().__init__(**kwargs)
self.units = units
self.activation = keras.activations.get(activation)
def build(self, batch_input_shape):
self.kernel = self.add_weight(
name="kernel", shape=[batch_input_shape[-1], self.units],
initializer="glorot_normal")
self.bias = self.add_weight(
name="bias", shape=[self.units], initializer="zeros")
super().build(batch_input_shape) # must be at the end
def call(self, X):
return self.activation(X @ self.kernel + self.bias)
def compute_output_shape(self, batch_input_shape):
return tf.TensorShape(batch_input_shape.as_list()[:-1] + [self.units])
def get_config(self):
base_config = super().get_config()
return {**base_config, "units": self.units,
"activation": keras.activations.serialize(self.activation)}
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([
MyDense(30, activation="relu", input_shape=input_shape),
MyDense(1)
])
model.compile(loss="mse", optimizer="nadam")
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
model.save("my_model_with_a_custom_layer.h5")
model = keras.models.load_model("my_model_with_a_custom_layer.h5",
custom_objects={"MyDense": MyDense})
class MyMultiLayer(keras.layers.Layer):
def call(self, X):
X1, X2 = X
print("X1.shape: ", X1.shape, " X2.shape: ", X2.shape) # debugging the custom layer
return X1 + X2, X1 * X2
def compute_output_shape(self, batch_input_shape):
batch_input_shape1, batch_input_shape2 = batch_input_shape
return [batch_input_shape1, batch_input_shape2]
```
Our custom layer can be called using the functional API like this:
```
inputs1 = keras.layers.Input(shape=[2])
inputs2 = keras.layers.Input(shape=[2])
outputs1, outputs2 = MyMultiLayer()((inputs1, inputs2))
```
Note that the `call()` method receives symbolic inputs, whose shape is only partially specified (at this stage, we don't know the batch size, which is why the first dimension is None):
We can also pass actual data to the custom layer. To test this, let's split each dataset's inputs into two parts, with four features each:
```
def split_data(data):
columns_count = data.shape[-1]
half = columns_count // 2
return data[:, :half], data[:, half:]
X_train_scaled_A, X_train_scaled_B = split_data(X_train_scaled)
X_valid_scaled_A, X_valid_scaled_B = split_data(X_valid_scaled)
X_test_scaled_A, X_test_scaled_B = split_data(X_test_scaled)
# Print the shapes of the split data
X_train_scaled_A.shape, X_train_scaled_B.shape
```
Now the shapes are fully specified:
```
outputs1, outputs2 = MyMultiLayer()((X_train_scaled_A, X_train_scaled_B))
```
Let's build a more complete model using the functional API (this is just a toy example, so don't expect amazing performance):
```
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
input_A = keras.layers.Input(shape=X_train_scaled_A.shape[-1])
input_B = keras.layers.Input(shape=X_train_scaled_B.shape[-1])
hidden_A, hidden_B = MyMultiLayer()((input_A, input_B))
hidden_A = keras.layers.Dense(30, activation='selu')(hidden_A)
hidden_B = keras.layers.Dense(30, activation='selu')(hidden_B)
concat = keras.layers.Concatenate()((hidden_A, hidden_B))
output = keras.layers.Dense(1)(concat)
model = keras.models.Model(inputs=[input_A, input_B], outputs=[output])
model.compile(loss='mse', optimizer='nadam')
model.fit((X_train_scaled_A, X_train_scaled_B), y_train, epochs=2,
validation_data=((X_valid_scaled_A, X_valid_scaled_B), y_valid))
```
Now let's create a layer with a different behavior during training and testing:
```
class AddGaussianNoise(keras.layers.Layer):
def __init__(self, stddev, **kwargs):
super().__init__(**kwargs)
self.stddev = stddev
def call(self, X, training=None):
if training:
noise = tf.random.normal(tf.shape(X), stddev=self.stddev)
return X + noise
else:
return X
def compute_output_shape(self, batch_input_shape):
return batch_input_shape
```
Here's a simple model that uses this custom layer:
```
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([
AddGaussianNoise(stddev=1.0),
keras.layers.Dense(30, activation="selu"),
keras.layers.Dense(1)
])
model.compile(loss="mse", optimizer="nadam")
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
```
## Custom models
```
X_new_scaled = X_test_scaled
class ResidualBlock(keras.layers.Layer):
def __init__(self, n_layers, n_neurons, **kwargs):
super().__init__(**kwargs)
self.hidden = [keras.layers.Dense(n_neurons, activation="elu",
kernel_initializer="he_normal")
for _ in range(n_layers)]
def call(self, inputs):
Z = inputs
for layer in self.hidden:
Z = layer(Z)
return inputs + Z
class ResidualRegressor(keras.models.Model):
def __init__(self, output_dim, **kwargs):
super().__init__(**kwargs)
self.hidden1 = keras.layers.Dense(30, activation="elu",
kernel_initializer="he_normal")
self.block1 = ResidualBlock(2, 30)
self.block2 = ResidualBlock(2, 30)
self.out = keras.layers.Dense(output_dim)
def call(self, inputs):
Z = self.hidden1(inputs)
for _ in range(1 + 3):
Z = self.block1(Z)
Z = self.block2(Z)
return self.out(Z)
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = ResidualRegressor(1)
model.compile(loss="mse", optimizer="nadam")
history = model.fit(X_train_scaled, y_train, epochs=5)
score = model.evaluate(X_test_scaled, y_test)
y_pred = model.predict(X_new_scaled)
model.save("my_custom_model.ckpt")
model = keras.models.load_model("my_custom_model.ckpt")
history = model.fit(X_train_scaled, y_train, epochs=5)
```
We could have defined the same model using the sequential API instead:
```
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
block1 = ResidualBlock(2, 30)
model = keras.models.Sequential([
keras.layers.Dense(30, activation="elu", kernel_initializer="he_normal"),
block1, block1, block1, block1,
ResidualBlock(2, 30),
keras.layers.Dense(1)
])
model.compile(loss="mse", optimizer="nadam")
history = model.fit(X_train_scaled, y_train, epochs=5)
score = model.evaluate(X_test_scaled, y_test)
y_pred = model.predict(X_new_scaled)
```
## Losses and metrics based on model internals
**Note**: the following code differs from the book in two ways:
1. It creates a `keras.metrics.Mean()` metric in the constructor and uses it in the `call()` method to track the mean reconstruction loss. Since we only want to do this during training, we add a `training` argument to the `call()` method, and if `training` is `True`, we update `reconstruction_mean` and call `self.add_metric()`.
2. Due to an issue in TF 2.2 ([#46858](https://github.com/tensorflow/tensorflow/issues/46858)), we must not call `super().build()` inside the `build()` method.
```
class ReconstructingRegressor(keras.Model):
def __init__(self, output_dim, **kwargs):
super().__init__(**kwargs)
self.hidden = [keras.layers.Dense(30, activation="selu",
kernel_initializer="lecun_normal")
for _ in range(5)]
self.out = keras.layers.Dense(output_dim)
self.reconstruction_mean = keras.metrics.Mean(name="reconstruction_error")
def build(self, batch_input_shape):
n_inputs = batch_input_shape[-1]
self.reconstruct = keras.layers.Dense(n_inputs)
#super().build(batch_input_shape)
def call(self, inputs, training=None):
Z = inputs
for layer in self.hidden:
Z = layer(Z)
reconstruction = self.reconstruct(Z)
self.recon_loss = 0.05 * tf.reduce_mean(tf.square(reconstruction - inputs))
if training:
result = self.reconstruction_mean(self.recon_loss)
self.add_metric(result)
return self.out(Z)
def train_step(self, data):
x, y = data
with tf.GradientTape() as tape:
y_pred = self(x)
loss = self.compiled_loss(y, y_pred, regularization_losses=[self.recon_loss])
gradients = tape.gradient(loss, self.trainable_variables)
self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
return {m.name: m.result() for m in self.metrics}
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = ReconstructingRegressor(1)
model.compile(loss="mse", optimizer="nadam")
history = model.fit(X_train_scaled, y_train, epochs=2)
y_pred = model.predict(X_test_scaled)
```
## Computing gradients using autodiff
```
def f(w1, w2):
return 3 * w1 ** 2 + 2 * w1 * w2
w1, w2 = 5, 3
eps = 1e-6
(f(w1 + eps, w2) - f(w1, w2)) / eps
(f(w1, w2 + eps) - f(w1, w2)) / eps
w1, w2 = tf.Variable(5.), tf.Variable(3.)
with tf.GradientTape() as tape:
z = f(w1, w2)
gradients = tape.gradient(z, [w1, w2])
gradients
with tf.GradientTape() as tape:
z = f(w1, w2)
dz_dw1 = tape.gradient(z, w1)
try:
dz_dw2 = tape.gradient(z, w2)
except RuntimeError as ex:
print(ex)
with tf.GradientTape(persistent=True) as tape:
z = f(w1, w2)
dz_dw1 = tape.gradient(z, w1)
dz_dw2 = tape.gradient(z, w2) # works now!
del tape
dz_dw1, dz_dw2
c1, c2 = tf.constant(5.), tf.constant(3.)
with tf.GradientTape() as tape:
z = f(c1, c2)
gradients = tape.gradient(z, [c1, c2])
gradients
with tf.GradientTape() as tape:
tape.watch(c1)
tape.watch(c2)
z = f(c1, c2)
gradients = tape.gradient(z, [c1, c2])
gradients
with tf.GradientTape() as tape:
z1 = f(w1, w2 + 2.)
z2 = f(w1, w2 + 5.)
z3 = f(w1, w2 + 7.)
tape.gradient([z1, z2, z3], [w1, w2])
with tf.GradientTape(persistent=True) as tape:
z1 = f(w1, w2 + 2.)
z2 = f(w1, w2 + 5.)
z3 = f(w1, w2 + 7.)
tf.reduce_sum(tf.stack([tape.gradient(z, [w1, w2]) for z in (z1, z2, z3)]), axis=0)
del tape
with tf.GradientTape(persistent=True) as hessian_tape:
with tf.GradientTape() as jacobian_tape:
z = f(w1, w2)
jacobians = jacobian_tape.gradient(z, [w1, w2])
hessians = [hessian_tape.gradient(jacobian, [w1, w2])
for jacobian in jacobians]
del hessian_tape
jacobians
hessians
def f(w1, w2):
return 3 * w1 ** 2 + tf.stop_gradient(2 * w1 * w2)
with tf.GradientTape() as tape:
z = f(w1, w2)
tape.gradient(z, [w1, w2])
x = tf.Variable(100.)
with tf.GradientTape() as tape:
z = my_softplus(x)
tape.gradient(z, [x])
tf.math.log(tf.exp(tf.constant(30., dtype=tf.float32)) + 1.)
x = tf.Variable([100.])
with tf.GradientTape() as tape:
z = my_softplus(x)
tape.gradient(z, [x])
@tf.custom_gradient
def my_better_softplus(z):
exp = tf.exp(z)
def my_softplus_gradients(grad):
return grad / (1 + 1 / exp)
return tf.math.log(exp + 1), my_softplus_gradients
def my_better_softplus(z):
return tf.where(z > 30., z, tf.math.log(tf.exp(z) + 1.))
x = tf.Variable([1000.])
with tf.GradientTape() as tape:
z = my_better_softplus(x)
z, tape.gradient(z, [x])
```
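The numerical trick in `my_better_softplus` — returning `z` itself once `z` is large, since softplus(z) ≈ z there — can be illustrated with plain Python floats (a sketch, not the TensorFlow code):

```python
import math

def naive_softplus(z):
    return math.log(math.exp(z) + 1.0)   # math.exp overflows for large z

def stable_softplus(z):
    # For z > 30, exp(z) + 1 is indistinguishable from exp(z) in float64,
    # so softplus(z) is effectively z (same idea as the tf.where above)
    return z if z > 30.0 else math.log(math.exp(z) + 1.0)

print(stable_softplus(1000.0))   # 1000.0
# naive_softplus(1000.0) would raise OverflowError
```

The stable branch also has a well-defined gradient of 1 for large inputs, which is exactly why the `tf.where` version avoids the NaN gradients seen earlier.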
# Custom training loops
```
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
l2_reg = keras.regularizers.l2(0.05)
model = keras.models.Sequential([
keras.layers.Dense(30, activation="elu", kernel_initializer="he_normal",
kernel_regularizer=l2_reg),
keras.layers.Dense(1, kernel_regularizer=l2_reg)
])
def random_batch(X, y, batch_size=32):
idx = np.random.randint(len(X), size=batch_size)
return X[idx], y[idx]
def print_status_bar(iteration, total, loss, metrics=None):
metrics = " - ".join(["{}: {:.4f}".format(m.name, m.result())
for m in [loss] + (metrics or [])])
end = "" if iteration < total else "\n"
print("\r{}/{} - ".format(iteration, total) + metrics,
end=end)
import time
mean_loss = keras.metrics.Mean(name="loss")
mean_square = keras.metrics.Mean(name="mean_square")
for i in range(1, 50 + 1):
loss = 1 / i
mean_loss(loss)
mean_square(i ** 2)
print_status_bar(i, 50, mean_loss, [mean_square])
time.sleep(0.05)
```
A fancier version with a progress bar:
```
def progress_bar(iteration, total, size=30):
running = iteration < total
c = ">" if running else "="
p = (size - 1) * iteration // total
fmt = "{{:-{}d}}/{{}} [{{}}]".format(len(str(total)))
params = [iteration, total, "=" * p + c + "." * (size - p - 1)]
return fmt.format(*params)
progress_bar(3500, 10000, size=6)
def print_status_bar(iteration, total, loss, metrics=None, size=30):
metrics = " - ".join(["{}: {:.4f}".format(m.name, m.result())
for m in [loss] + (metrics or [])])
end = "" if iteration < total else "\n"
print("\r{} - {}".format(progress_bar(iteration, total), metrics), end=end)
mean_loss = keras.metrics.Mean(name="loss")
mean_square = keras.metrics.Mean(name="mean_square")
for i in range(1, 50 + 1):
loss = 1 / i
mean_loss(loss)
mean_square(i ** 2)
print_status_bar(i, 50, mean_loss, [mean_square])
time.sleep(0.05)
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
n_epochs = 5
batch_size = 32
n_steps = len(X_train) // batch_size
optimizer = keras.optimizers.Nadam(learning_rate=0.01)
loss_fn = keras.losses.mean_squared_error
mean_loss = keras.metrics.Mean()
metrics = [keras.metrics.MeanAbsoluteError()]
for epoch in range(1, n_epochs + 1):
print("Epoch {}/{}".format(epoch, n_epochs))
for step in range(1, n_steps + 1):
X_batch, y_batch = random_batch(X_train_scaled, y_train)
with tf.GradientTape() as tape:
y_pred = model(X_batch)
main_loss = tf.reduce_mean(loss_fn(y_batch, y_pred))
loss = tf.add_n([main_loss] + model.losses)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
for variable in model.variables:
if variable.constraint is not None:
variable.assign(variable.constraint(variable))
mean_loss(loss)
for metric in metrics:
metric(y_batch, y_pred)
print_status_bar(step * batch_size, len(y_train), mean_loss, metrics)
print_status_bar(len(y_train), len(y_train), mean_loss, metrics)
for metric in [mean_loss] + metrics:
metric.reset_states()
try:
from tqdm.notebook import trange
from collections import OrderedDict
with trange(1, n_epochs + 1, desc="All epochs") as epochs:
for epoch in epochs:
with trange(1, n_steps + 1, desc="Epoch {}/{}".format(epoch, n_epochs)) as steps:
for step in steps:
X_batch, y_batch = random_batch(X_train_scaled, y_train)
with tf.GradientTape() as tape:
y_pred = model(X_batch)
main_loss = tf.reduce_mean(loss_fn(y_batch, y_pred))
loss = tf.add_n([main_loss] + model.losses)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
for variable in model.variables:
if variable.constraint is not None:
variable.assign(variable.constraint(variable))
status = OrderedDict()
mean_loss(loss)
status["loss"] = mean_loss.result().numpy()
for metric in metrics:
metric(y_batch, y_pred)
status[metric.name] = metric.result().numpy()
steps.set_postfix(status)
for metric in [mean_loss] + metrics:
metric.reset_states()
except ImportError as ex:
print("To run this cell, please install tqdm, ipywidgets and restart Jupyter")
```
## TensorFlow Functions
```
def cube(x):
return x ** 3
cube(2)
cube(tf.constant(2.0))
tf_cube = tf.function(cube)
tf_cube
tf_cube(2)
tf_cube(tf.constant(2.0))
```
### TF Functions and Concrete Functions
```
concrete_function = tf_cube.get_concrete_function(tf.constant(2.0))
concrete_function.graph
concrete_function(tf.constant(2.0))
concrete_function is tf_cube.get_concrete_function(tf.constant(2.0))
```
### Exploring function definitions and graphs
```
concrete_function.graph
ops = concrete_function.graph.get_operations()
ops
pow_op = ops[2]
list(pow_op.inputs)
pow_op.outputs
concrete_function.graph.get_operation_by_name('x')
concrete_function.graph.get_tensor_by_name('Identity:0')
concrete_function.function_def.signature
```
### How TF Functions trace Python functions to extract their computation graphs
```
@tf.function
def tf_cube(x):
print("print:", x)
return x ** 3
result = tf_cube(tf.constant(2.0))
result
result = tf_cube(2)
result = tf_cube(3)
result = tf_cube(tf.constant([[1., 2.]])) # New shape: trace!
result = tf_cube(tf.constant([[3., 4.], [5., 6.]])) # New shape: trace!
result = tf_cube(tf.constant([[7., 8.], [9., 10.], [11., 12.]])) # New shape: trace!
```
It is also possible to specify a particular input signature:
```
@tf.function(input_signature=[tf.TensorSpec([None, 28, 28], tf.float32)])
def shrink(images):
    print("Tracing", images)
    return images[:, ::2, ::2]  # drop half the rows and columns
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
img_batch_1 = tf.random.uniform(shape=[100, 28, 28])
img_batch_2 = tf.random.uniform(shape=[50, 28, 28])
preprocessed_images = shrink(img_batch_1)  # traces the function
preprocessed_images = shrink(img_batch_2)  # reuses the same concrete function
img_batch_3 = tf.random.uniform(shape=[2, 2, 2])
try:
    preprocessed_images = shrink(img_batch_3)  # rejects non-matching types or shapes
except ValueError as ex:
    print(ex)
```
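The shape-based caching above can be mimicked in plain Python. This is only an analogy (no TensorFlow involved; `with_signature` and `trace_log` are made-up names): the cache key wildcards the leading batch dimension, so one "traced" copy of the function serves every batch size that shares the same trailing dimensions.

```python
trace_log = []

def with_signature(fn):
    """Cache one 'traced' copy of fn per (None, *trailing_dims) signature."""
    cache = {}
    def wrapper(shape):
        key = (None,) + tuple(shape[1:])  # the batch dimension is wildcarded
        if key not in cache:
            trace_log.append(key)         # a real tf.function would trace here
            cache[key] = fn
        return cache[key](shape)
    return wrapper

@with_signature
def shrink(shape):
    # drop every other row and column, like images[:, ::2, ::2]
    return (shape[0], shape[1] // 2, shape[2] // 2)

print(shrink((100, 28, 28)))  # first call with this signature: "traces"
print(shrink((50, 28, 28)))   # same trailing dims: reuses the cached copy
```

Calling with a shape whose trailing dimensions differ would miss the cache, just as `shrink(img_batch_3)` above triggers an error rather than a silent retrace.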
### Using AutoGraph to Capture Control Flow
A static `for` loop using `range()`:
```
@tf.function
def add_10(x):
    for i in range(10):
        x += 1
    return x
add_10(tf.constant(5))
add_10.get_concrete_function(tf.constant(5)).graph.get_operations()
```
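To see why this `for` loop is called static, here is a plain-Python analogy (not TensorFlow; `recorded_ops` is an illustrative stand-in for the traced graph): the loop body executes at trace time, so the resulting graph contains one addition op per iteration rather than a loop construct.

```python
recorded_ops = []  # stand-in for the ops added to the traced graph

def add_10_traced(x):
    for i in range(10):             # runs during tracing
        recorded_ops.append("add")  # each iteration leaves one op behind
        x += 1
    return x

result = add_10_traced(5)
print(result, len(recorded_ops))  # 15, with 10 unrolled "add" ops
```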
A dynamic loop using `tf.while_loop()`:
```
@tf.function
def add_10(x):
    condition = lambda i, x: tf.less(i, 10)
    body = lambda i, x: (tf.add(i, 1), tf.add(x, 1))
    final_i, final_x = tf.while_loop(condition, body, [tf.constant(0), x])
    return final_x
add_10(tf.constant(5))
add_10.get_concrete_function(tf.constant(5)).graph.get_operations()
```
A dynamic `for` loop using `tf.range()` (handled by AutoGraph):
```
@tf.function
def add_10(x):
    for i in tf.range(10):
        x = x + 1
    return x
add_10.get_concrete_function(tf.constant(0)).graph.get_operations()
```
### Handling Variables and Other Resources in TF Functions
```
counter = tf.Variable(0)
@tf.function
def increment(counter, c=1):
    return counter.assign_add(c)
increment(counter)
increment(counter)
function_def = increment.get_concrete_function(counter).function_def
function_def.signature.input_arg[0]
counter = tf.Variable(0)
@tf.function
def increment(c=1):
    return counter.assign_add(c)
increment()
increment()
function_def = increment.get_concrete_function().function_def
function_def.signature.input_arg[0]
class Counter:
    def __init__(self):
        self.counter = tf.Variable(0)
    @tf.function
    def increment(self, c=1):
        return self.counter.assign_add(c)
c = Counter()
c.increment()
c.increment()
@tf.function
def add_10(x):
    for i in tf.range(10):
        x += 1
    return x
print(tf.autograph.to_code(add_10.python_function))
def display_tf_code(func):
    from IPython.display import display, Markdown
    if hasattr(func, "python_function"):
        func = func.python_function
    code = tf.autograph.to_code(func)
    display(Markdown('```python\n{}\n```'.format(code)))
display_tf_code(add_10)
```
## Using TF Functions with tf.keras (or Not)
By default, tf.keras automatically converts your custom code into TF Functions, so there is no need to use `tf.function()`:
```
# Custom loss function
def my_mse(y_true, y_pred):
    print("Tracing loss my_mse()")
    return tf.reduce_mean(tf.square(y_pred - y_true))
# Custom metric function
def my_mae(y_true, y_pred):
    print("Tracing metric my_mae()")
    return tf.reduce_mean(tf.abs(y_pred - y_true))
# Custom layer
class MyDense(keras.layers.Layer):
    def __init__(self, units, activation=None, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.activation = keras.activations.get(activation)
    def build(self, input_shape):
        self.kernel = self.add_weight(name='kernel',
                                      shape=(input_shape[1], self.units),
                                      initializer='uniform',
                                      trainable=True)
        self.biases = self.add_weight(name='bias',
                                      shape=(self.units,),
                                      initializer='zeros',
                                      trainable=True)
        super().build(input_shape)
    def call(self, X):
        print("Tracing MyDense.call()")
        return self.activation(X @ self.kernel + self.biases)
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# Custom model
class MyModel(keras.models.Model):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.hidden1 = MyDense(30, activation="relu")
        self.hidden2 = MyDense(30, activation="relu")
        self.output_ = MyDense(1)
    def call(self, input):
        print("Tracing MyModel.call()")
        hidden1 = self.hidden1(input)
        hidden2 = self.hidden2(hidden1)
        concat = keras.layers.concatenate([input, hidden2])
        output = self.output_(concat)
        return output
model = MyModel()
model.compile(loss=my_mse, optimizer="nadam", metrics=[my_mae])
model.fit(X_train_scaled, y_train, epochs=2,
          validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
```
You can turn this feature off by creating the model with `dynamic=True` (or by calling `super().__init__(dynamic=True, **kwargs)` in the model's constructor):
```
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = MyModel(dynamic=True)
model.compile(loss=my_mse, optimizer="nadam", metrics=[my_mae])
```
The custom code will now be called at every iteration. To avoid too much output, let's train, validate, and evaluate on small datasets:
```
model.fit(X_train_scaled[:64], y_train[:64], epochs=1,
          validation_data=(X_valid_scaled[:64], y_valid[:64]), verbose=0)
model.evaluate(X_test_scaled[:64], y_test[:64], verbose=0)
```
Alternatively, you can pass `run_eagerly=True` when compiling the model:
```
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = MyModel()
model.compile(loss=my_mse, optimizer="nadam", metrics=[my_mae], run_eagerly=True)
model.fit(X_train_scaled[:64], y_train[:64], epochs=1,
          validation_data=(X_valid_scaled[:64], y_valid[:64]), verbose=0)
model.evaluate(X_test_scaled[:64], y_test[:64], verbose=0)
```
## Custom Optimizers
Defining custom optimizers is not very common, but if you ever find yourself in a situation where you must build one, refer to the following example:
```
class MyMomentumOptimizer(keras.optimizers.Optimizer):
    def __init__(self, learning_rate=0.001, momentum=0.9, name="MyMomentumOptimizer", **kwargs):
        """Call super().__init__() and use _set_hyper() to store hyperparameters"""
        super().__init__(name, **kwargs)
        self._set_hyper("learning_rate", kwargs.get("lr", learning_rate))  # handle lr=learning_rate
        self._set_hyper("decay", self._initial_decay)
        self._set_hyper("momentum", momentum)
    def _create_slots(self, var_list):
        """For each model variable, create the optimizer variable associated with it.
        TensorFlow calls these optimizer variables 'slots'.
        For momentum optimization, we need one momentum slot per model variable.
        """
        for var in var_list:
            self.add_slot(var, "momentum")
    @tf.function
    def _resource_apply_dense(self, grad, var):
        """Update the slots and perform one optimizer step for one model variable.
        """
        var_dtype = var.dtype.base_dtype
        lr_t = self._decayed_lr(var_dtype)  # handle learning rate decay
        momentum_var = self.get_slot(var, "momentum")
        momentum_hyper = self._get_hyper("momentum", var_dtype)
        momentum_var.assign(momentum_var * momentum_hyper - (1. - momentum_hyper) * grad)
        var.assign_add(momentum_var * lr_t)
    def _resource_apply_sparse(self, grad, var):
        raise NotImplementedError
    def get_config(self):
        base_config = super().get_config()
        return {
            **base_config,
            "learning_rate": self._serialize_hyperparameter("learning_rate"),
            "decay": self._serialize_hyperparameter("decay"),
            "momentum": self._serialize_hyperparameter("momentum"),
        }
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([keras.layers.Dense(1, input_shape=[8])])
model.compile(loss="mse", optimizer=MyMomentumOptimizer())
model.fit(X_train_scaled, y_train, epochs=5)
```
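As a sanity check on the update rule in `_resource_apply_dense()`, here is the same momentum update in plain Python, minimizing f(w) = (w − 3)². This is only a sketch with illustrative names (`w` for the model parameter, `m` for its momentum slot), not code from the book:

```python
lr, beta = 0.1, 0.9   # learning_rate and momentum hyperparameters
w, m = 0.0, 0.0       # one model parameter and its momentum slot

for _ in range(200):
    grad = 2 * (w - 3)                # df/dw for f(w) = (w - 3)**2
    m = beta * m - (1 - beta) * grad  # slot update, as in the optimizer above
    w += lr * m                       # parameter step: var.assign_add(m * lr)

print(w)  # converges close to the minimum at w = 3
```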
# Exercises
## 1. to 11.
See Appendix A.
## 12. Implement a custom layer that performs _Layer Normalization_.
_We will use this type of layer in Chapter 15 when using Recurrent Neural Networks._
### a.
_Exercise: The `build()` method should define two trainable weights *α* and *β*, both of shape `input_shape[-1:]` and data type `tf.float32`. *α* should be initialized with 1s, and *β* with 0s._
Solution: see below.
### b.
_Exercise: The `call()` method should compute the mean μ and standard deviation σ of each instance's features. For this, you can use `tf.nn.moments(inputs, axes=-1, keepdims=True)`, which returns the mean μ and the variance σ<sup>2</sup> of all instances (compute the square root of the variance to get the standard deviation). Then the function should compute and return *α*⊗(*X* - μ)/(σ + ε) + *β*, where ⊗ represents itemwise multiplication (`*`) and ε is a smoothing term (a small constant to avoid division by zero, e.g., 0.001)._
```
class LayerNormalization(keras.layers.Layer):
    def __init__(self, eps=0.001, **kwargs):
        super().__init__(**kwargs)
        self.eps = eps
    def build(self, batch_input_shape):
        self.alpha = self.add_weight(
            name="alpha", shape=batch_input_shape[-1:],
            initializer="ones")
        self.beta = self.add_weight(
            name="beta", shape=batch_input_shape[-1:],
            initializer="zeros")
        super().build(batch_input_shape)  # must be at the end
    def call(self, X):
        mean, variance = tf.nn.moments(X, axes=-1, keepdims=True)
        return self.alpha * (X - mean) / (tf.sqrt(variance + self.eps)) + self.beta
    def compute_output_shape(self, batch_input_shape):
        return batch_input_shape
    def get_config(self):
        base_config = super().get_config()
        return {**base_config, "eps": self.eps}
```
Note that making the _ε_ hyperparameter (`eps`) configurable is not essential. Also note that it is preferable to compute `tf.sqrt(variance + self.eps)` rather than `tf.sqrt(variance) + self.eps`. The derivative of sqrt(z) is undefined at z=0, so training will bounce around whenever any component of the variance vector is close to 0. Putting _ε_ inside the square root prevents this.
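A quick numerical illustration of that point, using only the standard library (the finite-difference helper `slope` is illustrative, not from the book): the slope of sqrt blows up near zero, but stays modest once ε sits under the root.

```python
import math

def slope(f, z, h=1e-9):
    """Forward-difference estimate of f'(z)."""
    return (f(z + h) - f(z)) / h

eps = 0.001
# Derivative of sqrt diverges at 0, so the estimate is enormous:
print(slope(math.sqrt, 0.0))
# With eps inside the root it is finite, about 1 / (2 * sqrt(eps)) ≈ 15.8:
print(slope(lambda v: math.sqrt(v + eps), 0.0))
```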
### c.
_Exercise: Ensure that your custom layer produces the same (or very nearly the same) output as the `keras.layers.LayerNormalization` layer._
Let's create one instance of each class, apply them to some data (e.g., the training set), and check that the difference is negligible.
```
X = X_train.astype(np.float32)
custom_layer_norm = LayerNormalization()
keras_layer_norm = keras.layers.LayerNormalization()
tf.reduce_mean(keras.losses.mean_absolute_error(
    keras_layer_norm(X), custom_layer_norm(X)))
```
Yes, that's close enough. To be extra sure, let's set alpha and beta to completely random values and compare again:
```
random_alpha = np.random.rand(X.shape[-1])
random_beta = np.random.rand(X.shape[-1])
custom_layer_norm.set_weights([random_alpha, random_beta])
keras_layer_norm.set_weights([random_alpha, random_beta])
tf.reduce_mean(keras.losses.mean_absolute_error(
    keras_layer_norm(X), custom_layer_norm(X)))
```
Still a negligible difference! Our custom layer works fine.
## 13. Train a model using a custom training loop to tackle the Fashion MNIST dataset.
_The Fashion MNIST dataset was introduced in Chapter 10._
### a.
_Exercise: Display the epoch, iteration, mean training loss, and mean accuracy over each epoch (updated at each iteration), as well as the validation loss and accuracy at the end of each epoch._
```
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full.astype(np.float32) / 255.
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
X_test = X_test.astype(np.float32) / 255.
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
n_epochs = 5
batch_size = 32
n_steps = len(X_train) // batch_size
optimizer = keras.optimizers.Nadam(learning_rate=0.01)
loss_fn = keras.losses.sparse_categorical_crossentropy
mean_loss = keras.metrics.Mean()
metrics = [keras.metrics.SparseCategoricalAccuracy()]
with trange(1, n_epochs + 1, desc="All epochs") as epochs:
    for epoch in epochs:
        with trange(1, n_steps + 1, desc="Epoch {}/{}".format(epoch, n_epochs)) as steps:
            for step in steps:
                X_batch, y_batch = random_batch(X_train, y_train)
                with tf.GradientTape() as tape:
                    y_pred = model(X_batch)
                    main_loss = tf.reduce_mean(loss_fn(y_batch, y_pred))
                    loss = tf.add_n([main_loss] + model.losses)
                gradients = tape.gradient(loss, model.trainable_variables)
                optimizer.apply_gradients(zip(gradients, model.trainable_variables))
                for variable in model.variables:
                    if variable.constraint is not None:
                        variable.assign(variable.constraint(variable))
                status = OrderedDict()
                mean_loss(loss)
                status["loss"] = mean_loss.result().numpy()
                for metric in metrics:
                    metric(y_batch, y_pred)
                    status[metric.name] = metric.result().numpy()
                steps.set_postfix(status)
            y_pred = model(X_valid)
            status["val_loss"] = np.mean(loss_fn(y_valid, y_pred))
            status["val_accuracy"] = np.mean(keras.metrics.sparse_categorical_accuracy(
                tf.constant(y_valid, dtype=np.float32), y_pred))
            steps.set_postfix(status)
        for metric in [mean_loss] + metrics:
            metric.reset_states()
```
### b.
_Exercise: Try using different optimizers with different learning rates for the upper layers and the lower layers._
```
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
lower_layers = keras.models.Sequential([
    keras.layers.Flatten(input_shape=[28, 28]),
    keras.layers.Dense(100, activation="relu"),
])
upper_layers = keras.models.Sequential([
    keras.layers.Dense(10, activation="softmax"),
])
model = keras.models.Sequential([
    lower_layers, upper_layers
])
lower_optimizer = keras.optimizers.SGD(learning_rate=1e-4)
upper_optimizer = keras.optimizers.Nadam(learning_rate=1e-3)
n_epochs = 5
batch_size = 32
n_steps = len(X_train) // batch_size
loss_fn = keras.losses.sparse_categorical_crossentropy
mean_loss = keras.metrics.Mean()
metrics = [keras.metrics.SparseCategoricalAccuracy()]
with trange(1, n_epochs + 1, desc="All epochs") as epochs:
    for epoch in epochs:
        with trange(1, n_steps + 1, desc="Epoch {}/{}".format(epoch, n_epochs)) as steps:
            for step in steps:
                X_batch, y_batch = random_batch(X_train, y_train)
                with tf.GradientTape(persistent=True) as tape:
                    y_pred = model(X_batch)
                    main_loss = tf.reduce_mean(loss_fn(y_batch, y_pred))
                    loss = tf.add_n([main_loss] + model.losses)
                for layers, optimizer in ((lower_layers, lower_optimizer),
                                          (upper_layers, upper_optimizer)):
                    gradients = tape.gradient(loss, layers.trainable_variables)
                    optimizer.apply_gradients(zip(gradients, layers.trainable_variables))
                del tape
                for variable in model.variables:
                    if variable.constraint is not None:
                        variable.assign(variable.constraint(variable))
                status = OrderedDict()
                mean_loss(loss)
                status["loss"] = mean_loss.result().numpy()
                for metric in metrics:
                    metric(y_batch, y_pred)
                    status[metric.name] = metric.result().numpy()
                steps.set_postfix(status)
            y_pred = model(X_valid)
            status["val_loss"] = np.mean(loss_fn(y_valid, y_pred))
            status["val_accuracy"] = np.mean(keras.metrics.sparse_categorical_accuracy(
                tf.constant(y_valid, dtype=np.float32), y_pred))
            steps.set_postfix(status)
        for metric in [mean_loss] + metrics:
            metric.reset_states()
```
# Vega Lite Examples in Haskell - Layered Plots
The overview notebook - `VegaLiteGallery` - describes how
[`hvega`](http://hackage.haskell.org/package/hvega)
is used to create Vega-Lite visualizations.
-----
## Table of Contents
This notebook represents the [Layered Plots](https://vega.github.io/vega-lite/examples/#layered-plots)
section of the [Vega-Lite example gallery](https://vega.github.io/vega-lite/examples/).
Those labelled `(repeat)` are shown in the "Single View Plots" notebook.
I'm having a hard enough time getting links to examples within the same notebook
to work, so I'm afraid I haven't tried to get cross-notebook linking working.
### [1. Labelling and Annotation](#Labelling-and-Annotation)
- Simple Bar Chart with Labels (repeat)
- [Simple Bar Chart with Labels and Emojis](#Simple-Bar-Chart-with-Labels-and-Emojis)
- Layering text over heatmap (repeat)
- Carbon Dioxide in the Atmosphere (repeat)
- [Bar Chart Highlighting Values beyond a Threshold](#Bar-Chart-Highlighting-Values-beyond-a-Threshold)
- [Mean overlay over precipitation chart](#Mean-overlay-over-precipitation-chart)
- [Histogram with a Global Mean Overlay](#Histogram-with-a-Global-Mean-Overlay)
- [Line Chart with Highlighted Rectangles](#Line-Chart-with-Highlighted-Rectangles)
- Layering Averages over Raw Values (repeat)
- Layering Rolling Averages over Raw Values (repeat)
- [Distributions and Medians of Likert Scale Ratings](#Distributions-and-Medians-of-Likert-Scale-Ratings)
- [Comparative Likert Scale Ratings](#Comparative-Likert-Scale-Ratings)
- Bar with Label Overlays (repeat of Simple Bar Chart with Labels above)
### [2. Other Layered Plots](#Other-Layered-Plots)
- [Candlestick Chart](#Candlestick-Chart)
- [Ranged Dot Plot](#Ranged-Dot-Plot)
- [Bullet Chart](#Bullet-Chart)
- [Layered Plot with Dual-Axis](#Layered-Plot-with-Dual-Axis)
- Horizon Graph (repeat)
- [Weekly Weather Plot](#Weekly-Weather-Plot)
- [Wheat and Wages Example](#Wheat-and-Wages-Example)
---
## Versions
The notebook was last run with the following versions of [`hvega`](https://hackage.haskell.org/package/hvega) and
related modules:
```
:!ghc-pkg latest ghc
:!ghc-pkg latest ihaskell
:!ghc-pkg latest hvega
:!ghc-pkg latest ihaskell-hvega
```
As to when it was last run, how about:
```
import Data.Time (getCurrentTime)
getCurrentTime
```
## Set up
See the overview notebook for an explanation of this section (it provides code I use to compare the `hvega` output
to the specification given in the Vega-Lite gallery).
```
:ext OverloadedStrings
:ext QuasiQuotes
-- VegaLite uses these names
import Prelude hiding (filter, lookup, repeat)
import Graphics.Vega.VegaLite
-- IHaskell automatically imports this if the `ihaskell-hvega` module is installed
-- import IHaskell.Display.Hvega
-- If you are viewing this in an IHaskell notebook rather than Jupyter Lab,
-- use the following to see the visualizations
--
vlShow = id
import qualified Data.ByteString.Lazy.Char8 as BL8
import qualified Data.HashMap.Strict as HM
import qualified Data.Set as S
import Data.Aeson (Value(Object), encode)
import Data.Aeson.QQ.Simple (aesonQQ)
import Control.Monad (forM_, unless, when)
import Data.Maybe (fromJust)
import System.Directory (removeFile)
import System.Process (readProcess, readProcessWithExitCode)
validate ::
  VLSpec       -- ^ The expected specification
  -> VegaLite  -- ^ The actual visualization
  -> IO ()
validate exp vl =
  let got = fromVL vl
      put = putStrLn
  in if got == exp
     then put "Okay"
     else do
       let red = "\x1b[31m"
           def = "\x1b[0m"
           report m = put (red ++ m ++ def)
       report "The visualization and expected specification do not match."
       -- assume both objects
       let Object oexp = exp
           Object ogot = got
           kexp = S.fromList (HM.keys oexp)
           kgot = S.fromList (HM.keys ogot)
           kmiss = S.toList (S.difference kexp kgot)
           kextra = S.toList (S.difference kgot kexp)
           keys = S.toList (S.intersection kexp kgot)
       unless (null kmiss && null kextra) $ do
         put ""
         report "Keys are different:"
         unless (null kmiss) $ put ("  Missing: " ++ show kmiss)
         unless (null kextra) $ put ("  Extra  : " ++ show kextra)
       -- this often creates an impressive amount of text for what is
       -- only a small change, which is why it is followed by a call
       -- to debug
       --
       forM_ keys $ \key ->
         let vexp = fromJust (HM.lookup key oexp)
             vgot = fromJust (HM.lookup key ogot)
         in when (vexp /= vgot) $ do
              put ""
              report ("Values are different for " ++ show key)
              put ("  Expected: " ++ show vexp)
              put ("  Found   : " ++ show vgot)
       putStrLn ""
       report "The field-level differences are:"
       debug_ exp vl
-- Rather than come up with a way to diff JSON here, rely on `jq` and the trusty
-- `diff` command. This is not written to be robust!
--
debug_ spec vl = do
  let tostr = BL8.unpack . encode
  expected <- readProcess "jq" [] (tostr spec)
  got <- readProcess "jq" [] (tostr (fromVL vl))
  let f1 = "expected.json"
      f2 = "got.json"
  writeFile f1 expected
  writeFile f2 got
  let diffOpts = ["--minimal", f1, f2]
  (_, diff, _) <- readProcessWithExitCode "diff" diffOpts ""
  putStrLn diff
  forM_ [f1, f2] removeFile
```
The following is used in at least one example:
```
import qualified Data.Text as T
import qualified Data.Aeson as A
import Data.Aeson ((.=))
```
---
## Labelling and Annotation
- Simple Bar Chart with Labels (repeat)
- [Simple Bar Chart with Labels and Emojis](#Simple-Bar-Chart-with-Labels-and-Emojis)
- Layering text over heatmap (repeat)
- Carbon Dioxide in the Atmosphere (repeat)
- [Bar Chart Highlighting Values beyond a Threshold](#Bar-Chart-Highlighting-Values-beyond-a-Threshold)
- [Mean overlay over precipitation chart](#Mean-overlay-over-precipitation-chart)
- [Histogram with a Global Mean Overlay](#Histogram-with-a-Global-Mean-Overlay)
- [Line Chart with Highlighted Rectangles](#Line-Chart-with-Highlighted-Rectangles)
- [Layering Averages over Raw Values](#Layering-Averages-over-Raw-Values)
- [Layering Rolling Averages over Raw Values](#Layering-Rolling-Averages-over-Raw-Values)
- [Distributions and Medians of Likert Scale Ratings](#Distributions-and-Medians-of-Likert-Scale-Ratings)
- [Comparative Likert Scale Ratings](#Comparative-Likert-Scale-Ratings)
- Bar with Label Overlays (repeat)
---
### Simple Bar Chart with Labels and Emojis
From https://vega.github.io/vega-lite/examples/layer_bar_fruit.html
```
layerBarFruitSpec = [aesonQQ|
{
"$schema": "https://vega.github.io/schema/vega-lite/v4.json",
"description": "Vega-Lite version of bar chart from https://observablehq.com/@d3/learn-d3-scales.",
"width": 400,
"data": {
"values": [
{"name": "🍊", "count": 21},
{"name": "🍇", "count": 13},
{"name": "🍏", "count": 8},
{"name": "🍌", "count": 5},
{"name": "🍐", "count": 3},
{"name": "🍋", "count": 2},
{"name": "🍎", "count": 1},
{"name": "🍉", "count": 1}
]
},
"encoding": {
"y": {"field": "name", "type": "nominal", "sort": "-x", "title": null},
"x": {"field": "count", "type": "quantitative", "title": null}
},
"layer": [{
"mark": "bar",
"encoding": {
"color": {
"field": "count",
"type": "quantitative",
"title": "Number of fruit"
}
}
}, {
"mark": {
"type": "text",
"align": "right",
"xOffset": -4,
"aria": false
},
"encoding": {
"text": {"field": "count", "type": "quantitative"},
"color": {
"condition": {
"test": {"field": "count", "gt": 10},
"value": "white"
},
"value": "black"
}
}
}]
}
|]
layerBarFruit =
  let desc = "Vega-Lite version of bar chart from https://observablehq.com/@d3/learn-d3-scales."
      dvals = dataFromColumns []
              . dataColumn "count" (Numbers [21, 13, 8, 5, 3, 2, 1, 1])
              . dataColumn "name" (Strings ["🍊", "🍇", "🍏", "🍌", "🍐", "🍋", "🍎", "🍉"])
              $ []
      barPlot = [ mark Bar []
                , encoding
                  . color [MName "count", MmType Quantitative, MTitle "Number of fruit"]
                  $ []
                ]
      textPlot = [ mark Text [ MAlign AlignRight
                             , MXOffset (-4)
                             , MAria False
                             ]
                 , encoding
                   . text [TName "count", TmType Quantitative]
                   . color [ MDataCondition
                             [(FilterOp (FGreaterThan "count" (Number 10)), [MString "white"])]
                             [MString "black"]
                           ]
                   $ []
                 ]
  in toVegaLite [ description desc
                , width 400
                , dvals
                , encoding
                  . position Y [PName "name", PmType Nominal, PNoTitle, PSort [Descending, ByChannel ChX]]
                  . position X [PName "count", PmType Quantitative, PNoTitle]
                  $ []
                , layer [asSpec barPlot, asSpec textPlot]
                ]
vlShow layerBarFruit
```
The specifications don't match, but they have the same meaning:
```
validate layerBarFruitSpec layerBarFruit
```
Return to the [Table of Contents](#Table-of-Contents).
### Bar Chart Highlighting Values beyond a Threshold
From https://vega.github.io/vega-lite/examples/layer_bar_annotations.html
```
layerBarAnnotationsSpec = [aesonQQ|
{
"$schema": "https://vega.github.io/schema/vega-lite/v4.json",
"description": "The PM2.5 value of Beijing observed 15 days, highlighting the days when PM2.5 level is hazardous to human health. Data source https://chartaccent.github.io/chartaccent.html",
"layer": [{
"data": {
"values": [
{"Day": 1, "Value": 54.8},
{"Day": 2, "Value": 112.1},
{"Day": 3, "Value": 63.6},
{"Day": 4, "Value": 37.6},
{"Day": 5, "Value": 79.7},
{"Day": 6, "Value": 137.9},
{"Day": 7, "Value": 120.1},
{"Day": 8, "Value": 103.3},
{"Day": 9, "Value": 394.8},
{"Day": 10, "Value": 199.5},
{"Day": 11, "Value": 72.3},
{"Day": 12, "Value": 51.1},
{"Day": 13, "Value": 112.0},
{"Day": 14, "Value": 174.5},
{"Day": 15, "Value": 130.5}
]
},
"layer": [{
"mark": "bar",
"encoding": {
"x": {"field": "Day", "type": "ordinal", "axis": {"labelAngle": 0}},
"y": {"field": "Value", "type": "quantitative"}
}
}, {
"mark": "bar",
"transform": [
{"filter": "datum.Value >= 300"},
{"calculate": "300", "as": "baseline"}
],
"encoding": {
"x": {"field": "Day", "type": "ordinal"},
"y": {"field": "baseline", "type": "quantitative", "title": "PM2.5 Value"},
"y2": {"field": "Value"},
"color": {"value": "#e45755"}
}
}
]}, {
"data": {
"values": [{}]
},
"encoding": {
"y": {"datum": 300}
},
"layer": [{
"mark": "rule"
}, {
"mark": {
"type": "text",
"align": "right",
"baseline": "bottom",
"dx": -2,
"dy": -2,
"x": "width",
"text": "hazardous"
}
}]
}
]
}
|]
layerBarAnnotations =
  let label = description "The PM2.5 value of Beijing observed 15 days, highlighting the days when PM2.5 level is hazardous to human health. Data source https://chartaccent.github.io/chartaccent.html"
      days = map fromIntegral [1::Int .. 15]
      values = [54.8, 112.1, 63.6, 37.6, 79.7, 137.9, 120.1, 103.3, 394.8, 199.5, 72.3, 51.1, 112.0, 174.5, 130.5]
      dvals = dataFromColumns []
              . dataColumn "Day" (Numbers days)
              . dataColumn "Value" (Numbers values)
      empty = A.toJSON [A.Object HM.empty]
      threshold = dataFromJson empty []
      posX extra = position X ([PName "Day", PmType Ordinal] ++ extra)
      encBar = encoding
               . posX [PAxis [AxLabelAngle 0]]
               . position Y [PName "Value", PmType Quantitative]
      encLine = encoding
                . posX []
                . position Y [PName "baseline", PmType Quantitative, PTitle "PM2.5 Value"]
                . position Y2 [PName "Value"]
                . color [MString "#e45755"]
      transLine = transform
                  . filter (FExpr "datum.Value >= 300")
                  . calculateAs "300" "baseline"
      specBar = asSpec [mark Bar [], encBar []]
      specLine = asSpec [mark Bar [], transLine [], encLine []]
      lyrBar = asSpec [dvals [], layer [specBar, specLine]]
      textOpts = [ MAlign AlignRight
                 , MBaseline AlignBottom
                 , MdX (-2)
                 , MdY (-2)
                 , MXWidth
                 , MText "hazardous"
                 ]
      specRule = asSpec [mark Rule []]
      specText = asSpec [mark Text textOpts]
      lyrThresh = asSpec [ threshold
                         , encoding (position Y [PDatum (Number 300)] [])
                         , layer [specRule, specText]
                         ]
      layers = layer [lyrBar, lyrThresh]
  in toVegaLite [label, layers]
vlShow layerBarAnnotations
validate layerBarAnnotationsSpec layerBarAnnotations
```
Return to the [Table of Contents](#Table-of-Contents).
### Mean overlay over precipitation chart
From https://vega.github.io/vega-lite/examples/layer_precipitation_mean.html
```
layerPrecipitationMeanSpec = [aesonQQ|
{
"$schema": "https://vega.github.io/schema/vega-lite/v4.json",
"data": {"url": "data/seattle-weather.csv"},
"layer": [
{
"mark": "bar",
"encoding": {
"x": {
"timeUnit": "month",
"field": "date",
"type": "ordinal"
},
"y": {
"aggregate": "mean",
"field": "precipitation",
"type": "quantitative"
}
}
},
{
"mark": "rule",
"encoding": {
"y": {
"aggregate": "mean",
"field": "precipitation",
"type": "quantitative"
},
"color": {"value": "red"},
"size": {"value": 3}
}
}
]
}
|]
layerPrecipitationMean =
  let dvals = dataFromUrl "data/seattle-weather.csv" []
      posY = position Y [PName "precipitation", PmType Quantitative, PAggregate Mean]
      enc1 = encoding
             . position X [PName "date", PmType Ordinal, PTimeUnit (TU Month)]
             . posY
      enc2 = encoding
             . posY
             . color [MString "red"]
             . size [MNumber 3]
      lyr1 = [mark Bar [], enc1 []]
      lyr2 = [mark Rule [], enc2 []]
      layers = layer (map asSpec [lyr1, lyr2])
  in toVegaLite [dvals, layers]
vlShow layerPrecipitationMean
validate layerPrecipitationMeanSpec layerPrecipitationMean
```
Return to the [Table of Contents](#Table-of-Contents).
### Histogram with a Global Mean Overlay
From https://vega.github.io/vega-lite/examples/layer_histogram_global_mean.html
```
layerHistogramGlobalMeanSpec = [aesonQQ|
{
"$schema": "https://vega.github.io/schema/vega-lite/v4.json",
"data": {"url": "data/movies.json"},
"layer": [{
"mark": "bar",
"encoding": {
"x": {"field": "IMDB Rating", "bin": true},
"y": {"aggregate": "count"}
}
},{
"mark": "rule",
"encoding": {
"x": {"aggregate": "mean", "field": "IMDB Rating"},
"color": {"value": "red"},
"size": {"value": 5}
}
}]
}
|]
layerHistogramGlobalMean =
  let dvals = dataFromUrl "data/movies.json" []
      posX extra = position X (PName "IMDB Rating" : extra)
      enc1 = encoding
             . posX [PBin []]
             . position Y [PAggregate Count]
      enc2 = encoding
             . posX [PAggregate Mean]
             . color [MString "red"]
             . size [MNumber 5]
      lyr1 = [mark Bar [], enc1 []]
      lyr2 = [mark Rule [], enc2 []]
      layers = layer (map asSpec [lyr1, lyr2])
  in toVegaLite [dvals, layers]
vlShow layerHistogramGlobalMean
validate layerHistogramGlobalMeanSpec layerHistogramGlobalMean
```
Return to the [Table of Contents](#Table-of-Contents).
### Line Chart with Highlighted Rectangles
From https://vega.github.io/vega-lite/examples/layer_falkensee.html
```
layerFalkenseeSpec = [aesonQQ|
{
"$schema": "https://vega.github.io/schema/vega-lite/v4.json",
"description": "The population of the German city of Falkensee over time",
"width": 500,
"data": {
"values": [
{"year": "1875", "population": 1309},
{"year": "1890", "population": 1558},
{"year": "1910", "population": 4512},
{"year": "1925", "population": 8180},
{"year": "1933", "population": 15915},
{"year": "1939", "population": 24824},
{"year": "1946", "population": 28275},
{"year": "1950", "population": 29189},
{"year": "1964", "population": 29881},
{"year": "1971", "population": 26007},
{"year": "1981", "population": 24029},
{"year": "1985", "population": 23340},
{"year": "1989", "population": 22307},
{"year": "1990", "population": 22087},
{"year": "1991", "population": 22139},
{"year": "1992", "population": 22105},
{"year": "1993", "population": 22242},
{"year": "1994", "population": 22801},
{"year": "1995", "population": 24273},
{"year": "1996", "population": 25640},
{"year": "1997", "population": 27393},
{"year": "1998", "population": 29505},
{"year": "1999", "population": 32124},
{"year": "2000", "population": 33791},
{"year": "2001", "population": 35297},
{"year": "2002", "population": 36179},
{"year": "2003", "population": 36829},
{"year": "2004", "population": 37493},
{"year": "2005", "population": 38376},
{"year": "2006", "population": 39008},
{"year": "2007", "population": 39366},
{"year": "2008", "population": 39821},
{"year": "2009", "population": 40179},
{"year": "2010", "population": 40511},
{"year": "2011", "population": 40465},
{"year": "2012", "population": 40905},
{"year": "2013", "population": 41258},
{"year": "2014", "population": 41777}
],
"format": {
"parse": {"year": "date:'%Y'"}
}
},
"layer": [
{
"mark": "rect",
"data": {
"values": [
{
"start": "1933",
"end": "1945",
"event": "Nazi Rule"
},
{
"start": "1948",
"end": "1989",
"event": "GDR (East Germany)"
}
],
"format": {
"parse": {"start": "date:'%Y'", "end": "date:'%Y'"}
}
},
"encoding": {
"x": {
"field": "start",
"timeUnit": "year"
},
"x2": {
"field": "end",
"timeUnit": "year"
},
"color": {"field": "event", "type": "nominal"}
}
},
{
"mark": "line",
"encoding": {
"x": {
"field": "year",
"timeUnit": "year",
"title": "year (year)"
},
"y": {"field": "population", "type": "quantitative"},
"color": {"value": "#333"}
}
},
{
"mark": "point",
"encoding": {
"x": {
"field": "year",
"timeUnit": "year"
},
"y": {"field": "population", "type": "quantitative"},
"color": {"value": "#333"}
}
}
]
}
|]
layerFalkensee =
  let desc = "The population of the German city of Falkensee over time"
      i2s = Str . T.pack . show
      rw y p = dataRow [("year", i2s y), ("population", Number p)]
      dvals = dataFromRows [Parse [("year", FoDate "%Y")]]
              . rw 1875 1309
              . rw 1890 1558
              . rw 1910 4512
              . rw 1925 8180
              . rw 1933 15915
              . rw 1939 24824
              . rw 1946 28275
              . rw 1950 29189
              . rw 1964 29881
              . rw 1971 26007
              . rw 1981 24029
              . rw 1985 23340
              . rw 1989 22307
              . rw 1990 22087
              . rw 1991 22139
              . rw 1992 22105
              . rw 1993 22242
              . rw 1994 22801
              . rw 1995 24273
              . rw 1996 25640
              . rw 1997 27393
              . rw 1998 29505
              . rw 1999 32124
              . rw 2000 33791
              . rw 2001 35297
              . rw 2002 36179
              . rw 2003 36829
              . rw 2004 37493
              . rw 2005 38376
              . rw 2006 39008
              . rw 2007 39366
              . rw 2008 39821
              . rw 2009 40179
              . rw 2010 40511
              . rw 2011 40465
              . rw 2012 40905
              . rw 2013 41258
              . rw 2014 41777
      dvals1 = dataFromRows [Parse [("start", FoDate "%Y"), ("end", FoDate "%Y")]]
               . dataRow [("start", Str "1933"), ("end", Str "1945"), ("event", Str "Nazi Rule")]
               . dataRow [("start", Str "1948"), ("end", Str "1989"), ("event", Str "GDR (East Germany)")]
      enc1 = encoding
             . position X [PName "start", PTimeUnit (TU Year)]
             . position X2 [PName "end", PTimeUnit (TU Year)]
             . color [MName "event", MmType Nominal]
      enc2 flag = encoding
                  . position X ([PName "year", PTimeUnit (TU Year)] ++ [PTitle "year (year)" | flag])
                  . position Y [PName "population", PmType Quantitative]
                  . color [MString "#333"]
      lyr1 = asSpec [mark Rect [], dvals1 [], enc1 []]
      lyr2 = asSpec [mark Line [], enc2 True []]
      lyr3 = asSpec [mark Point [], enc2 False []]
  in toVegaLite [description desc, width 500, dvals [], layer [lyr1, lyr2, lyr3]]
vlShow layerFalkensee
validate layerFalkenseeSpec layerFalkensee
```
Return to the [Table of Contents](#Table-of-Contents).
### Distributions and Medians of Likert Scale Ratings
From https://vega.github.io/vega-lite/examples/layer_likert.html
```
layerLikertSpec = [aesonQQ|
{
"$schema": "https://vega.github.io/schema/vega-lite/v4.json",
"description": "Likert Scale Ratings Distributions and Medians. (Figure 9 from @jhoffswell and @zcliu's ['Interactive Repair of Tables Extracted from PDF Documents on Mobile Devices'](https://idl.cs.washington.edu/files/2019-InteractiveTableRepair-CHI.pdf))",
"datasets": {
"medians": [
{"name": "Identify Errors:", "median": 1.999976, "lo": "Easy", "hi": "Hard"},
{"name": "Fix Errors:", "median": 2, "lo": "Easy", "hi": "Hard"},
{"name": "Easier to Fix:", "median": 1.999969, "lo": "Toolbar", "hi": "Gesture"},
{"name": "Faster to Fix:", "median": 2.500045, "lo": "Toolbar", "hi": "Gesture"},
{"name": "Easier on Phone:", "median": 1.500022, "lo": "Toolbar", "hi": "Gesture"},
{"name": "Easier on Tablet:", "median": 2.99998, "lo": "Toolbar", "hi": "Gesture"},
{"name": "Device Preference:", "median": 4.500007, "lo": "Phone", "hi": "Tablet"}
],
"values": [
{"value": "P1", "name": "Participant ID", "id": "P1"},
{"value": 2, "name": "Identify Errors:", "id": "P1"},
{"value": 2, "name": "Fix Errors:", "id": "P1"},
{"value": 3, "name": "Easier to Fix:", "id": "P1"},
{"value": 4, "name": "Faster to Fix:", "id": "P1"},
{"value": 2, "name": "Easier on Phone:", "id": "P1"},
{"value": 5, "name": "Easier on Tablet:", "id": "P1"},
{"value": 5, "name": "Device Preference:", "id": "P1"},
{"value": 1, "name": "Tablet_First", "id": "P1"},
{"value": 1, "name": "Toolbar_First", "id": "P1"},
{"value": "P2", "name": "Participant ID", "id": "P2"},
{"value": 2, "name": "Identify Errors:", "id": "P2"},
{"value": 3, "name": "Fix Errors:", "id": "P2"},
{"value": 4, "name": "Easier to Fix:", "id": "P2"},
{"value": 5, "name": "Faster to Fix:", "id": "P2"},
{"value": 5, "name": "Easier on Phone:", "id": "P2"},
{"value": 5, "name": "Easier on Tablet:", "id": "P2"},
{"value": 5, "name": "Device Preference:", "id": "P2"},
{"value": 1, "name": "Tablet_First", "id": "P2"},
{"value": 1, "name": "Toolbar_First", "id": "P2"},
{"value": "P3", "name": "Participant ID", "id": "P3"},
{"value": 2, "name": "Identify Errors:", "id": "P3"},
{"value": 2, "name": "Fix Errors:", "id": "P3"},
{"value": 2, "name": "Easier to Fix:", "id": "P3"},
{"value": 1, "name": "Faster to Fix:", "id": "P3"},
{"value": 2, "name": "Easier on Phone:", "id": "P3"},
{"value": 1, "name": "Easier on Tablet:", "id": "P3"},
{"value": 5, "name": "Device Preference:", "id": "P3"},
{"value": 1, "name": "Tablet_First", "id": "P3"},
{"value": 0, "name": "Toolbar_First", "id": "P3"},
{"value": "P4", "name": "Participant ID", "id": "P4"},
{"value": 3, "name": "Identify Errors:", "id": "P4"},
{"value": 3, "name": "Fix Errors:", "id": "P4"},
{"value": 2, "name": "Easier to Fix:", "id": "P4"},
{"value": 2, "name": "Faster to Fix:", "id": "P4"},
{"value": 4, "name": "Easier on Phone:", "id": "P4"},
{"value": 1, "name": "Easier on Tablet:", "id": "P4"},
{"value": 5, "name": "Device Preference:", "id": "P4"},
{"value": 1, "name": "Tablet_First", "id": "P4"},
{"value": 0, "name": "Toolbar_First", "id": "P4"},
{"value": "P5", "name": "Participant ID", "id": "P5"},
{"value": 2, "name": "Identify Errors:", "id": "P5"},
{"value": 2, "name": "Fix Errors:", "id": "P5"},
{"value": 4, "name": "Easier to Fix:", "id": "P5"},
{"value": 4, "name": "Faster to Fix:", "id": "P5"},
{"value": 4, "name": "Easier on Phone:", "id": "P5"},
{"value": 5, "name": "Easier on Tablet:", "id": "P5"},
{"value": 5, "name": "Device Preference:", "id": "P5"},
{"value": 0, "name": "Tablet_First", "id": "P5"},
{"value": 1, "name": "Toolbar_First", "id": "P5"},
{"value": "P6", "name": "Participant ID", "id": "P6"},
{"value": 1, "name": "Identify Errors:", "id": "P6"},
{"value": 3, "name": "Fix Errors:", "id": "P6"},
{"value": 3, "name": "Easier to Fix:", "id": "P6"},
{"value": 4, "name": "Faster to Fix:", "id": "P6"},
{"value": 4, "name": "Easier on Phone:", "id": "P6"},
{"value": 4, "name": "Easier on Tablet:", "id": "P6"},
{"value": 4, "name": "Device Preference:", "id": "P6"},
{"value": 0, "name": "Tablet_First", "id": "P6"},
{"value": 1, "name": "Toolbar_First", "id": "P6"},
{"value": "P7", "name": "Participant ID", "id": "P7"},
{"value": 2, "name": "Identify Errors:", "id": "P7"},
{"value": 3, "name": "Fix Errors:", "id": "P7"},
{"value": 4, "name": "Easier to Fix:", "id": "P7"},
{"value": 5, "name": "Faster to Fix:", "id": "P7"},
{"value": 3, "name": "Easier on Phone:", "id": "P7"},
{"value": 2, "name": "Easier on Tablet:", "id": "P7"},
{"value": 4, "name": "Device Preference:", "id": "P7"},
{"value": 0, "name": "Tablet_First", "id": "P7"},
{"value": 0, "name": "Toolbar_First", "id": "P7"},
{"value": "P8", "name": "Participant ID", "id": "P8"},
{"value": 3, "name": "Identify Errors:", "id": "P8"},
{"value": 1, "name": "Fix Errors:", "id": "P8"},
{"value": 2, "name": "Easier to Fix:", "id": "P8"},
{"value": 4, "name": "Faster to Fix:", "id": "P8"},
{"value": 2, "name": "Easier on Phone:", "id": "P8"},
{"value": 5, "name": "Easier on Tablet:", "id": "P8"},
{"value": 5, "name": "Device Preference:", "id": "P8"},
{"value": 0, "name": "Tablet_First", "id": "P8"},
{"value": 0, "name": "Toolbar_First", "id": "P8"},
{"value": "P9", "name": "Participant ID", "id": "P9"},
{"value": 2, "name": "Identify Errors:", "id": "P9"},
{"value": 3, "name": "Fix Errors:", "id": "P9"},
{"value": 2, "name": "Easier to Fix:", "id": "P9"},
{"value": 4, "name": "Faster to Fix:", "id": "P9"},
{"value": 1, "name": "Easier on Phone:", "id": "P9"},
{"value": 4, "name": "Easier on Tablet:", "id": "P9"},
{"value": 4, "name": "Device Preference:", "id": "P9"},
{"value": 1, "name": "Tablet_First", "id": "P9"},
{"value": 1, "name": "Toolbar_First", "id": "P9"},
{"value": "P10", "name": "Participant ID", "id": "P10"},
{"value": 2, "name": "Identify Errors:", "id": "P10"},
{"value": 2, "name": "Fix Errors:", "id": "P10"},
{"value": 1, "name": "Easier to Fix:", "id": "P10"},
{"value": 1, "name": "Faster to Fix:", "id": "P10"},
{"value": 1, "name": "Easier on Phone:", "id": "P10"},
{"value": 1, "name": "Easier on Tablet:", "id": "P10"},
{"value": 5, "name": "Device Preference:", "id": "P10"},
{"value": 1, "name": "Tablet_First", "id": "P10"},
{"value": 1, "name": "Toolbar_First", "id": "P10"},
{"value": "P11", "name": "Participant ID", "id": "P11"},
{"value": 2, "name": "Identify Errors:", "id": "P11"},
{"value": 2, "name": "Fix Errors:", "id": "P11"},
{"value": 1, "name": "Easier to Fix:", "id": "P11"},
{"value": 1, "name": "Faster to Fix:", "id": "P11"},
{"value": 1, "name": "Easier on Phone:", "id": "P11"},
{"value": 1, "name": "Easier on Tablet:", "id": "P11"},
{"value": 4, "name": "Device Preference:", "id": "P11"},
{"value": 1, "name": "Tablet_First", "id": "P11"},
{"value": 0, "name": "Toolbar_First", "id": "P11"},
{"value": "P12", "name": "Participant ID", "id": "P12"},
{"value": 1, "name": "Identify Errors:", "id": "P12"},
{"value": 3, "name": "Fix Errors:", "id": "P12"},
{"value": 2, "name": "Easier to Fix:", "id": "P12"},
{"value": 3, "name": "Faster to Fix:", "id": "P12"},
{"value": 1, "name": "Easier on Phone:", "id": "P12"},
{"value": 3, "name": "Easier on Tablet:", "id": "P12"},
{"value": 3, "name": "Device Preference:", "id": "P12"},
{"value": 0, "name": "Tablet_First", "id": "P12"},
{"value": 1, "name": "Toolbar_First", "id": "P12"},
{"value": "P13", "name": "Participant ID", "id": "P13"},
{"value": 2, "name": "Identify Errors:", "id": "P13"},
{"value": 2, "name": "Fix Errors:", "id": "P13"},
{"value": 1, "name": "Easier to Fix:", "id": "P13"},
{"value": 1, "name": "Faster to Fix:", "id": "P13"},
{"value": 1, "name": "Easier on Phone:", "id": "P13"},
{"value": 1, "name": "Easier on Tablet:", "id": "P13"},
{"value": 5, "name": "Device Preference:", "id": "P13"},
{"value": 0, "name": "Tablet_First", "id": "P13"},
{"value": 0, "name": "Toolbar_First", "id": "P13"},
{"value": "P14", "name": "Participant ID", "id": "P14"},
{"value": 3, "name": "Identify Errors:", "id": "P14"},
{"value": 3, "name": "Fix Errors:", "id": "P14"},
{"value": 2, "name": "Easier to Fix:", "id": "P14"},
{"value": 2, "name": "Faster to Fix:", "id": "P14"},
{"value": 1, "name": "Easier on Phone:", "id": "P14"},
{"value": 1, "name": "Easier on Tablet:", "id": "P14"},
{"value": 1, "name": "Device Preference:", "id": "P14"},
{"value": 1, "name": "Tablet_First", "id": "P14"},
{"value": 1, "name": "Toolbar_First", "id": "P14"},
{"value": "P15", "name": "Participant ID", "id": "P15"},
{"value": 4, "name": "Identify Errors:", "id": "P15"},
{"value": 5, "name": "Fix Errors:", "id": "P15"},
{"value": 1, "name": "Easier to Fix:", "id": "P15"},
{"value": 1, "name": "Faster to Fix:", "id": "P15"},
{"value": 1, "name": "Easier on Phone:", "id": "P15"},
{"value": 1, "name": "Easier on Tablet:", "id": "P15"},
{"value": 5, "name": "Device Preference:", "id": "P15"},
{"value": 1, "name": "Tablet_First", "id": "P15"},
{"value": 0, "name": "Toolbar_First", "id": "P15"},
{"value": "P16", "name": "Participant ID", "id": "P16"},
{"value": 1, "name": "Identify Errors:", "id": "P16"},
{"value": 3, "name": "Fix Errors:", "id": "P16"},
{"value": 2, "name": "Easier to Fix:", "id": "P16"},
{"value": 2, "name": "Faster to Fix:", "id": "P16"},
{"value": 1, "name": "Easier on Phone:", "id": "P16"},
{"value": 4, "name": "Easier on Tablet:", "id": "P16"},
{"value": 5, "name": "Device Preference:", "id": "P16"},
{"value": 0, "name": "Tablet_First", "id": "P16"},
{"value": 1, "name": "Toolbar_First", "id": "P16"},
{"value": "P17", "name": "Participant ID", "id": "P17"},
{"value": 3, "name": "Identify Errors:", "id": "P17"},
{"value": 2, "name": "Fix Errors:", "id": "P17"},
{"value": 2, "name": "Easier to Fix:", "id": "P17"},
{"value": 2, "name": "Faster to Fix:", "id": "P17"},
{"value": 1, "name": "Easier on Phone:", "id": "P17"},
{"value": 3, "name": "Easier on Tablet:", "id": "P17"},
{"value": 2, "name": "Device Preference:", "id": "P17"},
{"value": 0, "name": "Tablet_First", "id": "P17"},
{"value": 0, "name": "Toolbar_First", "id": "P17"}
]
},
"data": {"name": "medians"},
"title": "Questionnaire Ratings",
"width": 250,
"height": 175,
"encoding": {
"y": {
"field": "name",
"type": "nominal",
"sort": null,
"axis": {
"domain": false,
"offset": 50,
"labelFontWeight": "bold",
"ticks": false,
"grid": true,
"title": null
}
},
"x": {
"type": "quantitative",
"scale": {"domain": [0, 6]},
"axis": {"grid": false, "values": [1, 2, 3, 4, 5], "title": null}
}
},
"view": {"stroke": null},
"layer": [
{
"mark": "circle",
"data": {"name": "values"},
"transform": [
{"filter": "datum.name != 'Toolbar_First'"},
{"filter": "datum.name != 'Tablet_First'"},
{"filter": "datum.name != 'Participant ID'"}
],
"encoding": {
"x": {"field": "value"},
"size": {
"aggregate": "count",
"type": "quantitative",
"title": "Number of Ratings",
"legend": {"offset": 75}
},
"color": {"value": "#6EB4FD"}
}
},
{
"mark": "tick",
"encoding": {
"x": {"field": "median"},
"color": {"value": "black"}
}
},
{
"mark": {"type": "text", "x": -5, "align": "right"},
"encoding": {
"text": {"field": "lo"}
}
},
{
"mark": {"type": "text", "x": 255, "align": "left"},
"encoding": {
"text": {"field": "hi"}
}
}
]
}
|]
data Errors = IE | FE | EF | FF | EP | ET | DP | TBL | TBR
fromE IE = "Identify Errors:"
fromE FE = "Fix Errors:"
fromE EF = "Easier to Fix:"
fromE FF = "Faster to Fix:"
fromE EP = "Easier on Phone:"
fromE ET = "Easier on Tablet:"
fromE DP = "Device Preference:"
fromE TBL = "Tablet_First"
fromE TBR = "Toolbar_First"
layerLikert =
let label = description "Likert Scale Ratings Distributions and Medians. (Figure 9 from @jhoffswell and @zcliu's ['Interactive Repair of Tables Extracted from PDF Documents on Mobile Devices'](https://idl.cs.washington.edu/files/2019-InteractiveTableRepair-CHI.pdf))"
titleOpt = title "Questionnaire Ratings" []
viewOpt = viewBackground [VBNoStroke]
mData = [ (IE, 1.999976, "Easy", "Hard")
, (FE, 2, "Easy", "Hard")
, (EF, 1.999969, "Toolbar", "Gesture")
, (FF, 2.500045, "Toolbar", "Gesture")
, (EP, 1.500022, "Toolbar", "Gesture")
, (ET, 2.99998, "Toolbar", "Gesture")
, (DP, 4.500007, "Phone", "Tablet") ]
mRow (name, med, lo, hi) = dataRow [("name", Str (fromE name)), ("median", Number med), ("lo", Str lo), ("hi", Str hi)]
medians = dataFromRows [] (foldr mRow [] mData)
vData = [ ("P1", [2, 2, 3, 4, 2, 5, 5, 1, 1])
, ("P2", [2, 3, 4, 5, 5, 5, 5, 1, 1])
, ("P3", [2, 2, 2, 1, 2, 1, 5, 1, 0])
, ("P4", [3, 3, 2, 2, 4, 1, 5, 1, 0])
, ("P5", [2, 2, 4, 4, 4, 5, 5, 0, 1])
, ("P6", [1, 3, 3, 4, 4, 4, 4, 0, 1])
, ("P7", [2, 3, 4, 5, 3, 2, 4, 0, 0])
, ("P8", [3, 1, 2, 4, 2, 5, 5, 0, 0])
, ("P9", [2, 3, 2, 4, 1, 4, 4, 1, 1])
, ("P10", [2, 2, 1, 1, 1, 1, 5, 1, 1])
, ("P11", [2, 2, 1, 1, 1, 1, 4, 1, 0])
, ("P12", [1, 3, 2, 3, 1, 3, 3, 0, 1])
, ("P13", [2, 2, 1, 1, 1, 1, 5, 0, 0])
, ("P14", [3, 3, 2, 2, 1, 1, 1, 1, 1])
, ("P15", [4, 5, 1, 1, 1, 1, 5, 1, 0])
, ("P16", [1, 3, 2, 2, 1, 4, 5, 0, 1])
, ("P17", [3, 2, 2, 2, 1, 3, 2, 0, 0])
]
names = [IE, FE, EF, FF, EP, ET, DP, TBL, TBR]
v1Row pid = dataRow [("value", Str pid), ("name", Str "Participant ID"), ("id", Str pid)]
v2Row pid (value, name) = dataRow [("value", Number value), ("name", Str (fromE name)), ("id", Str pid)]
vRow (pid, vs) = v1Row pid (foldr (v2Row pid) [] (zip vs names))
values = dataFromRows [] (concatMap vRow vData)
dsets = datasets [("medians", medians), ("values", values)]
yAxis = [AxDomain False, AxOffset 50, AxLabelFontWeight Bold, AxTicks False, AxGrid True, AxNoTitle]
enc = encoding
. position Y [PName "name", PmType Nominal, PSort [], PAxis yAxis]
. position X [ PmType Quantitative
, PScale [SDomain (DNumbers [0, 6])]
, PAxis [ AxGrid False
, AxNoTitle
, AxValues (Numbers [1, 2, 3, 4, 5])]
]
trans1 = transform
. filter (FExpr "datum.name != 'Toolbar_First'")
. filter (FExpr "datum.name != 'Tablet_First'")
. filter (FExpr "datum.name != 'Participant ID'")
enc1 = encoding
. position X [PName "value"]
. size [ MAggregate Count, MmType Quantitative
, MTitle "Number of Ratings"
, MLegend [LOffset 75]
]
. color [MString "#6EB4FD"]
lyr1 = [mark Circle [], dataFromSource "values" [], trans1 [], enc1 []]
enc2 = encoding
. position X [PName "median"]
. color [MString "black"]
lyr2 = [mark Tick [], enc2 []]
lyr3 = [ mark Text [MX (-5), MAlign AlignRight]
, encoding (text [TName "lo"] [])
]
lyr4 = [ mark Text [MX 255, MAlign AlignLeft]
, encoding (text [TName "hi"] [])
]
lyr = layer (map asSpec [lyr1, lyr2, lyr3, lyr4])
vlOpts = [ label, titleOpt, height 175, width 250, viewOpt
, enc [], dsets, dataFromSource "medians" []
, lyr
]
in toVegaLite vlOpts
vlShow layerLikert
validate layerLikertSpec layerLikert
```
Return to the [Table of Contents](#Table-of-Contents).
### Comparative Likert Scale Ratings
From https://vega.github.io/vega-lite/examples/concat_layer_voyager_result.html
```
concatLayerVoyagerResultSpec = [aesonQQ|
{
"$schema": "https://vega.github.io/schema/vega-lite/v4.json",
"data": {
"values": [
{
"measure": "Open Exploration",
"mean": 1.813,
"lo": 1.255,
"hi": 2.37,
"study": "PoleStar vs Voyager"
},
{
"measure": "Focused Question Answering",
"mean": -1.688,
"lo": -2.325,
"hi": -1.05,
"study": "PoleStar vs Voyager"
},
{
"measure": "Open Exploration",
"mean": 2.1875,
"lo": 1.665,
"hi": 2.71,
"study": "PoleStar vs Voyager 2"
},
{
"measure": "Focused Question Answering",
"mean": -0.0625,
"lo": -0.474,
"hi": 0.349,
"study": "PoleStar vs Voyager 2"
}
]
},
"spacing": 10,
"vconcat": [
{
"title": {
"text": "Mean of Subject Ratings (95% CIs)",
"frame": "bounds"
},
"encoding": {
"y": {
"field": "study",
"type": "nominal",
"axis": {
"title": null,
"labelPadding": 5,
"domain": false,
"ticks": false,
"grid": false
}
},
"x": {
"type": "quantitative",
"scale": {"domain": [-3, 3]},
"axis": {
"title": "",
"gridDash": [3, 3],
"gridColor": {
"condition": {
"test": "datum.value === 0",
"value": "#666"
},
"value": "#CCC"
}
}
}
},
"layer": [
{
"mark": "rule",
"encoding": {
"x": {"field": "lo"},
"x2": {"field": "hi"}
}
},
{
"mark": {
"type": "circle",
"stroke": "black",
"opacity": 1
},
"encoding": {
"x": {"field": "mean"},
"color": {
"field": "measure",
"type": "nominal",
"scale": {
"range": ["black", "white"]
},
"legend": null
}
}
}
]
},
{
"data": {
"values": [
{
"from": -0.25,
"to": -2.9,
"label": "PoleStar"
},
{
"from": 0.25,
"to": 2.9,
"label": "Voyager / Voyager 2"
}
]
},
"encoding": {
"x": {
"type": "quantitative",
"scale": {"zero": false},
"axis": null
}
},
"layer": [
{
"mark": "rule",
"encoding": {
"x": {"field": "from"},
"x2": {"field": "to"}
}
},
{
"mark": {
"type": "point",
"filled": true,
"size": 60,
"fill": "black"
},
"encoding": {
"x": {"field": "to"},
"shape": {
"condition": {
"test": "datum.to > 0",
"value": "triangle-right"
},
"value": "triangle-left"
}
}
},
{
"mark": {
"type": "text",
"align": "right",
"style": "arrow-label",
"text": ["Polestar", "More Valuable"]
},
"transform": [{"filter": "datum.label === 'PoleStar'"}],
"encoding": {
"x": {"field": "from"}
}
},
{
"mark": {
"type": "text",
"align": "left",
"style": "arrow-label",
"text": ["Voyager / Voyager 2", "More Valuable"]
},
"transform": [{"filter": "datum.label !== 'PoleStar'"}],
"encoding": {
"x": {"field": "from"}
}
}
]
}
],
"config": {
"view": {"stroke": "transparent"},
"style": {
"arrow-label": {"dy": 12, "fontSize": 9.5},
"arrow-label2": {"dy": 24, "fontSize": 9.5}
},
"title": {"fontSize": 12}
}
}
|]
voyagerData = [aesonQQ|
[
{
"measure": "Open Exploration",
"mean": 1.813,
"lo": 1.255,
"hi": 2.37,
"study": "PoleStar vs Voyager"
},
{
"measure": "Focused Question Answering",
"mean": -1.688,
"lo": -2.325,
"hi": -1.05,
"study": "PoleStar vs Voyager"
},
{
"measure": "Open Exploration",
"mean": 2.1875,
"lo": 1.665,
"hi": 2.71,
"study": "PoleStar vs Voyager 2"
},
{
"measure": "Focused Question Answering",
"mean": -0.0625,
"lo": -0.474,
"hi": 0.349,
"study": "PoleStar vs Voyager 2"
}]
|]
concatLayerVoyagerResult =
let dataPlot = [ title "Mean of Subject Ratings (95% CIs)" [TFrame FrBounds]
, encoding
. position Y [ PName "study"
, PmType Nominal
, PAxis [ AxNoTitle
, AxLabelPadding 5
, AxDomain False
, AxTicks False
, AxGrid False
]
]
. position X [ PmType Quantitative
, PScale [SDomain (DNumbers [-3, 3])]
, PAxis [ AxTitle ""
, AxGridDash [3, 3]
, AxDataCondition
(Expr "datum.value === 0")
(CAxGridColor "#666" "#CCC")
]
]
$ []
, layer [asSpec rulePlot, asSpec circlePlot]
]
rulePlot = [ mark Rule []
, encoding
. position X [PName "lo"]
. position X2 [PName "hi"]
$ []
]
circlePlot = [ mark Circle [MStroke "black", MOpacity 1]
, encoding
. position X [PName "mean"]
. color [ MName "measure"
, MmType Nominal
, MScale [SRange (RStrings ["black", "white"])]
, MLegend []
]
$ []
]
labelPlot = [ dataFromColumns []
. dataColumn "from" (Numbers [-0.25, 0.25])
. dataColumn "to" (Numbers [-2.9, 2.9])
. dataColumn "label" (Strings ["PoleStar", "Voyager / Voyager 2"])
$ []
, encoding
. position X [ PmType Quantitative
, PScale [SZero False]
, PAxis []
]
$ []
, layer (map asSpec [linePlot, arrowsPlot, pTextPlot, vTextPlot])
]
linePlot = [ mark Rule []
, encoding
. position X [PName "from"]
. position X2 [PName "to"]
$ []
]
shapeOpts = MDataCondition
[(Expr "datum.to > 0", [MSymbol SymTriangleRight])]
[MSymbol SymTriangleLeft]
arrowsPlot = [ mark Point [ MFilled True
, MFill "black"
, MSize 60 ]
, encoding
. position X [PName "to"]
. shape [ shapeOpts ]
$ []
]
from = position X [PName "from"]
pTextPlot = [ mark Text [ MAlign AlignRight
, MStyle ["arrow-label"]
, MTexts ["Polestar", "More Valuable"]
]
, transform
. filter (FExpr "datum.label === 'PoleStar'")
$ []
, encoding
. from
$ []
]
vTextPlot = [ mark Text [ MAlign AlignLeft
, MStyle ["arrow-label"]
, MTexts ["Voyager / Voyager 2", "More Valuable"]
]
, transform
. filter (FExpr "datum.label !== 'PoleStar'")
$ []
, encoding
. from
$ []
]
styles = MarkNamedStyles
[ ("arrow-label", [MdY 12, MFontSize 9.5])
, ("arrow-label2", [MdY 24, MFontSize 9.5])
]
conf = configure
. configuration (ViewStyle [ViewStroke "transparent"])
. configuration styles
. configuration (TitleStyle [TFontSize 12])
v = [ dataFromJson voyagerData []
, spacing 10
, vConcat [asSpec dataPlot, asSpec labelPlot]
, conf []
]
in toVegaLite v
vlShow concatLayerVoyagerResult
validate concatLayerVoyagerResultSpec concatLayerVoyagerResult
```
Return to the [Table of Contents](#Table-of-Contents).
---
## Other Layered Plots
- [Candlestick Chart](#Candlestick-Chart)
- [Ranged Dot Plot](#Ranged-Dot-Plot)
- [Bullet Chart](#Bullet-Chart)
- [Layered Plot with Dual-Axis](#Layered-Plot-with-Dual-Axis)
- Horizon Graph (repeat)
- [Weekly Weather Plot](#Weekly-Weather-Plot)
- [Wheat and Wages Example](#Wheat-and-Wages-Example)
---
### Candlestick Chart
From https://vega.github.io/vega-lite/examples/layer_candlestick.html
Note that the X-axis label - `Date in 2009` - is repeated in the JSON: it is specified both as the `title` and the `axis.title` field in the URL above at the time of writing, and the Haskell version reproduces both so that the two specifications match.
```
layerCandlestickSpec = [aesonQQ|
{
"$schema": "https://vega.github.io/schema/vega-lite/v4.json",
"width": 400,
"description": "A candlestick chart inspired by an example in Protovis (http://mbostock.github.io/protovis/ex/candlestick.html)",
"data": {"url": "data/ohlc.json"},
"encoding": {
"x": {
"field": "date",
"type": "temporal",
"title": "Date in 2009",
"axis": {
"format": "%m/%d",
"labelAngle": -45,
"title": "Date in 2009"
}
},
"y": {
"type": "quantitative",
"scale": {"zero": false},
"axis": {"title": "Price"}
},
"color": {
"condition": {
"test": "datum.open < datum.close",
"value": "#06982d"
},
"value": "#ae1325"
}
},
"layer": [
{
"mark": "rule",
"encoding": {
"y": {"field": "low"},
"y2": {"field": "high"}
}
},
{
"mark": "bar",
"encoding": {
"y": {"field": "open"},
"y2": {"field": "close"}
}
}
]
}
|]
layerCandlestick =
let desc = "A candlestick chart inspired by an example in Protovis (http://mbostock.github.io/protovis/ex/candlestick.html)"
dvals = dataFromUrl "data/ohlc.json"
start = [DTYear 2009, DTMonth May, DTDate 31]
end = [DTYear 2009, DTMonth Jul, DTDate 1]
enc = encoding
. position X [ PName "date"
, PmType Temporal
, PTitle "Date in 2009"
, PAxis [ AxFormat "%m/%d"
, AxLabelAngle (-45)
, AxTitle "Date in 2009" -- not needed
]
]
. position Y [ PmType Quantitative
, PScale [SZero False]
, PAxis [AxTitle "Price"]
]
. color [ MDataCondition
[(Expr "datum.open < datum.close", [MString "#06982d"])]
[MString "#ae1325"]
]
enc1 = encoding
. position Y [PName "low"]
. position Y2 [PName "high"]
enc2 = encoding
. position Y [PName "open"]
. position Y2 [PName "close"]
lyr1 = asSpec [mark Rule [], enc1 []]
lyr2 = asSpec [mark Bar [], enc2 []]
in toVegaLite [ description desc
, width 400
, dvals []
, enc []
, layer [lyr1, lyr2]
]
vlShow layerCandlestick
validate layerCandlestickSpec layerCandlestick
```
Return to the [Table of Contents](#Table-of-Contents).
### Ranged Dot Plot
From https://vega.github.io/vega-lite/examples/layer_ranged_dot.html
```
layerRangedDotSpec = [aesonQQ|
{
"$schema": "https://vega.github.io/schema/vega-lite/v4.json",
"description": "A ranged dot plot that uses 'layer' to convey changing life expectancy for the five most populous countries (between 1955 and 2000).",
"data": {"url": "data/countries.json"},
"transform": [
{
"filter": {
"field": "country",
"oneOf": ["China", "India", "United States", "Indonesia", "Brazil"]
}
},
{
"filter": {
"field": "year",
"oneOf": [1955, 2000]
}
}
],
"encoding": {
"x": {
"field": "life_expect",
"type": "quantitative",
"title": "Life Expectancy (years)"
},
"y": {
"field": "country",
"type": "nominal",
"title": "Country",
"axis": {
"offset": 5,
"ticks": false,
"minExtent": 70,
"domain": false
}
}
},
"layer": [
{
"mark": "line",
"encoding": {
"detail": {
"field": "country",
"type": "nominal"
},
"color": {"value": "#db646f"}
}
},
{
"mark": {
"type": "point",
"filled": true
},
"encoding": {
"color": {
"field": "year",
"type": "ordinal",
"scale": {
"domain": [1955, 2000],
"range": ["#e6959c", "#911a24"]
},
"title": "Year"
},
"size": {"value": 100},
"opacity": {"value": 1}
}
}
]
}
|]
layerRangedDot =
let label = description "A ranged dot plot that uses 'layer' to convey changing life expectancy for the five most populous countries (between 1955 and 2000)."
dvals = dataFromUrl "data/countries.json" []
trans = transform
. filter (FOneOf "country" (Strings ["China", "India", "United States", "Indonesia", "Brazil"]))
. filter (FOneOf "year" (Numbers [1955, 2000]))
enc = encoding
. position X [ PName "life_expect", PmType Quantitative
, PTitle "Life Expectancy (years)"
]
. position Y [ PName "country", PmType Nominal
, PTitle "Country"
, PAxis [ AxOffset 5
, AxTicks False
, AxMinExtent 70
, AxDomain False
]
]
lyr1 = [mark Line [], encoding (detail [DName "country", DmType Nominal] (color [MString "#db646f"] []))]
sc2 = domainRangeMap (1955, "#e6959c") (2000, "#911a24")
enc2 = encoding
. color [MName "year", MmType Ordinal, MScale sc2, MTitle "Year"]
. size [MNumber 100]
. opacity [MNumber 1]
lyr2 = [mark Point [MFilled True], enc2 []]
lyr = layer (map asSpec [lyr1, lyr2])
in toVegaLite [label, dvals, trans [], enc [], lyr]
vlShow layerRangedDot
validate layerRangedDotSpec layerRangedDot
```
Return to the [Table of Contents](#Table-of-Contents).
### Bullet Chart
From https://vega.github.io/vega-lite/examples/facet_bullet.html
```
facetBulletSpec = [aesonQQ|
{
"$schema": "https://vega.github.io/schema/vega-lite/v4.json",
"data": {
"values": [
{"title":"Revenue", "subtitle":"US$, in thousands", "ranges":[150,225,300],"measures":[220,270],"markers":[250]},
{"title":"Profit", "subtitle":"%", "ranges":[20,25,30],"measures":[21,23],"markers":[26]},
{"title":"Order Size", "subtitle":"US$, average", "ranges":[350,500,600],"measures":[100,320],"markers":[550]},
{"title":"New Customers", "subtitle":"count", "ranges":[1400,2000,2500],"measures":[1000,1650],"markers":[2100]},
{"title":"Satisfaction", "subtitle":"out of 5", "ranges":[3.5,4.25,5],"measures":[3.2,4.7],"markers":[4.4]}
]
},
"facet": {
"row": {
"field": "title", "type": "ordinal",
"header": {"labelAngle": 0, "title": ""}
}
},
"spacing": 10,
"spec": {
"encoding": {
"x": {
"type": "quantitative",
"scale": {"nice": false},
"title": null
}
},
"layer": [{
"mark": {"type": "bar", "color": "#eee"},
"encoding": {"x": {"field": "ranges[2]"}}
},{
"mark": {"type": "bar", "color": "#ddd"},
"encoding": {"x": {"field": "ranges[1]"}}
},{
"mark": {"type": "bar", "color": "#ccc"},
"encoding": {"x": {"field": "ranges[0]"}}
},{
"mark": {"type": "bar", "color": "lightsteelblue", "size": 10},
"encoding": {"x": {"field": "measures[1]"}}
},{
"mark": {"type": "bar", "color": "steelblue", "size": 10},
"encoding": {"x": {"field": "measures[0]"}}
},{
"mark": {"type": "tick", "color": "black"},
"encoding": {"x": {"field": "markers[0]"}}
}]
},
"resolve": {"scale": {"x": "independent"}},
"config": {"tick": {"thickness": 2}}
}
|]
facetBullet =
let rows = map mkRow [ ("Revenue", "US$, in thousands", [150, 225, 300], [220, 270], [250])
, ("Profit", "%", [20, 25, 30], [21, 23], [26])
, ("Order Size", "US$, average", [350, 500, 600], [100, 320], [550])
, ("New Customers", "count", [1400, 2000, 2500], [1000, 1650], [2100])
, ("Satisfaction", "out of 5", [3.5, 4.25, 5], [3.2, 4.7], [4.4])
]
-- mkRow :: (T.Text, T.Text, [Double], [Double], [Double]) -> A.Value
mkRow (ttl, stl, rngs, ms, mks) = A.object [ "title" .= ttl
, "subtitle" .= stl
, "ranges" .= rngs
, "measures" .= ms
, "markers" .= mks
]
dvals = dataFromJson (A.toJSON rows) []
fct = facet [RowBy [FName "title", FmType Ordinal, FHeader [HLabelAngle 0, HTitle ""]]]
encX name = encoding (position X [PName name] [])
lyr1 = [mark Bar [MColor "#eee"], encX "ranges[2]"]
lyr2 = [mark Bar [MColor "#ddd"], encX "ranges[1]"]
lyr3 = [mark Bar [MColor "#ccc"], encX "ranges[0]"]
lyr4 = [mark Bar [MColor "lightsteelblue", MSize 10], encX "measures[1]"]
lyr5 = [mark Bar [MColor "steelblue", MSize 10], encX "measures[0]"]
lyr6 = [mark Tick [MColor "black"], encX "markers[0]"]
lyr = [ encoding
. position X [PmType Quantitative, PScale [SNice (IsNice False)], PNoTitle]
$ []
, layer (map asSpec [lyr1, lyr2, lyr3, lyr4, lyr5, lyr6])
]
spec = specification (asSpec lyr)
rslv = resolve (resolution (RScale [(ChX, Independent)]) [])
cnf = configure (configuration (TickStyle [MThickness 2]) [])
in toVegaLite [dvals, fct, spacing 10, spec, rslv, cnf]
vlShow facetBullet
validate facetBulletSpec facetBullet
```
Return to the [Table of Contents](#Table-of-Contents).
### Layered Plot with Dual-Axis
From https://vega.github.io/vega-lite/examples/layer_dual_axis.html
Note that Vega-Lite supports both `average` and `mean` (as synonyms) but `hvega` only provides `Mean`, so there will be differences in the JSON comparison below.
```
layerDualAxisSpec = [aesonQQ|
{
"$schema": "https://vega.github.io/schema/vega-lite/v4.json",
"description": "A dual axis chart, created by setting y's scale resolution to `\"independent\"`",
"width": 400, "height": 300,
"data": {
"url": "data/weather.csv"
},
"transform": [{"filter": "datum.location == \"Seattle\""}],
"encoding": {
"x": {
"timeUnit": "month",
"field": "date",
"axis": {"format": "%b", "title": null}
}
},
"layer": [
{
"mark": {"opacity": 0.3, "type": "area", "color": "#85C5A6"},
"encoding": {
"y": {
"aggregate": "average",
"field": "temp_max",
"scale": {"domain": [0, 30]},
"title": "Avg. Temperature (°C)",
"axis": {"titleColor": "#85C5A6"}
},
"y2": {
"aggregate": "average",
"field": "temp_min"
}
}
},
{
"mark": {"stroke": "#85A9C5", "type": "line", "interpolate": "monotone"},
"encoding": {
"y": {
"aggregate": "average",
"field": "precipitation",
"title": "Precipitation (inches)",
"axis": {"titleColor":"#85A9C5"}
}
}
}
],
"resolve": {"scale": {"y": "independent"}}
}
|]
layerDualAxis =
let label = description "A dual axis chart, created by setting y's scale resolution to `\"independent\"`"
dvals = dataFromUrl "data/weather.csv" []
trans = transform (filter (FExpr "datum.location == \"Seattle\"") [])
enc = encoding (position X [ PName "date", PTimeUnit (TU Month)
, PAxis [AxFormat "%b", AxNoTitle]
]
[])
col1 = "#85C5A6"
col2 = "#85A9C5"
enc1 = encoding
. position Y [ PName "temp_max", PAggregate Mean
, PScale [SDomain (DNumbers [0, 30])]
, PTitle "Avg. Temperature (°C)", PAxis [AxTitleColor col1]
]
. position Y2 [PName "temp_min", PAggregate Mean]
enc2 = encoding
. position Y [ PName "precipitation", PAggregate Mean
, PTitle "Precipitation (inches)", PAxis [AxTitleColor col2]
]
lyr1 = [mark Area [MOpacity 0.3, MColor col1], enc1 []]
lyr2 = [mark Line [MStroke col2, MInterpolate Monotone], enc2 []]
lyr = layer (map asSpec [lyr1, lyr2])
rsv = resolve (resolution (RScale [(ChY, Independent)]) [])
in toVegaLite [label, height 300, width 400, dvals, trans, enc, lyr, rsv]
vlShow layerDualAxis
validate layerDualAxisSpec layerDualAxis
```
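The `average` versus `mean` difference mentioned above can be inspected directly by serializing just the aggregated encoding. This is a sketch, assuming `fromVL` and Aeson's `encode` are in scope, as they are used elsewhere in this notebook:

```
-- Sketch: check how hvega names the aggregate in the generated JSON
-- (assumes fromVL and Data.Aeson.encode are available).
aggCheck =
  let enc = encoding (position Y [PName "temp_max", PAggregate Mean] [])
  in encode (fromVL (toVegaLite [ dataFromUrl "data/weather.csv" []
                                , mark Area []
                                , enc
                                ]))
-- The serialized spec contains "aggregate":"mean", whereas the upstream
-- JSON uses "average"; Vega-Lite treats the two as synonyms, so the
-- rendered charts are identical even though the validate comparison
-- reports a textual difference.
```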
Return to the [Table of Contents](#Table-of-Contents).
### Weekly Weather Plot
From https://vega.github.io/vega-lite/examples/bar_layered_weather.html
```
barLayeredWeatherSpec = [aesonQQ|
{
"$schema": "https://vega.github.io/schema/vega-lite/v4.json",
"description": "A layered bar chart with floating bars representing weekly weather data",
"title": {
"text": ["Weekly Weather", "Observations and Predictions"],
"frame": "group"
},
"data": {
"url": "data/weather.json"
},
"width": 250,
"height": 200,
"encoding": {
"x": {
"field": "id",
"type": "ordinal",
"axis": {
"domain": false,
"ticks": false,
"labels": false,
"title": null,
"titlePadding": 25,
"orient": "top"
}
},
"y": {
"type": "quantitative",
"scale": {"domain": [10, 70]},
"axis": {"title": "Temperature (F)"}
}
},
"layer": [
{
"mark": {
"type": "bar",
"style": "box"
},
"encoding": {
"y": {"field": "record.low"},
"y2": {"field": "record.high"},
"size": {"value": 20},
"color": {"value": "#ccc"}
}
},
{
"mark": {
"type": "bar",
"style": "box"
},
"encoding": {
"y": {"field": "normal.low"},
"y2": {"field": "normal.high"},
"size": {"value": 20},
"color": {"value": "#999"}
}
},
{
"mark": {"type": "bar", "style": "box"},
"encoding": {
"y": {"field": "actual.low"},
"y2": {"field": "actual.high"},
"size": {"value": 12},
"color": {"value": "#000"}
}
},
{
"mark": {"type": "bar", "style": "box"},
"encoding": {
"y": {"field": "forecast.low.low"},
"y2": {"field": "forecast.low.high"},
"size": {"value": 12},
"color": {"value": "#000"}
}
},
{
"mark": {"type": "bar", "style": "box"},
"encoding": {
"y": {"field": "forecast.low.high"},
"y2": {"field": "forecast.high.low"},
"size": {"value": 3},
"color": {"value": "#000"}
}
},
{
"mark": {"type": "bar", "style": "box"},
"encoding": {
"y": {"field": "forecast.high.low"},
"y2": {"field": "forecast.high.high"},
"size": {"value": 12},
"color": {"value": "#000"}
}
},
{
"mark": {"type": "text", "align": "center", "baseline": "bottom", "y": -5},
"encoding": {
"text": {"field": "day"}
}
}
]
}
|]
barLayeredWeather =
let label = description "A layered bar chart with floating bars representing weekly weather data"
dvals = dataFromUrl "data/weather.json" []
-- hvega splits titles and sub-titles on \n to create multi-line
-- labels.
titleOpts = title "Weekly Weather\nObservations and Predictions" [TFrame FrGroup]
axis1 = [AxDomain False, AxTicks False, AxLabels False, AxNoTitle, AxTitlePadding 25, AxOrient STop]
enc = encoding
. position X [PName "id", PmType Ordinal, PAxis axis1]
. position Y [ PmType Quantitative
, PScale [SDomain (DNumbers [10, 70])]
, PAxis [AxTitle "Temperature (F)"]
]
$ []
enc1 = encoding
. position Y [PName "record.low"]
. position Y2 [PName "record.high"]
. size [MNumber 20]
. color [MString "#ccc"]
lyr1 = [mark Bar [MStyle ["box"]], enc1 []]
enc2 = encoding
. position Y [PName "normal.low"]
. position Y2 [PName "normal.high"]
. size [MNumber 20]
. color [MString "#999"]
lyr2 = [mark Bar [MStyle ["box"]], enc2 []]
enc3 = encoding
. position Y [PName "actual.low"]
. position Y2 [PName "actual.high"]
. size [MNumber 12]
. color [MString "#000"]
lyr3 = [mark Bar [MStyle ["box"]], enc3 []]
enc4 = encoding
. position Y [PName "forecast.low.low"]
. position Y2 [PName "forecast.low.high"]
. size [MNumber 12]
. color [MString "#000"]
lyr4 = [mark Bar [MStyle ["box"]], enc4 []]
enc5 = encoding
. position Y [PName "forecast.low.high"]
. position Y2 [PName "forecast.high.low"]
. size [MNumber 3]
. color [MString "#000"]
lyr5 = [mark Bar [MStyle ["box"]], enc5 []]
enc6 = encoding
. position Y [PName "forecast.high.low"]
. position Y2 [PName "forecast.high.high"]
. size [MNumber 12]
. color [MString "#000"]
lyr6 = [mark Bar [MStyle ["box"]], enc6 []]
enc7 = encoding (text [TName "day"] [])
lyr7 = [mark Text [MAlign AlignCenter, MBaseline AlignBottom, MY (-5)], enc7]
lyr = layer (map asSpec [lyr1, lyr2, lyr3, lyr4, lyr5, lyr6, lyr7])
in toVegaLite [label, titleOpts, dvals, width 250, height 200, enc, lyr]
vlShow barLayeredWeather
validate barLayeredWeatherSpec barLayeredWeather
```
Return to the [Table of Contents](#Table-of-Contents).
### Wheat and Wages Example
From https://vega.github.io/vega-lite/examples/wheat_wages.html
```
wheatWagesSpec = [aesonQQ|
{
"$schema": "https://vega.github.io/schema/vega-lite/v4.json",
"width": 900,
"height": 400,
"data": { "url": "data/wheat.json"},
"transform": [{"calculate": "+datum.year + 5", "as": "year_end"}],
"encoding": {
"y": {
"type": "quantitative",
"axis": { "zindex": 1 }
},
"x": {
"type": "quantitative",
"axis": {"tickCount": 5, "format": "d"}
}
},
"layer": [
{
"mark": {"type": "bar", "fill": "#aaa", "stroke": "#999"},
"encoding": {
"x": {"field": "year"},
"x2": {"field": "year_end"},
"y": {"field": "wheat"}
}
},
{
"data": {
"values": [
{ "year": "1600" },
{ "year": "1650" },
{ "year": "1700" },
{ "year": "1750" },
{ "year": "1800" }
]
},
"mark": {
"type": "rule",
"stroke": "#000",
"strokeWidth": 0.6,
"opacity": 0.5
},
"encoding": {
"x": {"field": "year"}
}
},
{
"mark": {
"type": "area",
"color": "#a4cedb",
"opacity": 0.7
},
"encoding": {
"x": {"field": "year"},
"y": {"field": "wages"}
}
},
{
"mark": {"type": "line", "color": "#000", "opacity": 0.7},
"encoding": {
"x": {"field": "year"},
"y": {"field": "wages"}
}
},
{
"mark": {"type": "line", "yOffset": -2, "color": "#EE8182"},
"encoding": {
"x": {"field": "year"},
"y": {"field": "wages"}
}
},
{
"data": {"url": "data/monarchs.json"},
"transform": [
{ "calculate": "((!datum.commonwealth && datum.index % 2) ? -1: 1) * 2 + 95", "as": "offset" },
{ "calculate": "95", "as": "y" }
],
"mark": {"type": "bar", "stroke": "#000"},
"encoding": {
"x": {"field": "start"},
"x2": {"field": "end"},
"y": {"field": "y"},
"y2": { "field": "offset" },
"fill": {
"field": "commonwealth",
"scale": { "range": ["black", "white"] },
"legend": null
}
}
},
{
"data": {"url": "data/monarchs.json"},
"transform": [
{ "calculate": "((!datum.commonwealth && datum.index % 2) ? -1: 1) + 95", "as": "off2" },
{ "calculate": "+datum.start + (+datum.end - +datum.start)/2", "as": "x"}
],
"mark": {
"type": "text",
"yOffset": 16,
"fontSize": 9,
"baseline": "bottom",
"fontStyle": "italic"
},
"encoding": {
"x": {"field": "x"},
"y": {"field": "off2"},
"text": {"field": "name"}
}
}
],
"config": {
"axis": {
"title": null,
"gridColor": "white",
"gridOpacity": 0.25,
"domain": false
},
"view": { "stroke": "transparent" }
}
}
|]
wheatWages =
let dvals = dataFromUrl "data/wheat.json" []
trans = transform (calculateAs "+datum.year + 5" "year_end" [])
enc = encoding
. position Y [PmType Quantitative, PAxis [AxZIndex 1]]
. position X [PmType Quantitative, PAxis [AxTickCount 5, AxFormat "d"]]
$ []
posX = position X [PName "year"]
enc1 = encoding
. posX
. position X2 [PName "year_end"]
. position Y [PName "wheat"]
lyr1 = [mark Bar [MFill "#aaa", MStroke "#999"], enc1 []]
dvals2 = dataFromColumns []
. dataColumn "year" (Strings ["1600", "1650", "1700", "1750", "1800"])
enc2 = encoding
. posX
lyr2 = [dvals2 [], mark Rule [MStroke "#000", MStrokeWidth 0.6, MOpacity 0.5], enc2 []]
enc3 = encoding
. posX
. position Y [PName "wages"]
lyr3 = [mark Area [MColor "#a4cedb", MOpacity 0.7], enc3 []]
lyr4 = [mark Line [MColor "#000", MOpacity 0.7], enc3 []]
lyr5 = [mark Line [MColor "#EE8182", MYOffset (-2)], enc3 []]
dvals6 = dataFromUrl "data/monarchs.json" []
trans6 = transform
. calculateAs "((!datum.commonwealth && datum.index % 2) ? -1: 1) * 2 + 95" "offset"
. calculateAs "95" "y"
enc6 = encoding
. position X [PName "start"]
. position X2 [PName "end"]
. position Y [PName "y"]
. position Y2 [PName "offset"]
. fill [MName "commonwealth", MScale [SRange (RStrings ["black", "white"])], MLegend []]
lyr6 = [dvals6, trans6 [], mark Bar [MStroke "#000"], enc6 []]
trans7 = transform
. calculateAs "((!datum.commonwealth && datum.index % 2) ? -1: 1) + 95" "off2"
. calculateAs "+datum.start + (+datum.end - +datum.start)/2" "x"
enc7 = encoding
. position X [PName "x"]
. position Y [PName "off2"]
. text [TName "name"]
markOpts7 = mark Text [MYOffset 16, MFontSize 9, MBaseline AlignBottom, MFontStyle "italic"]
lyr7 = [dvals6, trans7 [], markOpts7, enc7 []]
lyr = layer (map asSpec [lyr1, lyr2, lyr3, lyr4, lyr5, lyr6, lyr7])
cnf = configure
. configuration (Axis [NoTitle, GridColor "white", GridOpacity 0.25, Domain False])
. configuration (ViewStyle [ViewStroke "transparent"])
in toVegaLite [dvals, height 400, width 900, trans, enc, lyr, cnf []]
vlShow wheatWages
validate wheatWagesSpec wheatWages
```
Return to the [Table of Contents](#Table-of-Contents).
<a href="https://colab.research.google.com/github/manabuishii/Py4Bio/blob/update-chapter10-ipynb/Chapter_10_Web_Applications.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Python for Bioinformatics
-----------------------------

This Jupyter notebook is intended to be used alongside the book [Python for Bioinformatics](http://py3.us/)
**Note:** Before running the examples, the sample files must be accessible from this Jupyter notebook. The following commands download them from GitHub and extract them into a directory called samples.
Chapter 10: Web Applications
-----------------------------
```
!wget https://github.com/Serulab/Py4Bio/archive/master.zip
!unzip master.zip
!mv Py4Bio-master/code/ch10/* ./
!apt-get -y install apache2
!pip install bottle
!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip ngrok-stable-linux-amd64.zip
```
## **CGI IN PYTHON**
**Server for Listing 10.1**
This command shows the URL used for Listing 10.1
```
get_ipython().system_raw('./ngrok http 80 &')
%%sh
curl -s http://localhost:4040/api/tunnels | python -c "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
!/etc/init.d/apache2 restart
!a2enmod cgid
!service apache2 restart
```
**Listing 10.1:** firstcgi.py: First CGI script
```
!cp firstcgi.py /usr/lib/cgi-bin/
!chmod 755 /usr/lib/cgi-bin/firstcgi.py
```
*Access the URL and append /cgi-bin/firstcgi.py*
ex https://xxxxxx.ngrok.io/cgi-bin/firstcgi.py
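The contents of firstcgi.py are not reproduced in this notebook. As a rough sketch (an assumption, not the book's actual listing), a minimal Python CGI script only needs to emit a `Content-Type` header, a blank line, and then the response body:

```python
#!/usr/bin/env python
# Hypothetical minimal CGI script (a stand-in for firstcgi.py):
# the header line, a blank line, then the HTML body.
header = 'Content-Type: text/html\n'
body = '<html><body>Hello from CGI!</body></html>'
print(header)
print(body)
```

The blank line after the header (here the `\n` inside the first `print`) is what separates HTTP headers from the body; omitting it is a classic CGI error.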
## **Sending Data to a CGI Program**
**Listing 10.2:** greeting.html: HTML front end to send data to a CGI program
```
!cp greeting.html /var/www/html
```
**Listing 10.3:** greeting.py: CGI program that processes the form in
greeting.html.
```
!cp greeting.py /usr/lib/cgi-bin/
!chmod 755 /usr/lib/cgi-bin/greeting.py
```
Access the URL (the same one as for Listing 10.1) and append /greeting.html
ex https://xxxxxx.ngrok.io/greeting.html
## Web Program to Calculate the Net Charge of a Protein (CGI version)
**Listing 10.4:** protcharge.html: HTML front end to send data to a CGI program
```
!cp protcharge.html /var/www/html
```
**Listing 10.5:** protcharge.py: Back-end code to calculate the net charge of a protein and the proportion of charged amino acids
```
%%writefile /usr/lib/cgi-bin/protcharge.py
#!/usr/bin/env python
import cgi, cgitb
def chargeandprop(aa_seq):
protseq = aa_seq.upper()
charge = -0.002
cp = 0
aa_charge = {'C':-.045,'D':-.999,'E':-.998,'H':.091,
'K':1,'R':1,'Y':-.001}
for aa in protseq:
charge += aa_charge.get(aa, 0)
if aa in aa_charge:
cp += 1
prop = float(cp)/len(aa_seq)*100
return (charge, prop)
cgitb.enable()
print('Content-Type: text/html\n')
form = cgi.FieldStorage()
seq = form.getvalue('aaseq', 'QWERTYYTREWQRTYEYTRQWE')
prop = form.getvalue('prop', 'n')
jobtitle = form.getvalue('title','No title')
charge, propvalue = chargeandprop(seq)
print('<html><body>Job title:{0}<br/>'.format(jobtitle))
print('Your sequence is:<br/>{0}<br/>'.format(seq))
print('Net charge: {0}<br/>'.format(charge))
if prop == 'y':
print('Proportion of charged AA: {0:.2f}<br/>'
.format(propvalue))
print('</body></html>')
!chmod 755 /usr/lib/cgi-bin/protcharge.py
```
Access the URL (the same one as for Listing 10.1) and append /protcharge.html
ex https://xxxxxx.ngrok.io/protcharge.html
Sample Data: MARLQTALLVVLVLLAVALQATEAGPYGANMEDSVCCRDYVRYRLPLRVVKHFYWTSDSCPRPGVVLLTFRDKEICADPRVPWVKMILNKLSQ
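As a quick sanity check outside the web server, the charge logic from Listing 10.5 can be exercised standalone on its default sequence (the function body below simply repeats the one in the listing so the cell is self-contained):

```python
# Standalone check of the net-charge calculation from Listing 10.5
def chargeandprop(aa_seq):
    protseq = aa_seq.upper()
    charge = -0.002
    cp = 0
    aa_charge = {'C': -.045, 'D': -.999, 'E': -.998, 'H': .091,
                 'K': 1, 'R': 1, 'Y': -.001}
    for aa in protseq:
        charge += aa_charge.get(aa, 0)   # uncharged residues contribute 0
        if aa in aa_charge:
            cp += 1                      # count charged residues
    prop = float(cp) / len(aa_seq) * 100
    return charge, prop

charge, prop = chargeandprop('QWERTYYTREWQRTYEYTRQWE')
print('Net charge: {0:.3f}, charged AA: {1:.2f}%'.format(charge, prop))
```

The default sequence has 4 E, 4 R, and 4 Y residues out of 22, so 12 of 22 residues are charged.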
*Stop Apache server*
```
!service apache2 stop
```
*Stop ngrok*
```
!pkill ngrok
```
## **Bottle: A Python Web Framework for WSGI**
**Server for Listing 10.6**
This command shows the URL used for Listing 10.6
```
get_ipython().system_raw('./ngrok http 8000 &')
%%sh
curl -s http://localhost:4040/api/tunnels | python -c "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
```
*Check that this URL differs from the first one (used for Listing 10.1).*
*If it is the same URL, stop ngrok first.*
**Listing 10.6:** hellobottle.py: Hello World in Bottle
```
!python helloworldbottle.py
```
*Access the URL for Listing 10.6*
ex https://xxxxxx.ngrok.io/
**To stop the server, press the stop button on the cell running `!python helloworldbottle.py` above.** A `^C` in its output confirms the server stopped.
## **Bottle Components**
**Routes**
```
get_ipython().system_raw('./ngrok http 8000 &')
%%sh
curl -s http://localhost:4040/api/tunnels | python -c "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
%%writefile helloworldbottle.py
from bottle import route, run
@route('/')
def index():
return 'Top level, or Index Page'
@route('/about')
def about():
return 'The about page'
run(host='localhost', port=8000)
!python helloworldbottle.py
```
Access the URL (for Routes): the top page and /about
ex https://xxxxxx.ngrok.io/
ex https://xxxxxx.ngrok.io/about
**To stop the server, press the stop button on the cell running `!python helloworldbottle.py` above.** A `^C` in its output confirms the server stopped.
**URL with Variable Parts**
```
get_ipython().system_raw('./ngrok http 8000 &')
%%sh
curl -s http://localhost:4040/api/tunnels | python -c "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
%%writefile helloworldbottle.py
from bottle import route, run
@route('/')
def index():
return 'Top level, or Index Page'
@route('/about')
def about():
return 'The about page'
@route('/greets/<name>')
def shows_greeting(name):
return 'Hello {0}'.format(name)
run(host='localhost', port=8000)
!python helloworldbottle.py
```
Access the URL (for URL with Variable Parts): the top page, /about, and /greets/&lt;name&gt;
ex https://xxxxxx.ngrok.io/
ex https://xxxxxx.ngrok.io/about
ex https://xxxxxx.ngrok.io/greets/Adele
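Bottle passes the matched `<name>` path segment straight into the handler as an argument, so the greeting logic can be checked without starting a server:

```python
# The handler body from the /greets/<name> route, called directly
def shows_greeting(name):
    return 'Hello {0}'.format(name)

print(shows_greeting('Adele'))  # Hello Adele
```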
**To stop the server, press the stop button on the cell running `!python helloworldbottle.py` above.** A `^C` in its output confirms the server stopped.
## Templates
**Listing 10.7:** index.tpl: Template for Bottle with variables
**Listing 10.8:** indextemplate1.py: Bottle code for template with variables
```
get_ipython().system_raw('./ngrok http 8000 &')
%%sh
curl -s http://localhost:4040/api/tunnels | python -c "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
!python indextemplate1.py
```
Access the URL (for Listings 10.7 and 10.8)
ex https://xxxxxx.ngrok.io/greets/Bob
**To stop the server, press the stop button on the cell running `!python indextemplate1.py` above.** A `^C` in its output confirms the server stopped.
**Listing 10.9:** index2.tpl: Template for Bottle with variables and flow control
**Listing 10.10:** index2.py: Bottle code for template with variables
```
get_ipython().system_raw('./ngrok http 8000 &')
%%sh
curl -s http://localhost:4040/api/tunnels | python -c "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
!python indextemplate2.py
```
Access the URL (for Listings 10.9 and 10.10)
ex https://xxxxxx.ngrok.io/greets/Chris
**To stop the server, press the stop button on the cell running `!python indextemplate2.py` above.** A `^C` in its output confirms the server stopped.
**Listing 10.11:** indextemplate3.py: Bottle code with logic in code instead of in templates
**Listing 10.12:** index3.tpl: template for indextemplate3.py
```
get_ipython().system_raw('./ngrok http 8000 &')
%%sh
curl -s http://localhost:4040/api/tunnels | python -c "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
!python indextemplate3.py
```
Access the URL (for Listings 10.11 and 10.12)
ex https://xxxxxx.ngrok.io/greets/Denis
ex https://xxxxxx.ngrok.io/greets/123Denis
**To stop the server, press the stop button on the cell running `!python indextemplate3.py` above.** A `^C` in its output confirms the server stopped.
## Web Program to Calculate the Net Charge of a Protein (Bottle Version)
**Listing 10.13:** protchargebottle.py: Back-end of the program to calculate the
net charge of a protein using Bottle
**Listing 10.14:** result.html: Template for showing the result of method
protcharge
```
get_ipython().system_raw('./ngrok http 8000 &')
%%sh
curl -s http://localhost:4040/api/tunnels | python -c "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
!python protchargebottle.py
```
Access the URL (for Listings 10.13 and 10.14)
ex https://xxxxxx.ngrok.io/
Sample Data: MARLQTALLVVLVLLAVALQATEAGPYGANMEDSVCCRDYVRYRLPLRVVKHFYWTSDSCPRPGVVLLTFRDKEICADPRVPWVKMILNKLSQ
**To stop the server, press the stop button on the cell running `!python protchargebottle.py` above.** A `^C` in its output confirms the server stopped.
# Main
```
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
import json
from datetime import datetime
from api_keys import key1
from pprint import pprint
cities_csv_file = "./cities.csv"
cities_data = pd.read_csv(cities_csv_file, header = None)
url = "https://maps.googleapis.com/maps/api/geocode/json?address="
city_names = []
city_names = cities_data[0]
lats = []
lngs = []
for i in range(30):
query_url = url + city_names[i] + ",CA" + f"&key={key1}"
response = requests.get(query_url).json()
lats.append(response["results"][0]["geometry"]["location"]["lat"])
lngs.append(response["results"][0]["geometry"]["location"]["lng"])
url2 = "https://api.darksky.net/forecast"
api_key = "98d641033a2a414ed23fa6b53766900d"
time = 1514764800 # 01/01/2018 00:00:00 UTC (note: this rebinds the name 'time', shadowing the imported time module)
# time = 1540857600
cloudiness = []
date = []
humidity = []
temp_high = []
temp_low = []
wind_speed = []
rain = []
city = []
summary = []
dew_point = []
moon_phase = []
for i in range(1): # City List city_names
for j in range(365): # Amount of time 1 = 1 Day
print(f"Processing: {datetime.utcfromtimestamp(time).strftime('%m-%d-%Y %H:%M:%S')}.")
time = time + 86400
query_url2 = f"{url2}/{api_key}/32.7157,-117.1611,{str(time)}?exclude=hourly,currently,minutely,flags"
#print(query_url2)
weather_response = requests.get(query_url2).json()
cloudiness.append(weather_response["daily"]["data"][0]["cloudCover"])
date.append(weather_response["daily"]["data"][0]["time"])
humidity.append(weather_response["daily"]["data"][0]["humidity"])
try:
temp_high.append(weather_response["daily"]["data"][0]["temperatureHigh"])
except:
temp_high.append(weather_response["daily"]["data"][0]["temperatureMax"])
try:
temp_low.append(weather_response["daily"]["data"][0]["temperatureLow"])
except:
temp_low.append(weather_response["daily"]["data"][0]["temperatureMin"])
wind_speed.append(weather_response["daily"]["data"][0]["windSpeed"])
rain.append(weather_response["daily"]["data"][0]["precipIntensityMax"])
summary.append(weather_response["daily"]["data"][0]["summary"])
dew_point.append(weather_response["daily"]["data"][0]["dewPoint"])
moon_phase.append(weather_response["daily"]["data"][0]["moonPhase"])
'''
print(f"t = {time}")
'''
print("All done!")
#del(humidity[302])
print(date)
weather_df = pd.DataFrame({"Date" : date, "Cloudiness" : cloudiness,
"High Temperature" : temp_high,
"Low Temperature" : temp_low,
"Humidity" : humidity,
"Precipitation" : rain,
"Wind Speed" : wind_speed,
"Summary" : summary,
"Dew Point" : dew_point,
"Moon Phase" : moon_phase})
weather_df["Date"] = pd.to_datetime(weather_df["Date"], unit = 's')
#weather_df["Date"] = datetime.utcfromtimestamp(weather_df["Date"]).strftime('%m-%d-%Y %H:%M:%S')
#weather_df = weather_df.set_index("City")
weather_df.to_csv(path_or_buf="./sandiego_weather_2018.csv",index=False,encoding="UTF-8")
weather_df
print(weather_df["High Temperature"].max())
```
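The loop above steps the Unix timestamp forward by 86400 seconds (one day) per request; that arithmetic and the conversion used in the progress message can be checked in isolation:

```python
from datetime import datetime

start = 1514764800   # 2018-01-01 00:00:00 UTC
one_day = 86400      # seconds per day
stamps = [start + i * one_day for i in range(3)]
dates = [datetime.utcfromtimestamp(t).strftime('%m-%d-%Y') for t in stamps]
print(dates)
```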
```
%matplotlib inline
import warnings
from datetime import datetime
import os
from pathlib import Path
import quandl
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pandas_datareader.data as web
from pandas_datareader.famafrench import get_available_datasets
from pyfinance.ols import PandasRollingOLS
from sklearn.feature_selection import mutual_info_classif
import seaborn as sns
warnings.filterwarnings('ignore')
plt.style.use('fivethirtyeight')
idx = pd.IndexSlice
```
## Get Data
```
with pd.HDFStore('../data/assets.h5') as store:
prices = store['quandl/wiki/prices'].loc[idx['2000':'2018', :], 'adj_close'].unstack('ticker')
stocks = store['us_equities/stocks'].loc[:, ['marketcap', 'ipoyear', 'sector']]
stocks = stocks[~stocks.index.duplicated()]
```
### Keep data with stock info
```
stocks.info()
prices.info()
```
## Create monthly return series
Winsorize outliers
```
monthly_prices = prices.resample('M').last()
outlier_cutoff = 0.01
data = pd.DataFrame()
for i in [1, 2, 3, 6, 9, 12]:
data[f'return_{i}m'] = (monthly_prices
.pct_change(i)
.stack()
.pipe(lambda x: x.clip(lower=x.quantile(outlier_cutoff),
upper=x.quantile(1-outlier_cutoff)))
.add(1)
.pow(1/i)
.sub(1)
)
data = data.swaplevel().dropna()
data.info()
```
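The `.add(1).pow(1/i).sub(1)` step in the pipeline above converts an i-month cumulative return into its geometric monthly average; the same transform on a single number:

```python
# A 21% cumulative return over 2 months is ~10% per month geometrically:
# (1 + 0.21) ** (1/2) - 1 = 0.1
r_cum, i = 0.21, 2
r_monthly = (1 + r_cum) ** (1 / i) - 1
print(round(r_monthly, 6))
```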
## Drop stocks with less than 10 yrs of returns
```
min_obs = 120
nobs = data.groupby(level='ticker').size()
keep = nobs[nobs>min_obs].index
data = data.loc[idx[keep,:], :]
data.info()
data.describe()
# cmap = sns.diverging_palette(10, 220, as_cmap=True)
sns.clustermap(data.corr('spearman'), annot=True, center=0, cmap='Blues');
data.index.get_level_values('ticker').nunique()
```
## Rolling Factor Betas
```
factors = ['Mkt-RF', 'SMB', 'HML', 'RMW', 'CMA']
factor_data = web.DataReader('F-F_Research_Data_5_Factors_2x3', 'famafrench', start='2000')[0].drop('RF', axis=1)
factor_data.index = factor_data.index.to_timestamp()
factor_data = factor_data.resample('M').last().div(100)
factor_data.index.name = 'date'
factor_data.info()
factor_data = factor_data.join(data['return_1m']).sort_index()
factor_data.info()
T = 24
betas = (factor_data
.groupby(level='ticker', group_keys=False)
.apply(lambda x: PandasRollingOLS(window=min(T, x.shape[0]-1), y=x.return_1m, x=x.drop('return_1m', axis=1)).beta))
betas.describe().join(betas.sum(1).describe().to_frame('total'))
cmap = sns.diverging_palette(10, 220, as_cmap=True)
sns.clustermap(betas.corr(), annot=True, cmap=cmap, center=0);
data = (data
.join(betas
.groupby(level='ticker')
.shift()))
data.info()
```
### Impute mean for missing factor betas
```
data.loc[:, factors] = data.groupby('ticker')[factors].apply(lambda x: x.fillna(x.mean()))
data.info()
```
## Momentum factors
```
for lag in [2,3,6,9,12]:
data[f'momentum_{lag}'] = data[f'return_{lag}m'].sub(data.return_1m)
data['momentum_3_12'] = data['return_12m'].sub(data.return_3m)
```
## Date Indicators
```
dates = data.index.get_level_values('date')
data['year'] = dates.year
data['month'] = dates.month
```
## Lagged returns
```
for t in range(1, 7):
data[f'return_1m_t-{t}'] = data.groupby(level='ticker').return_1m.shift(t)
# data = data.dropna(thresh=int(len(data.columns) * outlier_threshold))
# data = data.dropna()
data.info()
```
## Target: Holding Period Returns
```
for t in [1,2,3,6,12]:
data[f'target_{t}m'] = data.groupby(level='ticker')[f'return_{t}m'].shift(-t)
cols = ['target_1m',
'target_2m',
'target_3m', 'return_1m',
'return_2m',
'return_3m',
'return_1m_t-1',
'return_1m_t-2',
'return_1m_t-3']
data[cols].dropna().sort_index().head(10)
data.info()
```
## Create age proxy
```
data = (data
.join(pd.qcut(stocks.ipoyear, q=5, labels=list(range(1, 6)))
.astype(float)
.fillna(0)
.astype(int)
.to_frame('age')))
data.age = data.age.fillna(-1)
```
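`pd.qcut` above buckets IPO years into quintiles by rank; on a small toy series of hypothetical years it assigns each bin an equal share:

```python
import pandas as pd

# Ten hypothetical IPO years -> five quintile labels, two values per bin
years = pd.Series([1990, 1995, 1999, 2000, 2001, 2005, 2010, 2012, 2015, 2018])
labels = pd.qcut(years, q=5, labels=list(range(1, 6))).astype(int)
print(labels.value_counts().sort_index())
```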
## Create dynamic size proxy
```
stocks.marketcap = stocks.marketcap.str.replace('$', '')
stocks['mcap'] = stocks.marketcap.str[-1]
stocks.marketcap = pd.to_numeric(stocks.marketcap.str[:-1])
stocks = stocks[stocks.mcap.isin(['B', 'M'])]
stocks.info()
stocks.marketcap = stocks.apply(lambda x: x.marketcap * 1000 if x.mcap == 'B' else x.marketcap, axis=1)
stocks.marketcap.describe()
size_factor = (monthly_prices
.loc[data.index.get_level_values('date').unique(),
data.index.get_level_values('ticker').unique()]
.sort_index(ascending=False)
.pct_change()
.fillna(0)
.add(1)
.cumprod())
size_factor.info()
msize = (size_factor
.mul(stocks
.loc[size_factor.columns, 'marketcap'])).dropna(axis=1, how='all')
data['msize'] = (msize
.apply(lambda x: pd.qcut(x, q=10, labels=list(range(1, 11)))
.astype(int), axis=1)
.stack()
.swaplevel())
data.msize = data.msize.fillna(-1)
```
## Combine data
```
data = data.join(stocks[['sector']])
data.sector = data.sector.fillna('Unknown')
data.info()
```
## Store data
```
with pd.HDFStore('data.h5') as store:
store.put('data', data.sort_index().loc[idx[:, :datetime(2018, 3, 1)], :])
```
## Create Dummy variables
```
dummy_data = pd.get_dummies(data,
columns=['year','month', 'msize', 'age', 'sector'],
prefix=['year','month', 'msize', 'age', ''],
prefix_sep=['_', '_', '_', '_', ''])
dummy_data = dummy_data.rename(columns={c:c.replace('.0', '') for c in dummy_data.columns})
dummy_data.info()
```
### Mutual Information
#### Original Data
```
target_labels = [f'target_{i}m' for i in [1,2,3,6,12]]
targets = data.dropna().loc[:, target_labels]
features = data.dropna().drop(target_labels, axis=1)
features.sector = pd.factorize(features.sector)[0]
cat_cols = ['year', 'month', 'msize', 'age', 'sector']
discrete_features = [features.columns.get_loc(c) for c in cat_cols]
mutual_info = pd.DataFrame()
for label in target_labels:
mi = mutual_info_classif(X=features,
y=(targets[label]> 0).astype(int),
discrete_features=discrete_features,
random_state=42
)
mutual_info[label] = pd.Series(mi, index=features.columns)
mutual_info.sum()
```
#### Normalized MI Heatmap
```
fig, ax= plt.subplots(figsize=(15, 4))
sns.heatmap(mutual_info.div(mutual_info.sum()).T, ax=ax, cmap='Blues');
```
#### Dummy Data
```
target_labels = [f'target_{i}m' for i in [1, 2, 3, 6, 12]]
dummy_targets = dummy_data.dropna().loc[:, target_labels]
dummy_features = dummy_data.dropna().drop(target_labels, axis=1)
cat_cols = [c for c in dummy_features.columns if c not in features.columns]
discrete_features = [dummy_features.columns.get_loc(c) for c in cat_cols]
mutual_info_dummies = pd.DataFrame()
for label in target_labels:
mi = mutual_info_classif(X=dummy_features,
y=(dummy_targets[label]> 0).astype(int),
discrete_features=discrete_features,
random_state=42
)
mutual_info_dummies[label] = pd.Series(mi, index=dummy_features.columns)
mutual_info_dummies.sum()
fig, ax= plt.subplots(figsize=(4, 20))
sns.heatmap(mutual_info_dummies.div(mutual_info_dummies.sum()), ax=ax, cmap='Blues');
```
# Bite Size Bayes
Copyright 2020 Allen B. Downey
License: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/)
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## Review
So far we have been working with distributions of only one variable. In this notebook we'll take a step toward multivariate distributions, starting with two variables.
We'll use cross-tabulation to compute a **joint distribution**, then use the joint distribution to compute **conditional distributions** and **marginal distributions**.
We will re-use `pmf_from_seq`, which I introduced in a previous notebook.
```
def pmf_from_seq(seq):
"""Make a PMF from a sequence of values.
seq: sequence
returns: Series representing a PMF
"""
pmf = pd.Series(seq).value_counts(sort=False).sort_index()
pmf /= pmf.sum()
return pmf
```
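On a tiny toy sequence, `pmf_from_seq` simply returns the normalized value counts (the function is repeated below so the cell is self-contained):

```python
import pandas as pd

def pmf_from_seq(seq):
    """Make a PMF from a sequence of values (as defined above)."""
    pmf = pd.Series(seq).value_counts(sort=False).sort_index()
    pmf /= pmf.sum()
    return pmf

pmf = pmf_from_seq(['apple', 'banana', 'apple'])
print(pmf)
```

Two apples out of three values gives probability 2/3 for apple and 1/3 for banana.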
## Cross tabulation
To understand joint distributions, I'll start with cross tabulation. And to demonstrate cross tabulation, I'll generate a dataset of colors and fruits.
Here are the possible values.
```
colors = ['red', 'yellow', 'green']
fruits = ['apple', 'banana', 'grape']
```
And here's a random sample of 100 fruits.
```
np.random.seed(2)
fruit_sample = np.random.choice(fruits, 100, replace=True)
```
We can use `pmf_from_seq` to compute the distribution of fruits.
```
pmf_fruit = pmf_from_seq(fruit_sample)
pmf_fruit
```
And here's what it looks like.
```
pmf_fruit.plot.bar(color='C0')
plt.ylabel('Probability')
plt.title('Distribution of fruit');
```
Similarly, here's a random sample of colors.
```
color_sample = np.random.choice(colors, 100, replace=True)
```
Here's the distribution of colors.
```
pmf_color = pmf_from_seq(color_sample)
pmf_color
```
And here's what it looks like.
```
pmf_color.plot.bar(color='C1')
plt.ylabel('Probability')
plt.title('Distribution of colors');
```
Looking at these distributions, we know the proportion of each fruit, ignoring color, and we know the proportion of each color, ignoring fruit type.
But if we only have the distributions and not the original data, we don't know how many apples are green, for example, or how many yellow fruits are bananas.
We can compute that information using `crosstab`, which computes the number of cases for each combination of fruit type and color.
```
xtab = pd.crosstab(color_sample, fruit_sample,
rownames=['color'], colnames=['fruit'])
xtab
```
The result is a DataFrame with colors along the rows and fruits along the columns.
## Heatmap
The following function plots a cross tabulation using a pseudo-color plot, also known as a heatmap.
It represents each element of the cross tabulation with a colored square, where the color corresponds to the magnitude of the element.
The following function generates a heatmap using the Matplotlib function `pcolormesh`:
```
def plot_heatmap(xtab):
"""Make a heatmap to represent a cross tabulation.
xtab: DataFrame containing a cross tabulation
"""
plt.pcolormesh(xtab)
# label the y axis
ys = xtab.index
plt.ylabel(ys.name)
locs = np.arange(len(ys)) + 0.5
plt.yticks(locs, ys)
# label the x axis
xs = xtab.columns
plt.xlabel(xs.name)
locs = np.arange(len(xs)) + 0.5
plt.xticks(locs, xs)
plt.colorbar()
plt.gca().invert_yaxis()
plot_heatmap(xtab)
```
## Joint Distribution
A cross tabulation represents the "joint distribution" of two variables, which is a complete description of two distributions, including all of the conditional distributions.
If we normalize `xtab` so the sum of the elements is 1, the result is a joint PMF:
```
joint = xtab / xtab.to_numpy().sum()
joint
```
Each column in the joint PMF represents the conditional distribution of color for a given fruit.
For example, we can select a column like this:
```
col = joint['apple']
col
```
If we normalize it, we get the conditional distribution of color for a given fruit.
```
col / col.sum()
```
Each row of the cross tabulation represents the conditional distribution of fruit for each color.
If we select a row and normalize it, like this:
```
row = xtab.loc['red']
row / row.sum()
```
The result is the conditional distribution of fruit type for a given color.
## Conditional distributions
The following function takes a joint PMF and computes conditional distributions:
```
def conditional(joint, name, value):
"""Compute a conditional distribution.
joint: DataFrame representing a joint PMF
name: string name of an axis
value: value to condition on
returns: Series representing a conditional PMF
"""
if joint.columns.name == name:
cond = joint[value]
elif joint.index.name == name:
cond = joint.loc[value]
return cond / cond.sum()
```
The second argument is a string that identifies which axis we want to select; in this example, `'fruit'` means we are selecting a column, like this:
```
conditional(joint, 'fruit', 'apple')
```
And `'color'` means we are selecting a row, like this:
```
conditional(joint, 'color', 'red')
```
**Exercise:** Compute the conditional distribution of color for bananas. What is the probability that a banana is yellow?
```
# Solution goes here
# Solution goes here
```
## Marginal distributions
Given a joint distribution, we can compute the unconditioned distribution of either variable.
If we sum along the rows, which is axis 0, we get the distribution of fruit type, regardless of color.
```
joint.sum(axis=0)
```
If we sum along the columns, which is axis 1, we get the distribution of color, regardless of fruit type.
```
joint.sum(axis=1)
```
These distributions are called "[marginal](https://en.wikipedia.org/wiki/Marginal_distribution#Multivariate_distributions)" because of the way they are often displayed. We'll see an example later.
As we did with conditional distributions, we can write a function that takes a joint distribution and computes the marginal distribution of a given variable:
```
def marginal(joint, name):
"""Compute a marginal distribution.
joint: DataFrame representing a joint PMF
name: string name of an axis
returns: Series representing a marginal PMF
"""
if joint.columns.name == name:
return joint.sum(axis=0)
elif joint.index.name == name:
return joint.sum(axis=1)
```
Here's the marginal distribution of fruit.
```
pmf_fruit = marginal(joint, 'fruit')
pmf_fruit
```
And the marginal distribution of color:
```
pmf_color = marginal(joint, 'color')
pmf_color
```
The sum of the marginal PMF is the same as the sum of the joint PMF, so if the joint PMF was normalized, the marginal PMF should be, too.
```
joint.to_numpy().sum()
pmf_color.sum()
```
However, due to floating point error, the total might not be exactly 1.
```
pmf_fruit.sum()
```
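That rounding behavior is a general property of binary floating point, which is why probability totals should be compared with a tolerance rather than tested for exact equality:

```python
# 0.1 and 0.2 have no exact binary representation, so their sum misses 0.3
print(0.1 + 0.2 == 0.3)                  # exact comparison fails
close = abs((0.1 + 0.2) - 0.3) < 1e-9    # tolerance-based comparison succeeds
print(close)
```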
**Exercise:** The following cells load the data from the General Social Survey that we used in Notebooks 1 and 2.
```
# Load the data file
import os
if not os.path.exists('gss_bayes.csv'):
!wget https://github.com/AllenDowney/BiteSizeBayes/raw/master/gss_bayes.csv
gss = pd.read_csv('gss_bayes.csv', index_col=0)
```
As an exercise, you can use this data to explore the joint distribution of two variables:
* `partyid` encodes each respondent's political affiliation, that is, the party they belong to. [Here's the description](https://gssdataexplorer.norc.org/variables/141/vshow).
* `polviews` encodes their political alignment on a spectrum from liberal to conservative. [Here's the description](https://gssdataexplorer.norc.org/variables/178/vshow).
The values for `partyid` are
```
0 Strong democrat
1 Not str democrat
2 Ind,near dem
3 Independent
4 Ind,near rep
5 Not str republican
6 Strong republican
7 Other party
```
The values for `polviews` are:
```
1 Extremely liberal
2 Liberal
3 Slightly liberal
4 Moderate
5 Slightly conservative
6 Conservative
7 Extremely conservative
```
Make a cross tabulation of `gss['partyid']` and `gss['polviews']` and normalize it to make a joint PMF.
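A sketch of the crosstab-then-normalize step on synthetic data (made-up values, not the GSS file): `pd.crosstab` counts co-occurrences, and dividing by the grand total turns the counts into a joint PMF.
```
import pandas as pd

# Synthetic stand-in for the two GSS columns (values are made up for illustration)
demo = pd.DataFrame({
    'partyid':  [0, 0, 6, 3, 6, 0],
    'polviews': [1, 2, 7, 4, 6, 1],
})

# Cross tabulation counts how often each (partyid, polviews) pair occurs...
xtab = pd.crosstab(demo['partyid'], demo['polviews'])

# ...and dividing by the grand total normalizes the counts into a joint PMF
joint_demo = xtab / xtab.to_numpy().sum()
print(joint_demo.to_numpy().sum())  # 1.0
```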
```
# Solution goes here
```
Use `plot_heatmap` to display a heatmap of the joint distribution. What patterns do you notice?
```
plot_heatmap(joint2)
plt.xlabel('polviews')
plt.title('Joint distribution of polviews and partyid');
```
Use `marginal` to compute the marginal distributions of `partyid` and `polviews`, and plot the results.
```
# Solution goes here
# Solution goes here
```
Use `conditional` to compute the conditional distribution of `partyid` for people who identify themselves as "Extremely conservative" (`polviews==7`). How many of them are "strong Republicans" (`partyid==6`)?
```
# Solution goes here
```
Use `conditional` to compute the conditional distribution of `polviews` for people who identify themselves as "Strong Democrat" (`partyid==0`). How many of them are "Extremely liberal" (`polviews==1`)?
```
# Solution goes here
```
## Review
In this notebook we started with cross tabulation, which we normalized to create a joint distribution, which describes the distribution of two (or more) variables and all of their conditional distributions.
We used heatmaps to visualize cross tabulations and joint distributions.
Then we defined `conditional` and `marginal` functions that take a joint distribution and compute conditional and marginal distributions for each variable.
As an exercise, you had a chance to apply the same methods to explore the relationship between political alignment and party affiliation using data from the General Social Survey.
You might have noticed that we did not use Bayes's Theorem in this notebook. [In the next notebook](https://colab.research.google.com/github/AllenDowney/BiteSizeBayes/blob/master/11_faceoff.ipynb) we'll take the ideas from this notebook and apply them to Bayesian inference.
| github_jupyter |
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_04_1_feature_encode.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 4: Training for Tabular Data**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 4 Material
* **Part 4.1: Encoding a Feature Vector for Keras Deep Learning** [[Video]](https://www.youtube.com/watch?v=Vxz-gfs9nMQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_04_1_feature_encode.ipynb)
* Part 4.2: Keras Multiclass Classification for Deep Neural Networks with ROC and AUC [[Video]](https://www.youtube.com/watch?v=-f3bg9dLMks&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_04_2_multi_class.ipynb)
* Part 4.3: Keras Regression for Deep Neural Networks with RMSE [[Video]](https://www.youtube.com/watch?v=wNhBUC6X5-E&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_04_3_regression.ipynb)
* Part 4.4: Backpropagation, Nesterov Momentum, and ADAM Neural Network Training [[Video]](https://www.youtube.com/watch?v=VbDg8aBgpck&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_04_4_backprop.ipynb)
* Part 4.5: Neural Network RMSE and Log Loss Error Calculation from Scratch [[Video]](https://www.youtube.com/watch?v=wmQX1t2PHJc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_04_5_rmse_logloss.ipynb)
# Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow.
```
try:
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
```
# Part 4.1: Encoding a Feature Vector for Keras Deep Learning
Neural networks can accept many types of data. We will begin with tabular data, where there are well defined rows and columns. This is the sort of data you would typically see in Microsoft Excel. An example of tabular data is shown below.
Neural networks require numeric input. This numeric form is called a feature vector. Each row of training data typically becomes one vector. The individual input neurons each receive one feature (or column) from this vector. In this section, we will see how to encode the following tabular data into a feature vector.
```
import pandas as pd
pd.set_option('display.max_columns', 7)
pd.set_option('display.max_rows', 5)
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
pd.set_option('display.max_columns', 9)
pd.set_option('display.max_rows', 5)
display(df)
```
The following observations can be made from the above data:
* The target column is the column that you seek to predict. There are several candidates here. However, we will initially use product. This field specifies what product someone bought.
* There is an ID column. This column should not be fed into the neural network as it contains no information useful for prediction.
* Many of these fields are numeric and might not require any further processing.
* The income column does have some missing values.
* There are categorical values: job, area, and product.
To begin with, we will convert the job code into dummy variables.
```
pd.set_option('display.max_columns', 7)
pd.set_option('display.max_rows', 5)
dummies = pd.get_dummies(df['job'],prefix="job")
print(dummies.shape)
pd.set_option('display.max_columns', 9)
pd.set_option('display.max_rows', 10)
display(dummies)
```
Because there are 33 different job codes, there are 33 dummy variables. We also specified a prefix, because the job codes (such as "ax") are not that meaningful by themselves. Something such as "job_ax" also tells us the origin of this field.
Next, we must merge these dummies back into the main data frame. We also drop the original "job" field, as it is now represented by the dummies.
```
pd.set_option('display.max_columns', 7)
pd.set_option('display.max_rows', 5)
df = pd.concat([df,dummies],axis=1)
df.drop('job', axis=1, inplace=True)
pd.set_option('display.max_columns', 9)
pd.set_option('display.max_rows', 10)
display(df)
```
We also introduce dummy variables for the area column.
```
pd.set_option('display.max_columns', 7)
pd.set_option('display.max_rows', 5)
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
pd.set_option('display.max_columns', 9)
pd.set_option('display.max_rows', 10)
display(df)
```
The last remaining transformation is to fill in missing income values.
```
med = df['income'].median()
df['income'] = df['income'].fillna(med)
```
There are more advanced ways of filling in missing values, but they require more analysis. The idea would be to see if another field might give a hint as to what the income might have been. For example, it might be beneficial to calculate a median income for each of the areas or job categories. This is something to keep in mind for the class Kaggle competition.
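As a sketch of that group-wise idea (shown on a toy frame, since `area` has already been dummy-encoded by this point in the notebook), a per-group median fill can be done with `groupby`/`transform`:
```
import numpy as np
import pandas as pd

# Toy frame standing in for the original data, before dummy encoding
toy = pd.DataFrame({
    'area':   ['a', 'a', 'b', 'b', 'b'],
    'income': [40.0, np.nan, 60.0, 70.0, np.nan],
})

# Fill each missing income with the median income of its own area group
toy['income'] = toy.groupby('area')['income'] \
                   .transform(lambda s: s.fillna(s.median()))
print(toy['income'].tolist())  # [40.0, 40.0, 60.0, 70.0, 65.0]
```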
At this point, the Pandas dataframe is ready to be converted to Numpy for neural network training. We need to know a list of the columns that will make up *x* (the predictors or inputs) and *y* (the target).
The complete list of columns is:
```
print(list(df.columns))
```
This includes both the target and predictors. We need a list with the target removed. We also remove **id** because it is not useful for prediction.
```
x_columns = df.columns.drop('product').drop('id')
print(list(x_columns))
```
### Generate X and Y for a Classification Neural Network
We can now generate *x* and *y*. Note, this is how we generate y for a classification problem. Regression would not use dummies and would simply encode the numeric value of the target.
```
# Convert to numpy - Classification
x_columns = df.columns.drop('product').drop('id')
x = df[x_columns].values
dummies = pd.get_dummies(df['product']) # Classification
products = dummies.columns
y = dummies.values
```
We can display the *x* and *y* matrices.
```
print(x)
print(y)
```
The x and y values are now ready for a neural network. Make sure that you construct the neural network for a classification problem. Specifically,
* Classification neural networks have an output neuron count equal to the number of classes.
* Classification neural networks should use **categorical_crossentropy** and a **softmax** activation function on the output layer.
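As a rough sketch of what that loss/activation pairing computes — plain NumPy, not the Keras implementation, with made-up logits — softmax turns the output layer into a probability distribution, and categorical cross-entropy scores it against the one-hot target:
```
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability before exponentiating
    e = np.exp(z - z.max())
    return e / e.sum()

def categorical_crossentropy(y_true, y_pred):
    # y_true is one-hot, so only the true class's log-probability contributes
    return -np.sum(y_true * np.log(y_pred))

logits = np.array([2.0, 1.0, 0.1])   # hypothetical raw output-layer values
probs = softmax(logits)              # sums to 1 across the classes
y_true = np.array([1.0, 0.0, 0.0])   # one-hot target: class 0

print(probs.sum())                               # 1.0
print(categorical_crossentropy(y_true, probs))   # equals -log(probs[0])
```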
### Generate X and Y for a Regression Neural Network
For a regression neural network, the *x* values are generated the same. However, *y* does not use dummies. Make sure to replace **income** with your actual target.
```
y = df['income'].values
```
# Module 4 Assignment
You can find the first assignment here: [assignment 4](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class4.ipynb)
| github_jupyter |
# Environment Perception For Self-Driving Cars
Welcome to the final assignment for this course. In this assignment, you will learn how to use the material so far to extract useful scene information to allow self-driving cars to safely and reliably traverse their environment.
**In this assignment, you will:**
- Use the output of semantic segmentation neural networks to implement drivable space estimation in 3D.
- Use the output of semantic segmentation neural networks to implement lane estimation.
- Use the output of semantic segmentation to filter errors in the output of 2D object detectors.
- Use the filtered 2D object detection results to determine how far obstacles are from the self-driving car.
For most exercises, you are provided with a suggested outline. You are encouraged to diverge from the outline if you think there is a better, more efficient way to solve a problem.
Please go through cells in order. Lower cells will depend on higher cells to work properly.
You are only allowed to use the packages loaded below, mainly numpy, OpenCV, and the custom functions explained in the notebook. Run the cell below to import the required packages:
```
import numpy as np
import cv2
from matplotlib import pyplot as plt
from m6bk import *
%matplotlib inline
%load_ext autoreload
%autoreload 2
np.random.seed(1)
import sys
np.set_printoptions(precision=2, threshold=sys.maxsize)  # threshold=np.nan is rejected by newer NumPy versions
```
## 0 - Loading and Visualizing the Data
We provide you with a convenient dataset handler class to read and iterate through samples taken from the CARLA simulator. Run the following code to create a dataset handler object.
```
dataset_handler = DatasetHandler()
```
The dataset handler contains three test data frames 0, 1, and 2. Each frame contains:
- DatasetHandler().rgb: a camera RGB image
- DatasetHandler().depth: a depth image containing the depth in meters for every pixel.
- DatasetHandler().segmentation: an image containing the output of a semantic segmentation neural network as the category per pixel.
- DatasetHandler().object_detection: a numpy array containing the output of an object detection network.
Frame 0 will be used throughout this notebook as a running example. **Frame 1 will be used for submission and grading of this assessment.** Frame 2 is provided as a challenge for learners interested in a more difficult example.
The current data frame being read can be known through the following line of code:
```
dataset_handler.current_frame
```
Upon creation of the dataset handler object, frame 0 will be automatically read and loaded. The frame contents can be accessed by using four different attributes of the dataset handler object: image, depth, object_detection, and segmentation. As an example, to access the image, camera calibration matrix, and depth run the following three cells:
```
image = dataset_handler.image
plt.imshow(image)
k = dataset_handler.k
print(k)
depth = dataset_handler.depth
plt.imshow(depth, cmap='jet')
```
The semantic segmentation output can be accessed in a similar manner through:
```
segmentation = dataset_handler.segmentation
plt.imshow(segmentation)
```
### Segmentation Category Mappings:
The output segmentation image contains mapping indices from every pixel to a road scene category. To visualize the semantic segmentation output, we map the mapping indices to different colors. The mapping indices and visualization colors for every road scene category can be found in the following table:
|Category |Mapping Index| Visualization Color|
| --- | --- | --- |
| Background | 0 | Black |
| Buildings | 1 | Red |
| Pedestrians | 4 | Teal |
| Poles | 5 | White |
| Lane Markings | 6| Purple |
| Roads | 7 | Blue |
| Side Walks| 8 | Yellow |
| Vehicles| 10 | Green |
The vis_segmentation function of the dataset handler transforms the index image to a color image for visualization:
```
colored_segmentation = dataset_handler.vis_segmentation(segmentation)
plt.imshow(colored_segmentation)
```
The set_frame function takes as an input a frame number from 0 to 2 and loads that frame along with all its associated data. It will be useful for testing and submission at the end of this assessment.
```
dataset_handler.set_frame(2)
dataset_handler.current_frame
image = dataset_handler.image
plt.imshow(image)
```
## 1 - Drivable Space Estimation Using Semantic Segmentation Output
Your first task is to implement drivable space estimation in 3D. You are given the output of a semantic segmentation neural network, the camera calibration matrix K, as well as the depth per pixel.
### 1.1 - Estimating the x, y, and z coordinates of every pixel in the image:
You will be using the equations learned in module 1 to compute the x, y, and z coordinates of every pixel in the image. As a reminder, the equations to get the required 3D coordinates are:
$$z = depth$$
$x = \frac{(u - c_u) * z}{f} \tag{1}$
$y = \frac{(v - c_v) * z}{f} \tag{2}$
Here, $c_u$, $c_v$, and $f$ are the intrinsic calibration parameters found in the camera calibration matrix K such that:
$$K = \begin{pmatrix} f & 0 & c_u \\ 0 & f & c_v \\ 0 & 0 & 1 \end{pmatrix}$$
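Plugging made-up numbers into equations (1) and (2) — hypothetical intrinsics, not the CARLA calibration — gives a quick sanity check of the back-projection:
```
# Hypothetical intrinsics and pixel (illustrative values only)
f, c_u, c_v = 1000.0, 640.0, 360.0
u, v, depth = 800, 500, 20.0   # pixel coordinates and depth in meters

x = (u - c_u) * depth / f      # equation (1)
y = (v - c_v) * depth / f      # equation (2)
print(x, y)                    # 3.2 2.8
```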
***Note***: Make sure you are on frame 0 for the rest of this assessment. You will use the rest of the frames for testing after the assessment is done.
**Exercise**: Implement the estimation of the x and y coordinates of every pixel using equations (1) and (2):
```
# GRADED FUNCTION: xy_from_depth
def xy_from_depth(depth, k):
"""
Computes the x, and y coordinates of every pixel in the image using the depth map and the calibration matrix.
Arguments:
depth -- tensor of dimension (H, W), contains a depth value (in meters) for every pixel in the image.
k -- tensor of dimension (3x3), the intrinsic camera matrix
Returns:
x -- tensor of dimension (H, W) containing the x coordinates of every pixel in the camera coordinate frame.
y -- tensor of dimension (H, W) containing the y coordinates of every pixel in the camera coordinate frame.
"""
### START CODE HERE ### (≈ 7 lines in total)
# Get the shape of the depth tensor
#print(depth.shape)
H, W = depth.shape
# Grab required parameters from the K matrix
f = k[0,0]
c_u= k[0,2]
c_v= k[1,2]
# Generate a grid of coordinates corresponding to the shape of the depth map
u_mtx, v_mtx = np.meshgrid(np.arange(W), np.arange(H))
# Compute x and y coordinates (vectorized over the whole grid; no per-pixel loop needed)
x = (u_mtx - c_u) * depth / f
y = (v_mtx - c_v) * depth / f
### END CODE HERE ###
return x, y
dataset_handler.set_frame(0)
k = dataset_handler.k
z = dataset_handler.depth
x, y = xy_from_depth(z, k)
print('x[800,800] = ' + str(x[800, 800]))
print('y[800,800] = ' + str(y[800, 800]))
print('z[800,800] = ' + str(z[800, 800]) + '\n')
print('x[500,500] = ' + str(x[500, 500]))
print('y[500,500] = ' + str(y[500, 500]))
print('z[500,500] = ' + str(z[500, 500]) + '\n')
```
**Expected Output**:
$x[800,800] = 0.720$
<br />
$y[800,800] = 1.436$
<br />
$z[800,800] = 2.864$
$x[500,500] = -9.5742765625$
<br />
$y[500,500] = 1.4464734375$
<br />
$z[500,500] = 44.083$
### 1.2 - Estimating The Ground Plane Using RANSAC:
In the context of self-driving cars, drivable space includes any space that the car is physically capable of traversing in 3D. The task of estimating the drivable space is equivalent to estimating pixels belonging to the ground plane in the scene. For the next exercise, you will use RANSAC to estimate the ground plane in the 3D camera coordinate frame from the x,y, and z coordinates estimated above.
The first step is to process the semantic segmentation output to extract the relevant pixels belonging to the class you want to consider as ground. For this assessment, that class is the road class, with a mapping index of 7. To extract the x, y, and z coordinates of the road, run the following cell:
```
# Get road mask by choosing pixels in segmentation output with value 7
road_mask = np.zeros(segmentation.shape)
road_mask[segmentation == 7] = 1
# Show road mask
plt.imshow(road_mask)
# Get x,y, and z coordinates of pixels in road mask
x_ground = x[road_mask == 1]
y_ground = y[road_mask == 1]
z_ground = dataset_handler.depth[road_mask == 1]
xyz_ground = np.stack((x_ground, y_ground, z_ground))
```
The next step is to use the extracted x, y, and z coordinates of pixels belonging to the road to estimate the ground plane. RANSAC will be used for robustness against outliers.
**Exercise**: Implement RANSAC for plane estimation. Here are the 6 steps:
1. Choose a minimum of 3 points from xyz_ground at random.
2. Compute the ground plane model using the chosen random points, and the provided function compute_plane.
3. Compute the distance from the ground plane model to every point in xyz_ground, and compute the number of inliers based on a distance threshold.
4. Check if the current number of inliers is greater than all previous iterations and keep the inlier set with the largest number of points.
5. Repeat until number of iterations $\geq$ a preset number of iterations, or number of inliers $\geq$ minimum number of inliers.
6. Recompute and return a plane model using all inliers in the final inlier set.
Useful functions: `np.random.choice()`, `compute_plane()`, `dist_to_plane()`.
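A common way to pick the preset iteration count in Step 5 (not required by the grader) is the standard RANSAC formula $N = \frac{\log(1-p)}{\log(1-w^s)}$, where $p$ is the desired probability of drawing at least one all-inlier sample, $w$ the inlier ratio, and $s = 3$ the sample size. A sketch with assumed values:
```
import math

def ransac_iterations(p=0.99, w=0.6, s=3):
    # Number of random samples needed so that, with probability p,
    # at least one sample contains only inliers (inlier ratio w, sample size s)
    return math.ceil(math.log(1 - p) / math.log(1 - w**s))

print(ransac_iterations())  # 19
```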
The two custom functions are provided to help you finish this part:
1. ***compute_plane(xyz):***
```
Computes plane coefficients a,b,c,d of the plane in the form ax+by+cz+d = 0
Arguments:
xyz -- tensor of dimension (3, N), contains points needed to fit plane.
Returns:
p -- tensor of dimension (1, 4) containing the plane parameters a,b,c,d
```
2. ***dist_to_plane(plane, x, y, z):***
```
Computes distance from N points to a plane in 3D, given the plane parameters and the x,y,z coordinate of points.
Arguments:
plane -- tensor of dimension (4,1), containing the plane parameters [a,b,c,d]
x -- tensor of dimension (Nx1), containing the x coordinates of the points
y -- tensor of dimension (Nx1), containing the y coordinates of the points
z -- tensor of dimension (Nx1), containing the z coordinates of the points
Returns:
distance -- tensor of dimension (N, 1) containing the distance between points and the plane
```
The functions are already loaded through the import statement at the beginning of the notebook. You also could perform plane estimation yourself if you are up for a challenge!
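The distance the helper computes is presumably the standard point-to-plane formula $\frac{|ax+by+cz+d|}{\sqrt{a^2+b^2+c^2}}$; a minimal NumPy sketch (not the provided m6bk implementation) is:
```
import numpy as np

def dist_to_plane_sketch(plane, x, y, z):
    a, b, c, d = plane
    # Standard point-to-plane distance, vectorized over N points
    return np.abs(a * x + b * y + c * z + d) / np.sqrt(a**2 + b**2 + c**2)

# Plane y = 0 (the x-z plane): a=0, b=1, c=0, d=0
plane = np.array([0.0, 1.0, 0.0, 0.0])
x = np.array([0.0, 1.0])
y = np.array([2.0, -3.0])
z = np.array([5.0, 5.0])
dists = dist_to_plane_sketch(plane, x, y, z)
print(dists)  # [2. 3.]
```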
```
# GRADED FUNCTION: RANSAC Plane Fitting
def ransac_plane_fit(xyz_data):
"""
Computes plane coefficients a,b,c,d of the plane in the form ax+by+cz+d = 0
using ransac for outlier rejection.
Arguments:
xyz_data -- tensor of dimension (3, N), contains all data points from which random sampling will proceed.
num_itr -- Maximum number of RANSAC iterations (set inside the function).
distance_threshold -- Distance threshold from plane for a point to be considered an inlier (set inside the function).
Returns:
p -- tensor of dimension (1, 4) containing the plane parameters a,b,c,d
"""
### START CODE HERE ### (≈ 23 lines in total)
# Set thresholds:
num_itr = 100 # RANSAC maximum number of iterations
min_num_inliers = xyz_data.shape[1] / 2 # RANSAC minimum number of inliers
distance_threshold = 0.01 # Maximum distance from point to plane for point to be considered inlier
largest_number_of_inliers = 0
largest_inlier_set_indexes = 0
for i in range(num_itr):
# Step 1: Choose a minimum of 3 points from xyz_data at random.
indexes = np.random.choice(xyz_data.shape[1], 3, replace = False)
# pt1 = xyz_data[:, indexes[0]]
# pt2 = xyz_data[:, indexes[1]]
# pt3 = xyz_data[:, indexes[2]]
# pts = np.stack((pt1, pt2, pt3))
pts = xyz_data[:, indexes]
# print(pts.shape)
# Step 2: Compute plane model
p = compute_plane(pts)
# Step 3: Find number of inliers
distance = dist_to_plane(p, xyz_data[0, :].T, xyz_data[1, :].T, xyz_data[2, :].T)
number_of_inliers = len(distance[distance < distance_threshold])  # inliers are the points within the threshold
# Step 4: Check if the current number of inliers is greater than all previous iterations and keep the inlier set with the largest number of points.
if number_of_inliers > largest_number_of_inliers:
largest_number_of_inliers = number_of_inliers
largest_inlier_set_indexes = np.where(distance < distance_threshold)[0]
# Step 5: Check if stopping criterion is satisfied and break.
if (number_of_inliers > min_num_inliers):
break
# Step 6: Recompute the model parameters using largest inlier set.
output_plane = compute_plane(xyz_data[:, largest_inlier_set_indexes])
### END CODE HERE ###
return output_plane
p_final = ransac_plane_fit(xyz_ground)
print('Ground Plane: ' + str(p_final))
```
**Expected Output**:
Ground Plane: [0.01791606 -0.99981332 0.00723433 1.40281479]
To verify that the estimated plane is correct, we can visualize the inlier set computed on the whole image. Use the cell below to compute and visualize the ground mask in 2D image space.
```
dist = np.abs(dist_to_plane(p_final, x, y, z))
ground_mask = np.zeros(dist.shape)
ground_mask[dist < 0.1] = 1
ground_mask[dist > 0.1] = 0
plt.imshow(ground_mask)
```
We also provide a function to visualize the estimated drivable space in 3D. Run the following cell to visualize your estimated drivable space in 3D.
```
dataset_handler.plot_free_space(ground_mask)
```
The above visualization only shows where the self-driving car can physically travel. Obstacles, such as the SUV on the left of the image, can be seen as dark pixels in our visualization:
<tr>
<td> <img src="images/image.png" style="width:320px;height:240px;"> </td>
<td> <img src="images/occ_grid.png" style="width:240px;height:240px;"> </td>
</tr>
However, estimating the drivable space is not enough for the self-driving car to get on roads. The self-driving car still needs to perform lane estimation to know where it is legally allowed to drive. Once you are comfortable with the estimated drivable space, continue the assessment to estimate the lane where the car can drive.
## 2 - Lane Estimation Using The Semantic Segmentation Output
Your second task for this assessment is to use the output of semantic segmentation to estimate the lane boundaries of the current lane the self-driving car is in. This task can be separated into two subtasks: lane line estimation, and post-processing through horizontal line filtering and similar line merging.
### 2.1 Estimating Lane Boundary Proposals:
The first step to perform this task is to estimate any line that qualifies as a lane boundary using the output from semantic segmentation. We call these lines 'proposals'.
**Exercise**: Estimate lane line proposals using OpenCv functions. Here are the 3 steps:
1. Create an image containing the semantic segmentation pixels belonging to categories relevant to the lane boundaries, similar to what we have done previously for the road plane. For this assessment, these pixels have the value of 6 and 8 in the neural network segmentation output.
2. Perform edge detection on the derived lane boundary image.
3. Perform line estimation on the output of edge detection.
Useful functions: `cv2.Canny()`, `cv2.HoughLinesP()`, `np.squeeze()`.
```
# GRADED FUNCTION: estimate_lane_lines
def estimate_lane_lines(segmentation_output):
"""
Estimates lines belonging to lane boundaries. Multiple lines could correspond to a single lane.
Arguments:
segmentation_output -- tensor of dimension (H,W), containing semantic segmentation neural network output
minLineLength -- Scalar, the minimum line length
maxLineGap -- Scalar, the maximum allowed gap between segments to link them as a single line
Returns:
lines -- tensor of dimension (N, 4) containing lines in the form of [x_1, y_1, x_2, y_2], where [x_1,y_1] and [x_2,y_2] are
the coordinates of two points on the line in the (u,v) image coordinate frame.
"""
### START CODE HERE ### (≈ 7 lines in total)
# Step 1: Create an image with pixels belonging to lane boundary categories from the output of semantic segmentation
lane_boundary_mask = np.zeros(segmentation_output.shape).astype(np.uint8)
lane_boundary_mask[segmentation_output==6] = 255
lane_boundary_mask[segmentation_output==8] = 255
plt.imshow(lane_boundary_mask)
# Step 2: Perform Edge Detection using cv2.Canny()
edges = cv2.Canny(lane_boundary_mask, 100, 150)
# Step 3: Perform Line estimation using cv2.HoughLinesP()
lines = cv2.HoughLinesP(edges, rho=10, theta=np.pi/180, threshold=200, minLineLength=150, maxLineGap=50)
lines = lines.reshape((-1, 4))
# Note: Make sure dimensions of returned lines is (N x 4)
### END CODE HERE ###
return lines
lane_lines = estimate_lane_lines(segmentation)
print(lane_lines.shape)
plt.imshow(dataset_handler.vis_lanes(lane_lines))
```
***Expected Output***
<img src="images/lanes_1.png" style="width:320px;height:240px;">
### 2.2 - Merging and Filtering Lane Lines:
The second subtask to perform the estimation of the current lane boundary is to merge redundant lines, and filter out any horizontal lines apparent in the image. Merging redundant lines can be solved through grouping lines with similar slope and intercept. Horizontal lines can be filtered out through slope thresholding.
**Exercise**: Post-process the output of the function ``estimate_lane_lines`` to merge similar lines, and filter out horizontal lines using the slope and the intercept. The four steps are:
1. Get every line's slope and intercept using the function provided.
2. Determine lines with slope less than horizontal slope threshold. Filtering can be performed later if needed.
3. Cluster lines based on slope and intercept as you learned in Module 6 of the course.
4. Merge all lines in clusters using mean averaging.
Useful functions:
1. ***get_slope_intecept(lines):***
```
Computes the slope and intercept of each of N lines, given the coordinates of two points on each line.
Arguments:
lines -- tensor of dimension (N,4) containing lines in the form of [x_1, y_1, x_2, y_2], the coordinates of two points on the line
Returns:
slopes -- tensor of dimension (N, 1) containing the slopes of the lines
intercepts -- tensor of dimension (N,1) containing the intercepts of the lines
```
This function is already loaded through the import statement at the beginning of the notebook. You could also compute the slopes and intercepts yourself if you are up for a challenge!
```
# Graded Function: merge_lane_lines
def merge_lane_lines(
lines):
"""
Merges lane lines to output a single line per lane, using the slope and intercept as similarity measures.
Also, filters horizontal lane lines based on a minimum slope threshold.
Arguments:
lines -- tensor of dimension (N, 4) containing lines in the form of [x_1, y_1, x_2, y_2],
the coordinates of two points on the line.
Returns:
merged_lines -- tensor of dimension (N, 4) containing lines in the form of [x_1, y_1, x_2, y_2],
the coordinates of two points on the line.
"""
### START CODE HERE ### (≈ 25 lines in total)
# Step 0: Define thresholds
slope_similarity_threshold = 0.1
intercept_similarity_threshold = 40
min_slope_threshold = 0.3
clusters = []
current_inds = []
itr = 0
# Step 1: Get slope and intercept of lines
slopes, intercepts = get_slope_intecept(lines)
# Step 2: Determine lines with slope less than horizontal slope threshold.
slopes_horizontal = np.abs(slopes) > min_slope_threshold
# Step 3: Iterate over all remaining slopes and intercepts and cluster lines that are close to each other using a slope and intercept threshold.
for slope, intercept in zip(slopes, intercepts):
in_clusters = np.array([itr in current for current in current_inds])
if not in_clusters.any():
slope_cluster = np.logical_and(slopes < (slope+slope_similarity_threshold), slopes > (slope-slope_similarity_threshold))
intercept_cluster = np.logical_and(intercepts < (intercept+intercept_similarity_threshold), intercepts > (intercept-intercept_similarity_threshold))
inds = np.argwhere(slope_cluster & intercept_cluster & slopes_horizontal).T
if inds.size:
current_inds.append(inds.flatten())
clusters.append(lines[inds])
itr += 1
# Step 4: Merge all lines in clusters using mean averaging
merged_lines = [np.mean(cluster, axis=1) for cluster in clusters]
merged_lines = np.array(merged_lines).reshape((-1, 4))
# Note: Make sure dimensions of returned lines is (N x 4)
### END CODE HERE ###
return merged_lines
merged_lane_lines = merge_lane_lines(lane_lines)
plt.imshow(dataset_handler.vis_lanes(merged_lane_lines))
```
***Expected Output***
<img src="images/lanes_2.png" style="width:320px;height:240px;">
You now should have one line per lane as an output! The final step is to extrapolate the lanes so they span the road from its near edge to its far edge, and to determine the lane markings belonging to the current lane. We provide you with functions that perform these tasks in the cell below. Run the cell to visualize the final lane boundaries!
```
max_y = dataset_handler.image.shape[0]
min_y = np.min(np.argwhere(road_mask == 1)[:, 0])
extrapolated_lanes = extrapolate_lines(merged_lane_lines, max_y, min_y)
final_lanes = find_closest_lines(extrapolated_lanes, dataset_handler.lane_midpoint)
plt.imshow(dataset_handler.vis_lanes(final_lanes))
```
***Expected Output***
<img src="images/lanes_final.png" style="width:320px;height:240px;">
## 3 - Computing Minimum Distance To Impact Using The Output of 2D Object Detection.
Your final task for this assessment is to use 2D object detection output to determine the minimum distance to impact with objects in the scene. However, the task is complicated by the fact that the provided 2D detections are from a high recall, low precision 2D object detector. You will first be using the semantic segmentation output to determine which bounding boxes are valid. Then, you will compute the minimum distance to impact using the remaining bounding boxes and the depth image. Let us begin with a visualization of the output detection for our current frame. For visualization, you use the provided dataset handler function ``vis_object_detection`` as follows:
```
detections = dataset_handler.object_detection
plt.imshow(dataset_handler.vis_object_detection(detections))
```
Detections have the format [category, x_min, y_min, x_max, y_max, score]. The category is a string signifying the classification of the bounding box, such as 'Car', 'Pedestrian' or 'Cyclist'. [x_min, y_min] are the coordinates of the top left corner, and [x_max, y_max] are the coordinates of the bottom right corner of the object. The score signifies the output of the softmax from the neural network.
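For example, unpacking one made-up detection row in that format (the numeric fields are stored as strings, so they must be converted to floats; the values below are illustrative, not from the dataset):
```
import numpy as np

# A made-up detection in the [category, x_min, y_min, x_max, y_max, score] format
detection = np.array(['Car', '100', '120', '300', '260', '0.92'])

category = detection[0]
x_min, y_min, x_max, y_max, score = detection[1:].astype(float)
box_area = (x_max - x_min) * (y_max - y_min)
print(category, box_area, score)  # Car 28000.0 0.92
```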
```
print(detections)
```
### 3.1 - Filtering Out Unreliable Detections:
The first thing you can notice is that a wrong detection occurs on the right side of the image. What is interesting is that this wrong detection has a high output score of 0.76 for being a car. Furthermore, two bounding boxes are assigned to the vehicle on the left of the image, both with a very high score, greater than 0.9. This behaviour is expected from a high recall, low precision object detector. To solve this problem, the output of the semantic segmentation network has to be used to eliminate unreliable detections.
**Exercise**: Eliminate unreliable detections using the output of semantic segmentation. The three steps are:
1. For each detection, compute how many pixels in the bounding box belong to the category predicted by the neural network.
2. Divide the computed number of pixels by the area of the bounding box (total number of pixels).
3. If the ratio is greater than a threshold, keep the detection; otherwise, remove it from the list of detections.
Useful functions: ``np.asfarray()``
***Note***: Make sure to handle both the 'Car' and 'Pedestrian' categories in the code.
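The ratio from steps 1 and 2 can be sanity-checked on a toy segmentation map before writing the graded function (the 6x6 map and the box below are made up; the real class indices come from the dataset handler):

```python
import numpy as np

# Toy 6x6 segmentation map: class 10 ('Car') fills a 3x3 patch.
seg = np.zeros((6, 6), dtype=int)
seg[1:4, 1:4] = 10

# A candidate box around the patch: [x_min, y_min, x_max, y_max].
x_min, y_min, x_max, y_max = 0, 0, 4, 4
box_area = (x_max - x_min) * (y_max - y_min)           # 16 pixels
correct = np.sum(seg[y_min:y_max, x_min:x_max] == 10)  # 9 pixels
ratio = correct / box_area
print(ratio)  # 0.5625 > 0.3, so this detection would be kept
```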
```
# Graded Function: filter_detections_by_segmentation
def filter_detections_by_segmentation(detections, segmentation_output):
    """
    Filter 2D detection output based on a semantic segmentation map.
    Arguments:
    detections -- list of N detections, each of the form [class_name, x_min, y_min, x_max, y_max, score].
    segmentation_output -- tensor of dimension (H, W) containing pixel category labels.
    Returns:
    filtered_detections -- list of detections that pass the filter, in the same format as the input.
    """
    ### START CODE HERE ### (≈ 20 lines in total)
    # Set ratio threshold:
    ratio_threshold = 0.3  # If 1/3 of the total pixels belong to the target category, the detection is correct.
    filtered_detections = []
    for detection in detections:
        # Step 1: Compute the number of pixels belonging to the category for every detection.
        class_name, x_min, y_min, x_max, y_max, score = detection
        x_min = int(float(x_min))
        y_min = int(float(y_min))
        x_max = int(float(x_max))
        y_max = int(float(y_max))
        box_area = (x_max - x_min) * (y_max - y_min)
        if class_name == 'Car' or class_name == 'Cyclist':
            class_index = 10
        elif class_name == 'Pedestrian':
            class_index = 4
        correct_pixels = len(np.where(segmentation_output[y_min:y_max, x_min:x_max] == class_index)[0])
        # Step 2: Divide the computed number of pixels by the area of the bounding box (total number of pixels).
        ratio = correct_pixels / box_area
        # Step 3: If the ratio is greater than the threshold, keep the detection; otherwise, drop it.
        if ratio > ratio_threshold:
            filtered_detections.append(detection)
    ### END CODE HERE ###
    return filtered_detections
filtered_detections = filter_detections_by_segmentation(detections, segmentation)
plt.imshow(dataset_handler.vis_object_detection(filtered_detections))
```
### 3.2 - Estimating Minimum Distance To Impact:
The final task for this assessment is to estimate the minimum distance to every bounding box in the input detections. This can be performed by simply taking the minimum distance from the pixels in the bounding box to the camera center.
**Exercise**: Compute the minimum distance to impact between every object remaining after filtering and the self-driving car. The two steps are:
1. Compute the distance to the camera center using the x,y,z arrays from part I. This can be done according to the equation: $ distance = \sqrt{x^2 + y^2 + z^2}$.
2. Find the value of the minimum distance of all pixels inside the bounding box.
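The two-step computation can be checked on a toy example before wiring it into the graded function (the coordinate values below are made up):

```python
import numpy as np

# Toy 2x2 per-pixel coordinate maps (metres, camera frame), as if cropped to a box.
x = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([[0.0, 0.0], [0.0, 0.0]])
z = np.array([[4.0, 5.0], [6.0, 8.0]])

# Step 1: per-pixel distance to the camera center.
dist = np.sqrt(x**2 + y**2 + z**2)
# Step 2: minimum over all pixels in the box.
print(np.min(dist))  # sqrt(1 + 16) ≈ 4.123 — the closest pixel in the box
```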
```
# Graded Function: find_min_distance_to_detection:
def find_min_distance_to_detection(detections, x, y, z):
    """
    Estimate the minimum distance to impact between every detected object and the camera center.
    Arguments:
    detections -- list of N detections, each of the form [class_name, x_min, y_min, x_max, y_max, score].
    x -- tensor of dimension (H, W) containing the x coordinates of every pixel in the camera coordinate frame.
    y -- tensor of dimension (H, W) containing the y coordinates of every pixel in the camera coordinate frame.
    z -- tensor of dimension (H, W) containing the z coordinates of every pixel in the camera coordinate frame.
    Returns:
    min_distances -- list of length N containing the distance to impact with every object in the scene.
    """
    ### START CODE HERE ### (≈ 20 lines in total)
    min_distances = []
    for detection in detections:
        class_name, x_min, y_min, x_max, y_max, score = detection
        x_min = int(float(x_min))
        y_min = int(float(y_min))
        x_max = int(float(x_max))
        y_max = int(float(y_max))
        # Step 1: Compute the distance of every pixel in the box to the camera center.
        box_x = x[y_min:y_max, x_min:x_max]
        box_y = y[y_min:y_max, x_min:x_max]
        box_z = z[y_min:y_max, x_min:x_max]
        box_distances = np.sqrt(box_x**2 + box_y**2 + box_z**2)
        # Step 2: Find the minimum distance inside the box.
        min_distances.append(np.min(box_distances))
    ### END CODE HERE ###
    return min_distances
min_distances = find_min_distance_to_detection(filtered_detections, x, y, z)
print('Minimum distance to impact is: ' + str(min_distances))
```
**Expected Output**
Minimum distance to impact is: 8.51
Run the cell below to visualize your estimated distance along with the 2D detection output.
```
font = {'family': 'serif','color': 'red','weight': 'normal','size': 12}
im_out = dataset_handler.vis_object_detection(filtered_detections)
for detection, min_distance in zip(filtered_detections, min_distances):
    bounding_box = np.asfarray(detection[1:5])
    plt.text(bounding_box[0], bounding_box[1] - 20, 'Distance to Impact:' + str(np.round(min_distance, 2)) + ' m', fontdict=font)
plt.imshow(im_out)
```
## 4 - Submission:
Evaluation of all the functions will be based on **three** outputs for frame 1 of the dataset:
1. The estimated ground plane from part 1.
2. The estimated lanes from part 2.
3. The estimated distances from part 3.
Please run the cell below, then copy its output to the provided output.yaml file for submission on the programming assignment page.
```
dataset_handler = DatasetHandler()
dataset_handler.set_frame(1)
segmentation = dataset_handler.segmentation
detections = dataset_handler.object_detection
z = dataset_handler.depth
# Part 1
k = dataset_handler.k
x, y = xy_from_depth(z, k)
road_mask = np.zeros(segmentation.shape)
road_mask[segmentation == 7] = 1
x_ground = x[road_mask == 1]
y_ground = y[road_mask == 1]
z_ground = dataset_handler.depth[road_mask == 1]
xyz_ground = np.stack((x_ground, y_ground, z_ground))
p_final = ransac_plane_fit(xyz_ground)
# Part II
lane_lines = estimate_lane_lines(segmentation)
merged_lane_lines = merge_lane_lines(lane_lines)
max_y = dataset_handler.image.shape[0]
min_y = np.min(np.argwhere(road_mask == 1)[:, 0])
extrapolated_lanes = extrapolate_lines(merged_lane_lines, max_y, min_y)
final_lanes = find_closest_lines(extrapolated_lanes, dataset_handler.lane_midpoint)
# Part III
filtered_detections = filter_detections_by_segmentation(detections, segmentation)
min_distances = find_min_distance_to_detection(filtered_detections, x, y, z)
# Print Submission Info
final_lane_printed = [list(np.round(lane)) for lane in final_lanes]
print('plane:')
print(list(np.round(p_final, 2)))
print('\n lanes:')
print(final_lane_printed)
print('\n min_distance')
print(list(np.round(min_distances, 2)))
```
### Visualize your Results:
Make sure your results visualization is appealing before submitting your results.
```
# Original Image
plt.imshow(dataset_handler.image)
# Part I
dist = np.abs(dist_to_plane(p_final, x, y, z))
ground_mask = np.zeros(dist.shape)
ground_mask[dist < 0.1] = 1
ground_mask[dist > 0.1] = 0
plt.imshow(ground_mask)
# Part II
plt.imshow(dataset_handler.vis_lanes(final_lanes))
# Part III
font = {'family': 'serif','color': 'red','weight': 'normal','size': 12}
im_out = dataset_handler.vis_object_detection(filtered_detections)
for detection, min_distance in zip(filtered_detections, min_distances):
    bounding_box = np.asfarray(detection[1:5])
    plt.text(bounding_box[0], bounding_box[1] - 20, 'Distance to Impact:' + str(np.round(min_distance, 2)) + ' m', fontdict=font)
plt.imshow(im_out)
```
<font color='blue'>
**What you should remember**:
- The output of semantic segmentation can be used to estimate drivable space.
- Classical computer vision can be used to find lane boundaries.
- The output of semantic segmentation can be used to filter out unreliable output from object detection.
Congrats on finishing this assignment!
# Mean Normalization
In machine learning we use large amounts of data to train our models. Some machine learning algorithms may require that the data is *normalized* in order to work correctly. The idea of normalization, also known as *feature scaling*, is to ensure that all the data is on a similar scale, *i.e.* that all the data takes on a similar range of values. For example, we might have a dataset that has values between 0 and 5,000. By normalizing the data we can make the range of values be between 0 and 1.
In this lab, you will be performing a different kind of feature scaling known as *mean normalization*. Mean normalization will scale the data, but instead of making the values fall between 0 and 1, it will distribute them in some small interval around zero. For example, if we have a dataset with values between 0 and 5,000, after mean normalization the values will lie in some small range around 0, for example between -3 and 3. Because the mean of each column is subtracted from its values, the average of all elements will be zero. Therefore, when you perform *mean normalization* your data will not only be scaled but will also have an average of zero.
# To Do:
You will start by importing NumPy and creating a rank 2 ndarray of random integers between 0 and 5,000 (inclusive) with 1000 rows and 20 columns. This array will simulate a dataset with a wide range of values. Fill in the code below
```
# import NumPy into Python
import numpy as np
# Create a 1000 x 20 ndarray with random integers in the half-open interval [0, 5001).
X = np.random.randint(0,5001,size=(1000, 20))
# print the shape of X
print("Shape of X is: ", X.shape)
```
Now that you created the array we will mean normalize it. We will perform mean normalization using the following equation:
$\mbox{Norm_Col}_i = \frac{\mbox{Col}_i - \mu_i}{\sigma_i}$
where $\mbox{Col}_i$ is the $i$th column of $X$, $\mu_i$ is average of the values in the $i$th column of $X$, and $\sigma_i$ is the standard deviation of the values in the $i$th column of $X$. In other words, mean normalization is performed by subtracting from each column of $X$ the average of its values, and then by dividing by the standard deviation of its values. In the space below, you will first calculate the average and standard deviation of each column of $X$.
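The equation can be verified on a tiny array before applying it to $X$: after the operation every column mean is (numerically) zero and every column standard deviation is one. A quick sketch:

```python
import numpy as np

A = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])
A_norm = (A - A.mean(axis=0)) / A.std(axis=0)  # broadcasting handles the columns
print(A_norm.mean(axis=0))  # ≈ [0. 0.]
print(A_norm.std(axis=0))   # ≈ [1. 1.]
```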
```
# Average of the values in each column of X
ave_cols =np.mean(X, axis=0)
# Standard Deviation of the values in each column of X
std_cols = np.std(X, axis=0)
```
If you have done the above calculations correctly, then `ave_cols` and `std_cols`, should both be vectors with shape `(20,)` since $X$ has 20 columns. You can verify this by filling the code below:
```
# Print the shape of ave_cols
print("The shape of ave_cols is: ", ave_cols.shape)
# Print the shape of std_cols
print("The shape of std_cols is: ", std_cols.shape)
```
You can now take advantage of Broadcasting to calculate the mean normalized version of $X$ in just one line of code using the equation above. Fill in the code below
```
# Mean normalize X
X_norm = (X - ave_cols) / std_cols
```
If you have performed the mean normalization correctly, then the average of all the elements in $X_{\tiny{\mbox{norm}}}$ should be close to zero, and they should be evenly distributed in some small interval around zero. You can verify this by filling in the code below:
```
# Print the average of all the values of X_norm
# You can use either the function or a method. So, there are multiple ways to solve.
print("The average of all the values of X_norm is: ")
print(np.mean(X_norm))
print(X_norm.mean())
# Print the average of the minimum value in each column of X_norm
print("The average of the minimum value in each column of X_norm is: ")
print(X_norm.min(axis = 0).mean())
print(np.mean(np.sort(X_norm, axis=0)[0]))
# Print the average of the maximum value in each column of X_norm
print("The average of the maximum value in each column of X_norm is: ")
print(np.mean(np.sort(X_norm, axis=0)[-1]))
print(X_norm.max(axis = 0).mean())
```
You should note that since $X$ was created using random integers, the above values will vary.
# Data Separation
After the data has been mean normalized, it is customary in machine learning to split our dataset into three sets:
1. A Training Set
2. A Cross Validation Set
3. A Test Set
The dataset is usually divided such that the Training Set contains 60% of the data, the Cross Validation Set contains 20% of the data, and the Test Set contains 20% of the data.
In this part of the lab you will separate `X_norm` into a Training Set, Cross Validation Set, and a Test Set. Each data set will contain rows of `X_norm` chosen at random, making sure that we don't pick the same row twice. This will guarantee that all the rows of `X_norm` are chosen and randomly distributed among the three new sets.
You will start by creating a rank 1 ndarray that contains a random permutation of the row indices of `X_norm`. You can do this by using the `np.random.permutation()` function. The `np.random.permutation(N)` function creates a random permutation of integers from 0 to `N - 1`. Let's see an example:
```
# We create a random permutation of integers 0 to 4
np.random.permutation(5)
```
# To Do
In the space below create a rank 1 ndarray that contains a random permutation of the row indices of `X_norm`. You can do this in one line of code by extracting the number of rows of `X_norm` using the `shape` attribute and then passing it to the `np.random.permutation()` function. Remember the `shape` attribute returns a tuple with two numbers in the form `(rows,columns)`.
```
# Create a rank 1 ndarray that contains a random permutation of the row indices of `X_norm`
row_indices = np.random.permutation(X_norm.shape[0])
```
Now you can create the three datasets using the `row_indices` ndarray to select the rows that will go into each dataset. Remember that the Training Set contains 60% of the data, the Cross Validation Set contains 20% of the data, and the Test Set contains 20% of the data. Each set requires just one line of code to create. Fill in the code below
```
# Make any necessary calculations.
# You can save your calculations into variables to use later.
# You have to extract the number of rows in each set using row_indices.
# Note that the row_indices are random integers in a 1-D array.
# Hence, if you use row_indices for slicing, it will NOT give the correct result.
# Let's get the count of 60% of the rows. Since X_norm has length 1000, 60% = 600
sixty = int(len(X_norm) * 0.6)
# Let's get the count of 80% rows
eighty = int(len(X_norm) * 0.8)
# Create a Training Set
# Here row_indices[:sixty] will give you first 600 values, e.g., [93 255 976 505 281 292 977,.....]
# Those 600 indices will be random, because row_indices is a 1-D array of random integers.
# Next, extract all rows represented by these 600 indices, as X_norm[row_indices[:sixty], :]
X_train = X_norm[row_indices[:sixty], :]
# Create a Cross Validation Set
X_crossVal = X_norm[row_indices[sixty: eighty], :]
# Create a Test Set
X_test = X_norm[row_indices[eighty: ], :]
```
If you performed the above calculations correctly, then `X_train` should have 600 rows and 20 columns, `X_crossVal` should have 200 rows and 20 columns, and `X_test` should have 200 rows and 20 columns. You can verify this by filling the code below:
```
# Print the shape of X_train
print(X_train.shape)
# Print the shape of X_crossVal
print(X_crossVal.shape)
# Print the shape of X_test
print(X_test.shape)
```
```
!pip install pillow
from keras import applications
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
from keras.models import Sequential, Model
from keras.layers import Dropout, Flatten, Dense, GlobalAveragePooling2D
from keras import backend as k
from keras.callbacks import ModelCheckpoint, LearningRateScheduler, TensorBoard, EarlyStopping
from keras.preprocessing import image
from keras.utils.np_utils import to_categorical
import numpy as np
#from sklearn.preprocessing import LabelEncoder
from keras.optimizers import Adam
from keras.layers import Input, Dense, Convolution2D, MaxPooling2D, AveragePooling2D, ZeroPadding2D, Dropout, Flatten, merge, Reshape, Activation
import PIL
import PIL.Image
#!pip install pillow,scikit-learn
#import PIL,sklearn
import glob,matplotlib as plt
%matplotlib inline
from keras.applications.vgg16 import VGG16, preprocess_input,decode_predictions
#from keras.applications.resnet50 import ResNet50,preprocess_input, decode_predictions
img_width, img_height = 224, 224
model = applications.VGG16(weights = "imagenet", include_top=False, input_shape = (img_width, img_height, 3))
l=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
path='train/'
img_train=[]
for i in l:
    img_list = glob.glob(path + str(i) + '**/*.jpg', recursive=True)
    img_train += img_list
path='test/'
img_test= glob.glob(path+'**/*.jpg', recursive=True)
len(img_train),len(img_test)
import pandas as pd
label=pd.DataFrame()
label['id']=img_train
label['target']=label.id.apply(lambda x : x.split("/")[1])
#label['id']=label.id.apply(lambda x : x.split("/")[2])
label.drop_duplicates(keep='first',inplace=True)
!pip install sklearn
from sklearn.utils import shuffle
#
label = shuffle(label)
label.head()
import cv2
def read_image(img_path, H, W):
    img = cv2.imread(img_path, cv2.IMREAD_COLOR)
    #img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img = cv2.resize(img, (W, H))  # you can resize to (128,128) or (256,256)
    return img
import gc
gc.collect()
%%time
image_lr=[]
train_img=[]
for i in list(label.id):
    image_lr.append(read_image(i, 224, 224))
train_img=np.array(image_lr,dtype=np.float32)
train_img=preprocess_input(train_img,mode='tf')
%%time
image_lr=[]
test_img=[]
for i in img_test:
    image_lr.append(read_image(i, 224, 224))
test_img=np.array(image_lr,dtype=np.float32)
test_img=preprocess_input(test_img,mode='tf')
train_img.shape#,test_img.shape
'''
# Initiate the train and test generators with data Augumentation
train_datagen = ImageDataGenerator(
#rescale = 1./255,
horizontal_flip = True,
fill_mode = "nearest",
zoom_range = 0.2,
width_shift_range = 0.2,
height_shift_range=0.2,
rotation_range=20)
test_datagen = ImageDataGenerator(
#rescale = 1./255,
horizontal_flip = True,
fill_mode = "nearest",
zoom_range = 0.2,
width_shift_range = 0.2,
height_shift_range=0.2,
rotation_range=20)
gc.collect()
'''
# Keep 90% of the data for training; the remaining 10% becomes the validation split below.
l = train_img.shape[0] - int(0.1 * train_img.shape[0])
l
from keras.utils import to_categorical
Y=to_categorical(label.target,num_classes=15)
Y.shape
i=55910
print(np.argmax(Y[i]))
plt.pyplot.imshow(train_img[i])
x_train=train_img[:l,:,:,:]
y_train=Y[:l]
x_valid=train_img[l:,:,:,:]
y_valid=Y[l:]
x_train.shape,y_train.shape,x_valid.shape,y_valid.shape
'''
train_generator = train_datagen.flow(x_train,y_train,
batch_size = batch_size,
shuffle=True)
#class_mode = "categorical")
validation_generator = test_datagen.flow(x_valid,y_valid,
batch_size = batch_size,
shuffle=True)
#class_mode = "categorical")
'''
#model = applications.ResNet50(weights = "imagenet", include_top=False, input_shape = (img_width, img_height, 3))
model.layers[16:]
# Freeze the layers which you don't want to train. Here I am freezing the first 5 layers.
for layer in model.layers[:16]:
    layer.trainable = False
for layer in model.layers[16:]:
    layer.trainable = True
#Adding custom Layers
x = model.output
x = GlobalAveragePooling2D()(x)
x = Dense(512, activation="relu")(x)
x = Dropout(0.3)(x)
x = Dense(128, activation="relu")(x)
predictions = Dense(15, activation="softmax")(x)
# creating the final model
model_final = Model(inputs=model.input, outputs=predictions)
#model.load_weights('caavo-1.h5')
# Save the model according to the conditions
#checkpoint = ModelCheckpoint("vgg16_1.h5", monitor='val_acc', verbose=1, save_best_only=True, save_weights_only=False, mode='auto', period=1)
#early = EarlyStopping(monitor='val_acc', min_delta=0, patience=10, verbose=1, mode='auto')
from keras.metrics import categorical_accuracy
adam = Adam(lr=1e-3)
model_final.compile(optimizer=adam, loss='categorical_crossentropy', metrics=['accuracy'])
model_final.fit(train_img, Y,batch_size=256,epochs=2,shuffle=True,verbose=1)
# Train the model
k.set_value(adam.lr, 0.0001)
model_final.fit(train_img, Y,batch_size=256,epochs=3,shuffle=True,verbose=1)
#callbacks = [checkpoint, early])
# serialize model to JSON
from keras.models import model_from_json
model_json = model_final.to_json()
with open("caavo-1.json", "w") as json_file:
    json_file.write(model_json)
# serialize weights to HDF5
model_final.save_weights("caavo-1.h5")
print("Saved model to disk")
# later...
```
### PREDICTING / SUBMISSION
##### LOAD MODEL AND WEIGHTS
```
# load the serialized model back from JSON
from keras.models import model_from_json
# load json and create model
json_file = open('caavo-1.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
model_final = model_from_json(loaded_model_json)
# load weights into new model
model_final.load_weights("caavo-1.h5")
print("Loaded model from disk")
pred=model_final.predict(test_img, batch_size=128,verbose=1)
i=5200
print(np.argmax(pred[i]))
plt.pyplot.imshow(test_img[i])
p=[]
for i in range(test_img.shape[0]):
    p.append(np.argmax(pred[i]))
len(p)
sub=pd.read_csv('sample_submissioin.csv')
sub.head()
sub1=pd.DataFrame()
sub1['image_name']=img_test
sub1['image_name']=sub1['image_name'].apply(lambda x : x.split("/")[1])
sub1.head()
sub1['category']=p
sub1.to_csv('sub1.csv',index=False)
sub1.head()
sub1.category.value_counts()
```
# Python 101 Exercises
#### Exercise 1
Write a function which takes an integer number as input and checks if it is even or odd
```
def even_odd(num):
    if (num % 2) == 0:
        print("{0} is Even".format(num))
    else:
        print("{0} is Odd".format(num))
even_odd(num = 1)
```
#### Exercise 2
Write a function that removes all occurrences of a specific value from a list.
*For example:
remove_value(list = [3, 2, 1, 5, 4, 0, 0, 4, 2, 0, 7, 5, 0], value = 0)
Output: [3, 2, 1, 5, 4, 4, 2, 7, 5]*
```
def remove_value(sample_list, val):
    l = []
    for i in sample_list:
        if i != val:
            l.append(i)
    return l
remove_value(sample_list = [3, 2, 1, 5, 4, 0, 0, 4, 2, 0, 7, 5, 0], val = 0)
```
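The same filter is more idiomatic as a single list comprehension:

```python
def remove_value(sample_list, val):
    # Keep every element that does not equal val, preserving order.
    return [i for i in sample_list if i != val]

print(remove_value([3, 2, 1, 5, 4, 0, 0, 4, 2, 0, 7, 5, 0], 0))
# [3, 2, 1, 5, 4, 4, 2, 7, 5]
```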
#### Exercise 3
Write a function which prints a Fibonacci sequence of variable length
*For example:
fibonacci(n=7)
Output: 1, 1, 2, 3, 5, 8, 13*
```
def fib(n):
    fib = [1, 1]
    for i in range(2, n):
        fib.append(fib[i-1] + fib[i-2])
    print(', '.join(map(str, fib[:n])))
fib(7)
```
#### Exercise 4
Given a string, return a new string where the first and last characters have been exchanged
*For example:
exchange(string='abcdefg')
Output: 'gbcdefa'*
```
def exchange(str1):
    return str1[-1:] + str1[1:-1] + str1[:1]
print(exchange('abcd'))
print(exchange('12345'))
```
#### Exercise 5
Get the key of a minimum value from the following dictionary
sample_dict = {'A': 23, 'B': 65, 'C': 75, 'D': 3, 'E': 15, 'F': 175}
```
sample_dict = {'A': 23,'B': 65,'C': 75,'D': 3,'E': 15,'F': 175}
print(min(sample_dict, key=sample_dict.get))
```
#### Exercise 6
Use the ```sum()``` function to find the sum of dictionary values.
```
def returnSum(myDict):
    values = []
    for key in myDict:
        values.append(myDict[key])
    return sum(values)
sample_dict = {'a': 100, 'b': 200, 'c': 300}
print("Sum :", returnSum(sample_dict))
```
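Since `dict.values()` is already an iterable of the values, the helper can collapse to a single expression:

```python
d = {'a': 100, 'b': 200, 'c': 300}
print("Sum :", sum(d.values()))  # Sum : 600
```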
#### Exercise 7
Write a program which can compute the factorial of a given number.
```
def fact(x):
    if x == 0:
        return 1
    return x * fact(x - 1)
x = 6
fact(x)
```
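The recursive version can be cross-checked against the standard library's `math.factorial`:

```python
import math

def fact(x):
    # Recursive factorial: fact(0) = 1, fact(x) = x * fact(x - 1).
    return 1 if x == 0 else x * fact(x - 1)

print(fact(6), math.factorial(6))  # 720 720
```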
#### Exercise 8
Write a function that checks if the first and last number of a list is the same.
The function should return ```True``` if the first and last number of a given list is same. If the numbers are not the same then return ```False```.
```
def first_last_same(numberList):
    print("Given list:", numberList)
    first_num = numberList[0]
    last_num = numberList[-1]
    if first_num == last_num:
        return True
    else:
        return False
numbers_x = [10, 20, 30, 40, 10]
print("result is", first_last_same(numbers_x))
numbers_y = [75, 65, 35, 75, 30]
print("result is", first_last_same(numbers_y))
```
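Because `first_num == last_num` is already a boolean, the if/else can collapse to returning the comparison directly:

```python
def first_last_same(numbers):
    # The comparison itself evaluates to True or False.
    return numbers[0] == numbers[-1]

print(first_last_same([10, 20, 30, 40, 10]))  # True
print(first_last_same([75, 65, 35, 75, 30]))  # False
```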
#### Exercise 9
Write a function that prints the characters of a string that are present at an even index number.
*For example
even_characters(string = "Python")
Output: 'P', 't', 'o'*
```
def even_characters(string):
    size = len(string)
    # range(0, size, 2) covers every even index, including the last one when size is odd
    for i in range(0, size, 2):
        print("index[", i, "]", string[i])
even_characters('Hello World')
```
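Slicing with a step of 2 collects the same even-index characters in one expression:

```python
def even_characters(string):
    # string[::2] takes indices 0, 2, 4, ...
    return list(string[::2])

print(even_characters("Python"))  # ['P', 't', 'o']
```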
#### Exercise 10
Iterate over the given list of numbers and print only those numbers which are divisible by 5
```
def divisible(num_list):
    for num in num_list:
        if num % 5 == 0:
            print(num)
num_list = [10, 20, 33, 46, 55, 88, 32, 100]
divisible(num_list)
```
#### Exercise 11
Write a function which checks if the given number is a palindrome number.
```
def palindrome(number):
    print("original number", number)
    original_num = number
    reverse_num = 0
    while number > 0:
        remainder = number % 10
        reverse_num = (reverse_num * 10) + remainder
        number = number // 10
    if original_num == reverse_num:
        print("Given number is a palindrome")
    else:
        print("Given number is not a palindrome")
palindrome(121)
palindrome(125)
```
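A compact cross-check for the digit-reversal logic is to compare the number against its reversed string form:

```python
def is_palindrome(number):
    # A number is a palindrome iff its decimal string reads the same backwards.
    s = str(number)
    return s == s[::-1]

print(is_palindrome(121))  # True
print(is_palindrome(125))  # False
```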
#### Exercise 12
Write a function to print all prime numbers within a certain range
```
def prime(start, end):
    for num in range(start, end + 1):
        if num > 1:
            for i in range(2, num):
                if (num % i) == 0:
                    break
            else:
                print(num)
start = 25
end = 50
prime(start, end)
```
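Trial division up to `num - 1` does redundant work; checking divisors only up to the integer square root is a standard speed-up (a sketch using `math.isqrt`, available from Python 3.8):

```python
import math

def is_prime(n):
    if n < 2:
        return False
    # A composite n must have a divisor no larger than sqrt(n).
    for i in range(2, math.isqrt(n) + 1):
        if n % i == 0:
            return False
    return True

print([n for n in range(25, 51) if is_prime(n)])  # [29, 31, 37, 41, 43, 47]
```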
#### Exercise 14
Given a string, print all words with even length in the given string.
```
def printWords(s):
    s = s.split(' ')
    for word in s:
        # if length is even
        if len(word) % 2 == 0:
            print(word)
s = "I like Python"
printWords(s)
```
# Single cell data analysis using Scanpy
* __Notebook version__: `v0.0.2`
* __Created by:__ `Imperial BRC Genomics Facility`
* __Maintained by:__ `Imperial BRC Genomics Facility`
* __Docker image:__ `imperialgenomicsfacility/scanpy-notebook-image:release-v0.0.1`
* __Github repository:__ [imperial-genomics-facility/scanpy-notebook-image](https://github.com/imperial-genomics-facility/scanpy-notebook-image)
* __Created on:__ `2020-April-04 15:30`
* __Contact us:__ [Imperial BRC Genomics Facility](https://www.imperial.ac.uk/medicine/research-and-impact/facilities/genomics-facility/contact/)
* __License:__ [Apache License 2.0](https://github.com/imperial-genomics-facility/scanpy-notebook-image/blob/master/LICENSE)
## Table of contents
* [Introduction](#Introduction)
* [Loading required libraries](#Loading-required-libraries)
* [Reading data from Cellranger output](#Reading-data-from-Cellranger-output)
* [Data processing and visualization](#Data-processing-and-visualization)
* [Checking highly variable genes](#Checking-highly-variable-genes)
* [Quality control](#Quality-control)
* [Computing metrics for cell QC](#Computing-metrics-for-cell-QC)
    * [Plotting MT gene fractions](#Plotting-MT-gene-fractions)
* [Count depth distribution](#Count-depth-distribution)
* [Gene count distribution](#Gene-count-distribution)
* [Counting cells per gene](#Counting-cells-per-gene)
* [Ploting count depth vs MT fraction](#Ploting-count-depth-vs-MT-fraction)
* [Checking thresholds and filtering data](#Checking-thresholds-and-filtering-data)
* [Normalization](#Normalization)
* [Highly variable genes](#Highly-variable-genes)
* [Regressing out technical effects](#Regressing-out-technical-effects)
* [Principal component analysis](#Principal-component-analysis)
* [Neighborhood graph](#Neighborhood-graph)
* [Clustering the neighborhood graph](#Clustering-the-neighborhood-graph)
* [Embed the neighborhood graph using UMAP](#Embed-the-neighborhood-graph-using-UMAP)
* [Plotting 3D UMAP](#Plotting-3D-UMAP)
* [Plotting 2D UMAP](#Plotting-2D-UMAP)
* [Embed the neighborhood graph using tSNE](#Embed-the-neighborhood-graph-using-tSNE)
* [Finding marker genes](#Finding-marker-genes)
* [Stacked violin plot of ranked genes](#Stacked-violin-plot-of-ranked-genes)
* [Dot plot of ranked genes](#Dot-plot-of-ranked-genes)
* [Matrix plot of ranked genes](#Matrix-plot-of-ranked-genes)
* [Heatmap plot of ranked genes](#Heatmap-plot-of-ranked-genes)
* [Tracks plot of ranked genes](#Tracks-plot-of-ranked-genes)
* [References](#References)
* [Acknowledgement](#Acknowledgement)
## Introduction
This is a notebook for running single-cell data analysis (for a single sample) using the Scanpy package. Most of the code and documentation used in this notebook has been copied from the following sources:
* [Scanpy - Preprocessing and clustering 3k PBMCs](https://scanpy-tutorials.readthedocs.io/en/latest/pbmc3k.html)
* [Single-cell-tutorial](https://github.com/theislab/single-cell-tutorial)
## Loading required libraries
We need to load all the required libraries to environment before we can run any of the analysis steps. Also, we are checking the version information for most of the major packages used for analysis.
```
%matplotlib inline
import numpy as np
import pandas as pd
import scanpy as sc
import seaborn as sns
import matplotlib.pyplot as plt
from copy import deepcopy
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
sc.settings.verbosity = 0
sc.logging.print_versions()
```
We are setting the output file path to $/tmp/scanpy\_output.h5ad$
```
results_file = '/tmp/scanpy_output.h5ad'
```
The following steps are only required for downloading test data from 10X Genomics's website.
```
%%bash
rm -rf cache
rm -rf /tmp/data
mkdir -p /tmp/data
wget -q -O /tmp/data/pbmc3k_filtered_gene_bc_matrices.tar.gz \
    http://cf.10xgenomics.com/samples/cell-exp/1.1.0/pbmc3k/pbmc3k_filtered_gene_bc_matrices.tar.gz
cd /tmp/data
tar -xzf pbmc3k_filtered_gene_bc_matrices.tar.gz
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
## Reading data from Cellranger output
Load the Cellranger output to Scanpy
```
adata = sc.read_10x_mtx(
    '/tmp/data/filtered_gene_bc_matrices/hg19/',
    var_names='gene_symbols',
    cache=True)
```
Converting the gene names to unique values
```
adata.var_names_make_unique()
```
Checking the data dimensions before checking QC
```
adata
```
Scanpy stores the count data is an annotated data matrix ($observations$ e.g. cell barcodes × $variables$ e.g gene names) called [AnnData](https://anndata.readthedocs.io) together with annotations of observations($obs$), variables ($var$) and unstructured annotations ($uns$)
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
## Data processing and visualization
### Checking highly variable genes
Computing the fraction of counts assigned to each gene over all cells. The top genes with the highest mean fraction over all cells are plotted as boxplots.
```
plt.rcParams['figure.figsize']=(10,10)
sc.pl.highest_expr_genes(adata, n_top=20)
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
### Quality control
Checking $obs$ section of the AnnData object
```
adata.obs.head()
```
Checking the $var$ section of the AnnData object
```
adata.var.head()
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#### Computing metrics for cell QC
Listing the Mitochondrial genes detected in the cell population
```
mt_genes = [gene for gene in adata.var_names if gene.startswith('MT-')]
mito_genes = adata.var_names.str.startswith('MT-')
if len(mt_genes) == 0:
    print('Looking for mito genes with "mt-" prefix')
    mt_genes = [gene for gene in adata.var_names if gene.startswith('mt-')]
    mito_genes = adata.var_names.str.startswith('mt-')
if len(mt_genes) == 0:
    print("No mitochondrial genes found")
else:
    print("Mitochondrial genes: count: {0}, lists: {1}".format(len(mt_genes), mt_genes))
```
Typical quality measures for assessing the quality of a cell includes the following components
* Number of molecule counts (UMIs or $n\_counts$ )
* Number of expressed genes ($n\_genes$)
* Fraction of counts that are mitochondrial ($percent\_mito$)
We are calculating the above mentioned details using the following codes
```
adata.obs['mito_counts'] = np.sum(adata[:, mito_genes].X, axis=1).A1
adata.obs['percent_mito'] = \
    np.sum(adata[:, mito_genes].X, axis=1).A1 / np.sum(adata.X, axis=1).A1
adata.obs['n_counts'] = adata.X.sum(axis=1).A1
adata.obs['log_counts'] = np.log(adata.obs['n_counts'])
adata.obs['n_genes'] = (adata.X > 0).sum(1)
```
Checking the $obs$ section of the AnnData object again
```
adata.obs.head()
```
Sorting barcodes based on the $percent\_mito$ column
```
adata.obs.sort_values('percent_mito',ascending=False).head()
```
A high fraction of mitochondrial reads being picked up can indicate cell stress, as there is a low proportion of nuclear mRNA in the cell. It should be noted that high mitochondrial RNA fractions can also be biological signals indicating elevated respiration. <p/>
Cell barcodes with high count depth, few detected genes and high fraction of mitochondrial counts may indicate cells whose cytoplasmic mRNA has leaked out due to a broken membrane and only the mRNA located in the mitochondria has survived. <p/>
Cells with high UMI counts and detected genes may represent doublets (it requires further checking).
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#### Plotting MT gene fractions
```
plt.rcParams['figure.figsize']=(10,8)
sc.pl.violin(
    adata,
    ['n_genes', 'n_counts', 'percent_mito'],
    jitter=0.4,
    log=True,
    stripplot=True,
    show=False,
    multi_panel=False)
```
Violin plot (above) shows the computed quality measures of UMI counts, gene counts and fraction of mitochondrial counts.
```
plt.rcParams['figure.figsize']=(10,8)
ax = sc.pl.scatter(adata, 'n_counts', 'n_genes', color='percent_mito',show=False,)
ax.set_title('Fraction mitochondrial counts', fontsize=12)
ax.set_xlabel("Count depth",fontsize=12)
ax.set_ylabel("Number of genes",fontsize=12)
ax.tick_params(labelsize=12)
ax.axhline(700, 0,1, color='red')
ax.axvline(1500, 0,1, color='red')
```
The above scatter plot shows number of genes vs number of counts with $MT$ fraction information. We will be using a cutoff of 1500 counts and 700 genes (<span style="color:red">red lines</span>) to filter out dying cells.
```
ax = sc.pl.scatter(adata[adata.obs['n_counts']<10000], 'n_counts', 'n_genes', color='percent_mito',show=False)
ax.set_title('Fraction mitochondrial counts', fontsize=12)
ax.set_xlabel("Count depth",fontsize=12)
ax.set_ylabel("Number of genes",fontsize=12)
ax.tick_params(labelsize=12)
ax.axhline(700, 0,1, color='red')
ax.axvline(1500, 0,1, color='red')
```
A similar scatter plot, but this time we have restricted the counts to below _10K_
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#### Count depth distribution
```
count_data = adata.obs['n_counts'].copy()
count_data.sort_values(inplace=True, ascending=False)
order = range(1, len(count_data)+1)
ax = plt.semilogy(order, count_data, 'b-')
plt.gca().axhline(1500, 0,1, color='red')
plt.xlabel("Barcode rank", fontsize=12)
plt.ylabel("Count depth", fontsize=12)
plt.tick_params(labelsize=12)
```
The above plot is similar to _UMI counts_ vs _Barcodes_ plot of Cellranger report and it shows the count depth distribution from high to low count depths. This plot can be used to decide the threshold of count depth to filter out empty droplets.
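As an illustration of how such a threshold could be read off programmatically, here is a toy numpy sketch; the synthetic counts and the crude "knee" heuristic below are made up for the example, not part of the pipeline above:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical count depths: 300 real cells plus 700 empty droplets
counts = np.concatenate([rng.poisson(5000, 300), rng.poisson(200, 700)])
sorted_counts = np.sort(counts)[::-1]
# A crude knee estimate: the barcode rank where log-count depth drops fastest
log_counts = np.log1p(sorted_counts.astype(float))
knee_rank = int(np.argmin(np.diff(log_counts)))
print(knee_rank)
```

In real data the knee is usually picked by eye or with a more robust estimator, but the idea is the same: find the rank where count depth falls off sharply.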
```
ax = sns.distplot(adata.obs['n_counts'], kde=False)
ax.set_xlabel("Count depth",fontsize=12)
ax.set_ylabel("Frequency",fontsize=12)
ax.axvline(1500, 0,1, color='red')
```
The above histogram plot shows the distribution of count depth and the <span style="color:red">red line</span> marks the count threshold 1500.
```
if (adata.obs['n_counts'].max() - 10000) > 10000:
    print('Checking counts above 10K')
    ax = sns.distplot(adata.obs['n_counts'][adata.obs['n_counts']>10000], kde=False, bins=60)
    ax.set_xlabel("Count depth",fontsize=12)
    ax.set_ylabel("Frequency",fontsize=12)
else:
    print("Skip checking counts above 10K")
if adata.obs['n_counts'].max() > 2000:
    print('Zooming into first 2000 counts')
    ax = sns.distplot(adata.obs['n_counts'][adata.obs['n_counts']<2000], kde=False, bins=60)
    ax.set_xlabel("Count depth",fontsize=12)
    ax.set_ylabel("Frequency",fontsize=12)
    ax.axvline(1500, 0,1, color='red')
else:
    print("Failed to zoom into the counts below 2K")
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#### Gene count distribution
```
ax = sns.distplot(adata.obs['n_genes'], kde=False)
ax.set_xlabel("Number of genes",fontsize=12)
ax.set_ylabel("Frequency",fontsize=12)
ax.tick_params(labelsize=12)
ax.axvline(700, 0,1, color='red')
```
The above histogram plot shows the distribution of gene counts and the <span style="color:red">red line</span> marks the gene count threshold 700.
```
if adata.obs['n_genes'].max() > 1000:
    print('Zooming into first 1000 gene counts')
    ax = sns.distplot(adata.obs['n_genes'][adata.obs['n_genes']<1000], kde=False, bins=60)
    ax.set_xlabel("Number of genes",fontsize=12)
    ax.set_ylabel("Frequency",fontsize=12)
    ax.tick_params(labelsize=12)
    ax.axvline(700, 0,1, color='red')
else:
    print("Failed to zoom into the gene counts below 1K")
```
We use a permissive filtering threshold of 1500 counts and 700 gene counts to filter out the dying cells or empty droplets with ambient RNA.
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#### Counting cells per gene
```
adata.var['cells_per_gene'] = np.sum(adata.X > 0, 0).T
ax = sns.distplot(adata.var['cells_per_gene'][adata.var['cells_per_gene'] < 100], kde=False, bins=60)
ax.set_xlabel("Number of cells",fontsize=12)
ax.set_ylabel("Frequency",fontsize=12)
ax.set_title('Cells per gene', fontsize=12)
ax.tick_params(labelsize=12)
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#### Plotting count depth vs MT fraction
```
ax = sc.pl.scatter(adata, x='n_counts', y='percent_mito',show=False)
ax.set_title('Count depth vs Fraction mitochondrial counts', fontsize=12)
ax.set_xlabel("Count depth",fontsize=12)
ax.set_ylabel("Fraction mitochondrial counts",fontsize=12)
ax.tick_params(labelsize=12)
ax.axhline(0.2, 0,1, color='red')
```
The scatter plot shows count depth vs the MT fraction, and the <span style="color:red">red line</span> marks the default MT fraction cutoff of 0.2
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
### Checking thresholds and filtering data
Now we filter out the cells that do not meet our thresholds.
```
print('Total number of cells: {0}'.format(adata.n_obs))
min_counts_threshold = 1500
max_counts_threshold = 40000
min_gene_counts_threshold = 700
max_mito_pct_threshold = 0.2
sc.pp.filter_cells(adata, min_counts = min_counts_threshold)
print('Number of cells after min count ({0}) filter: {1}'.format(min_counts_threshold,adata.n_obs))
sc.pp.filter_cells(adata, max_counts = max_counts_threshold)
print('Number of cells after max count ({0}) filter: {1}'.format(max_counts_threshold,adata.n_obs))
sc.pp.filter_cells(adata, min_genes = min_gene_counts_threshold)
print('Number of cells after gene ({0}) filter: {1}'.format(min_gene_counts_threshold,adata.n_obs))
adata = adata[adata.obs['percent_mito'] < max_mito_pct_threshold]
print('Number of cells after MT fraction ({0}) filter: {1}'.format(max_mito_pct_threshold,adata.n_obs))
print('Total number of cells after filtering: {0}'.format(adata.n_obs))
```
We also filter out genes detected in fewer than 20 cells. This reduces the dimensions of the matrix by removing uninformative zero-count genes.
```
min_cell_per_gene_threshold = 20
print('Total number of genes: {0}'.format(adata.n_vars))
sc.pp.filter_genes(adata, min_cells=min_cell_per_gene_threshold)
print('Number of genes after cell filter: {0}'.format(adata.n_vars))
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
### Normalization
```
sc.pp.normalize_total(adata, target_sum=1e4)
```
We use simple total-count normalization (library-size correction) to scale the data matrix $X$ to 10,000 reads per cell, so that counts become comparable among cells.
```
sc.pp.log1p(adata)
```
Then logarithmize the data matrix
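A toy numpy sketch of what these two steps do (library-size scaling then `log1p`); this is illustrative, not Scanpy's actual implementation:

```python
import numpy as np

X = np.array([[1., 2., 3., 4.],
              [10., 0., 10., 0.],
              [2., 2., 2., 2.]])  # 3 cells x 4 genes

target_sum = 1e4
# Library-size correction: scale each cell so its counts sum to target_sum
X_norm = X / X.sum(axis=1, keepdims=True) * target_sum
# Logarithmize: natural log of (1 + value), which keeps zeros at zero
X_log = np.log1p(X_norm)
print(X_norm.sum(axis=1))  # every cell now sums to 10000
```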
```
adata.raw = adata
```
Copying the normalized and logarithmized raw gene expression data to the `.raw` attribute of the AnnData object for later use.
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
### Highly variable genes
The following code blocks identify highly variable genes (HVGs) to further reduce the dimensionality of the dataset and retain only the most informative genes. HVGs will be used for clustering, trajectory inference, and dimensionality reduction/visualization.
```
plt.rcParams['figure.figsize']=(7,7)
sc.pp.highly_variable_genes(adata, flavor='seurat', min_mean=0.0125, max_mean=3, min_disp=0.5)
seurat_hvg = np.sum(adata.var['highly_variable'])
print("Counts of HVGs: {0}".format(seurat_hvg))
sc.pl.highly_variable_genes(adata)
```
We use the 'seurat' flavor of HVG detection. The plots show how the dispersion is normalized so that highly variable genes are selected irrespective of mean expression: the index of dispersion divides the variance by the mean expression, the genes are then binned by mean expression, and the most variable genes within each bin are selected. The following code then performs the actual filtering.
```
adata = adata[:, adata.var.highly_variable]
```
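The binning idea behind the 'seurat' flavor can be sketched on synthetic numbers; the data and the 0.5 cutoff below are illustrative, not Scanpy's exact implementation:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
mean_expr = rng.lognormal(0.0, 1.0, 200)      # per-gene mean expression
disp = rng.uniform(0.5, 5.0, 200)             # index of dispersion (variance / mean)

df = pd.DataFrame({'mean': mean_expr, 'disp': disp})
df['bin'] = pd.cut(df['mean'], bins=20)
# Z-score the dispersion within each mean-expression bin, so that
# selection does not simply favor highly expressed genes
df['disp_norm'] = df.groupby('bin', observed=True)['disp'].transform(
    lambda d: (d - d.mean()) / d.std())
highly_variable = df['disp_norm'] > 0.5
print(int(highly_variable.sum()))
```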
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
### Regressing out technical effects
Normalization scales count data to make gene counts comparable between cells, but the data still contains unwanted variability. One of the most prominent technical covariates in single-cell data is count depth. Regressing out the effects of total counts per cell and the percentage of mitochondrial genes expressed can improve the performance of trajectory inference algorithms.
```
sc.pp.regress_out(adata, ['n_counts', 'percent_mito'])
```
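Regressing out a covariate amounts to keeping the residuals of a per-gene least-squares fit; a minimal numpy sketch on made-up data (not Scanpy's implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells = 100
n_counts = rng.uniform(1500, 40000, n_cells)
# A gene whose measured value partly tracks count depth (a technical effect)
expr = 0.0002 * n_counts + rng.normal(0.0, 1.0, n_cells)

# Fit expr ~ intercept + n_counts and keep the residuals
A = np.column_stack([np.ones(n_cells), n_counts])
coef, *_ = np.linalg.lstsq(A, expr, rcond=None)
residuals = expr - A @ coef
# The residuals are uncorrelated with the regressed-out covariate
print(abs(np.corrcoef(residuals, n_counts)[0, 1]))
```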
Scale each gene to unit variance. Clip values exceeding standard deviation 10.
```
sc.pp.scale(adata, max_value=10)
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
### Principal component analysis
Reduce the dimensionality of the data by running principal component analysis (PCA), which reveals the main axes of variation and denoises the data.
```
sc.tl.pca(adata, svd_solver='arpack')
plt.rcParams['figure.figsize']=(8,6)
sc.pl.pca(adata,color=['CST3'])
```
Let us inspect the contribution of single PCs to the total variance in the data. This gives us information about how many PCs we should consider in order to compute the neighborhood relations of cells.
```
sc.pl.pca_variance_ratio(adata, log=True)
```
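One common way to turn this inspection into a number of PCs is to pick the smallest count explaining a chosen fraction of variance; a hedged numpy sketch on synthetic low-rank data (the 90% target is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data with a rank-5 signal plus small noise
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 30)) + 0.1 * rng.normal(size=(200, 30))
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
var_ratio = S**2 / (S**2).sum()
# Smallest number of PCs explaining at least 90% of the variance
n_pcs = int(np.searchsorted(np.cumsum(var_ratio), 0.90) + 1)
print(n_pcs)
```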
Let us compute the neighborhood graph of cells using the PCA representation of the data matrix.
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
### Neighborhood graph
Computing the neighborhood graph of cells using the PCA representation of the data matrix.
```
sc.pp.neighbors(adata, n_neighbors=10, n_pcs=40)
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#### Clustering the neighborhood graph
Scanpy documentation recommends the Leiden graph-clustering method (community detection based on optimizing modularity) by Traag *et al.* (2018). Note that Leiden clustering directly clusters the neighborhood graph of cells, which we have already computed in the previous section.
```
sc.tl.leiden(adata)
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#### Embed the neighborhood graph using UMAP
Scanpy documentation suggests embedding the graph in two dimensions using UMAP (McInnes et al., 2018), see below. It is potentially more faithful to the global connectivity of the manifold than tSNE, i.e., it better preserves trajectories.
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
##### Plotting 3D UMAP
```
leiden_series = deepcopy(adata.obs['leiden'])
cell_clusters = list(leiden_series.value_counts().to_dict().keys())
colors = sc.pl.palettes.default_102[0:len(cell_clusters) ]
dict_map = dict(zip(cell_clusters,colors))
color_map = leiden_series.map(dict_map).values
labels = list(adata.obs.index)
sc.tl.umap(
adata,
n_components=3)
hovertext = [
    'cluster: {0}, barcode: {1}'.format(grp, labels[index])
    for index, grp in enumerate(leiden_series.values)]
## plotting 3D UMAP as html file
plot(
[go.Scatter3d(
x=adata.obsm['X_umap'][:, 0],
y=adata.obsm['X_umap'][:, 1],
z=adata.obsm['X_umap'][:, 2],
mode='markers',
marker=dict(color=color_map,
size=5),
opacity=0.6,
text=labels,
hovertext=hovertext,
)],
filename='UMAP-3D-plot.html')
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
##### Plotting 2D UMAP
```
sc.tl.umap(adata,n_components=2)
plt.rcParams['figure.figsize']=(10,8)
sc.pl.umap(adata, color=['CST3'])
```
Plot the scaled and corrected gene expression by passing `use_raw=False`:
```
sc.pl.umap(adata, color=['leiden'],use_raw=False,palette=sc.pl.palettes.default_102)
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#### Embed the neighborhood graph using tSNE
```
sc.tl.tsne(adata,n_pcs=40)
sc.pl.tsne(adata, color=['leiden'],palette=sc.pl.palettes.default_102)
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
### Finding marker genes
Let us compute a ranking for the highly differential genes in each cluster. For this, by default, the `.raw` attribute of AnnData is used in case it has been initialized before. The simplest and fastest method to do so is the t-test.
```
plt.rcParams['figure.figsize']=(6,6)
sc.tl.rank_genes_groups(adata, 'leiden', method='t-test')
sc.pl.rank_genes_groups(adata, n_genes=20, sharey=False,ncols=2)
```
The result of a Wilcoxon rank-sum (Mann-Whitney U) test is very similar (Soneson & Robinson, 2018).
```
sc.tl.rank_genes_groups(adata, 'leiden', method='wilcoxon')
sc.pl.rank_genes_groups(adata, n_genes=20, sharey=False,ncols=2)
```
Show the 5 top ranked genes per cluster 0, 1, …, 7 in a dataframe
```
pd.DataFrame(adata.uns['rank_genes_groups']['names']).head(5)
```
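The statistic behind `method='t-test'` style ranking can be computed by hand for one gene; a sketch using Welch's t on invented numbers (Scanpy's exact variance handling may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
# Expression of one gene in cluster A (40 cells) vs all other cells (60 cells)
a = rng.normal(2.0, 1.0, 40)
b = rng.normal(0.5, 1.0, 60)
# Welch's t-statistic: difference of means over its standard error
t = (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
print(round(t, 2))
```

A large positive t marks the gene as upregulated in cluster A; ranking genes by this statistic per cluster is what `rank_genes_groups` reports.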
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#### Stacked violin plot of ranked genes
Plot marker genes per cluster using stacked violin plots
```
sc.pl.rank_genes_groups_stacked_violin(
adata, n_genes=5,groupby='leiden',swap_axes=False,figsize=(20,10))
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#### Dot plot of ranked genes
The dotplot visualization provides a compact way of showing, per group, the fraction of cells expressing a gene (dot size) and the mean expression of the gene in those cells (color scale)
```
sc.pl.rank_genes_groups_dotplot(
adata, n_genes=5,groupby='leiden', dendrogram=True,figsize=(20,10))
```
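The two quantities behind a dotplot can be computed by hand on a toy matrix (invented counts for one cluster):

```python
import numpy as np

rng = np.random.default_rng(0)
expr = rng.poisson(0.7, size=(50, 3)).astype(float)  # 50 cells of one cluster x 3 genes

frac_expressing = (expr > 0).mean(axis=0)  # dot size: fraction of cells with a nonzero count
mean_expression = expr.mean(axis=0)        # color scale: mean expression in the group
print(frac_expressing, mean_expression)
```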
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#### Matrix plot of ranked genes
The matrixplot shows the mean expression of a gene in a group by category as a heatmap.
```
sc.pl.rank_genes_groups_matrixplot(adata, n_genes=5, groupby='leiden', figsize=(20,10))
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#### Heatmap plot of ranked genes
Heatmaps do not collapse cells as matrix plots do; instead, each cell is shown in a row.
```
sc.pl.rank_genes_groups_heatmap(
adata, n_genes=5, show_gene_labels=True, groupby='leiden', figsize=(20,10))
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
#### Tracks plot of ranked genes
The track plot shows the same information as the heatmap, but, instead of a color scale, the gene expression is represented by height.
```
sc.pl.rank_genes_groups_tracksplot(adata, n_genes=5, cmap='bwr',figsize=(20,30))
```
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
## References
* [Scanpy - Preprocessing and clustering 3k PBMCs](https://scanpy-tutorials.readthedocs.io/en/latest/pbmc3k.html)
* [single-cell-tutorial](https://github.com/theislab/single-cell-tutorial)
<div align="right"><a href="#Table-of-contents">Go to TOC</a></div>
## Acknowledgement
The Imperial BRC Genomics Facility is supported by NIHR funding to the Imperial Biomedical Research Centre.
<a href="https://colab.research.google.com/github/ayulockin/SwAV-TF/blob/master/Train_SwAV_10_epochs.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Imports and Setups
```
# Clone this repository to use the utils
!git clone https://github.com/ayulockin/SwAV-TF.git
import sys
sys.path.append('SwAV-TF/utils')
import multicrop_dataset
import architecture
import tensorflow as tf
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
import numpy as np
import random
import time
import os
from itertools import groupby
from tqdm import tqdm
tf.random.set_seed(666)
np.random.seed(666)
tfds.disable_progress_bar()
%%capture
!pip install wandb
import wandb
wandb.login()
```
# Flower Dataset
```
# Gather Flowers dataset
train_ds, validation_ds = tfds.load(
"tf_flowers",
split=["train[:85%]", "train[85%:]"]
)
# Visualization
plt.figure(figsize=(10, 10))
for i, image in enumerate(train_ds.take(9)):
    ax = plt.subplot(3, 3, i + 1)
    plt.imshow(image['image'])
    plt.axis("off")
```
# Multi Crop Resize Data Augmentation
```
# Configs
BS = 32
SIZE_CROPS = [224, 96]
NUM_CROPS = [2, 3]
MIN_SCALE = [0.5, 0.14]
MAX_SCALE = [1., 0.5]
# Experimental options
options = tf.data.Options()
options.experimental_optimization.noop_elimination = True
options.experimental_optimization.map_vectorization.enabled = True
options.experimental_optimization.apply_default_optimizations = True
options.experimental_deterministic = False
options.experimental_threading.max_intra_op_parallelism = 1
# Get multiple data loaders
trainloaders = multicrop_dataset.get_multires_dataset(train_ds,
size_crops=SIZE_CROPS,
num_crops=NUM_CROPS,
min_scale=MIN_SCALE,
max_scale=MAX_SCALE,
options=options)
# Prepare the final data loader
AUTO = tf.data.experimental.AUTOTUNE
# Zipping
trainloaders_zipped = tf.data.Dataset.zip(trainloaders)
# Final trainloader
trainloaders_zipped = (
trainloaders_zipped
.batch(BS)
.prefetch(AUTO)
)
im1, im2, im3, im4, im5 = next(iter(trainloaders_zipped))
print(im1.shape, im2.shape, im3.shape, im4.shape, im5.shape)
```
# Model Architecture
```
feature_backbone = architecture.get_resnet_backbone()
feature_backbone.summary()
projection_prototype = architecture.get_projection_prototype(2048, 128, 15)
projection_prototype.summary()
```
# Sinkhorn Knopp for Cluster Assignment
Reference: A.1 from https://arxiv.org/abs/2006.09882
```
def sinkhorn(sample_prototype_batch):
    Q = tf.transpose(tf.exp(sample_prototype_batch / 0.05))
    Q /= tf.keras.backend.sum(Q)
    K, B = Q.shape

    u = tf.zeros_like(K, dtype=tf.float32)
    r = tf.ones_like(K, dtype=tf.float32) / K
    c = tf.ones_like(B, dtype=tf.float32) / B

    for _ in range(3):
        u = tf.keras.backend.sum(Q, axis=1)
        Q *= tf.expand_dims((r / u), axis=1)
        Q *= tf.expand_dims(c / tf.keras.backend.sum(Q, axis=0), 0)

    final_quantity = Q / tf.keras.backend.sum(Q, axis=0, keepdims=True)
    final_quantity = tf.transpose(final_quantity)

    return final_quantity
```
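The same normalization can be checked in plain numpy; this is a hedged re-implementation of the function above for inspection, not the training code itself:

```python
import numpy as np

def sinkhorn_np(logits, n_iters=3, eps=0.05):
    """Numpy sketch of Sinkhorn-Knopp normalization as used above."""
    Q = np.exp(logits / eps).T          # (K prototypes, B samples)
    Q /= Q.sum()
    K, B = Q.shape
    r, c = np.ones(K) / K, np.ones(B) / B
    for _ in range(n_iters):
        Q *= (r / Q.sum(axis=1))[:, None]   # match row marginals
        Q *= (c / Q.sum(axis=0))[None, :]   # match column marginals
    # Normalize columns so each sample gets a soft cluster assignment
    return (Q / Q.sum(axis=0, keepdims=True)).T

rng = np.random.default_rng(0)
q = sinkhorn_np(rng.normal(size=(8, 4)))
print(q.sum(axis=1))  # each sample's assignment sums to 1
```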
# Train Step
```
# @tf.function
# Reference: https://github.com/facebookresearch/swav/blob/master/main_swav.py
def train_step(input_views, feature_backbone, projection_prototype,
               optimizer, crops_for_assign, temperature):
    # ============ retrieve input data ... ============
    im1, im2, im3, im4, im5 = input_views
    inputs = [im1, im2, im3, im4, im5]
    batch_size = inputs[0].shape[0]

    # ============ create crop entries with same shape ... ============
    crop_sizes = [inp.shape[1] for inp in inputs]  # list of crop size of views
    unique_consecutive_count = [len([elem for elem in g]) for _, g in groupby(crop_sizes)]  # equivalent to torch.unique_consecutive
    idx_crops = tf.cumsum(unique_consecutive_count)

    # ============ multi-res forward passes ... ============
    start_idx = 0
    with tf.GradientTape() as tape:
        for end_idx in idx_crops:
            concat_input = tf.stop_gradient(tf.concat(inputs[start_idx:end_idx], axis=0))
            _embedding = feature_backbone(concat_input)  # get embedding of same dim views together
            if start_idx == 0:
                embeddings = _embedding  # for first iter
            else:
                embeddings = tf.concat((embeddings, _embedding), axis=0)  # concat all the embeddings from all the views
            start_idx = end_idx

        projection, prototype = projection_prototype(embeddings)  # get normalized projection and prototype
        projection = tf.stop_gradient(projection)

        # ============ swav loss ... ============
        # https://github.com/facebookresearch/swav/issues/19
        loss = 0
        for i, crop_id in enumerate(crops_for_assign):  # crops_for_assign = [0, 1]
            with tape.stop_recording():
                out = prototype[batch_size * crop_id: batch_size * (crop_id + 1)]
                # get assignments
                q = sinkhorn(out)  # sinkhorn is used for cluster assignment

            # cluster assignment prediction
            subloss = 0
            for v in np.delete(np.arange(np.sum(NUM_CROPS)), crop_id):  # for the remaining views, compute p and take cross entropy with q
                p = tf.nn.softmax(prototype[batch_size * v: batch_size * (v + 1)] / temperature)
                subloss -= tf.math.reduce_mean(tf.math.reduce_sum(q * tf.math.log(p), axis=1))
            loss += subloss / tf.cast((tf.reduce_sum(NUM_CROPS) - 1), tf.float32)
        loss /= len(crops_for_assign)

    # ============ backprop ... ============
    variables = feature_backbone.trainable_variables + projection_prototype.trainable_variables
    gradients = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(gradients, variables))

    return loss
```
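The `groupby` trick used in `train_step` to mimic `torch.unique_consecutive` can be seen in isolation; with two 224px views and three 96px views the forward-pass boundaries come out as follows:

```python
import numpy as np
from itertools import groupby

crop_sizes = [224, 224, 96, 96, 96]  # the five views produced above
unique_consecutive_count = [len(list(g)) for _, g in groupby(crop_sizes)]
idx_crops = np.cumsum(unique_consecutive_count)
print(unique_consecutive_count, list(idx_crops))  # [2, 3] [2, 5]
```

Views of equal resolution are then concatenated and passed through the backbone together, one batch per boundary.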
# Training Loop
```
def train_swav(feature_backbone,
               projection_prototype,
               dataloader,
               optimizer,
               crops_for_assign,
               temperature,
               epochs=50):
    step_wise_loss = []
    epoch_wise_loss = []

    for epoch in range(epochs):
        w = projection_prototype.get_layer('prototype').get_weights()
        w = tf.transpose(w)
        w = tf.math.l2_normalize(w, axis=1)
        projection_prototype.get_layer('prototype').set_weights(tf.transpose(w))

        for i, inputs in enumerate(dataloader):
            loss = train_step(inputs, feature_backbone, projection_prototype,
                              optimizer, crops_for_assign, temperature)
            step_wise_loss.append(loss)

        epoch_wise_loss.append(np.mean(step_wise_loss))
        print("epoch: {} loss: {:.3f}".format(epoch + 1, np.mean(step_wise_loss)))
        wandb.log({'epoch': epoch, 'loss': np.mean(step_wise_loss)})

    return epoch_wise_loss, [feature_backbone, projection_prototype]
# ============ re-initialize the networks and the optimizer ... ============
feature_backbone = architecture.get_resnet_backbone()
projection_prototype = architecture.get_projection_prototype(15)
decay_steps = 1000
lr_decayed_fn = tf.keras.experimental.CosineDecay(
initial_learning_rate=0.1, decay_steps=decay_steps)
opt = tf.keras.optimizers.SGD(learning_rate=lr_decayed_fn)
# ================= initialize wandb ======================
wandb.init(entity='authors', project='swav-tf')
# ======================= train ===========================
epoch_wise_loss, models = train_swav(feature_backbone,
projection_prototype,
trainloaders_zipped,
opt,
crops_for_assign=[0, 1],
temperature=0.1,
epochs=10
)
# Serialize the models
feature_backbone, projection_prototype = models
feature_backbone.save_weights('feature_backbone_10_epochs.h5')
projection_prototype.save_weights('projection_prototype_10_epochs.h5')
```
```
import numpy as np
import tensorflow as tf
import collections
def build_dataset(words, n_words):
    count = [['GO', 0], ['PAD', 1], ['EOS', 2], ['UNK', 3]]
    count.extend(collections.Counter(words).most_common(n_words - 1))
    dictionary = dict()
    for word, _ in count:
        dictionary[word] = len(dictionary)
    data = list()
    unk_count = 0
    for word in words:
        index = dictionary.get(word, 0)
        if index == 0:
            unk_count += 1
        data.append(index)
    count[0][1] = unk_count
    reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
    return data, count, dictionary, reversed_dictionary
with open('english-train', 'r') as fopen:
    text_from = fopen.read().lower().split('\n')
with open('vietnam-train', 'r') as fopen:
    text_to = fopen.read().lower().split('\n')
print('len from: %d, len to: %d'%(len(text_from), len(text_to)))
concat_from = ' '.join(text_from).split()
vocabulary_size_from = len(list(set(concat_from)))
data_from, count_from, dictionary_from, rev_dictionary_from = build_dataset(concat_from, vocabulary_size_from)
print('vocab from size: %d'%(vocabulary_size_from))
print('Most common words', count_from[4:10])
print('Sample data', data_from[:10], [rev_dictionary_from[i] for i in data_from[:10]])
concat_to = ' '.join(text_to).split()
vocabulary_size_to = len(list(set(concat_to)))
data_to, count_to, dictionary_to, rev_dictionary_to = build_dataset(concat_to, vocabulary_size_to)
print('vocab to size: %d'%(vocabulary_size_to))
print('Most common words', count_to[4:10])
print('Sample data', data_to[:10], [rev_dictionary_to[i] for i in data_to[:10]])
GO = dictionary_from['GO']
PAD = dictionary_from['PAD']
EOS = dictionary_from['EOS']
UNK = dictionary_from['UNK']
class Chatbot:
    def __init__(self, size_layer, num_layers, embedded_size,
                 from_dict_size, to_dict_size, learning_rate, batch_size):

        def cells(reuse=False):
            return tf.nn.rnn_cell.GRUCell(size_layer, reuse=reuse)

        self.X = tf.placeholder(tf.int32, [None, None])
        self.Y = tf.placeholder(tf.int32, [None, None])
        self.X_seq_len = tf.placeholder(tf.int32, [None])
        self.Y_seq_len = tf.placeholder(tf.int32, [None])

        encoder_embedding = tf.Variable(tf.random_uniform([from_dict_size, embedded_size], -1, 1))
        decoder_embedding = tf.Variable(tf.random_uniform([to_dict_size, embedded_size], -1, 1))

        _, encoder_state = tf.nn.dynamic_rnn(
            cell = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)]),
            inputs = tf.nn.embedding_lookup(encoder_embedding, self.X),
            sequence_length = self.X_seq_len,
            dtype = tf.float32)
        encoder_state = tuple(encoder_state[-1] for _ in range(num_layers))
        main = self.Y[:, :-1]
        decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1)
        dense = tf.layers.Dense(to_dict_size)
        decoder_cells = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)])

        training_helper = tf.contrib.seq2seq.TrainingHelper(
            inputs = tf.nn.embedding_lookup(decoder_embedding, decoder_input),
            sequence_length = self.Y_seq_len,
            time_major = False)
        training_decoder = tf.contrib.seq2seq.BasicDecoder(
            cell = decoder_cells,
            helper = training_helper,
            initial_state = encoder_state,
            output_layer = dense)
        training_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
            decoder = training_decoder,
            impute_finished = True,
            maximum_iterations = tf.reduce_max(self.Y_seq_len))
        self.training_logits = training_decoder_output.rnn_output

        predicting_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
            embedding = decoder_embedding,
            start_tokens = tf.tile(tf.constant([GO], dtype=tf.int32), [batch_size]),
            end_token = EOS)
        predicting_decoder = tf.contrib.seq2seq.BasicDecoder(
            cell = decoder_cells,
            helper = predicting_helper,
            initial_state = encoder_state,
            output_layer = dense)
        predicting_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
            decoder = predicting_decoder,
            impute_finished = True,
            maximum_iterations = 3 * tf.reduce_max(self.X_seq_len))
        self.predicting_ids = predicting_decoder_output.sample_id

        masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32)
        self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.training_logits,
                                                     targets = self.Y,
                                                     weights = masks)
        self.optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(self.cost)
size_layer = 128
num_layers = 2
embedded_size = 128
learning_rate = 0.001
batch_size = 32
epoch = 50
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Chatbot(size_layer, num_layers, embedded_size, vocabulary_size_from + 4,
vocabulary_size_to + 4, learning_rate, batch_size)
sess.run(tf.global_variables_initializer())
def str_idx(corpus, dic):
    X = []
    for i in corpus:
        ints = []
        for k in i.split():
            try:
                ints.append(dic[k])
            except Exception as e:
                print(e)
                ints.append(2)
        X.append(ints)
    return X
X = str_idx(text_from, dictionary_from)
Y = str_idx(text_to, dictionary_to)
def pad_sentence_batch(sentence_batch, pad_int):
    padded_seqs = []
    seq_lens = []
    max_sentence_len = max([len(sentence) for sentence in sentence_batch])
    for sentence in sentence_batch:
        padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))
        seq_lens.append(len(sentence))
    return padded_seqs, seq_lens
def check_accuracy(logits, Y):
    acc = 0
    for i in range(logits.shape[0]):
        internal_acc = 0
        for k in range(len(Y[i])):
            try:
                if Y[i][k] == logits[i][k]:
                    internal_acc += 1
            except:
                continue
        acc += (internal_acc / len(Y[i]))
    return acc / logits.shape[0]
for i in range(epoch):
    total_loss, total_accuracy = 0, 0
    for k in range(0, (len(text_from) // batch_size) * batch_size, batch_size):
        batch_x, seq_x = pad_sentence_batch(X[k: k+batch_size], PAD)
        batch_y, seq_y = pad_sentence_batch(Y[k: k+batch_size], PAD)
        predicted, loss, _ = sess.run([model.predicting_ids, model.cost, model.optimizer],
                                      feed_dict={model.X: batch_x,
                                                 model.Y: batch_y,
                                                 model.X_seq_len: seq_x,
                                                 model.Y_seq_len: seq_y})
        total_loss += loss
        total_accuracy += check_accuracy(predicted, batch_y)
    total_loss /= (len(text_from) // batch_size)
    total_accuracy /= (len(text_from) // batch_size)
    print('epoch: %d, avg loss: %f, avg accuracy: %f' % (i+1, total_loss, total_accuracy))
```
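As a standalone illustration, the `pad_sentence_batch` helper above behaves like this (reproduced here so it can run on its own, with a toy batch):

```python
def pad_sentence_batch(sentence_batch, pad_int):
    # Pad every sentence in the batch to the length of the longest one
    padded_seqs, seq_lens = [], []
    max_sentence_len = max(len(s) for s in sentence_batch)
    for s in sentence_batch:
        padded_seqs.append(s + [pad_int] * (max_sentence_len - len(s)))
        seq_lens.append(len(s))
    return padded_seqs, seq_lens

batch, lens = pad_sentence_batch([[5, 6], [7, 8, 9]], pad_int=1)
print(batch, lens)  # [[5, 6, 1], [7, 8, 9]] [2, 3]
```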
# =========================
# Load libraries
# =========================
```
import pandas as pd
import numpy as np
from keras import models, layers
import keras_metrics as km
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import matplotlib.pyplot as plt
from sklearn.metrics import f1_score, confusion_matrix, roc_curve, auc
from sklearn.model_selection import train_test_split
from src.data.preprocess_word_embeddings import init_embeddings
```
# =========================
# Load data
# =========================
```
# Load gold data
data = pd.read_csv("../data/processed/stanford.csv")
train_data, test_data, train_labels, test_labels = train_test_split(
data["text"], data["label"], test_size=0.2
)
```
# =========================
# Prepare data
# =========================
```
# Create unique index for every word and fit to training data
tokenizer = Tokenizer(num_words = 10000)
tokenizer.fit_on_texts(train_data)
# Turn each tweet into a sequence of integers of equal length
sequences = tokenizer.texts_to_sequences(train_data)
train_corpus_embeddings = pad_sequences(sequences)
# Print the number of unique words found in the data set (not the limit placed
# on the tokenizer), use this as feedback to the num_words arg of Tokenizer().
print('Found %d unique words.' % len(tokenizer.word_index))
```
# =========================
# Split data
# =========================
```
# Randomly shuffle data
indices = np.arange(train_data.shape[0])
np.random.shuffle(indices)
train_corpus_embeddings = train_corpus_embeddings[indices]
train_labels = train_labels.values[indices]
# Split into training and validation data (approximately 80:20)
x_train = train_corpus_embeddings[:10410]
y_train = train_labels[:10410]
x_val = train_corpus_embeddings[10410:]
y_val = train_labels[10410:]
```
# =========================
# Parse GloVe word-embeddings
# =========================
# You need to download the pre-trained word vectors from:
# https://nlp.stanford.edu/projects/glove/
```
word_embedding_matrix = init_embeddings(tokenizer.word_index, 10000, 200, "/home/tcake/Downloads/glove.6B/glove.6B.200d.txt")
```
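The project-specific `init_embeddings` helper is not shown in this notebook; a typical implementation parses the GloVe text file into a `(num_words + 1, dim)` matrix roughly like the sketch below. The in-memory stand-in lines and function name are hypothetical:

```python
import numpy as np

def init_embeddings_sketch(word_index, num_words, dim, glove_lines):
    """Build an embedding matrix from 'word v1 v2 ...' GloVe-style lines."""
    vectors = {}
    for line in glove_lines:
        parts = line.split()
        vectors[parts[0]] = np.asarray(parts[1:], dtype='float32')
    matrix = np.zeros((num_words + 1, dim))
    for word, i in word_index.items():
        if i <= num_words and word in vectors:
            matrix[i] = vectors[word]   # words missing from GloVe stay all-zero
    return matrix

glove_lines = ["good 0.1 0.2", "bad -0.1 -0.2"]  # stand-in for the real file
m = init_embeddings_sketch({"good": 1, "bad": 2, "meh": 3}, 10000, 2, glove_lines)
print(m.shape)
```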
# =========================
# Build model
# =========================
```
# Add Embedding layer
# The final sigmoid layer outputs probability values between [0, 1]
model = models.Sequential()
model.add(layers.Embedding(10001, 200, input_length = train_corpus_embeddings.shape[1]))
model.add(layers.Dropout(0.2))
model.add(layers.LSTM(200))
model.add(layers.Dropout(0.1))
model.add(layers.Dense(1, activation = 'sigmoid'))
```
# =========================
# Load GloVe emdebbings
# =========================
```
# Load pretrained word embeddings
model.layers[0].set_weights([word_embedding_matrix])
model.layers[0].trainable = False
```
# =========================
# Train model
# =========================
```
# As the model outputs probabilities, binary crossentropy is the best loss
# metric as it measures the distance between probability distributions
model.compile(optimizer = 'rmsprop',
loss = 'binary_crossentropy',
metrics=[km.binary_precision(), km.binary_recall()])
history = model.fit(x_train,
y_train,
epochs = 10,
batch_size = 32,
validation_data = (x_val, y_val))
# Prep history dictionary
precision = history.history['precision']
val_precision = history.history['val_precision']
recall = history.history['recall']
val_recall = history.history['val_recall']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(precision) + 1)
# Plot the training and validation precision
plt.plot(epochs, precision, 'bo', label='Training Precision')
plt.plot(epochs, val_precision, 'b', label='Validation Precision')
plt.title('Training and validation Precision')
plt.xlabel('Epochs')
plt.ylabel('Precision')
plt.legend()
plt.show()
# Plot the training and validation recall
plt.clf()
plt.plot(epochs, recall, 'bo', label='Training Recall')
plt.plot(epochs, val_recall, 'b', label='Validation Recall')
plt.title('Training and validation recall')
plt.xlabel('Epochs')
plt.ylabel('Recall')
plt.legend()
plt.show()
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
```
# =========================
# Retrain model
# =========================
```
model = models.Sequential()
model.add(layers.Embedding(10001, 200, input_length = train_corpus_embeddings.shape[1]))
model.add(layers.Flatten())
model.add(layers.Dense(32, activation = 'relu'))
model.add(layers.Dense(1, activation = 'sigmoid'))
model.layers[0].set_weights([word_embedding_matrix])
model.layers[0].trainable = False
model.compile(optimizer = 'rmsprop',
loss = 'binary_crossentropy',
metrics=[km.binary_precision(), km.binary_recall()])
model.fit(x_train, y_train, epochs = 5, batch_size = 512)
```
# =========================
# Evaluate on test data
# =========================
```
# DO NOT retrain the tokenizer. Use the argument oov_token=True to reserve a
# token for unknown words. See https://bit.ly/2lNh15g
# Prepare data
# Ensure sequences are padded to the same length as training data
x_sequences = tokenizer.texts_to_sequences(test_data)
x_test = pad_sequences(x_sequences, train_corpus_embeddings.shape[1])
# Prepare labels, transform to binary and float32
y_test = test_labels.values
# Print results as ['precision', 'recall'] check names with model.metrics_names
model.evaluate(x_test, y_test)[1:]
```
# =========================
# F1 Score
# =========================
https://twitter.com/jessamyn/status/900867154412699649?ref_src=twsrc%5Etfw%7Ctwcamp%5Etweetembed%7Ctwterm%5E900867154412699649&ref_url=https%3A%2F%2Fblog.ceshine.net%2Fpost%2Fkaggle-jigsaw-toxic-2019%2F
```
y_pred = model.predict(x_test, batch_size=32)
y_pred_roc = y_pred
y_pred = y_pred > 0.5
y_pred = y_pred.flatten()
y_pred = y_pred.astype(int)
print("Confusion matrix:")
print(confusion_matrix(y_test, y_pred))
print("Macro F1:%f" % f1_score(y_test, y_pred, average="macro"))
print("Micro F1:%f" % f1_score(y_test, y_pred, average="micro"))
print("Weighted F1:%f" % f1_score(y_test, y_pred, average="weighted"))
```
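For reference, macro averaging computes F1 per class and averages the results, while micro averaging pools the counts across classes first; for single-label classification, micro-F1 reduces to plain accuracy. A self-contained sketch of that difference (helper name is illustrative):

```python
import numpy as np

def f1_scores(conf):
    """Per-class, macro and micro F1 from a confusion matrix.

    conf[i, j] = count of samples with true class i predicted as class j.
    """
    conf = np.asarray(conf, dtype=float)
    tp = np.diag(conf)
    precision = tp / np.maximum(conf.sum(axis=0), 1e-12)   # column sums
    recall = tp / np.maximum(conf.sum(axis=1), 1e-12)      # row sums
    per_class = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    # Micro-F1 for single-label tasks is just pooled accuracy
    return per_class, per_class.mean(), tp.sum() / conf.sum()

per_class, macro, micro = f1_scores([[4, 1], [2, 3]])   # micro = 7/10
```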
# =========================
# ROC - AUC
# =========================
```
fpr_keras, tpr_keras, thresholds_keras = roc_curve(y_test, y_pred_roc)
auc_keras = auc(fpr_keras, tpr_keras)
plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_keras, tpr_keras, label='Keras (area = {:.3f})'.format(auc_keras))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
plt.show()
# Zoom in view of the upper left corner.
plt.figure(2)
plt.xlim(0, 0.2)
plt.ylim(0.0, 1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_keras, tpr_keras, label='Keras (area = {:.3f})'.format(auc_keras))
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve (zoomed in at top left)')
plt.legend(loc='best')
plt.show()
```
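For reference, `auc(fpr_keras, tpr_keras)` is simply the trapezoidal area under the ROC curve; a minimal sketch of the same computation:

```python
import numpy as np

def auc_trapezoid(fpr, tpr):
    """Area under a curve via the trapezoidal rule, like sklearn's auc()."""
    fpr, tpr = np.asarray(fpr, dtype=float), np.asarray(tpr, dtype=float)
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))

area = auc_trapezoid([0.0, 0.5, 1.0], [0.0, 1.0, 1.0])  # → 0.75
```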
# Autonomous driving - Car detection
Welcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: Redmon et al., 2016 (https://arxiv.org/abs/1506.02640) and Redmon and Farhadi, 2016 (https://arxiv.org/abs/1612.08242).
**You will learn to**:
- Use object detection on a car detection dataset
- Deal with bounding boxes
Run the following cell to load the packages and dependencies that are going to be useful for your journey!
```
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors
from yolo_utils import preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners
from yad2k.models.keras_yolo import preprocess_true_boxes, yolo_loss, yolo_body
%matplotlib inline
```
**Important Note**: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: `K.function(...)`.
## 1 - Problem Statement
You are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around.
<center>
<video width="400" height="200" src="nb_images/road_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Pictures taken from a car-mounted camera while driving around Silicon Valley. <br> We would like to especially thank [drive.ai](https://www.drive.ai/) for providing this dataset! Drive.ai is a company building the brains of self-driving vehicles.
</center></caption>
<img src="nb_images/driveai.png" style="width:100px;height:100;">
You've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like.
<img src="nb_images/box_label.png" style="width:500px;height:250;">
<caption><center> <u> **Figure 1** </u>: **Definition of a box**<br> </center></caption>
If you have 80 classes that you want YOLO to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step.
In this exercise, you will learn how YOLO works, then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use.
## 2 - YOLO
YOLO ("you only look once") is a popular algorithm because it achieves high accuracy while also being able to run in real-time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes.
### 2.1 - Model details
First things to know:
- The **input** is a batch of images of shape (m, 608, 608, 3)
- The **output** is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers.
We will use 5 anchor boxes. So you can think of the YOLO architecture as the following: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).
Let's look in greater detail at what this encoding represents.
<img src="nb_images/architecture.png" style="width:700px;height:400;">
<caption><center> <u> **Figure 2** </u>: **Encoding architecture for YOLO**<br> </center></caption>
If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object.
Since we are using 5 anchor boxes, each of the 19x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.
For simplicity, we will flatten the last two dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425).
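This flattening is just a reshape of the last two axes; for example:

```python
import numpy as np

# One encoding per image: 19x19 grid, 5 anchors, 85 numbers per box
encoding = np.zeros((19, 19, 5, 85))
flat = encoding.reshape(19, 19, 5 * 85)      # last two dims merged -> (19, 19, 425)
# ...and back, to index a single box: cell (3, 7), anchor 2
box = flat.reshape(19, 19, 5, 85)[3, 7, 2]   # shape (85,)
```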
<img src="nb_images/flatten.png" style="width:700px;height:400;">
<caption><center> <u> **Figure 3** </u>: **Flattening the last two dimensions**<br> </center></caption>
Now, for each box (of each cell) we will compute the following elementwise product and extract a probability that the box contains a certain class.
<img src="nb_images/probability_extraction.png" style="width:700px;height:400;">
<caption><center> <u> **Figure 4** </u>: **Find the class detected by each box**<br> </center></caption>
Here's one way to visualize what YOLO is predicting on an image:
- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across both the 5 anchor boxes and across different classes).
- Color that grid cell according to what object that grid cell considers the most likely.
Doing this results in this picture:
<img src="nb_images/proba_map.png" style="width:300px;height:300;">
<caption><center> <u> **Figure 5** </u>: Each of the 19x19 grid cells colored according to which class has the largest predicted probability in that cell.<br> </center></caption>
Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm.
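For reference, that per-cell coloring is just two reductions over the (19, 19, 5, 80) score tensor — a NumPy sketch, with random scores standing in for real predictions:

```python
import numpy as np

box_scores = np.random.rand(19, 19, 5, 80)     # score of each class for each box
best_score = box_scores.max(axis=(2, 3))       # (19, 19): strongest box score per cell
flat = box_scores.reshape(19, 19, 5 * 80)
# Flattened index is anchor*80 + class, so % 80 recovers the class index
best_class = flat.argmax(axis=-1) % 80         # (19, 19): class behind that score
```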
Another way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this:
<img src="nb_images/anchor_map.png" style="width:200px;height:200;">
<caption><center> <u> **Figure 6** </u>: Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. <br> </center></caption>
In the figure above, we plotted only boxes that the model had assigned a high probability to, but this is still too many boxes. You'd like to filter the algorithm's output down to a much smaller number of detected objects. To do so, you'll use non-max suppression. Specifically, you'll carry out these steps:
- Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class)
- Select only one box when several boxes overlap with each other and detect the same object.
### 2.2 - Filtering with a threshold on class scores
You are going to apply a first filter by thresholding. You would like to get rid of any box for which the class "score" is less than a chosen threshold.
The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It'll be convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables:
- `box_confidence`: tensor of shape $(19 \times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.
- `boxes`: tensor of shape $(19 \times 19, 5, 4)$ containing $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes per cell.
- `box_class_probs`: tensor of shape $(19 \times 19, 5, 80)$ containing the detection probabilities $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell.
**Exercise**: Implement `yolo_filter_boxes()`.
1. Compute box scores by doing the elementwise product as described in Figure 4. The following code may help you choose the right operator:
```python
a = np.random.randn(19*19, 5, 1)
b = np.random.randn(19*19, 5, 80)
c = a * b # shape of c will be (19*19, 5, 80)
```
2. For each box, find:
- the index of the class with the maximum box score ([Hint](https://keras.io/backend/#argmax)) (Be careful with what axis you choose; consider using axis=-1)
- the corresponding box score ([Hint](https://keras.io/backend/#max)) (Be careful with what axis you choose; consider using axis=-1)
3. Create a mask by using a threshold. As a reminder: `([0.9, 0.3, 0.4, 0.5, 0.1] < 0.4)` returns: `[False, True, False, False, True]`. The mask should be True for the boxes you want to keep.
4. Use TensorFlow to apply the mask to box_class_scores, boxes and box_classes to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep. ([Hint](https://www.tensorflow.org/api_docs/python/tf/boolean_mask))
Reminder: to call a Keras function, you should use `K.function(...)`.
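As a NumPy analogue of steps 3 and 4 (`tf.boolean_mask` behaves the same way on tensors):

```python
import numpy as np

scores = np.array([0.9, 0.3, 0.4, 0.5, 0.1])
mask = scores >= 0.4          # True where the score clears the threshold
kept = scores[mask]           # keeps only the masked entries: 0.9, 0.4, 0.5
```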
```
# GRADED FUNCTION: yolo_filter_boxes
def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
"""Filters YOLO boxes by thresholding on object and class confidence.
Arguments:
box_confidence -- tensor of shape (19, 19, 5, 1)
boxes -- tensor of shape (19, 19, 5, 4)
box_class_probs -- tensor of shape (19, 19, 5, 80)
threshold -- real value, if [ highest class probability score
< threshold], then get rid of the corresponding box
Returns:
scores -- tensor of shape (None,), containing the class probability
score for selected boxes
boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w)
coordinates of selected boxes
classes -- tensor of shape (None,), containing the index of the
class detected by the selected boxes
Note: "None" is here because you don't know the exact number
of selected boxes, as it depends on the threshold.
For example, the actual output size of scores would be (10,)
if there are 10 boxes.
"""
# Step 1: Compute box scores
### START CODE HERE ### (≈ 1 line)
box_scores = box_confidence * box_class_probs
### END CODE HERE ###
# Step 2: Find the box_classes thanks to the max box_scores,
# keep track of the corresponding score
### START CODE HERE ### (≈ 2 lines)
box_classes = K.argmax(box_scores,axis=-1)
box_class_scores = K.max(box_scores,axis=-1,keepdims = False)
### END CODE HERE ###
# Step 3: Create a filtering mask based on "box_class_scores" by using
# "threshold". The mask should have the
# same dimension as box_class_scores, and be True for the boxes
# you want to keep (with probability >= threshold)
### START CODE HERE ### (≈ 1 line)
filtering_mask = box_class_scores >= threshold
### END CODE HERE ###
# Step 4: Apply the mask to scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = tf.boolean_mask(box_class_scores ,filtering_mask)
boxes = tf.boolean_mask(boxes ,filtering_mask)
classes = tf.boolean_mask(box_classes ,filtering_mask)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_a:
box_confidence = tf.random_normal([19, 19, 5, 1], mean=1,
stddev=4, seed = 1)
boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4,
seed = 1)
box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1,
stddev=4, seed = 1)
scores, boxes, classes = yolo_filter_boxes(box_confidence,
boxes, box_class_probs, threshold = 0.5)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.shape))
print("boxes.shape = " + str(boxes.shape))
print("classes.shape = " + str(classes.shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
10.7506
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 8.42653275 3.27136683 -0.5313437 -4.94137383]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
7
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(?,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(?, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(?,)
</td>
</tr>
</table>
### 2.3 - Non-max suppression ###
Even after filtering by thresholding over the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS).
<img src="nb_images/non-max-suppression.png" style="width:500px;height:400;">
<caption><center> <u> **Figure 7** </u>: In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) one of the 3 boxes. <br> </center></caption>
Non-max suppression uses the very important function called **"Intersection over Union"**, or IoU.
<img src="nb_images/iou.png" style="width:500px;height:400;">
<caption><center> <u> **Figure 8** </u>: Definition of "Intersection over Union". <br> </center></caption>
**Exercise**: Implement iou(). Some hints:
- In this exercise only, we define a box using its two corners (upper left and lower right): `(x1, y1, x2, y2)` rather than the midpoint and height/width.
- To calculate the area of a rectangle you need to multiply its height `(y2 - y1)` by its width `(x2 - x1)`.
- You'll also need to find the coordinates `(xi1, yi1, xi2, yi2)` of the intersection of two boxes. Remember that:
- xi1 = maximum of the x1 coordinates of the two boxes
- yi1 = maximum of the y1 coordinates of the two boxes
- xi2 = minimum of the x2 coordinates of the two boxes
- yi2 = minimum of the y2 coordinates of the two boxes
- In order to compute the intersection area, you need to make sure the height and width of the intersection are positive, otherwise the intersection area should be zero. Use `max(height, 0)` and `max(width, 0)`.
In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) the lower-right corner.
```
# GRADED FUNCTION: iou
def iou(box1, box2):
"""Implement the intersection over union (IoU) between box1 and box2
Arguments:
box1 -- first box, list object with coordinates (x1, y1, x2, y2)
box2 -- second box, list object with coordinates (x1, y1, x2, y2)
"""
# Calculate the (y1, x1, y2, x2) coordinates of the intersection
# of box1 and box2. Calculate its Area.
### START CODE HERE ### (≈ 5 lines)
xi1 = np.max([box1[0], box2[0]])
yi1 = np.max([box1[1], box2[1]])
xi2 = np.min([box1[2], box2[2]])
yi2 = np.min([box1[3], box2[3]])
#inter_area = (xi2 - xi1) * (yi2 - yi1)
inter_area = np.maximum(yi2-yi1,0) * np.maximum(xi2-xi1,0)
### END CODE HERE ###
# Calculate the Union area by using Formula:
# Union(A,B) = A + B - Inter(A,B)
### START CODE HERE ### (≈ 3 lines)
box1_area = (box1[3] - box1[1]) * (box1[2] - box1[0])
box2_area = (box2[3] - box2[1]) * (box2[2] - box2[0])
union_area = box1_area + box2_area - inter_area
### END CODE HERE ###
# compute the IoU
### START CODE HERE ### (≈ 1 line)
iou = inter_area / union_area
### END CODE HERE ###
return iou
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou = " + str(iou(box1, box2)))
```
**Expected Output**:
<table>
<tr>
<td>
**iou = **
</td>
<td>
0.14285714285714285
</td>
</tr>
</table>
You are now ready to implement non-max suppression. The key steps are:
1. Select the box that has the highest score.
2. Compute its overlap with all other boxes, and remove boxes that overlap it more than `iou_threshold`.
3. Go back to step 1 and iterate until there are no boxes left with a lower score than the currently selected box.
This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain.
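These three steps are the classic greedy NMS loop. The TensorFlow built-in used in the next cell does this for you, but a pure-NumPy sketch makes the algorithm concrete (`nms` here is an illustrative helper, not part of the assignment):

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-max suppression on corner-format (x1, y1, x2, y2) boxes.

    Returns the indices of the boxes that survive, best score first.
    """
    order = np.argsort(scores)[::-1]          # step 1: highest score first
    keep = []
    while order.size > 0:
        best, rest = order[0], order[1:]
        keep.append(int(best))
        # step 2: IoU of the best box against every remaining box
        xi1 = np.maximum(boxes[best, 0], boxes[rest, 0])
        yi1 = np.maximum(boxes[best, 1], boxes[rest, 1])
        xi2 = np.minimum(boxes[best, 2], boxes[rest, 2])
        yi2 = np.minimum(boxes[best, 3], boxes[rest, 3])
        inter = np.maximum(xi2 - xi1, 0) * np.maximum(yi2 - yi1, 0)
        area_best = (boxes[best, 2] - boxes[best, 0]) * (boxes[best, 3] - boxes[best, 1])
        area_rest = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_best + area_rest - inter)
        # step 3: drop heavy overlaps and iterate on what is left
        order = rest[iou <= iou_threshold]
    return keep
```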
**Exercise**: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your `iou()` implementation):
- [tf.image.non_max_suppression()](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression)
- [K.gather()](https://www.tensorflow.org/api_docs/python/tf/gather)
```
# GRADED FUNCTION: yolo_non_max_suppression
def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10,
iou_threshold = 0.5):
"""
Applies Non-max suppression (NMS) to set of boxes
Arguments:
scores -- tensor of shape (None,), output of yolo_filter_boxes()
boxes -- tensor of shape (None, 4), output of yolo_filter_boxes()
that have been scaled to the image size (see later)
classes -- tensor of shape (None,), output of yolo_filter_boxes()
max_boxes -- integer, maximum number of predicted boxes you'd like
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (, None), predicted score for each box
boxes -- tensor of shape (4, None), predicted box coordinates
classes -- tensor of shape (, None), predicted class for each box
Note: The "None" dimension of the output tensors has obviously
to be less than max_boxes. Note also that this
function will transpose the shapes of scores, boxes, classes.
This is made for convenience.
"""
max_boxes_tensor = K.variable(max_boxes, dtype='int32')
# tensor to be used in tf.image.non_max_suppression()
K.get_session().run(tf.variables_initializer([max_boxes_tensor]))
# initialize variable max_boxes_tensor
# Use tf.image.non_max_suppression() to get the list
# of indices corresponding to boxes you keep
### START CODE HERE ### (≈ 1 line)
nms_indices = tf.image.non_max_suppression(boxes, scores,
max_boxes_tensor, iou_threshold)
### END CODE HERE ###
# Use K.gather() to select only nms_indices from scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = K.gather(scores, nms_indices)
boxes = K.gather(boxes, nms_indices)
classes = K.gather(classes, nms_indices)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
6.9384
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[-5.299932 3.13798141 4.45036697 0.95942086]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
-2.24527
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
### 2.4 Wrapping up the filtering
It's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented.
**Exercise**: Implement `yolo_eval()` which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. There's just one last implementation detail you have to know. There are a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided):
```python
boxes = yolo_boxes_to_corners(box_xy, box_wh)
```
which converts the YOLO box coordinates (x, y, w, h) to box corner coordinates (x1, y1, x2, y2) to fit the input of `yolo_filter_boxes`.
```python
boxes = scale_boxes(boxes, image_shape)
```
YOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image--for example, the car detection dataset had 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image.
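For intuition, such a rescaling amounts to multiplying each corner coordinate by the corresponding image dimension. A sketch, assuming boxes in normalized (y1, x1, y2, x2) form; the exact coordinate convention of the provided `scale_boxes` may differ:

```python
import numpy as np

def scale_boxes_np(boxes, image_shape):
    """Scale normalized (y1, x1, y2, x2) boxes to pixel coordinates.

    Assumes coordinates are fractions of the image side lengths.
    """
    height, width = image_shape
    return boxes * np.array([height, width, height, width])

pixels = scale_boxes_np(np.array([[0.25, 0.5, 0.75, 1.0]]), (720.0, 1280.0))
# pixel-space corners: 180., 640., 540., 1280.
```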
Don't worry about these two functions; we'll show you where they need to be called.
```
# GRADED FUNCTION: yolo_eval
def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
"""
Converts the output of YOLO encoding (a lot of boxes) to
your predicted boxes along with their scores, box coordinates and classes.
Arguments:
yolo_outputs -- output of the encoding model (for image_shape of
(608, 608, 3)), contains 4 tensors:
box_confidence: tensor of shape (None, 19, 19, 5, 1)
box_xy: tensor of shape (None, 19, 19, 5, 2)
box_wh: tensor of shape (None, 19, 19, 5, 2)
box_class_probs: tensor of shape (None, 19, 19, 5, 80)
image_shape -- tensor of shape (2,) containing the input shape,
in this notebook we use (608., 608.) (has to be float32 dtype)
max_boxes -- integer, maximum number of predicted boxes you'd like
score_threshold -- real value, if [ highest class probability score
< threshold], then get rid of the corresponding box
iou_threshold -- real value, "intersection over union" threshold used
for NMS filtering
Returns:
scores -- tensor of shape (None, ), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
"""
### START CODE HERE ###
# Retrieve outputs of the YOLO model (≈1 line)
box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs
# Convert boxes to be ready for filtering functions
boxes = yolo_boxes_to_corners(box_xy, box_wh)
# Use one of the functions you've implemented to perform
# Score-filtering with a threshold of score_threshold (≈1 line)
scores, boxes, classes = yolo_filter_boxes(box_confidence,
boxes, box_class_probs, threshold=score_threshold)
# Scale boxes back to original image shape.
boxes = scale_boxes(boxes, image_shape)
# Use one of the functions you've implemented to perform Non-max
# suppression with a threshold of iou_threshold (≈1 line)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
scores, boxes, classes = yolo_eval(yolo_outputs)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
138.791
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 1292.32971191 -278.52166748 3876.98925781 -835.56494141]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
54
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
<font color='blue'>
**Summary for YOLO**:
- Input image (608, 608, 3)
- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output.
- After flattening the last two dimensions, the output is a volume of shape (19, 19, 425):
- Each cell in a 19x19 grid over the input image gives 425 numbers.
- 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture.
- 85 = 5 + 80 where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and 80 is the number of classes we'd like to detect
- You then select only a few boxes based on:
- Score-thresholding: throw away boxes that have detected a class with a score less than the threshold
- Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes
- This gives you YOLO's final output.
## 3 - Test YOLO pretrained model on images
In this part, you are going to use a pretrained model and test it on the car detection dataset. As usual, you start by **creating a session to start your graph**. Run the following cell.
```
sess = K.get_session()
```
### 3.1 - Defining classes, anchors and image shape.
Recall that we are trying to detect 80 classes, and are using 5 anchor boxes. We have gathered the information about the 80 classes and 5 boxes in two files "coco_classes.txt" and "yolo_anchors.txt". Let's load these quantities into the model by running the next cell.
The car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.
```
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)
```
### 3.2 - Loading a pretrained model
Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. You are going to load an existing pretrained Keras YOLO model stored in "yolo.h5". (These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but we will more simply refer to it as "YOLO" in this notebook.) Run the cell below to load the model from this file.
```
yolo_model = load_model("model_data/yolo.h5")
```
This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
```
yolo_model.summary()
```
**Note**: On some computers, you may see a warning message from Keras. Don't worry about it if you do--it is fine.
**Reminder**: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2).
### 3.3 - Convert output of the model to usable bounding box tensors
The output of `yolo_model` is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.
```
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
```
You added `yolo_outputs` to your graph. This set of 4 tensors is ready to be used as input by your `yolo_eval` function.
### 3.4 - Filtering boxes
`yolo_outputs` gave you all the predicted boxes of `yolo_model` in the correct format. You're now ready to perform filtering and select only the best boxes. Let's now call `yolo_eval`, which you had previously implemented, to do this.
```
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
```
### 3.5 - Run the graph on an image
Let the fun begin. You have created a graph (`sess`) that can be summarized as follows:
1. <font color='purple'> yolo_model.input </font> is given to `yolo_model`. The model is used to compute the output <font color='purple'> yolo_model.output </font>
2. <font color='purple'> yolo_model.output </font> is processed by `yolo_head`. It gives you <font color='purple'> yolo_outputs </font>
3. <font color='purple'> yolo_outputs </font> goes through a filtering function, `yolo_eval`. It outputs your predictions: <font color='purple'> scores, boxes, classes </font>
**Exercise**: Implement predict() which runs the graph to test YOLO on an image.
You will need to run a TensorFlow session to have it compute `scores, boxes, classes`.
The code below also uses the following function:
```python
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
```
which outputs:
- image: a python (PIL) representation of your image used for drawing boxes. You won't need to use it.
- image_data: a numpy-array representing the image. This will be the input to the CNN.
**Important note**: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}.
```
def predict(sess, image_file):
"""
Runs the graph stored in "sess" to predict boxes for "image_file".
Prints and plots the predictions.
Arguments:
sess -- your tensorflow/Keras session containing the YOLO graph
image_file -- name of an image stored in the "images" folder.
Returns:
out_scores -- tensor of shape (None, ), scores of the predicted boxes
out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
out_classes -- tensor of shape (None, ), class index of the predicted boxes
Note: "None" actually represents the number of predicted boxes,
it varies between 0 and max_boxes.
"""
# Preprocess your image
image, image_data = preprocess_image("images/" + image_file,
model_image_size = (608, 608))
# Run the session with the correct tensors and choose
# the correct placeholders in the feed_dict.
# You'll need to use feed_dict={yolo_model.input: ... ,
# K.learning_phase(): 0})
### START CODE HERE ### (≈ 1 line)
out_scores, out_boxes, out_classes = sess.run([scores, boxes, classes],
feed_dict={yolo_model.input: image_data, K.learning_phase(): 0})
### END CODE HERE ###
# Print predictions info
print('Found {} boxes for {}'.format(len(out_boxes), image_file))
# Generate colors for drawing bounding boxes.
colors = generate_colors(class_names)
# Draw bounding boxes on the image file
draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
# Save the predicted bounding box on the image
image.save(os.path.join("out", image_file), quality=90)
# Display the results in the notebook
output_image = scipy.misc.imread(os.path.join("out", image_file))
imshow(output_image)
return out_scores, out_boxes, out_classes
```
Run the following cell on the "test.jpg" image to verify that your function is correct.
```
out_scores, out_boxes, out_classes = predict(sess, "test.jpg")
```
**Expected Output**:
<table>
<tr>
<td>
**Found 7 boxes for test.jpg**
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.60 (925, 285) (1045, 374)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.66 (706, 279) (786, 350)
</td>
</tr>
<tr>
<td>
**bus**
</td>
<td>
0.67 (5, 266) (220, 407)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.70 (947, 324) (1280, 705)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.74 (159, 303) (346, 440)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.80 (761, 282) (942, 412)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.89 (367, 300) (745, 648)
</td>
</tr>
</table>
The model you've just run is actually able to detect 80 different classes listed in "coco_classes.txt". To test the model on your own images:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the code cell above
4. Run the code and see the output of the algorithm!
If you were to run your session in a for loop over all your images, here's what you would get:
<center>
<video width="400" height="200" src="nb_images/pred_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Predictions of the YOLO model on pictures taken from a camera while driving around the Silicon Valley <br> Thanks [drive.ai](https://www.drive.ai/) for providing this dataset! </center></caption>
<font color='blue'>
**What you should remember**:
- YOLO is a state-of-the-art object detection model that is fast and accurate
- It runs an input image through a CNN which outputs a 19x19x5x85 dimensional volume.
- The encoding can be seen as a grid where each of the 19x19 cells contains information about 5 boxes.
- You filter through all the boxes using non-max suppression. Specifically:
- Score thresholding on the probability of detecting a class to keep only accurate (high probability) boxes
- Intersection over Union (IoU) thresholding to eliminate overlapping boxes
- Because training a YOLO model from randomly initialized weights is non-trivial and requires a large dataset as well as a lot of computation, we used previously trained model parameters in this exercise. If you wish, you can also try fine-tuning the YOLO model with your own dataset, though this would be a fairly non-trivial exercise.
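As a refresher on the IoU thresholding step above, here is a minimal pure-Python sketch of the overlap computation (corner-coordinate `(x1, y1, x2, y2)` convention assumed; the assignment's own `iou` helper plays this role in the real pipeline):

```python
def iou(box1, box2):
    """Intersection over Union of two axis-aligned boxes in (x1, y1, x2, y2) form."""
    # Corners of the intersection rectangle
    xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter_area = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
    box1_area = (box1[2] - box1[0]) * (box1[3] - box1[1])
    box2_area = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter_area / (box1_area + box2_area - inter_area)
```

Non-max suppression keeps the highest-scoring box and discards any remaining box whose IoU with it exceeds the chosen threshold.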
**References**: The ideas presented in this notebook came primarily from the two YOLO papers. The implementation here also took significant inspiration and used many components from Allan Zelener's github repository. The pretrained weights used in this exercise came from the official YOLO website.
- Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi - [You Only Look Once: Unified, Real-Time Object Detection](https://arxiv.org/abs/1506.02640) (2015)
- Joseph Redmon, Ali Farhadi - [YOLO9000: Better, Faster, Stronger](https://arxiv.org/abs/1612.08242) (2016)
- Allan Zelener - [YAD2K: Yet Another Darknet 2 Keras](https://github.com/allanzelener/YAD2K)
- The official YOLO website (https://pjreddie.com/darknet/yolo/)
**Car detection dataset**:
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">The Drive.ai Sample Dataset</span> (provided by drive.ai) is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. We are especially grateful to Brody Huval, Chih Hu and Rahul Patel for collecting and providing this dataset.
<a id='inizio'></a>
# Evaluating
In this notebook we'll cover the main topics around evaluating model performance and structuring the machine learning process.
<br><br>
This notebook will present the following topics:
- [Choosing the Right Estimator](#right_estimator)<a href='#right_estimator'></a> <br>
- [Confusion Matrix](#conf_matrix)<a href='#conf_matrix'></a> <br>
- [Scoring](#scoring)<a href='#scoring'></a>
- [Cross-Validation](#cross_validation)<a href='#cross_validation'></a>
- [Power-Tuning](#power_tuning)<a href='#power_tuning'></a>
- [Pipelining](#pipeling)<a href='#pipeling'></a>
<a id='right_estimator'></a>
### Choosing the right estimator
Typically, algorithm choice is dictated by a balance of factors: <br>
- The dimensionality of your data; <br>
- The geometric nature of your data; <br>
- The types of features used to represent your data; <br>
- The number of training samples you have at your disposal; <br>
- The required training and prediction speeds needed for your purposes; <br>
- The predictive accuracy level desired; <br>
- How configurable you need your model to be; <br>
The following flow-chart describes the circumstances under which you should use the different machine learning algorithms:
<img src='Algorithm Cheat Sheet.jpg'>
<img src='ScikitLearn_Map.png'>
<br>
[Click here to return to the top of the page](#inizio)
<a id='conf_matrix'></a>
### Confusion Matrix
A universal method you can use to study in depth how well any of your supervised learning predictors is doing is the **confusion matrix**. <br>
A confusion matrix displays your model's predicted (**testing set**) outputs against the true observational values.<br>
This helps you see how well your algorithm was able to generalize and identify specific target labels, along with which labels were often
confused. <br>
This can be helpful in increasing your accuracy, because you can then engineer additional features that help to better identify highly confusing targets, and then take another run through your data analysis pipeline. <br>
Traditionally, the predicted targets are aligned on the X-axis of the matrix, and the true values are aligned on the Y-axis. Let's say you have the following data:
```
import sklearn.metrics as metrics
y_true = [1,1,2,2,3,3] # Actual, observed testing dataset values
y_pred = [1,1,1,3,2,3] # Predicted values from your model
```
The true labels are encoded data representing cats, dogs, and monkeys, for the three values. <br>
You can compute a confusion matrix using SciKit-Learn as follows:
```
metrics.confusion_matrix(y_true, y_pred)
```
Perhaps a clearer representation of the data would look like this:
<img src='example.jpg'>
<br>
You can derive quite a bit of information from your confusion matrix. <br>
The first is how many actual cats, dogs, and monkeys you have. By summing up the values in a row, you'll know the true count of your data. <br>
You can do similarly with the columns, to see how many times your model predicted a certain target. <br>
By adding up all the values in the Predicted Dog column, we can see our model thought our testing dataset only had a single dog in it. <br>
An important thing to realize is that all of the non-diagonal elements
of the matrix correspond to misclassified targets.<br>
Given all this information, you're able to derive probabilities relating to how accurate your answers are.
<br>
<br>
Given the example above, your algorithm predicted there were two cats in the dataset, and there indeed were two cats. In fact, the two samples the algorithm believed to be cats turned out to be the actual cats. It looks like the model is very good at identifying cats.
<br>
<br>
On the other hand, there were two dogs in the dataset. The model somehow came to the conclusion that one of the monkeys was a dog, and didn't even arrive at two dog predictions. It looks like you trained a non-dog-friendly model. Regarding monkeys, there were two in the dataset. One of them, the algorithm predicted correctly. The other, the model thought was a dog. It looks like there is some level of confusion here between monkeys and dogs. This is a good indicator that you might consider adding additional features to your dataset, such as banana-affinity.
```
import matplotlib.pyplot as plt
columns = ['Cat', 'Dog', 'Monkey']
confusion = metrics.confusion_matrix(y_true, y_pred)
plt.imshow(confusion, cmap=plt.cm.Blues, interpolation='nearest')
plt.xticks([0,1,2], columns, rotation='vertical')
plt.yticks([0,1,2], columns)
plt.colorbar()
plt.show()
```
[Click here to return to the top of the page](#inizio)
<a id='scoring'></a>
### Scoring
Several concepts come into play when evaluating machine learning models, and they differ depending on whether we are using classification, regression, clustering and so on.<br>
Starting with classification models, it is important to understand that when a model tries to predict the label of a sample, the prediction can be identified as one of:
- **True positive**: when the item is correctly labeled as belonging to the positive class;
- **True negative**: when the item is correctly labeled as belonging to the negative class;
- **False positive**: when item is incorrectly labeled as belonging to the positive class;
- **False negative**: when item isn't labeled as belonging to the positive class but should have been.
In a classification task, the **precision** (also called _**positive predictive value**_) _for a class is the number of true positives divided by the total number of elements labeled as belonging to the positive class_ (i.e. the sum of true positives and false positives).<br>
**Recall** (also called _**sensitivity**_) _is defined as the number of true positives divided by the total number of elements that actually belong to the positive class_ (i.e. the sum of true positives and false negatives).<br>
<img src='Precisionrecall.svg'> <br>
A *precision* score of 1.0 for a class C means that every item labeled as belonging to class C does indeed belong to class C (but says nothing about the number of items from class C that were not labeled correctly) whereas a *recall* of 1.0 means that every item from class C was labeled as belonging to class C (but says nothing about how many other items were incorrectly also labeled as belonging to class C). <br>
The terms *positive* and *negative* refer to the classifier's prediction (sometimes known as the expectation), and the terms *true* and *false* refer to whether that prediction corresponds to the external judgment (sometimes known as the observation). <br>
Consider the *confusion matrix* from the earlier paragraph's example, and focus on the cat class. We can build this *table of confusion*: <br>
<img src='Table.jpg'>
With this table completed we can perform other metrics as **true negative rate** and **accuracy**:
<img src='accuracy.jpg'>
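Working through the table of confusion by hand makes these definitions concrete. The following sketch (the `one_vs_rest_metrics` helper is ours, not part of SciKit-Learn) recomputes precision, recall and accuracy for the cat class of the earlier cat/dog/monkey example:

```python
# Confusion matrix from the cat/dog/monkey example: rows = true, cols = predicted
cm = [[2, 0, 0],   # true cat
      [1, 0, 1],   # true dog
      [0, 1, 1]]   # true monkey

def one_vs_rest_metrics(cm, k):
    """Per-class precision, recall and accuracy, treating class k as 'positive'."""
    total = sum(sum(row) for row in cm)
    tp = cm[k][k]
    fn = sum(cm[k]) - tp                 # rest of the true-k row
    fp = sum(row[k] for row in cm) - tp  # rest of the predicted-k column
    tn = total - tp - fn - fp
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / total
    return precision, recall, accuracy
```

For the cat class (k=0) this gives a precision of 2/3, a recall of 1.0 and an accuracy of 5/6, matching the reasoning above.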
SciKit-Learn's metrics model helps you calculate many of these metrics automatically. <br>
Given the following setup:
```
import sklearn.metrics as metrics
y_true = [1,1,1,2,2,2] # Actual, observed testing dataset values
y_pred = [1,1,2,2,2,2] # Predicted values from your model
metrics.confusion_matrix(y_true, y_pred)
print("Precision:", metrics.precision_score(y_true, y_pred))
print("Recall:", round(metrics.recall_score(y_true, y_pred),2))
print("Accuracy:", round(metrics.accuracy_score(y_true, y_pred),2))
```
Another metric computing for classification model is **ROC Curve** (_receveir operating characteristic curve_), that is a graph showing the performance at all classification thresholds. The curve plots 2 parameters:<br>
- True Positive Rate;
- False Positive Rate.
**True Positive Rate** (TPR) is a synonym for recall and is therefore defined as follows: <br>
<img src='TPR.jpg'>
**False Positive Rate** (FPR) is defined as follows:
<img src='FPR.jpg'>
The following figure shows a typical ROC curve.
<img src='ROC.jpg'>
To compute the points in an ROC curve, we could evaluate a logistic regression model many times with different classification thresholds, but this would be inefficient. Fortunately, there's an efficient, sorting-based algorithm that can provide this information for us, called AUC. <br>
**AUC** stands for "Area under the ROC Curve." That is, AUC measures the entire two-dimensional area underneath the entire ROC curve (think integral calculus) from (0,0) to (1,1).
<img src='AUC.jpg'>
One way of interpreting AUC is as the probability that the model ranks a random positive example more highly than a random negative example. <br>
_AUC ranges in value from 0 to 1. A model whose predictions are 100% wrong has an AUC of 0.0; one whose predictions are 100% correct has an AUC of 1.0_.<br>
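To make the threshold sweep concrete, here is a minimal pure-Python sketch that traces the ROC curve point by point and integrates the area with the trapezoidal rule (score ties are ignored for simplicity; in practice you would use `sklearn.metrics.roc_curve` and `roc_auc_score`):

```python
def roc_points(y_true, scores):
    """(FPR, TPR) pairs obtained by sweeping the decision threshold."""
    pairs = sorted(zip(scores, y_true), reverse=True)  # most confident first
    p = sum(y_true)        # number of positives
    n = len(y_true) - p    # number of negatives
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / n, tp / p))
    return points

def auc(points):
    # Trapezoidal integration of the ROC curve
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))
```

A classifier that ranks all positives above all negatives traces the corner (0,1) and gets an AUC of 1.0; a perfectly inverted one gets 0.0, matching the interpretation above.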
To investigate other scoring methods for other machine learning algorithms, see the following links:
- [Regression metrics](https://scikit-learn.org/stable/modules/model_evaluation.html#regression-metrics)
- [Clustering metrics](https://scikit-learn.org/stable/modules/model_evaluation.html#clustering-metrics)
[Click here to return to the top of the page](#inizio)
<a id='cross_validation'></a>
### Cross Validation
We're already well aware of the importance of splitting our data into training and testing sets to validate the accuracy of our models by checking how well it fits the data. This method of avoiding overfitting introduces three issues:
- Each time we run train/test split, unless we set the random_state parameter, we're going to get back different accuracy scores;
- The second issue is that by withholding data from training, we essentially lose some of our training data. Machine learning is only as accurate as the data it's trained upon, so generally more data means better results. Neglecting to train our models on our hard-collected data is like refusing to take our rightful change at the bank;
- But the most important issue introduced is that with some of the more configurable estimators, such as SVC, we will probably end up running our model many times while tinkering with the various parameters, such as C and gamma for producing optimal results. By doing this, we will leak some information from our testing set into our training set. Our model, armed with these secret details about our testing set, might still perform poorly in the real-world if it overfit that data.
The way to overcome this is by using **cross_val_score()** method. <br>
This method takes as input our model along with our training dataset and performs K-fold cross validations on it. <br>
In other words, our training data is first cut into a number of 'K' sets. Then, 'K' versions of our model are trained, each using an independent K-1 of the 'K' available sets. Each model is evaluated with the remaining set, its **out-of-bag set**. <br>
Here is the code for cross validation score:
```
# 10-Fold Cross Validation on your training data
from sklearn.model_selection import cross_val_score
scores = cross_val_score(model, X_train, y_train, cv=10)
scores.mean()
```
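Under the hood, K-fold cross validation just partitions the sample indices into K held-out folds; here is a stdlib-only sketch (contiguous folds, no shuffling; SciKit-Learn's `KFold` iterator is the production tool):

```python
def kfold_indices(n_samples, k):
    """Yield (train_idx, test_idx) for k contiguous folds (no shuffling)."""
    # Distribute the remainder over the first n_samples % k folds
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size
```

Each of the K models is scored on its own held-out fold, and `cross_val_score` reports one score per fold.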
Cross validation allows us to use all the data we provide as both training and testing. <br>
Many resources online will recommend not even doing the extra step of splitting our data into a training and testing set, and just feeding the lot directly into our cross validator. There are advantages and disadvantages to this:
- The main advantage is the overall simplicity of your process;
- The disadvantage is that it still is possible for some information to leak into your training dataset, as we discussed above with the SVC example. This information leak might even occur prior to us fitting our model, for example at the point of transforming our data using isomap or principal component analysis.
In the wild, the best process to use, depending on how many samples we have at our disposal and the machine learning algorithms we are using, is either of the following: <br>
1) Split our data into training, validation, and testing sets; <br>
2) Setup a pipeline, and fit it with our **training** set; <br>
3) Assess the accuracy of its output using our **validation** set; <br>
4) Fine tune this accuracy by adjusting the hyper-parameters of our pipeline; <br>
5) When we are comfortable with its accuracy, finally evaluate our pipeline with the **testing** set. <br>
<br>
OR <br>
<br>
1) Split our data into training and testing sets; <br>
2) Setup a pipeline with CV and fit/score it with our **training** set; <br>
3) Fine tune this accuracy by adjusting the hyper-parameters of our pipeline; <br>
4) When we are comfortable with its accuracy, finally evaluate our pipeline with the **testing** set. <br>
<br>
<br>
Useful link:
- [Cross-Validation iterators](https://scikit-learn.org/stable/modules/cross_validation.html#cross-validation-iterators)
[Click here to return to the top of the page](#inizio)
<a id='power_tuning'></a>
### Power Tuning
The method used for parameter tuning is **GridSearchCV**. <br>
In its simplest form, GridSearchCV works by taking in an estimator, a grid of parameters you want optimized, and your cv split value. This is the example from [SciKit-Learn's API page](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html):
```
>>> from sklearn import svm, datasets
>>> from sklearn.model_selection import GridSearchCV
>>> iris = datasets.load_iris()
>>> parameters = {'kernel':('linear', 'rbf'), 'C':[1, 5, 10]}
>>> svc = svm.SVC(gamma="scale")
>>> clf = GridSearchCV(svc, parameters, cv=3)
>>> clf.fit(iris.data, iris.target)
...
GridSearchCV(cv=3, error_score=...,
estimator=SVC(C=1.0, cache_size=..., class_weight=..., coef0=...,
decision_function_shape='ovr', degree=..., gamma=...,
kernel='rbf', max_iter=-1, probability=False,
random_state=None, shrinking=True, tol=...,
verbose=False),
fit_params=None, iid=..., n_jobs=None,
param_grid=..., pre_dispatch=..., refit=..., return_train_score=...,
scoring=..., verbose=...)
```
In this example, GridSearchCV is being used to optimize a support vector classifier model. <br>
Since the exact parameters have been specified, GridSearchCV will build a table of every combination (but not permutation) of the available parameters and cross-validate each one separately: <br>
<img src='power_tuning.jpg'> <br>
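That "table of every combination" is simply the Cartesian product of the parameter lists. A stdlib sketch, reusing the grid from the example above:

```python
from itertools import product

# The grid from the GridSearchCV example above
param_grid = {'kernel': ('linear', 'rbf'), 'C': [1, 5, 10]}

keys = sorted(param_grid)
combos = [dict(zip(keys, values))
          for values in product(*(param_grid[k] for k in keys))]
print(len(combos))  # 2 kernels x 3 C values = 6 candidate settings
```

GridSearchCV then cross-validates each of these candidate settings and keeps the one with the best score.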
In addition to explicitly defining the parameters you want tested, you can also use randomized parameter optimization with SciKit-Learn's [RandomizedSearchCV](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.RandomizedSearchCV.html) class. The semantics are a bit different here.<br> First, instead of passing a list of grid objects (with GridSearchCV, you can actually perform multiple grid optimizations, consecutively), this time you pass in your parameters as a single dictionary that holds either possible, discrete parameter values or distributions over them.
```
import scipy.stats
from sklearn.model_selection import RandomizedSearchCV

parameter_dist = {
    'C': scipy.stats.expon(scale=100),
    'kernel': ['linear'],
    'gamma': scipy.stats.expon(scale=.1),
}
classifier = RandomizedSearchCV(model, parameter_dist)
classifier.fit(iris.data, iris.target)
```
RandomizedSearchCV also takes in an optional n_iter parameter you can use to control the number of parameter settings that are sampled. Regardless of the cross validation search tool you end up using, after fitting, all of the methods exposed by the class are run using the estimator that maximized the score of the out-of-bag data. So in the examples above, the .fit() method along with any subsequent methods, such as .predict(), .score(), and .transform(), are all executed and return values as if they were called on the best found estimator directly.<br>
SciKit-Learn has [a very nice example](https://scikit-learn.org/stable/auto_examples/model_selection/plot_randomized_search.html#sphx-glr-auto-examples-model-selection-plot-randomized-search-py) that compares the execution times as well as scoring results of randomized search versus grid search, while trying to optimize various random forest
parameters.
[Click here to return to the top of the page](#inizio)
<a id='pipeling'></a>
### Pipelining
SciKit-Learn provides a pipelining class that wraps around your entire data analysis pipeline from start to finish, and allows you to interact with the pipeline as if it were a single white-box, configurable estimator. <br>
The other added benefit is that once your pipeline has been built, since the pipeline inherits from the estimator base class, you can use it pretty much anywhere you'd use regular estimators, including in your cross validator method. Doing so, you can simultaneously fine tune the parameters of each of the estimators and predictors that comprise your data-analysis pipeline. <br>
If you don't want to encounter errors, there are a few rules you must abide by while using SciKit-Learn's pipeline: <br>
- Every intermediary model, or step within the pipeline must be a transformer. That means its class must implement both the .fit() and the .transform() methods. This is rather important, as the output from each step will serve as the input to the subsequent step;
- The very last step in your analysis pipeline only needs to implement the .fit() method, since it will not be feeding data into another step.
The code to get up and running with your own pipelines looks like this:
```
from sklearn import svm
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline

svc = svm.SVC(kernel='linear')
pca = PCA(svd_solver='randomized')  # RandomizedPCA was folded into PCA
pipeline = Pipeline([('pca', pca), ('svc', svc)])
pipeline.set_params(pca__n_components=5, svc__C=1, svc__gamma=0.0001)
pipeline.fit(X, y)  # X, y: your training data
```
Notice that when you define parameters, you have to lead with the name you specified for that parameter when you added it to your pipeline, followed by two underscores and the parameter name. This is important because there are many estimators that share the same parameter names within SciKit-Learn's API. Without this, there would be ambiguity. <br>
The pipeline class exposes a .named_steps attribute, which is a dictionary containing the estimator names you specified as keys. <br>
You can use it to gain access to the underlying estimator steps within your pipeline. Besides directly specifying estimators, you can also have feature unions and nested pipelines as well! On top of that, you can implement your own custom transformers as a minimal class, so long as you provide end-points for .fit(), and .transform().
**Some useful links:**
- [Emanuel Ferm - Cheat Sheet](http://eferm.com/machine-learning-cheat-sheet/)
- [Estimator Parameter Search-Space](https://scikit-learn.org/stable/modules/grid_search.html#grid-search)
- [Getting Crazy With Pipelining](http://zacstewart.com/2014/08/05/pipelines-of-featureunions-of-pipelines.html)
- [Overfitting](https://en.wikipedia.org/wiki/Overfitting)
This paragraph concludes the notebook "Evaluating" and the Python Course.
<br><br>
- [Click here to return to the top of the page](#inizio)
<br><br>
If you have any doubts, you can write to us on Teams!<br>
See you soon!
```
import os
import sys
import random
import numpy as np
import pandas as pd
from dotenv import load_dotenv
load_dotenv(".env")
from src.domain import Track, User, Setlist
from src.driver import SampleDriverImpl
from src.repository import SampleRepository
from src.solver import QuboSolver
from IPython.display import Image, HTML
```
## Defining utility functions
```
def millsec_to_str(millSeconds: int, hour: bool = False):
m, s = divmod(millSeconds / 1000, 60)
h, m = divmod(m, 60)
if hour:
str_duration = "%d:%02d:%02d" % (h, m, s)
else:
str_duration = "%02d:%02d" % (m, s)
return str_duration
def url_to_img(src: str):
return f'<img src="{src}" />'
def display_setlist(setlist: Setlist):
time_df = pd.DataFrame([[millsec_to_str(setlist.total_time, hour=True)]], columns=["合計"], index=["時間"])
preference_df = pd.DataFrame(
[setlist.scores + [setlist.score_sum, setlist.score_avg, setlist.score_var]],
columns=[[f"ユーザー{i+1}" for i in range(len(setlist.scores))] + ['合計', '平均', '分散']],
index=['好み']
)
tracks_df = pd.json_normalize(map(
lambda t: dict(
{"image": t.small_image, "アーティスト": t.artist, "曲名": t.name, "時間": millsec_to_str(t.duration_ms)},
**{ f"ユーザー{i+1}": "⭐" * v for i, v in enumerate(t.p) }),
setlist.tracks
))
display(time_df)
display(preference_df)
display(HTML(tracks_df.to_html(escape=False, formatters=dict(image=url_to_img))))
```
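The duration formatting done by `millsec_to_str` can be sanity-checked in isolation (the helper is restated here in condensed form so the snippet runs on its own):

```python
def millsec_to_str(mill_seconds, hour=False):
    # Same logic as the notebook's helper, condensed into one function
    m, s = divmod(mill_seconds / 1000, 60)
    h, m = divmod(m, 60)
    return ("%d:%02d:%02d" % (h, m, s)) if hour else ("%02d:%02d" % (m, s))

print(millsec_to_str(125_000))               # 02:05
print(millsec_to_str(3_725_000, hour=True))  # 1:02:05
```

The `hour=True` form is used for the setlist total, the short form for individual tracks.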
## Preparing sample track data (200 tracks prepared in advance)
```
sample_driver = SampleDriverImpl()
sample_repository = SampleRepository(driver=sample_driver)
sample_tracks = sample_repository.get_tracks("sample")
sample_tracks_df = pd.json_normalize(map(
lambda t: {"ID": t.id, "アーティスト": t.artist, "曲名": t.name, "時間[ms]": millsec_to_str(t.duration_ms)},
sample_tracks
))
display(sample_tracks_df)
```
## Preparing a dataset with each participant's preferences
```
M = 5      # number of participants
n = 100    # number of tracks in each participant's playlist
T = 1800   # time limit [seconds]
users = [] # list of participants
for idx in range(M):
    sampled = random.sample(sample_tracks, n)  # randomly pick n tracks per participant
    tracks = [Track(id=t.id, name=t.name, artist=t.artist, small_image=t.small_image, duration_ms=t.duration_ms, priority=random.randint(1, 3)) for t in sampled]
    users.append(User(id=f"user{idx+1}", tracks=tracks))
qubo_solver = QuboSolver(users=users, time_limit=T)
evaluated_tracks_df = pd.json_normalize(map(
lambda t: dict(
{"ID": t.id, "アーティスト": t.artist, "曲名": t.name, "時間": millsec_to_str(t.duration_ms)},
**{ f"ユーザー{i+1}": "⭐" * v for i, v in enumerate(t.p) }
),
qubo_solver.candidates,
))
display(evaluated_tracks_df)
```
## Running the Amplify Annealing Engine
```
setlist = qubo_solver.solve(timeout=5000)
display_setlist(setlist)
```
In this notebook we implement an L1-norm based convergence criterion for training.
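The convergence test used inside the training loop below can be isolated into a small stateful helper; this is a sketch of the same logic (the name `make_l1_convergence_check` is ours, not part of DeePyMoD): after a warm-up phase, training stops once the L1 norm of the scaled coefficient vector has risen a relative `threshold` above its running minimum.

```python
def make_l1_convergence_check(threshold=1e-2, warmup=1000):
    """Converged once the L1 norm of the scaled coefficients has risen
    more than `threshold` (relative) above its running minimum."""
    state = {"minimum": float("inf")}

    def check(iteration, l1_norm):
        if iteration <= warmup:  # skip the noisy early phase
            return False
        state["minimum"] = min(state["minimum"], l1_norm)
        return l1_norm / state["minimum"] > 1 + threshold

    return check
```

The idea is that the L1 norm of the scaled coefficients shrinks while the model is still learning and starts to creep back up once it begins to overfit, so the first sustained rise signals a good stopping point.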
```
# Imports
import numpy as np
import torch
from phimal_utilities.data import Dataset
from phimal_utilities.data.burgers import BurgersDelta
from DeePyMoD_SBL.deepymod_torch.library_functions import library_1D_in
from DeePyMoD_SBL.deepymod_torch.DeepMod import DeepModDynamic
from sklearn.linear_model import LassoLarsIC
import time
from DeePyMoD_SBL.deepymod_torch.output import Tensorboard, progress
from DeePyMoD_SBL.deepymod_torch.losses import reg_loss, mse_loss, l1_loss
from DeePyMoD_SBL.deepymod_torch.sparsity import scaling, threshold
from numpy import pi
import matplotlib.pyplot as plt
import seaborn as sns
from phimal_utilities.analysis import load_tensorboard
sns.set()
# Settings and parameters
if torch.cuda.is_available():
torch.set_default_tensor_type('torch.cuda.FloatTensor')
np.random.seed(42)
torch.manual_seed(42)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```
# Making data
```
v = 0.1
A = 1.0
# Making grid
x = np.linspace(-3, 4, 100)
t = np.linspace(0.5, 5.0, 50)
x_grid, t_grid = np.meshgrid(x, t, indexing='ij')
# Making data
dataset = Dataset(BurgersDelta, v=v, A=A)
X_train, y_train = dataset.create_dataset(x_grid.reshape(-1, 1), t_grid.reshape(-1, 1), n_samples=0, noise=0.2, random=True)
theta = dataset.library(x_grid.reshape(-1, 1), t_grid.reshape(-1, 1), poly_order=2, deriv_order=2)
dt = dataset.time_deriv(x_grid.reshape(-1, 1), t_grid.reshape(-1, 1))
```
# Training function
```
def train(model, data, target, optimizer, max_iterations, loss_func_args):
'''Trains the deepmod model with MSE, regression and l1 cost function. Updates model in-place.'''
start_time = time.time()
number_of_terms = [coeff_vec.shape[0] for coeff_vec in model(data)[3]]
board = Tensorboard(number_of_terms)
l1_minimum = torch.tensor(100)
threshold = 1e-2
converged = False
# Training
print('| Iteration | Progress | Time remaining | Cost | MSE | Reg | LL |')
for iteration in torch.arange(0, max_iterations + 1):
# Calculating prediction and library and scaling
prediction, time_deriv_list, sparse_theta_list, coeff_vector_list, theta = model(data)
coeff_vector_scaled_list = scaling(coeff_vector_list, sparse_theta_list, time_deriv_list)
# Calculating loss
loss_mse = mse_loss(prediction, target)
loss_reg = reg_loss(time_deriv_list, sparse_theta_list, coeff_vector_list)
loss = torch.sum(2 * torch.log(2 * pi * loss_mse) + loss_reg / loss_mse)
# Writing
if iteration % 20 == 0:
# Write progress to command line
progress(iteration, start_time, max_iterations, loss.item(), torch.sum(loss_mse).item(), torch.sum(loss_reg).item(), torch.sum(loss_reg).item())
# Calculate error for theta
theta_true = loss_func_args['library']
dt_true = loss_func_args['time_deriv']
mae_library = torch.mean(torch.abs(theta - theta_true), dim=0)
mae_dt = torch.mean(torch.abs(dt_true - time_deriv_list[0]), dim=0)
# Write to tensorboard
board.write(iteration, loss, loss_mse, loss_reg, loss_reg, coeff_vector_list, coeff_vector_scaled_list, mae_library=mae_library, mae_time_deriv=mae_dt)
# Checking convergence
if iteration > 1000:
l1_norm = torch.sum(torch.abs(coeff_vector_scaled_list[0]))
if l1_norm < l1_minimum:
l1_minimum = l1_norm
converged = (l1_norm / l1_minimum).item() > 1 + threshold
if converged:
progress(iteration, start_time, max_iterations, loss.item(), torch.sum(loss_mse).item(), torch.sum(loss_reg).item(), torch.sum(loss_reg).item())
break
# Optimizer step
optimizer.zero_grad()
loss.backward()
optimizer.step()
board.close()
```
# Running
```
# Running deepmod
config = {'n_in': 2, 'hidden_dims': [30, 30, 30, 30, 30], 'n_out': 1, 'library_function':library_1D_in, 'library_args':{'poly_order':2, 'diff_order': 2}, 'sparsity_estimator': LassoLarsIC(fit_intercept=False)}
model = DeepModDynamic(**config)
optimizer = torch.optim.Adam(model.network_parameters(), betas=(0.99, 0.999), amsgrad=True)
train(model, X_train, y_train, optimizer, 6000, loss_func_args={'library':torch.tensor(theta) ,'time_deriv': torch.tensor(dt)})
from phimal_utilities.analysis import load_tensorboard
df = load_tensorboard('runs/Apr28_12-27-20_16939e04ebf6/')
df.keys()
plt.semilogy(df.index, df['MSE_0'])
plt.semilogy(df.index, df['Regression_0'])
coeff_keys = [key for key in df.keys() if key[:5] == 'coeff']
scaled_coeff_keys = [key for key in df.keys() if key[:6] == 'scaled']
plt.plot(df[coeff_keys])
plt.ylim([-1.5, 1.5])
plt.plot(df[scaled_coeff_keys])
plt.ylim([-1.5, 1.5])
plt.semilogy(df[scaled_coeff_keys].abs().sum(axis=1))
plt.ylim([1.5, 3])
df[scaled_coeff_keys].abs().sum(axis=1).iloc[-1] / df[scaled_coeff_keys].abs().sum(axis=1).min()
```
Looking good, now let's apply a thresholder:
```
prediction, time_deriv_list, sparse_theta_list, coeff_vector_list, theta = model(X_train)
model.constraints.sparsity_mask = model.calculate_sparsity_mask(theta, time_deriv_list)
```
and train another run:
```
train(model, X_train, y_train, optimizer, 5000, loss_func_args={'library':torch.tensor(theta) ,'time_deriv': torch.tensor(dt)})
model.constraints.coeff_vector[0]
df = load_tensorboard('runs/Apr28_12-45-46_16939e04ebf6/')
coeff_keys = [key for key in df.keys() if key[:5] == 'coeff']
scaled_coeff_keys = [key for key in df.keys() if key[:6] == 'scaled']
plt.semilogy(df.index, df['MSE_0'])
plt.semilogy(df.index, df['Regression_0'])
plt.plot(df[coeff_keys])
plt.ylim([-1.5, 1.5])
plt.plot(df[scaled_coeff_keys])
plt.ylim([-1.5, 1.5])
plt.semilogy(df[scaled_coeff_keys].abs().sum(axis=1))
plt.ylim([1.5, 2])
df[scaled_coeff_keys].abs().sum(axis=1).iloc[-1] / df[scaled_coeff_keys].abs().sum(axis=1).min()
prediction, time_deriv_list, sparse_theta_list, coeff_vector_list, theta = model(X_train)
model.constraints.sparsity_mask = model.calculate_sparsity_mask(theta, time_deriv_list)
model.constraints.sparsity_mask
train(model, X_train, y_train, optimizer, 5000, loss_func_args={'library':torch.tensor(theta) ,'time_deriv': torch.tensor(dt)})
df = load_tensorboard('runs/Apr28_12-52-49_16939e04ebf6/')
coeff_keys = [key for key in df.keys() if key[:5] == 'coeff']
scaled_coeff_keys = [key for key in df.keys() if key[:6] == 'scaled']
plt.semilogy(df.index, df['MSE_0'])
plt.semilogy(df.index, df['Regression_0'])
plt.plot(df[coeff_keys])
plt.ylim([-1.5, 1.5])
plt.plot(df[scaled_coeff_keys])
plt.ylim([-1.5, 1.5])
plt.semilogy(df[scaled_coeff_keys].abs().sum(axis=1))
plt.ylim([1.5, 2])
df[scaled_coeff_keys].abs().sum(axis=1).iloc[-1] / df[scaled_coeff_keys].abs().sum(axis=1).min() - 1
prediction, time_deriv_list, sparse_theta_list, coeff_vector_list, theta = model(X_train)
model.constraints.sparsity_mask = model.calculate_sparsity_mask(theta, time_deriv_list)
model.constraints.sparsity_mask
df[scaled_coeff_keys].iloc[-1]
df[coeff_keys].iloc[-1]
from sklearn.linear_model import LassoLarsCV, LassoCV, ElasticNetCV, LarsCV, OrthogonalMatchingPursuitCV, ARDRegression
final_test = LassoLarsIC(fit_intercept=False)
#final_test = LassoLarsCV(fit_intercept=False,)
#final_test = LassoCV(fit_intercept=False)
#final_test = ElasticNetCV(fit_intercept=False, l1_ratio=np.linspace(0.1, 1, 10))
#final_test = LarsCV(fit_intercept=False)
#final_test = OrthogonalMatchingPursuitCV(fit_intercept=False)
#final_test = ARDRegression(fit_intercept=True)
dt = (time_deriv_list[0] / torch.norm(time_deriv_list[0])).detach().cpu().numpy()
theta_normed = (theta / torch.norm(theta, dim=0, keepdim=True)).detach().cpu().numpy()
final_test.fit(theta_normed, dt).coef_
model.constraints.sparsity_mask[0].cpu().numpy()
final_test.fit(theta_normed[:, model.constraints.sparsity_mask[0].cpu().numpy()], dt).coef_
time_derivs_normed = [(time_deriv / torch.norm(time_deriv)).detach().cpu().numpy() for time_deriv in time_deriv_list]
theta_normed = (theta / torch.norm(theta, dim=0, keepdim=True)).detach().cpu().numpy()
```
So it seems to work :), let's put it all in one function:
# Putting it all together
```
def train(model, data, target, optimizer, max_iterations, loss_func_args):
'''Trains the deepmod model with MSE, regression and l1 cost function. Updates model in-place.'''
start_time = time.time()
number_of_terms = [coeff_vec.shape[0] for coeff_vec in model(data)[3]]
board = Tensorboard(number_of_terms)
l1_minimum = torch.tensor(100)
coeffs_converged = False
# Training
print('| Iteration | Progress | Time remaining | Cost | MSE | Reg | LL |')
for iteration in torch.arange(0, max_iterations + 1):
# Calculating prediction and library and scaling
prediction, time_deriv_list, sparse_theta_list, coeff_vector_list, theta = model(data)
coeff_vector_scaled_list = scaling(coeff_vector_list, sparse_theta_list, time_deriv_list)
# Calculating loss
loss_mse = mse_loss(prediction, target)
loss_reg = reg_loss(time_deriv_list, sparse_theta_list, coeff_vector_list)
loss = torch.sum(2 * torch.log(2 * pi * loss_mse) + loss_reg / loss_mse)
# Writing
if iteration % 50 == 0:
# Write progress to command line
progress(iteration, start_time, max_iterations, loss.item(), torch.sum(loss_mse).item(), torch.sum(loss_reg).item(), torch.sum(loss_reg).item())
# Before writing to tensorboard, we need to fill the missing values with 0
coeff_vectors_padded = [torch.zeros(mask.size()).masked_scatter_(mask, coeff_vector.squeeze()) for mask, coeff_vector in zip(model.constraints.sparsity_mask, coeff_vector_list)]
scaled_coeff_vectors_padded = [torch.zeros(mask.size()).masked_scatter_(mask, coeff_vector.squeeze()) for mask, coeff_vector in zip(model.constraints.sparsity_mask, coeff_vector_scaled_list)]
board.write(iteration, loss, loss_mse, loss_reg, loss_reg, coeff_vectors_padded, scaled_coeff_vectors_padded)
# Checking convergence
if iteration > loss_func_args['initial_conv_check']:
l1_norm = torch.sum(torch.abs(coeff_vector_scaled_list[0]))
if l1_norm < l1_minimum:
l1_minimum = l1_norm
coeffs_converged = (l1_norm / l1_minimum).item() > 1 + loss_func_args['threshold']
catch_condition = (iteration >= loss_func_args['start_sparsity_update']) and (iteration % loss_func_args['sparsity_update_period'] == 0)
# Updating sparsity mask
if coeffs_converged or catch_condition:
with torch.no_grad():
model.constraints.sparsity_mask = model.calculate_sparsity_mask(theta, time_deriv_list)
# Optimizer step
optimizer.zero_grad()
loss.backward()
optimizer.step()
board.close()
# Running deepmod
config = {'n_in': 2, 'hidden_dims': [30, 30, 30, 30, 30], 'n_out': 1, 'library_function':library_1D_in, 'library_args':{'poly_order':2, 'diff_order': 2}, 'sparsity_estimator': LassoLarsIC(fit_intercept=False)}
model = DeepModDynamic(**config)
optimizer = torch.optim.Adam(model.network_parameters(), betas=(0.99, 0.999), amsgrad=True)
train(model, X_train, y_train, optimizer, 15000, loss_func_args={'initial_conv_check': 1000, 'threshold': 1e-3, 'start_sparsity_update': 2000, 'sparsity_update_period': 500})
model.constraints.sparsity_mask
df = load_tensorboard('runs/Apr28_14-44-16_16939e04ebf6/')
coeff_keys = [key for key in df.keys() if key[:5] == 'coeff']
scaled_coeff_keys = [key for key in df.keys() if key[:6] == 'scaled']
plt.semilogy(df.index, df['MSE_0'])
plt.semilogy(df.index, df['Regression_0'])
plt.plot(df[coeff_keys])
plt.ylim([-1.5, 1.5])
plt.plot(df[scaled_coeff_keys])
plt.ylim([-1.5, 1.5])
plt.semilogy(df[scaled_coeff_keys].abs().sum(axis=1))
plt.ylim([1.5, 2])
df[scaled_coeff_keys].iloc[-1]
model.constraints.coeff_vector
model.sparsity_estimator.coef_
def train(model, data, target, optimizer, max_iterations, loss_func_args):
'''Trains the deepmod model with MSE, regression and l1 cost function. Updates model in-place.'''
start_time = time.time()
number_of_terms = [coeff_vec.shape[0] for coeff_vec in model(data)[3]]
board = Tensorboard(number_of_terms)
l1_minimum = torch.tensor(100)
coeffs_converged = False
# Training
print('| Iteration | Progress | Time remaining | Cost | MSE | Reg | LL |')
for iteration in torch.arange(0, max_iterations + 1):
# Calculating prediction and library and scaling
prediction, time_deriv_list, sparse_theta_list, coeff_vector_list, theta = model(data)
coeff_vector_scaled_list = scaling(coeff_vector_list, sparse_theta_list, time_deriv_list)
# Calculating loss
loss_mse = mse_loss(prediction, target)
loss_reg = reg_loss(time_deriv_list, sparse_theta_list, coeff_vector_list)
loss = torch.sum(2 * torch.log(2 * pi * loss_mse) + loss_reg / loss_mse)
# Writing
if iteration % 50 == 0:
# Write progress to command line
progress(iteration, start_time, max_iterations, loss.item(), torch.sum(loss_mse).item(), torch.sum(loss_reg).item(), torch.sum(loss_reg).item())
# Before writing to tensorboard, we need to fill the missing values with 0
coeff_vectors_padded = [torch.zeros(mask.size()).masked_scatter_(mask, coeff_vector.squeeze()) for mask, coeff_vector in zip(model.constraints.sparsity_mask, coeff_vector_list)]
scaled_coeff_vectors_padded = [torch.zeros(mask.size()).masked_scatter_(mask, coeff_vector.squeeze()) for mask, coeff_vector in zip(model.constraints.sparsity_mask, coeff_vector_scaled_list)]
board.write(iteration, loss, loss_mse, loss_reg, loss_reg, coeff_vectors_padded, scaled_coeff_vectors_padded)
# Checking convergence
if iteration > loss_func_args['initial_conv_check']:
l1_norm = torch.sum(torch.abs(coeff_vector_scaled_list[0]))
if l1_norm < l1_minimum:
l1_minimum = l1_norm
coeffs_converged = (l1_norm / l1_minimum).item() > 1 + loss_func_args['threshold']
catch_condition = (iteration >= loss_func_args['start_sparsity_update']) and (iteration % loss_func_args['sparsity_update_period'] == 0)
# Updating sparsity mask
if coeffs_converged or catch_condition:
with torch.no_grad():
model.constraints.sparsity_mask = model.calculate_sparsity_mask(theta, time_deriv_list)
# Optimizer step
optimizer.zero_grad()
loss.backward()
optimizer.step()
board.close()
```
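The convergence check in `train` tracks the running minimum of the scaled coefficients' L1 norm and stops once the current norm rebounds more than `threshold` (relatively) above that minimum. A minimal, framework-free sketch of the same stopping rule (the function name and the `warmup` argument, standing in for `initial_conv_check`, are illustrative):

```python
def l1_converged(l1_history, threshold=1e-3, warmup=2):
    """Return True once the L1 norm rises more than `threshold`
    (relatively) above its running minimum, mirroring train()."""
    l1_minimum = float("inf")
    for i, l1 in enumerate(l1_history):
        if l1 < l1_minimum:
            l1_minimum = l1
        if i >= warmup and l1 / l1_minimum > 1 + threshold:
            return True
    return False

# A norm that keeps shrinking never triggers the stop...
print(l1_converged([3.0, 2.5, 2.0, 1.9]))  # False
# ...but one that rebounds past the minimum by >0.1% does.
print(l1_converged([3.0, 2.0, 1.5, 1.7]))  # True
```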
| github_jupyter |
```
import math
def findGCD(seq):
gcd = seq[0]
for i in range(1,len(seq)):
gcd=math.gcd(gcd, seq[i])
return gcd
def findSignature(seq):
nonzero_seq = [d for d in seq if d!=0]
if len(nonzero_seq)==0:
return seq
sign = 1 if nonzero_seq[0]>0 else -1
gcd = findGCD(seq)
return [sign*x//gcd for x in seq]
def findDerivative(seq):
return [0] if len(seq)<=1 else [seq[i]-seq[i-1] for i in range(1,len(seq))]
def addAll(seq, node, acc):
if 'value' in node:
acc.append( ( seq, node['value'] ) )
for key in node:
if key != 'value':
addAll(seq + [key], node[key], acc)
class prefixTree:
def __init__(self):
self.data={}
self.puts=0
self.nodes=0
def put(self, seq, value):
node=self.data
nodeCreated=False
for i in range(0,len(seq)):
item=seq[i]
if not item in node:
node[item]={}
if 'value' in node:
del node['value']
self.nodes+=1
nodeCreated=True
node=node[item]
if nodeCreated:
node['value']=value
self.puts+=1
elif 'value' in node:
node['value']=max(node['value'], value)
def prefix(self, seq):
matches=[]
node=self.data
for i in range(0,len(seq)):
item=seq[i]
if item in node:
node=node[item]
else:
return matches
addAll(seq, node, matches)
return matches
def hasPrefix(self, seq):
node=self.data
for i in range(0,len(seq)):
item=seq[i]
if item in node:
node=node[item]
else:
return False
return True
from functools import reduce
next_elm = []
def findNext(seq, trie):
while True:
nonZeroIndex =-1
for i in range(0,len(seq)):
if seq[i] != 0:
nonZeroIndex = i
break
if nonZeroIndex < 0:
return 0
signature = findSignature(seq)
matches = trie.prefix( signature )
matches = filter(lambda x: len(x[0])>len(signature), matches)
item = next(matches, None)
if item is not None:
best = reduce(lambda a, b: a if a[1]>b[1] else b if b[1]>a[1] else a if len(b[0])<=len(a[0]) else b, matches, item)
nextElement = best[0][len(seq)]
nextElement *= seq[nonZeroIndex]//signature[nonZeroIndex]
return nextElement
if len(seq) <= 3:
break
seq = seq[1:]
return None
def findNextAndDerive(seq, trie):
nextElement=findNext(seq, trie)
if nextElement is None:
der=findDerivative(seq)
if len(der)<=3:
return None
nextElement=findNextAndDerive(der, trie)
if nextElement is None:
return None
return seq[len(seq)-1]+nextElement
return nextElement
##### import datasets #####
import pandas as pd
#train_df= pd.read_csv('./data/train.csv', index_col="Id", nrows=100)
test_df = pd.read_csv('/home/nastya/Desktop/uData/data/test.csv', index_col="Id")
train_df= pd.read_csv('/home/nastya/Desktop/uData/data/train.csv', index_col="Id")
train_df = train_df['Sequence'].to_dict()
test_df = test_df['Sequence'].to_dict()
#seq_train = {0: [1 for x in range(0,400)]}
seq_train = {}
seq_test = {}
#seq_test = {0: [1 for x in range(0,400)]}
for key in train_df:
seq1 = train_df[key]
seq1 = [int(x) for x in seq1.split(',')]
seq_train[key] = seq1
for key in test_df:
seq2 = test_df[key]
seq2 = [int(x) for x in seq2.split(',')]
seq_test[key] = seq2
### our data is dict ###
### creating trie for test data
## note: do not build both tries in the same run
import json
trieTest = prefixTree()
if True:
for id in seq_test:
der_test = seq_test[id]
for derAttempts in range(4):
test_seq = der_test
firstInTrie = False
for subseqAttempts in range(4-derAttempts):
while len(test_seq)>0 and test_seq[0] == 0:
test_seq = test_seq[1:]
signature = findSignature(test_seq)
if trieTest.hasPrefix( signature ):
if subseqAttempts == 0:
firstInTrie = True
break
trieTest.put( signature, len(test_seq)*100//len(der_test) )
if len(test_seq) <= 3:
break
test_seq = test_seq[1:]
if firstInTrie:
break
der_test = findDerivative(der_test)
### creating trie for train data
trieTrain = prefixTree()
if True:
for id in seq_train:
der_train = seq_train[id]
for derAttempts in range(4):
train_seq = der_train
firstInTrie = False
for subseqAttempts in range(4-derAttempts):
while len(train_seq)>0 and train_seq[0] == 0:
train_seq = train_seq[1:]
signature = findSignature(train_seq)
if trieTrain.hasPrefix( signature ):
if subseqAttempts == 0:
firstInTrie = True
break
trieTrain.put( signature, len(train_seq)*100//len(der_train) )  # weight each node by len(train_seq)*100//len(der_train)
if len(train_seq) <= 3:
break
train_seq = train_seq[1:]
if firstInTrie:
break
der_train = findDerivative(der_train)
#### accuracy for test dataset
total=0
guessed=0
for key in test_df:
data_pred_list = list(map(int, test_df[key].split(',')))
data_pred = data_pred_list[0:len(data_pred_list)-1]
target = data_pred_list[-1]
next_elm = findNextAndDerive(data_pred, trieTest)
total += 1
if next_elm is None:
next_elm = 0
else:
if next_elm == target:
guessed += 1
print('Total %d' %total)
print('Guessed %d' %guessed)
print('Percent %d' %int(guessed*100//total))
total=0
guessed=0
with open('/home/nastya/Desktop/uData/data/result_prefic_trie.csv', 'w+') as output:
output.write('"Id","Last"\n')
for id in train_df:
der = list(map(int, train_df[id].split(',')))
nextElement = findNextAndDerive(der, trieTrain)
output.write(str(id))
output.write(',')
total += 1
if nextElement is None:
output.write('0')
else:
output.write(str(nextElement))
guessed+=1
output.write('\n')
print('Total %d' %total)
print('Guessed %d' %guessed)
print('Percent %d' %int(guessed*100//total))
```
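For reference, the sequence-normalisation helpers above can be exercised in isolation. The definitions below are trimmed, renamed copies of `findGCD`, `findSignature` and `findDerivative`:

```python
import math

def find_gcd(seq):
    g = seq[0]
    for x in seq[1:]:
        g = math.gcd(g, x)
    return g

def find_signature(seq):
    # Canonical form: divide out the GCD and flip signs so the
    # first non-zero entry is positive.
    nonzero = [d for d in seq if d != 0]
    if not nonzero:
        return seq
    sign = 1 if nonzero[0] > 0 else -1
    g = find_gcd(seq)
    return [sign * x // g for x in seq]

def find_derivative(seq):
    # First differences; sequences of length <= 1 get [0].
    return [0] if len(seq) <= 1 else [seq[i] - seq[i - 1] for i in range(1, len(seq))]

# 4, 8, 12 and -4, -8, -12 share the signature 1, 2, 3 ...
print(find_signature([4, 8, 12]), find_signature([-4, -8, -12]))  # [1, 2, 3] [1, 2, 3]
# ... and the squares differentiate to the odd numbers.
print(find_derivative([1, 4, 9, 16, 25]))  # [3, 5, 7, 9]
```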
| github_jupyter |
```
from sklearn.datasets import load_digits
from sklearn.cluster import KMeans
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.preprocessing import scale
from sklearn import cluster
from sklearn import metrics
%matplotlib inline
title = ["Alcohol","Malic acid","Ash","Alcalinity of ash","Magnesium","Total phenols"
,"Flavanoids","Nonflavanoid phenols","Proanthocyanins",
"Color intensity","Hue","OD280/OD315 of diluted wines",
"Proline"]
file = pd.read_csv("./wine.data",header=None)
data = file.iloc[:,1:]
data.columns = title
label = file[0]
data
# Standardize the data (z-score scaling)
x_train = scale(data.copy())
# Train the model with 3 clusters
model = KMeans(n_clusters=3)
model.fit(x_train)
# Predict cluster labels
y_pred = model.predict(x_train)
y_train = label.copy()
y_pred = y_pred+1
print(list(y_pred))
print(list(y_train))
# Print the Micro-F1 and Macro-F1 scores
f1_micro = metrics.f1_score(y_train,y_pred,average="micro",)
f1_macro = metrics.f1_score(y_train,y_pred,average="macro")
print(f1_micro)
print(f1_macro)
st_pred = y_pred
from sklearn.metrics import classification_report
y_true = label
y_pred = st_pred
target_names = ['class 0', 'class 1', 'class 2']
print(classification_report(y_true, y_pred, target_names=target_names))
x_train2 = scale(data.copy())
model2 = KMeans(n_clusters=3)
model2.fit(x_train2)
y_pred2 = model2.predict(x_train2)
y_pred2 = y_pred2+1
TP={}
FP={}
FN={}
for i in range(1,4):
TP[i] = 0
FP[i] = 0
FN[i] = 0
# Classes 1, 2, 3
for j in range(1,4):
# Get the positions of all elements with true label j
two = np.where(label==j)
type_b = pd.Series(y_pred2[two])
# These should all be class j, but other classes are mixed in
type_b = type_b.value_counts()
for i,(index,value) in enumerate(type_b.items()):
if i == 0:
TP[index] += value
now_group = index
else:
FP[index] += value
FN[now_group] += value
matrix = pd.DataFrame({"TP":TP,"FP":FP,"FN":FN})
print(matrix)
Sum = matrix.sum(axis=0)
print(Sum)
precison = Sum["TP"]/(Sum["TP"]+Sum["FP"])
recall = Sum["TP"]/(Sum["TP"]+Sum["FN"])
F1 = (2*precison*recall)/(precison+recall)
print("Micro-average")
print(F1)
print("*"*20)
res = []
matrix.loc[1,"TP"]
for one in matrix.index:
precison = matrix.loc[one,"TP"]/(matrix.loc[one,"FP"]+matrix.loc[one,"TP"])
recall = matrix.loc[one,"TP"]/(matrix.loc[one,"FN"]+matrix.loc[one,"TP"])
F1 = (2*precison*recall)/(precison+recall)
res.append(F1)
print("Macro-average")
np.mean(res)
X = scale(data.copy())
model = KMeans(n_clusters=3)
model.fit(X)
y_pred = model.predict(X)
metrics.silhouette_score(X,y_pred,sample_size=len(X),metric='euclidean')
# Use the silhouette score to choose a suitable number of clusters
score = []
for i in range(2,8):
kmeans = KMeans(i)
kmeans.fit(X)
pred = kmeans.predict(X)
score.append(metrics.silhouette_score(X,pred,sample_size=len(X),metric='euclidean'))
plt.plot(range(2,8),score)
# plt.xticks()
# Visualize the clusters: PCA dimensionality reduction on standardized data
from sklearn.decomposition import PCA
reduced_data = PCA(n_components=2).fit_transform(scale(data))
kmeans = KMeans(n_clusters=3)
kmeans.fit(reduced_data)
result = kmeans.labels_
centroids = kmeans.cluster_centers_
plt.figure(2)
plt.scatter(reduced_data[:,0],reduced_data[:,1],c=result)
plt.scatter(centroids[:, 0], centroids[:, 1],
marker='^', s=120, linewidths=3,
color='r', zorder=10)
# import matplotlib
# matplotlib.markers?
plt.figure(2)
plt.scatter(reduced_data[:,0],reduced_data[:,1],c=label)
h=0.02
x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Obtain labels for each point in mesh. Use last trained model.
Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1)
plt.clf()
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.Paired,
aspect='auto', origin='lower')
# plt.figure(2)
plt.scatter(reduced_data[:,0],reduced_data[:,1],c=result)
```
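The manual micro/macro computation above boils down to arithmetic on per-class TP/FP/FN counts: micro-averaging pools the counts before computing one F1, while macro-averaging computes a per-class F1 and takes the unweighted mean. A self-contained sketch with made-up counts (not the wine results above):

```python
def f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical per-class (TP, FP, FN) counts.
counts = {1: (8, 2, 2), 2: (2, 8, 8), 3: (4, 0, 0)}

# Micro-average: pool the counts first, then compute one F1.
TP = sum(tp for tp, fp, fn in counts.values())
FP = sum(fp for tp, fp, fn in counts.values())
FN = sum(fn for tp, fp, fn in counts.values())
micro_f1 = f1(TP, FP, FN)

# Macro-average: per-class F1, then the plain mean.
macro_f1 = sum(f1(*c) for c in counts.values()) / len(counts)

print(micro_f1, macro_f1)
```

Because micro-averaging weights classes by their counts and macro-averaging weights them equally, the two differ whenever class sizes are unbalanced, as here.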
| github_jupyter |
# Checking the Dependence of Local Cell Density vs. Nucleus Size
### Question:
Check whether increasing local cell density [pixels^-2] impacts the size of the nucleus [pixels] as segmented by the U-Net.
### Expectation:
The nucleus size should be inversely proportional to the local cell density; i.e., as local cell density grows, the cell nucleus size decreases (we should observe a diagonal trend from the upper-left corner to the lower-right corner).
### Observation:
Successful!
### Methods:
```
import h5py
import numpy as np
import matplotlib.pyplot as plt
import sys
sys.path.append("../")
from Movie_Analysis_Pipeline.Single_Movie_Processing.Server_Movies_Paths import Get_MDCK_Movies_Paths
def Extract_Datasets(hdf5_file):
with h5py.File(hdf5_file, 'r') as f:
density = np.array(f["objects"]["obj_type_1"]["local_density"])
nucleus = np.array(f["objects"]["obj_type_1"]["nucleus_size"])
length = len(f["objects"]["obj_type_1"]["map"])
return density, nucleus, length
```
### Representative 2D histogram plot for single movie (GV0800, pos0):
```
hdf5_file = "/Volumes/lowegrp/Data/Kristina/Cells_MDCK/GV0800/pos0/HDF/segmented.hdf5"
date, pos = hdf5_file.split("/")[-4], hdf5_file.split("/")[-3]
density, nucleus, length = Extract_Datasets(hdf5_file=hdf5_file)
# Plot 2D histogram:
_ = plt.figure(figsize=(20, 10))
plt.hist2d(x=nucleus, y=density, bins=[16, 20], range=[[0, 1600], [0, 0.01]])
plt.title("Local Density vs. Nucleus Size of a Representative MDCK Movie: {}, {} -> len = {}".format(date, pos, length), fontsize=16)
plt.xlabel("Nucleus Size [pixels]", fontsize=12)
plt.ylabel("Local Density [pixels^-2]", fontsize=12)
plt.colorbar()
plt.show()
plt.close()
```
### Plot these histograms for each movie individually across the entire MDCK dataset:
```
movies = Get_MDCK_Movies_Paths()
fig, axs = plt.subplots(figsize=(24, 45), nrows=11, ncols=4)
fig.suptitle(t="Dependence of Local Cell Density vs. Nucleus Size in Individual MDCK Movies", x=0.5, y=0.9, fontsize=20)
fig.text(x=0.5, y=0.11, s="Nucleus Size [pixels]", ha='center', fontsize=16)
fig.text(x=0.09, y=0.5, s="Local Density [pixels^-2]", va='center', rotation='vertical', fontsize=16)
for enum, movie in enumerate(movies):
pos, date = movie.split("/")[-2], movie.split("/")[-3]
hdf5_file = movie + "/HDF/segmented.hdf5"
density, nucleus, length = Extract_Datasets(hdf5_file=hdf5_file)
# Plot the individual 2D histograms for each file:
axs[enum // 4, enum % 4].hist2d(x=nucleus, y=density, bins=[14, 20], range=[[0, 1400], [0, 0.01]])
axs[enum // 4, enum % 4].set_title("Movie: {}, {} -> len = {}".format(date, pos, length))
plt.show()
plt.close()
```
### Now create a matrix of your counts and plot 2D histogram to summarize all of your data:
```
matrix = [[0 for _ in range(50)] for _ in range(25)]
for enum, movie in enumerate(movies):
pos, date = movie.split("/")[-2], movie.split("/")[-3]
hdf5_file = movie + "/HDF/segmented.hdf5"
density, nucleus, length = Extract_Datasets(hdf5_file=hdf5_file)
a, b, c, d = plt.hist2d(x=nucleus, y=density, bins=[25, 50], range=[[0, 2500], [0, 0.1]])
for enum_row, row in enumerate(a):
for enum_col, col in enumerate(row):
matrix[enum_row][enum_col] += col
_ = plt.figure(figsize=(20, 10))
for enum, movie in enumerate(movies):
pos, date = movie.split("/")[-2], movie.split("/")[-3]
hdf5_file = movie + "/HDF/segmented.hdf5"
density, nucleus, length = Extract_Datasets(hdf5_file=hdf5_file)
plt.scatter(x=nucleus, y=density, s=1, color="plum", alpha=0.1)
plt.title("All MDCK Movies: Local Density vs. Nucleus Size", fontsize=20)
plt.xlim(-50, 2500)
plt.ylim(-0.002, 0.1)
plt.xlabel("Nucleus Size [pixels]", fontsize=12)
plt.ylabel("Local Density [pixels^-2]", fontsize=12)
plt.show()
plt.close()
```
### 3D BarPlot:
```
# Prepare the figure, subplots, meshgrid & basic features:
from mpl_toolkits.mplot3d import Axes3D # noqa: F401 unused import
fig = plt.figure(figsize=(20, 20))
fig.suptitle(t="Dependence of Local Cell Density vs. Nucleus Size in Entire MDCK Dataset", x=0.5, y=0.9, fontsize=20)
ax = fig.add_subplot(211, projection='3d')
width = 100
depth = 0.002
_x = np.arange(start=0, stop=2500, step=width)
_y = np.arange(start=0, stop=0.1, step=depth)
_xx, _yy = np.meshgrid(_x, _y)
x, y = _xx.ravel(), _yy.ravel()
top = np.array(matrix).ravel()
bottom = np.zeros_like(top)
# Plot the thing:
ax.bar3d(x, y, bottom, width, depth, top, color="plum", shade=True)
ax.set_xlabel('X: Nucleus Size [pixels]', labelpad=5, fontsize=12)
ax.set_ylabel('Y: Local Density [pixels^-2]', labelpad=5, fontsize=12)
ax.set_zlabel('Z: Frequency of Occurrence', labelpad=5, fontsize=12)
plt.show()
plt.close()
# Some probability statistics:
matrix_counter = 0
for enum_row, row in enumerate(matrix):
for enum_col, col in enumerate(row):
matrix_counter += col
max_percentage = []
for enum_row, row in enumerate(matrix):
percentage = [item * 100 / matrix_counter for item in row]
max_percentage.append(np.max(percentage))
print ("Total observations per scatter plot / 3D bar plot = {}; highest peak % = {}".format(matrix_counter, np.max(max_percentage)))
```
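The pooling step above just adds per-movie bin counts elementwise; the same idea in plain Python, with a 1-D histogram and two hypothetical "movies" of nucleus sizes:

```python
def hist_counts(values, n_bins, lo, hi):
    # Plain 1-D binning; values outside [lo, hi) are ignored.
    counts = [0] * n_bins
    width = (hi - lo) / n_bins
    for v in values:
        if lo <= v < hi:
            counts[int((v - lo) / width)] += 1
    return counts

# Two hypothetical movies' nucleus sizes, pooled into one histogram.
movie_a = [120, 340, 360, 980]
movie_b = [150, 400, 1020]
total = [0] * 4
for movie in (movie_a, movie_b):
    for i, c in enumerate(hist_counts(movie, n_bins=4, lo=0, hi=1600)):
        total[i] += c
print(total)  # [4, 1, 2, 0]
```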
| github_jupyter |
### **Connect With Me in Linkedin :-** https://www.linkedin.com/in/dheerajkumar1997/
# Import Libraries
```
import nltk
from nltk.stem import PorterStemmer
from nltk.stem import LancasterStemmer
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
```
# Giving Knowledge as Corpus
```
Corpus = """DATA : It can be any unprocessed fact, value, text, sound or picture that is not being interpreted and analyzed.
Data is the most important part of all Data Analytics, Machine Learning, Artificial Intelligence. Without data, we can’t train
any model and all modern research and automation will go vain. Big Enterprises are spending loads of money just to gather as
much certain data as possible.Example: Why did Facebook acquire WhatsApp by paying a huge price of $19 billion?
The answer is very simple and logical – it is to have access to the users’ information that Facebook may not have but WhatsApp
will have. This information of their users is of paramount importance to Facebook as it will facilitate the task of improvement
in their services.INFORMATION : Data that has been interpreted and manipulated and has now some meaningful inference for the users.
KNOWLEDGE : Combination of inferred information, experiences, learning and insights. Results in awareness or concept building
for an individual or organization.Consider an example:There’s a Shopping Mart Owner who conducted a survey for which he has a long
list of questions and answers that he had asked from the customers, this list of questions and answers is DATA. Now every time when
he want to infer anything and can’t just go through each and every question of thousands of customers to find something
relevant as it would be time-consuming and not helpful. In order to reduce this overhead and time wastage and to make work
easier, data is manipulated through software, calculations, graphs etc. as per own convenience, this inference from
manipulated data is Information. So, Data is must for Information. Now Knowledge has its role in differentiating
between two individuals having same information. Knowledge is actually not a technical content but is linked to human thought
process"""
```
# Sentence Tokenization
```
sentence = nltk.sent_tokenize(Corpus)
sentence
```
# Word Tokenization
```
words = nltk.word_tokenize(Corpus)
words
```
# Stemming using PorterStemmer
```
stemmer = PorterStemmer()
stemmed_words =[]
for i in range(len(sentence)):
words = nltk.word_tokenize(sentence[i])
words = [stemmer.stem(word) for word in words if word not in set(stopwords.words('english'))]
sentence[i] = " ".join(words)
stemmed_words.append(sentence[i])
stemmed_words
```
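`PorterStemmer` reduces words to stems by stripping suffixes according to a fixed rule set. As a rough illustration of the idea only (this is not the actual Porter algorithm), a toy suffix-stripping stemmer:

```python
def toy_stem(word, suffixes=("ing", "edly", "ed", "ly", "s")):
    # Strip the first matching suffix, longer rules before their
    # prefixes; keep at least three characters so short words survive.
    for suf in suffixes:
        if word.endswith(suf) and len(word) - len(suf) >= 3:
            return word[: -len(suf)]
    return word

print([toy_stem(w) for w in ["jumping", "analyzed", "quickly", "cats", "is"]])
# ['jump', 'analyz', 'quick', 'cat', 'is']
```

Note that stems need not be dictionary words ("analyz"), which is exactly why a lemmatizer such as the `WordNetLemmatizer` imported above is sometimes preferred.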
### **Connect With Me in Linkedin :-** https://www.linkedin.com/in/dheerajkumar1997/
| github_jupyter |
```
import pandas as pd
import numpy as np
from tqdm import tqdm
import torch
import os
from sklearn.metrics import silhouette_score
import umap
import matplotlib.pyplot as plt
from matplotlib import colors as mcolors
# !pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer
# mount google drive
from google.colab import drive
drive.mount('/content/gdrive')
data_path = os.path.join('gdrive', 'My Drive', 'Colab Notebooks', 'text_clustering', 'million-headlines', 'abcnews-date-text.csv')
# dataframe
df_all = pd.read_csv(data_path)
# get year from date and put that in a new column "year"
df_all['year'] = df_all['publish_date'].apply(lambda s: str(s)[:4])
print('shape of dataframe:', df_all.shape)
print('number of news headlines in different years:')
print(df_all.groupby('year')['year'].count())
# get news headlines from 2017 only
df = df_all.loc[df_all['year'] == "2017"]
print('shape of dataframe:', df.shape)
sentences = df['headline_text'].values
print('headlines:')
print(sentences)
# ROBERTA embeddings
model = SentenceTransformer('roberta-base-nli-mean-tokens')
sentence_embs = model.encode(sentences=sentences, batch_size=256, show_progress_bar=True)
# stack embs
sentence_embs = np.stack(sentence_embs)
print(sentence_embs.shape)
################
# kmeans utils #
################
def forgy(X, n_clusters):
_len = len(X)
indices = np.random.choice(_len, n_clusters, replace=False)
initial_state = X[indices]
return initial_state
def do_kmeans_clustering(X, n_clusters, distance='euclidean', tol=1e-4, device=torch.device('cpu')):
print(f'k-means clustering on {device}..')
if distance == 'euclidean':
pairwise_distance_function = pairwise_distance
elif distance == 'cosine':
pairwise_distance_function = pairwise_cosine
else:
raise NotImplementedError
X = X.float()
# transfer to device
X = X.to(device)
initial_state = forgy(X, n_clusters)
iteration = 0
tqdm_meter = tqdm()
while True:
dis = pairwise_distance_function(X, initial_state)
choice_cluster = torch.argmin(dis, dim=1)
initial_state_pre = initial_state.clone()
for index in range(n_clusters):
selected = torch.nonzero(choice_cluster == index).squeeze().to(device)
selected = torch.index_select(X, 0, selected)
initial_state[index] = selected.mean(dim=0)
center_shift = torch.sum(torch.sqrt(torch.sum((initial_state - initial_state_pre) ** 2, dim=1)))
# increment iteration
iteration = iteration + 1
# update tqdm meter
tqdm_meter.set_postfix(iteration=f'{iteration}', center_shift=f'{center_shift ** 2}', tol=f'{tol}')
if center_shift ** 2 < tol:
break
return choice_cluster.cpu(), initial_state.cpu()
def kmeans_predict(X, cluster_centers, distance='euclidean', device=torch.device('cpu')):
print(f'predicting on {device}..')
if distance == 'euclidean':
pairwise_distance_function = pairwise_distance
elif distance == 'cosine':
pairwise_distance_function = pairwise_cosine
else:
raise NotImplementedError
X = X.float()
# transfer to device
X = X.to(device)
dis = pairwise_distance_function(X, cluster_centers)
choice_cluster = torch.argmin(dis, dim=1)
return choice_cluster.cpu()
'''
Pairwise-distance helpers: both functions below use broadcasting and
return the full N*N distance matrix.
'''
def pairwise_distance(data1, data2=None, device=torch.device('cpu')):
r'''
using broadcast mechanism to calculate pairwise euclidean distance of data
the input data is N*M matrix, where M is the dimension
we first expand the N*M matrix into N*1*M matrix A and 1*N*M matrix B
then a simple elementwise operation of A and B will handle the pairwise operation of points represented by data
'''
if data2 is None:
data2 = data1
data1, data2 = data1.to(device), data2.to(device)
# N*1*M
A = data1.unsqueeze(dim=1)
# 1*N*M
B = data2.unsqueeze(dim=0)
dis = (A - B) ** 2.0
# return N*N matrix for pairwise distance
dis = dis.sum(dim=-1).squeeze()
return dis
def pairwise_cosine(data1, data2=None, device=torch.device('cpu')):
r'''
using broadcast mechanism to calculate pairwise cosine distance of data
the input data is N*M matrix, where M is the dimension
we first expand the N*M matrix into N*1*M matrix A and 1*N*M matrix B
then a simple elementwise operation of A and B will handle the pairwise operation of points represented by data
'''
if data2 is None:
data2 = data1
data1, data2 = data1.to(device), data2.to(device)
# N*1*M
A = data1.unsqueeze(dim=1)
# 1*N*M
B = data2.unsqueeze(dim=0)
# normalize the points | [0.3, 0.4] -> [0.3/sqrt(0.09 + 0.16), 0.4/sqrt(0.09 + 0.16)] = [0.3/0.5, 0.4/0.5]
A_normalized = A / A.norm(dim=-1, keepdim=True)
B_normalized = B / B.norm(dim=-1, keepdim=True)
cosine = A_normalized * B_normalized
# return N*N matrix for pairwise distance
cosine_dis = 1 - cosine.sum(dim=-1).squeeze()
return cosine_dis
def group_pairwise(X, groups, device=torch.device('cpu'), fun=lambda r, c: pairwise_distance(r, c).cpu()):
group_dict = {}
for group_index_r, group_r in enumerate(groups):
for group_index_c, group_c in enumerate(groups):
R, C = X[group_r], X[group_c]
R, C = R.to(device), C.to(device)
group_dict[(group_index_r, group_index_c)] = fun(R, C)
return group_dict
# set device
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
# find good k (number of clusters)
sil_scores = []
for k in [2, 3, 4, 5, 6]:
# k-means clustering
cluster_ids_x, cluster_centers = do_kmeans_clustering(
X=torch.from_numpy(sentence_embs),
n_clusters=k,
distance='cosine',
device=device
)
sil_scores.append(silhouette_score(sentence_embs, cluster_ids_x.numpy(), metric='cosine'))
# plot silhouette scores
plt.figure(figsize=(6, 4), dpi=160)
plt.plot([2, 3, 4, 5, 6], sil_scores, color='xkcd:bluish purple')
plt.xticks([2, 3, 4, 5, 6])
plt.xlabel('number of clusters', fontsize=14)
plt.ylabel('silhouette score', fontsize=14)
plt.title('finding optimum number of clusters', fontsize=14)
plt.show()
# cluster using k with max silhoutte score
num_clusters = np.argmax(sil_scores) + 2 # 0th index means 2 clusters, ..
cluster_ids_x, cluster_centers = do_kmeans_clustering(
X=torch.from_numpy(sentence_embs),
n_clusters=num_clusters,
distance='cosine',
device=device
)
# UMAP
emb_umap = umap.UMAP(metric='cosine', verbose=True).fit_transform(sentence_embs)
# plot
cols = [color for name, color in mcolors.TABLEAU_COLORS.items()]
plt.figure(figsize=(6, 6), dpi=160)
for cluster_id in range(num_clusters):
plt.scatter(
emb_umap[cluster_ids_x == cluster_id][:, 0],
emb_umap[cluster_ids_x == cluster_id][:, 1],
color=cols[cluster_id],
label=f'cluster {cluster_id}'
)
plt.title('UMAP Sentence Embeddings', fontsize=14)
plt.legend(fontsize=10)
plt.show()
np.random.choice(sentences[cluster_ids_x == 0], 20, replace=False)
np.random.choice(sentences[cluster_ids_x == 1], 20, replace=False)
```
```
import numpy as np
from scipy import signal
import scipy.spatial.distance as distfuncs
import scipy.special as special
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from pathlib import Path
import sys
sys.path.append('../')
import irutilities as irutil
import sf_func as sf
# Load ir data
sessionName = "S1-M3969_npy"
sessionPath = Path('..').joinpath(sessionName)
posAll, posSrc, irAll = irutil.loadIR(sessionPath)
# Sampling rate (original)
samplerate_raw = 48000
# Microphone selection by the method proposed in
# Nishida, Ueno, Koyama, and Saruwatari, Proc. EUSIPCO, DOI: 10.23919/Eusipco47968.2020.9287222, 2020.
numMic = 18 # Number of mics
idxMic_all = np.load('mic_idx.npy')
idxMic_sel = idxMic_all[0:numMic]
# Evaluation points
idxEval = np.where(np.isclose(posAll[:,2], 0.0))[0] # xy-plane at z=0
numEval = idxEval.shape[0]
posEval = posAll[idxEval,:]
posEvalX = np.unique(posEval[:,0].round(4))
posEvalY = np.unique(posEval[:,1].round(4))
posEvalZ = np.unique(posEval[:,2].round(4))
numEvalXYZ = (posEvalX.shape[0], posEvalY.shape[0], posEvalZ.shape[0])
# Microphone positions
idxMic = []
for i in range(numMic):
idx = np.where(np.isclose(idxEval, idxMic_sel[i]))[0]
idxMic.append(idx[0])
posMic = posEval[idxMic,:]
idxMicXY = []
for i in range(numMic):
idxX = np.where(np.isclose(posEvalX, posMic[i,0]))[0]
idxY = np.where(np.isclose(posEvalY, posMic[i,1]))[0]
idxMicXY.append([idxX[0], idxY[0]])
idxMicXY = np.array(idxMicXY)
# Plot geometry
plt.rcParams["font.size"] = 14
fig, ax = plt.subplots()
ax.scatter(posEval[:,0], posEval[:,1], marker='.', color='c')
ax.scatter(posMic[:,0], posMic[:,1], marker='o', color='b')
ax.scatter(posSrc[:,0], posSrc[:,1], marker='*', color='r')
ax.set_aspect('equal')
plt.xlabel('x (m)')
plt.ylabel('y (m)')
plt.show()
# IRs for evaluation
irEval = irAll[0,idxEval,:]
# Downsampling
downSampling = 6
irEval = signal.resample_poly(irEval, up=1, down=downSampling, axis=-1)
samplerate = samplerate_raw // downSampling
print('samplerate (Hz): ', samplerate)
# IR length
irLen = irEval.shape[1]
print('ir length:', irLen)
# Evaluation signal
maxFreq = 500
h = signal.firwin(numtaps=64, cutoff=maxFreq, fs=samplerate)
sigEval = signal.filtfilt(h, 1, irEval, axis=-1)
# Observed signal at microphones
sigMic = sigEval[idxMic,:]
# Time
sigLen = sigEval.shape[1]
t = np.arange(sigLen)/samplerate
# Draw pressure distribution
xx, yy = np.meshgrid(posEvalX, posEvalY)
posEvalXY, sigEvalXY, _ = irutil.sortIR3(posEval, sigEval[None,:,:], numEvalXYZ, posEvalX, posEvalY, posEvalZ)
sigEvalXY = np.squeeze(sigEvalXY)
tIdx = 1422
fig, ax = plt.subplots()
ax = plt.axes()
color = plt.pcolor(xx, yy, sigEvalXY[:,:,tIdx], cmap='RdBu', shading='auto', vmin=-0.025, vmax=0.025)
ax.scatter(posMic[:,0], posMic[:,1], s=30, marker='o', color='w')
ax.scatter(posMic[:,0], posMic[:,1], s=20, marker='o', color='k')
ax.set_aspect('equal')
cbar=plt.colorbar(color)
cbar.set_label('Amplitude')
plt.xlabel('x (m)')
plt.ylabel('y (m)')
plt.show()
# Sound speed (m/s)
c = 347.3
# Regularization parameter
kerReg = 1e-1
# FFT parameters
fftlen = 16384
# Filter parameters
smplShift = 512
filterLen = 1025
freq = np.arange(1,fftlen/2+1)/fftlen*samplerate
# Kernel interpolation filter
k = 2 * np.pi * freq / c
kiFilter = sf.kiFilterGen(k, posMic, posEval, filterLen, smplShift)
# Plot filter
fig, ax = plt.subplots()
ax.plot(kiFilter[:,0,0])
plt.xlabel('Sample')
plt.show()
# Convolve with the interpolation filter
specMic = np.fft.fft(sigMic.T, n=fftlen, axis=0)[:,:,None]
specKiFilter = np.fft.fft(kiFilter, n=fftlen, axis=0)
specEst = np.squeeze(specKiFilter @ specMic)
sigEst = np.fft.ifft(specEst, n=fftlen, axis=0).real.T
sigEst = sigEst[:,smplShift:sigLen+smplShift]
# Mean square error of estimation
mse = 10*np.log10(np.sum(np.abs(sigEst - sigEval)**2) / np.sum(np.abs(sigEval)**2))
print('MSE: ', mse)
posEstXY, sigEstXY, _ = irutil.sortIR3(posEval, sigEst[None,:,:], numEvalXYZ, posEvalX, posEvalY, posEvalZ)
sigEstXY = np.squeeze(sigEstXY)
# Error distribution
distErr = 10*np.log10(np.sum(np.abs(sigEstXY - sigEvalXY) ** 2, axis=-1) / np.sum(np.abs(sigEvalXY) ** 2, axis=-1))
# Draw pressure distribution
tIdx = 1422
plt.rcParams["font.size"] = 14
fig, ax = plt.subplots()
ax = plt.axes()
color = plt.pcolor(xx, yy, sigEvalXY[:,:,tIdx], cmap='RdBu', shading='auto', vmin=-0.025, vmax=0.025)
ax.scatter(posMic[:,0], posMic[:,1], s=30, marker='o', color='w')
ax.scatter(posMic[:,0], posMic[:,1], s=20, marker='o', color='k')
ax.set_aspect('equal')
cbar=plt.colorbar(color)
cbar.set_label('Amplitude')
plt.xlabel('x (m)')
plt.ylabel('y (m)')
plt.savefig("reconst_true.pdf")
fig, ax = plt.subplots()
ax = plt.axes()
color = plt.pcolor(xx, yy, sigEstXY[:,:,tIdx], cmap='RdBu', shading='auto', vmin=-0.025, vmax=0.025)
ax.scatter(posMic[:,0], posMic[:,1], s=30, marker='o', color='w')
ax.scatter(posMic[:,0], posMic[:,1], s=20, marker='o', color='k')
ax.set_aspect('equal')
cbar=plt.colorbar(color)
cbar.set_label('Amplitude')
plt.xlabel('x (m)')
plt.ylabel('y (m)')
fig.savefig("reconst_est.pdf")
# Draw error distribution
fig, ax = plt.subplots()
ax = plt.axes()
color = plt.pcolor(xx, yy, distErr, cmap='magma', shading='auto', vmin=-20.0, vmax=0.0)
ax.scatter(posMic[:,0], posMic[:,1], s=30, marker='o', color='w')
ax.scatter(posMic[:,0], posMic[:,1], s=20, marker='o', color='k')
ax.set_aspect('equal')
cbar=plt.colorbar(color)
cbar.set_label('Error (dB)')
plt.xlabel('x (m)')
plt.ylabel('y (m)')
fig.savefig("reconst_error.pdf")
plt.show()
```
## Number of Labels
Accuracy of a supervised model increases with the number of labels available for training. A natural question to ask is how many labels are needed for a given level of accuracy. In this notebook, we experiment with the MNIST data set and estimate the number of labels needed to classify all 10 digits as well as just two digits.
As the results below show, the binary task reaches high accuracy with far fewer labels.
This is a TensorFlow 1.x notebook!
```
%tensorflow_version 1.x
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
from keras.optimizers import Adam
from matplotlib import pyplot as plt
import numpy as np
import random
```
These are some configuration parameters and hyperparameters.
```
# Input image dimensions
img_rows, img_cols = 28, 28
# The number of training samples per batch. 128 is a reasonable number.
batch_size = 128
# Our data set contains 10 digits, so the number of classes is 10
num_classes = 10
# epochs is the number of times the model iterates over the full data set; more can be better, up to a point
epochs = 10
# dropout is a common regularization hyperparameter that helps avoid overfitting (memorizing the input)
dropout = 0.5
```
### Load data
Keras has a built-in function for loading the MNIST data and splitting it into train and test sets. `x_train` and `x_test` are arrays of train and test input images respectively, each represented as a 28 x 28 matrix of pixel values. `y_train` and `y_test` are the train and test labels respectively.
Pixel values are normalized to the range 0.0 - 1.0.
```
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Save original test images for display purposes
orig_test = x_test
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train, x_test = x_train / 255.0, x_test / 255.0
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
```
### 10 Classes
All 10 classes are used for training and testing.
A simple CNN model is used for this experiment as it achieves good accuracy with modest computational requirements.
Several training-set sizes are used to measure accuracy.
```
sizeList = [100, 200, 500, 1000, 2000, 5000, 10000]
accuracy = []
for trainingSize in sizeList:
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(dropout/2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(dropout))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam(), metrics=['acc'])
x = x_train[:trainingSize]
y = y_train[:trainingSize]
history = model.fit(x, y, batch_size=batch_size, epochs=epochs, verbose=0)
score = model.evaluate(x_test, y_test, verbose=0)
print('Training samples: %d, test accuracy: %.2f %%' % (trainingSize, score[1]*100))
accuracy.append(score[1])
plt.plot(sizeList, accuracy)
plt.title('Model Accuracy')
plt.ylabel('Test Accuracy')
plt.xlabel('Training Samples')
plt.show()
```
### Binary Classification
Just two digits are used for training and testing. This demonstrates that two classes require less training data than 10 classes.
```
sizeList = [100, 200, 500, 1000, 2000, 5000, 10000]
accuracy = []
# Choose just two classes
classes = [8, 9]
# Training set
xtr = []
ytr = []
for i in range(len(y_train)):
if y_train[i] in classes:
xtr.append(x_train[i])
ytr.append(classes.index(y_train[i]))
xtr = np.asarray(xtr)
ytr = np.asarray(ytr)
# Test set
xte = []
yte = []
for i in range(len(y_test)):
if y_test[i] in classes:
xte.append(x_test[i])
yte.append(classes.index(y_test[i]))
xte = np.asarray(xte)
yte = np.asarray(yte)
for trainingSize in sizeList:
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(dropout/2))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(dropout))
model.add(Dense(len(classes), activation='softmax'))  # only two output classes are needed for the binary task
model.compile(loss='sparse_categorical_crossentropy', optimizer=Adam(), metrics=['acc'])
x = xtr[:trainingSize]
y = ytr[:trainingSize]
history = model.fit(x, y, batch_size=batch_size, epochs=epochs, verbose=0)
score = model.evaluate(xte, yte, verbose=0)
print('Training samples: %d, test accuracy: %.2f %%' % (trainingSize, score[1]*100))
accuracy.append(score[1])
plt.plot(sizeList, accuracy)
plt.title('Model Accuracy')
plt.ylabel('Test Accuracy')
plt.xlabel('Training Samples')
plt.show()
```
[View in Colaboratory](https://colab.research.google.com/github/lucyvasserman/unintended-ml-bias-analysis/blob/master/unintended_ml_bias/pinned_auc_demo.ipynb)
Copyright 2018 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Conversation AI's Pinned AUC Unintended Model Bias Demo
Author: ldixon@google.com, jetpack@google.com, sorenj@google.com, nthain@google.com, lucyvasserman@google.com
***
Click [here](https://colab.research.google.com/github/conversationai/unintended-ml-bias-analysis/blob/master/unintended_ml_bias/pinned_auc_demo.ipynb) to run this colab interactively on colab.research.google.com.
***
## Summary
This notebook demonstrates Pinned AUC as an unintended model bias metric for Conversation AI wikipedia models.
See the paper [Measuring and Mitigating Unintended Bias in Text Classification](https://github.com/conversationai/unintended-ml-bias-analysis/blob/master/presentations/measuring-mitigating-unintended-bias-paper.pdf)
for background, detailed explanation, and experimental results.
Also see https://developers.google.com/machine-learning/fairness-overview for more info on Google's
Machine Learning Fairness work.
__Disclaimer__
* This notebook contains experimental code, which may be changed without notice.
* The ideas here are some ideas relevant to fairness - they are not the whole story!
We start by loading some libraries that we will use and customizing the visualization parameters.
```
!pip install -U -q git+https://github.com/conversationai/unintended-ml-bias-analysis
!pip install -U -q pandas==0.22.0
from unintended_ml_bias import model_bias_analysis
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import pandas as pd
import seaborn as sns
import pkg_resources
cm = sns.light_palette("red", as_cmap=True)
```
## Model Families - capture training variance
"Model Families" allows the results to capture training variance by grouping different
training versions of each model together. model_families is a list of lists, each
sub-list ("model_family") contains the names of different training versions of the same model.
We compare 3 versions each of three different models, **"wiki_cnn"**, **"wiki_debias_random"**,
and **"wiki_debias"**.
* **"wiki_cnn"** is a CNN trained on the original, unmodified training data.
* **"wiki_debias"** is the same model as wiki_cnn, but trained on a modified dataset that has undergone bias mitigation: we added presumed-non-toxic data from Wikipedia articles, specifically sampled to address the bias found in the original dataset.
* **"wiki_debias_random"** is the same model, trained on a modified dataset that has had random Wikipedia article data added to it (not specific to the bias we found). This serves as a control for the debiasing treatment.
```
model_families = [
['wiki_cnn_v3_100', 'wiki_cnn_v3_101', 'wiki_cnn_v3_102'],
['wiki_debias_cnn_v3_100', 'wiki_debias_cnn_v3_101', 'wiki_debias_cnn_v3_102'],
['wiki_debias_random_cnn_v3_100', 'wiki_debias_random_cnn_v3_101', 'wiki_debias_random_cnn_v3_102'],
]
# Read the scored data into DataFrame
df = pd.read_csv(pkg_resources.resource_stream("unintended_ml_bias", "eval_datasets/bias_madlibs_77k_scored.csv"))
# Add columns for each subgroup.
terms = [line.strip() for line in pkg_resources.resource_stream("unintended_ml_bias", "bias_madlibs_data/adjectives_people.txt")]
model_bias_analysis.add_subgroup_columns_from_text(df, 'text', terms)
```
## Data Format
At this point, our scored data is in DataFrame df, with columns:
* text: Full text of the comment.
* label: True if the comment is Toxic, False otherwise.
* < model name >: One column per model, cells contain the score from that model.
* < subgroup >: One column per identity, True if the comment mentions this identity.
You can run the analysis below on any data in this format. Subgroup labels can be
generated via words in the text as done above, or come from human labels if you have them.
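To make the format concrete, here is a hedged toy example (the column values and the model/subgroup names are illustrative only, not the real evaluation data):

```python
import pandas as pd

# Toy scored data in the format described above (values are made up)
df_example = pd.DataFrame({
    "text": ["comment one", "comment two", "comment three"],
    "label": [True, False, False],           # True if the comment is Toxic
    "wiki_cnn_v3_100": [0.91, 0.05, 0.40],   # one score column per model
    "gay": [True, False, False],             # one boolean column per identity subgroup
    "lesbian": [False, False, True],
})

# Select the rows that mention a given subgroup
print(df_example.loc[df_example["gay"], ["text", "wiki_cnn_v3_100"]])
```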
# Unintended Bias Metrics
## Pinned AUC
Pinned AUC measures the extent of unintended bias of a real-value score function
by measuring each sub-group's divergence from the general distribution.
Let $D$ represent the full data set, let $D_g$ be the set of examples in subgroup
$g$, and let $s(\cdot)$ denote sampling from a set. Then:
$$ Pinned \ dataset \ for \ group \ g = pD_g = s(D_g) + s(D), |s(D_g)| = |s(D)| $$
$$ Pinned \ AUC \ for \ group \ g = pAUC_g = AUC(pD_g) $$
$$ Pinned \ AUC \ Squared \ Equality \ Difference = \Sigma_{g \in G}(AUC - pAUC_g)^2 $$
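As a rough from-scratch sketch of these definitions (a rank-based AUC and a simplified sampling step; the `model_bias_analysis` library used below implements the metric properly):

```python
import numpy as np

def auc(labels, scores):
    # Rank-based AUC: P(score_pos > score_neg) + 0.5 * P(tie)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def pinned_auc(labels, scores, in_subgroup, seed=0):
    # pD_g: the subgroup's examples plus an equally sized sample of the full data
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    sub_idx = np.flatnonzero(in_subgroup)
    full_idx = rng.choice(len(labels), size=len(sub_idx), replace=False)
    idx = np.concatenate([sub_idx, full_idx])
    return auc(labels[idx], scores[idx])

# A perfect classifier gets pinned AUC 1.0 for any subgroup
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])
s = y.astype(float)
g = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)
print(pinned_auc(y, s, g))  # 1.0
```

The squared equality difference is then the sum of `(auc(labels, scores) - pinned_auc(labels, scores, g))**2` over all subgroups `g`.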
## Pinned AUC Equality Difference
The table below shows the pinned AUC equality difference for each model family.
Lower scores (lighter red) represent more similarity between each group's pinned AUC, which means
less unintended bias.
On this set, the wiki_debias_cnn model demonstrates the least unintended bias.
```
eq_diff = model_bias_analysis.per_subgroup_auc_diff_from_overall(df, terms, model_families, squared_error=True)
# sort to guarantee deterministic output
eq_diff.sort_values(by=['model_family'], inplace=True)
eq_diff.reset_index(drop=True, inplace=True)
eq_diff.style.background_gradient(cmap=cm)
```
## Pinned AUC Graphs
The graphs below show per-group Pinned AUC for each subgroup and each model. Each
identity group shows 3 points, each representing the pinned AUC for one training
version of the model. More consistency among the values represents less unintended bias.
```
pinned_auc_results = model_bias_analysis.per_subgroup_aucs(df, terms, model_families, 'label')
for family in model_families:
name = model_bias_analysis.model_family_name(family)
model_bias_analysis.per_subgroup_scatterplots(
pinned_auc_results,
'subgroup',
name + '_aucs',
name + ' Pinned AUC',
y_lim=(0.8, 1.0))
```
## Validate Azure ML SDK installation and get version number for debugging purposes
```
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
```
## Diagnostics
Opt-in diagnostics for better experience, quality, and security of future releases.
```
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics = True)
```
## Initialize Workspace
Initialize a workspace object from persisted configuration.
```
# Initialize Workspace
from azureml.core import Workspace
ws = Workspace.from_config()
print("Resource group: ", ws.resource_group)
print("Location: ", ws.location)
print("Workspace name: ", ws.name)
```
## Create An Experiment
**Experiment** is a logical container in an Azure ML Workspace. It hosts run records which can include run metrics and output artifacts from your experiments.
```
from azureml.core import Experiment
experiment_name = 'image-retraining'
experiment = Experiment(workspace = ws, name = experiment_name)
```
## Provision the AKS Cluster
We need this cluster later in this exercise to deploy our service.
This is a one-time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it.
```
from azureml.core.compute import AksCompute, ComputeTarget
from azureml.core.compute_target import ComputeTargetException
aks_name = 'myaks'
try:
aks_target = AksCompute(workspace=ws, name=aks_name)
print('found existing:', aks_target.name)
except ComputeTargetException:
print('creating new.')
# AKS configuration
prov_config = AksCompute.provisioning_configuration(
agent_count=3,
vm_size="Standard_B4ms"
)
# Create the cluster
aks_target = ComputeTarget.create(
workspace = ws,
name = aks_name,
provisioning_configuration = prov_config
)
```
## Create Azure ML Compute cluster (GPU-enabled) as a compute target
```
from azureml.core.compute import AmlCompute
from azureml.core.compute_target import ComputeTargetException
compute_target_name = 'myamlcompute'
try:
aml_compute = AmlCompute(workspace=ws, name=compute_target_name)
print('found existing:', aml_compute.name)
except ComputeTargetException:
print('creating new.')
aml_config = AmlCompute.provisioning_configuration(
vm_size="Standard_NC6",
vm_priority="dedicated",
min_nodes = 0,
max_nodes = 4,
idle_seconds_before_scaledown=300
)
aml_compute = AmlCompute.create(
ws,
name=compute_target_name,
provisioning_configuration=aml_config
)
aml_compute.wait_for_completion(show_output=True)
```
## Upload data files into datastore
Every workspace comes with a default datastore (and you can register more) which is backed by the Azure blob storage account associated with the workspace. We can use it to transfer data from local to the cloud, and access it from the compute target.
```
# get the default datastore
ds = ws.get_default_datastore()
print("Datastore name: ", ds.name)
print("Datastore type: ", ds.datastore_type)
print("Account name: ", ds.account_name)
print("Container name: ", ds.container_name)
```
Download and unpack flower images
```
import os
import shutil
import urllib.request
tmp_path = '../tmp/image_retraining'
os.makedirs(tmp_path, exist_ok=True)
print('Downloading flower photos...')
urllib.request.urlretrieve("http://download.tensorflow.org/example_images/flower_photos.tgz", tmp_path + "/flower_photos.tgz")
print('Unpacking archive...')
shutil.unpack_archive(tmp_path + '/flower_photos.tgz', tmp_path)
print('Done')
```
Upload files to the datastore
```
images_path = tmp_path + '/flower_photos/'
for (dirpath, dirnames, filenames) in os.walk(images_path):
print('Uploading', dirpath, '...')
ds.upload_files(
[dirpath + '/' + f for f in filenames],
target_path=dirpath.replace(tmp_path + '/', ''),
overwrite=True
)
print('Done')
```
## Create a project directory
Create a directory that will contain all the necessary code from your local machine that you will need access to on the remote resource. This includes the training script, and any additional files your training script depends on.
```
import os
import shutil
project_folder = '../projects/image_retraining'
os.makedirs(project_folder, exist_ok=True)
shutil.copy('./scripts/retrain.py', project_folder)
```
## Create a TensorFlow estimator
The AML SDK's TensorFlow estimator enables you to easily submit TensorFlow training jobs for both single-node and distributed runs. For more information on the TensorFlow estimator, refer [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-train-tensorflow).
```
from azureml.train.dnn import TensorFlow
from azureml.core.runconfig import DataReferenceConfiguration
script_params={
'--image_dir': str(ds.as_download()),
'--summaries_dir': './logs',
'--output_graph': './outputs/output_graph.pb',
'--output_labels': './outputs/output_labels.txt',
'--saved_model_dir': './outputs/model'
}
estimator = TensorFlow(source_directory=project_folder,
source_directory_data_store=ds,
compute_target=aml_compute,
script_params=script_params,
entry_script='retrain.py',
pip_packages=['tensorflow_hub==0.2.0'],
node_count=1,
framework_version='1.10',
use_gpu=True)
# Overwrite data store reference
dr = DataReferenceConfiguration(
datastore_name=ds.name,
path_on_datastore='flower_photos',
mode='download', # download files from datastore to compute target
overwrite=True
)
estimator.run_config.data_references = {ds.name: dr}
estimator.run_config.data_references[ds.name]
```
## Submit job
Run your experiment by submitting your estimator object. Note that this call is asynchronous.
```
run = experiment.submit(estimator)
print(run.get_details())
run.wait_for_completion(show_output=True)
```
## Download results
```
import time
status = run.get_status()
seconds = 10
while status != 'Completed' and status != 'Failed':
print('current status: {} - waiting...'.format(status))
time.sleep(seconds)
if seconds < 60:
seconds = seconds + 10
status = run.get_status()
import os
outputs_path = '../outputs/image_retraining'
os.makedirs(outputs_path, exist_ok=True)
for filename in run.get_file_names():
if filename.startswith('outputs'):
print("downloading", filename, '...')
run.download_file(
filename,
output_file_path=outputs_path + filename.replace('outputs/','/')
)
print('completed')
```
## Test model locally
```
import tensorflow as tf
import numpy as np
print("TensorFlow Version: ", tf.__version__)
model_file = os.path.join(outputs_path, "output_graph.pb")
label_file = os.path.join(outputs_path, "output_labels.txt")
def load_graph(model_file):
graph = tf.Graph()
graph_def = tf.GraphDef()
with open(model_file, "rb") as f:
graph_def.ParseFromString(f.read())
with graph.as_default():
tf.import_graph_def(graph_def)
return graph
def read_tensor_from_image_file(file_name,
input_height=299,
input_width=299,
input_mean=0,
input_std=255):
input_name = "file_reader"
output_name = "normalized"
file_reader = tf.read_file(file_name, input_name)
if file_name.endswith(".png"):
image_reader = tf.image.decode_png(file_reader, channels=3, name="png_reader")
elif file_name.endswith(".gif"):
image_reader = tf.squeeze(tf.image.decode_gif(file_reader, name="gif_reader"))
elif file_name.endswith(".bmp"):
image_reader = tf.image.decode_bmp(file_reader, name="bmp_reader")
else:
image_reader = tf.image.decode_jpeg(file_reader, channels=3, name="jpeg_reader")
float_caster = tf.cast(image_reader, tf.float32)
dims_expander = tf.expand_dims(float_caster, 0)
resized = tf.image.resize_bilinear(dims_expander, [input_height, input_width])
normalized = tf.divide(tf.subtract(resized, [input_mean]), [input_std])
with tf.Session() as sess:
result = sess.run(normalized)
return result
def load_labels(label_file):
label = []
proto_as_ascii_lines = tf.gfile.GFile(label_file).readlines()
for l in proto_as_ascii_lines:
label.append(l.rstrip())
return label
```
### Load Model
```
graph = load_graph(model_file)
input_height = 299
input_width = 299
input_mean = 0
input_std = 255
input_layer = "Placeholder"
output_layer = "final_result"
def predict_flower(data):
input_name = "import/" + input_layer
output_name = "import/" + output_layer
input_operation = graph.get_operation_by_name(input_name)
output_operation = graph.get_operation_by_name(output_name)
with tf.Session(graph=graph) as sess:
results = sess.run(output_operation.outputs[0], {
input_operation.outputs[0]: data
})
results = np.squeeze(results)
top_k = results.argsort()[-5:][::-1]
labels = load_labels(label_file)
for i in top_k:
print(labels[i], results[i])
```
### Predict test data
Feed the test dataset to the model to get predictions.
```
file_name = "./resources/test-images/Daisy1.jpg"
t = read_tensor_from_image_file(
file_name,
input_height=input_height,
input_width=input_width,
input_mean=input_mean,
input_std=input_std
)
predict_flower(t)
file_name = "./resources/test-images/Rose1.jpg"
t = read_tensor_from_image_file(
file_name,
input_height=input_height,
input_width=input_width,
input_mean=input_mean,
input_std=input_std
)
predict_flower(t)
```
## Deploy a model in Azure Kubernetes Services (AKS)
### Register a model
```
from azureml.core.model import Model
model_graph_name = "flower_photos_graph"
model_labels_name = "flower_photos_labels"
model_graph = Model.register(
model_path=model_file,
model_name=model_graph_name,
tags={"data": "flower_photos", "model": "classification"},
description="Retrained Inception V3 model with flower photos",
workspace=ws
)
model_labels = Model.register(
model_path=label_file,
model_name=model_labels_name,
tags={"data": "flower_photos", "model": "classification"},
description="Output labels of the retrained Inception V3 model with flower photos",
workspace=ws
)
```
## Deploy as web service
Once you've tested the model and are satisfied with the results, deploy the model as a web service hosted in AKS.
To build the correct environment for the deployment, provide the following:
* A scoring script to show how to use the model
* An environment file to show what packages need to be installed
* A deployment configuration for the AKS web service
* The model you trained before
### Check AKS Cluster state
```
import time
status = aks_target.get_status()
while status != 'Succeeded' and status != 'Failed':
print('current status: {} - waiting...'.format(status))
time.sleep(10)
status = aks_target.get_status()
```
### Create scoring script
Create the scoring script, called score.py, used by the web service call to show how to use the model.
You must include two required functions into the scoring script:
* The `init()` function, which typically loads the model into a global object. This function is run only once when the Docker container is started.
* The `run(input_data)` function uses the model to predict a value based on the input data. Inputs and outputs to the run typically use JSON for serialization and de-serialization, but other formats are supported.
```
%%writefile score_flowers.py
import json
import os
import traceback
import numpy as np
import tensorflow as tf
import time
from azureml.core.model import Model
def load_graph(graph_path):
global graph
global input_operation
global output_operation
print("loading graph from", graph_path, time.strftime("%H:%M:%S"))
graph = tf.Graph()
graph_def = tf.GraphDef()
with open(graph_path, "rb") as f:
graph_def.ParseFromString(f.read())
with graph.as_default():
tf.import_graph_def(graph_def)
input_operation = graph.get_operation_by_name('import/Placeholder')
output_operation = graph.get_operation_by_name('import/final_result')
print("graph loaded successfully.", time.strftime("%H:%M:%S"))
def load_labels(label_path):
global labels
print("loading labels from", label_path, time.strftime("%H:%M:%S"))
labels = []
proto_as_ascii_lines = tf.gfile.GFile(label_path).readlines()
for l in proto_as_ascii_lines:
labels.append(l.rstrip())
print("labels loaded successfully.", time.strftime("%H:%M:%S"))
def init():
try:
print ("model initializing" + time.strftime("%H:%M:%S"))
        # retrieve the path to the model file using the model name
graph_path = Model.get_model_path('flower_photos_graph')
load_graph(graph_path)
labels_path = Model.get_model_path('flower_photos_labels')
load_labels(labels_path)
print ("model initialized" + time.strftime("%H:%M:%S"))
except Exception as e:
error = str(e)
stacktrace = traceback.format_exc()
print (error + time.strftime("%H:%M:%S"))
print (stacktrace)
raise
def run(raw_data):
try:
data = json.loads(raw_data)
data = np.array(data)
print ("image array: " + str(data)[:50])
# make prediction
with tf.Session(graph=graph) as sess:
results = sess.run(output_operation.outputs[0], {
input_operation.outputs[0]: data
})
results = np.squeeze(results)
top_k = results.argsort()[-5:][::-1]
result = []
for i in top_k:
result.append([labels[i], results[i]])
print ("result: " + str(result))
# you can return any data type as long as it is JSON-serializable
return str(result)
except Exception as e:
error = str(e)
stacktrace = traceback.format_exc()
print (error + time.strftime("%H:%M:%S"))
print (stacktrace)
return stacktrace
```
### Create environment file
Next, create an environment file, called myenv.yml, that specifies all of the script's package dependencies. This file is used to ensure that all of those dependencies are installed in the Docker image. This model needs `tensorflow` and `azureml-sdk`.
```
from azureml.core.conda_dependencies import CondaDependencies
myenv = CondaDependencies()
myenv.add_tensorflow_conda_package(core_type='cpu')
myenv.add_conda_package("numpy")
#myenv.add_pip_package("azureml-monitoring")
with open(os.path.join(project_folder, "myenv.yml"),"w") as f:
f.write(myenv.serialize_to_string())
```
Review the content of the `myenv.yml` file.
```
with open(os.path.join(project_folder, "myenv.yml"),"r") as f:
print(f.read())
```
### Create image configuration
Define the image configuration using:
* The scoring file (`score_flowers.py`)
* The environment file (`myenv.yml`)
```
from azureml.core.image import ContainerImage
# configure the image
image_config = ContainerImage.image_configuration(
execution_script="score_flowers.py",
runtime="python",
conda_file=os.path.join(project_folder, "myenv.yml")
)
```
### Create configuration file
Create a deployment configuration file and specify the number of CPUs and gigabytes of RAM needed for your AKS container. While it depends on your model, the default of 1 core and 1 gigabyte of RAM is usually sufficient for many models. If you need more later, you would have to recreate the image and redeploy the service.
```
from azureml.core.webservice import AksWebservice
aks_config = AksWebservice.deploy_configuration(
cpu_cores=1,
memory_gb=1,
#collect_model_data=True,
enable_app_insights=True,
tags={"data": "flower_photos", "method" : "TensorFlow"},
description='Predict flowers with TensorFlow'
)
```
### Creating and Deploying the image in AKS
Estimated time to complete: **about 5-8 minutes**
The following code goes through these steps:
1. Create the image and store it in the workspace.
1. Send the image to the AKS cluster.
1. Start up a container in AKS using the image.
1. Get the web service HTTP endpoint.
```
%%time
from azureml.core.webservice import Webservice
service = Webservice.deploy_from_model(
workspace=ws,
name='flower-photos-svc',
deployment_config=aks_config,
deployment_target=aks_target,
models=[model_graph, model_labels],
image_config=image_config
)
service.wait_for_deployment(show_output=True)
print(service.state)
```
Get the scoring web service's HTTP endpoint, which accepts REST client calls. This endpoint can be shared with anyone who wants to test the web service or integrate it into an application.
```
print(service.scoring_uri)
```
## Test deployed service
Earlier you scored all the test data with the local version of the model. Now, you can test the deployed model with the sample images in `./resources/test-images/`.
The following code goes through these steps:
1. Send the data as a JSON array to the web service hosted in AKS.
1. Use the SDK's `run` API to invoke the service. You can also make raw calls using any HTTP tool such as curl.
1. Print the returned predictions for each image.
```
import os
import json
file_name = "./resources/test-images/Daisy1.jpg"
#file_name = "./test.png"
for dirpath, dnames, fnames in os.walk("./resources/test-images/"):
for f in fnames:
file_name = os.path.join(dirpath, f)
# load image
print("Loading image", file_name)
data = read_tensor_from_image_file(
file_name,
input_height=input_height,
input_width=input_width,
input_mean=input_mean,
input_std=input_std
)
raw_data = str(data.tolist())
# predict using the deployed model
print("Sending image", f, "to service")
response = service.run(input_data=raw_data)
print("Service response:", response)
#result = json.loads(response)
#print("Predicted class:", result[0][0])
#print("Probability:", result[0][1])
print()
```
You can also send a raw HTTP request to test the web service.
```
import requests
import json
api_keys = service.get_keys()
headers = {
'Content-Type':'application/json',
'Authorization':('Bearer '+ api_keys[0])
}
file_name = "./resources/test-images/Daisy1.jpg"
data = read_tensor_from_image_file(
file_name,
input_height=input_height,
input_width=input_width,
input_mean=input_mean,
input_std=input_std
)
input_data = str(data.tolist())
print("POST to url", service.scoring_uri)
resp = requests.post(service.scoring_uri, input_data, headers=headers)
print("prediction:", resp.text)
```
## Clean up resources
To keep the resource group and workspace for other tutorials and exploration, you can delete only the web service deployment using this API call:
```
service.delete()
if os.path.exists('score_flowers.py'):
os.remove('score_flowers.py')
```
## Start TensorBoard
```
from azureml.tensorboard import Tensorboard
tb = Tensorboard([run])
tb.start()
```
## Stop TensorBoard
When you're done, make sure to call the stop() method of the Tensorboard object, or it will stay running even after your job completes.
```
tb.stop()
```
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Automated Machine Learning
_**Orange Juice Sales Forecasting**_
## Contents
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Data](#Data)
1. [Train](#Train)
1. [Predict](#Predict)
1. [Operationalize](#Operationalize)
## Introduction
In this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.
Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.
The following examples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area.
## Setup
```
import azureml.core
import pandas as pd
import numpy as np
import logging
import warnings
# Squash warning messages for cleaner output in the notebook
warnings.showwarning = lambda *args, **kwargs: None
from azureml.core.workspace import Workspace
from azureml.core.experiment import Experiment
from azureml.train.automl import AutoMLConfig
from sklearn.metrics import mean_absolute_error, mean_squared_error
```
As part of the setup you have already created a <b>Workspace</b>. To run AutoML, you also need to create an <b>Experiment</b>. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
```
ws = Workspace.from_config()
# choose a name for the run history container in the workspace
experiment_name = 'automl-ojforecasting'
# project folder
project_folder = './sample_projects/automl-local-ojforecasting'
experiment = Experiment(ws, experiment_name)
output = {}
output['SDK version'] = azureml.core.VERSION
output['Subscription ID'] = ws.subscription_id
output['Workspace'] = ws.name
output['Resource Group'] = ws.resource_group
output['Location'] = ws.location
output['Project Directory'] = project_folder
output['Run History Name'] = experiment_name
pd.set_option('display.max_colwidth', -1)
outputDf = pd.DataFrame(data = output, index = [''])
outputDf.T
```
## Data
You are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type.
```
time_column_name = 'WeekStarting'
data = pd.read_csv("dominicks_OJ.csv", parse_dates=[time_column_name])
data.head()
```
Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred.
The task is now to build a time-series model for the _Quantity_ column. Note that this dataset comprises many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we thus define the **grain** - the columns whose values determine the boundaries between time-series:
```
grain_column_names = ['Store', 'Brand']
nseries = data.groupby(grain_column_names).ngroups
print('Data contains {0} individual time-series.'.format(nseries))
```
For demonstration purposes, we extract sales time-series for just a few of the stores:
```
use_stores = [2, 5, 8]
data_subset = data[data.Store.isin(use_stores)]
nseries = data_subset.groupby(grain_column_names).ngroups
print('Data subset contains {0} individual time-series.'.format(nseries))
```
### Data Splitting
We now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the grain columns.
```
n_test_periods = 20
def split_last_n_by_grain(df, n):
"""Group df by grain and split on last n rows for each group."""
df_grouped = (df.sort_values(time_column_name) # Sort by ascending time
.groupby(grain_column_names, group_keys=False))
df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])
df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])
return df_head, df_tail
X_train, X_test = split_last_n_by_grain(data_subset, n_test_periods)
```
## Modeling
For forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:
* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span
* Impute missing values in the target (via forward-fill) and feature columns (using median column values)
* Create grain-based features to enable fixed effects across different series
* Create time-based features to assist in learning seasonal patterns
* Encode categorical variables to numeric quantities
AutoML will currently train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series.
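As an illustrative sketch of the first two pre-processing steps (plain pandas on hypothetical data, not AutoML's internal code), making a weekly series regular and forward-filling the target looks like this:

```python
import pandas as pd

# Hypothetical weekly series with one missing week (2000-01-17)
s = pd.Series(
    [10.0, 12.0, 11.0],
    index=pd.to_datetime(["2000-01-03", "2000-01-10", "2000-01-24"]),
)

# Enforce a weekly (Monday-anchored) frequency;
# absent time points become NaN rows
regular = s.asfreq("W-MON")

# Impute the target via forward-fill, as described above
filled = regular.ffill()
print(filled)
```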
You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame:
```
target_column_name = 'Quantity'
y_train = X_train.pop(target_column_name).values
```
## Train
The AutoMLConfig object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.
For forecasting tasks, there are some additional parameters that can be set: the name of the column holding the date/time, the grain column names, and the maximum forecast horizon. A time column is required for forecasting, while the grain is optional. If a grain is not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.
The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the maximum horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning organization that needs to estimate the next month of sales would set the horizon accordingly. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.
Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *X_valid* and *y_valid* parameters of AutoMLConfig.
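The rolling origin idea can be sketched as follows (a simplified illustration, not AutoML's actual fold-splitting code): each fold validates on the block of points immediately after a growing training window, so the model is always evaluated on data later than anything it trained on.

```python
import numpy as np

def rolling_origin_splits(n_samples, n_folds, horizon):
    """Yield (train_idx, valid_idx) pairs where each fold validates on
    the `horizon` points that immediately follow the training window."""
    for k in range(n_folds):
        valid_end = n_samples - k * horizon
        valid_start = valid_end - horizon
        yield np.arange(valid_start), np.arange(valid_start, valid_end)

# 20 weekly samples, 3 folds, horizon of 4 periods
for train_idx, valid_idx in rolling_origin_splits(20, 3, 4):
    print(f"train size={len(train_idx)}, validate on {valid_idx.min()}..{valid_idx.max()}")
```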
Here is a summary of AutoMLConfig parameters used for training the OJ model:
|Property|Description|
|-|-|
|**task**|forecasting|
|**primary_metric**|This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics <br><i>spearman_correlation</i><br><i>normalized_root_mean_squared_error</i><br><i>r2_score</i><br><i>normalized_mean_absolute_error</i>|
|**iterations**|Number of iterations. In each iteration, Auto ML trains a specific pipeline on the given data|
|**X**|Training matrix of features as a pandas DataFrame, shape = [n_training_samples, n_features]|
|**y**|Target values as a numpy.ndarray, shape = [n_training_samples, ]|
|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection|
|**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models|
|**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models|
|**debug_log**|Log file path for writing debugging information|
|**path**|Relative path to the project folder. AutoML stores configuration files for the experiment under this folder. You can specify a new empty folder.|
|**time_column_name**|Name of the datetime column in the input data|
|**grain_column_names**|Name(s) of the columns defining individual series in the input data|
|**drop_column_names**|Name(s) of columns to drop prior to modeling|
|**max_horizon**|Maximum desired forecast horizon in units of time-series frequency|
```
time_series_settings = {
'time_column_name': time_column_name,
'grain_column_names': grain_column_names,
'drop_column_names': ['logQuantity'],
'max_horizon': n_test_periods
}
automl_config = AutoMLConfig(task='forecasting',
debug_log='automl_oj_sales_errors.log',
primary_metric='normalized_mean_absolute_error',
iterations=10,
X=X_train,
y=y_train,
n_cross_validations=3,
enable_voting_ensemble=False,
enable_stack_ensemble=False,
path=project_folder,
verbosity=logging.INFO,
**time_series_settings)
```
You can now submit a new training run. For local runs, the execution is synchronous. Depending on the data and number of iterations this operation may take several minutes.
Information from each iteration will be printed to the console.
```
local_run = experiment.submit(automl_config, show_output=True)
```
### Retrieve the Best Model
Each run within an Experiment stores serialized (i.e. pickled) pipelines from the AutoML iterations. We can now retrieve the pipeline with the best performance on the validation dataset:
```
best_run, fitted_pipeline = local_run.get_output()
fitted_pipeline.steps
```
# Forecasting
Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. First, we remove the target values from the test set:
```
y_test = X_test.pop(target_column_name).values
X_test.head()
```
To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data.
We will first create a query `y_query`, which is aligned index-for-index to `X_test`. This is a vector of target values where each `NaN` serves the function of the question mark to be replaced by forecast. Passing definite values in the `y` argument allows the `forecast` function to make predictions on data that does not immediately follow the train data which contains `y`. In each grain, the last time point where the model sees a definite value of `y` is that grain's _forecast origin_.
```
# Replace ALL values in y_query by NaN.
# The forecast origin will be at the beginning of the first forecast period.
# (Which is the same time as the end of the last training period.)
y_query = y_test.copy().astype(float)
y_query.fill(np.nan)
# The featurized data, aligned to y, will also be returned.
# This contains the assumptions that were made in the forecast
# and helps align the forecast to the original data
y_pred, X_trans = fitted_pipeline.forecast(X_test, y_query)
```
If you are used to scikit pipelines, perhaps you expected `predict(X_test)`. However, forecasting requires a more general interface that also supplies the past target `y` values. Please use `forecast(X,y)` as `predict(X)` is reserved for internal purposes on forecasting models.
The [energy demand forecasting notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) demonstrates the use of the forecast function in more detail in the context of using lags and rolling window features.
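To illustrate the forecast-origin behavior described above (a toy construction on hypothetical values, independent of the fitted pipeline): definite values at the start of `y` move a grain's forecast origin forward, while `NaN`s mark the points to be predicted.

```python
import numpy as np

# Hypothetical grain with 8 test periods; the first 3 actuals are known
y_known = np.array([22.0, 25.0, 24.0])
n_future = 5

# Definite leading values move this grain's forecast origin past the
# third test period; NaNs are the "question marks" the model fills in
y_partial = np.concatenate([y_known, np.full(n_future, np.nan)])
print(y_partial)
```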
# Evaluate
To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities using selected metrics, including the mean absolute percentage error (MAPE).
It is a good practice to always align the output explicitly to the input, as the count and order of the rows may have changed during transformations that span multiple rows.
```
def align_outputs(y_predicted, X_trans, X_test, y_test, predicted_column_name = 'predicted'):
"""
Demonstrates how to get the output aligned to the inputs
using pandas indexes. Helps understand what happened if
the output's shape differs from the input shape, or if
the data got re-sorted by time and grain during forecasting.
Typical causes of misalignment are:
* we predicted some periods that were missing in actuals -> drop from eval
* model was asked to predict past max_horizon -> increase max horizon
* data at start of X_test was needed for lags -> provide previous periods in y
"""
df_fcst = pd.DataFrame({predicted_column_name : y_predicted})
# y and X outputs are aligned by forecast() function contract
df_fcst.index = X_trans.index
# align original X_test to y_test
X_test_full = X_test.copy()
X_test_full[target_column_name] = y_test
# X_test_full's index does not include origin, so reset for merge
df_fcst.reset_index(inplace=True)
X_test_full = X_test_full.reset_index().drop(columns='index')
together = df_fcst.merge(X_test_full, how='right')
# drop rows where prediction or actuals are nan
# happens because of missing actuals
# or at edges of time due to lags/rolling windows
clean = together[together[[target_column_name, predicted_column_name]].notnull().all(axis=1)]
return(clean)
df_all = align_outputs(y_pred, X_trans, X_test, y_test)
def MAPE(actual, pred):
"""
Calculate mean absolute percentage error.
Remove NA and values where actual is close to zero
"""
not_na = ~(np.isnan(actual) | np.isnan(pred))
not_zero = ~np.isclose(actual, 0.0)
actual_safe = actual[not_na & not_zero]
pred_safe = pred[not_na & not_zero]
APE = 100*np.abs((actual_safe - pred_safe)/actual_safe)
return np.mean(APE)
print("Simple forecasting model")
rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all['predicted']))
print("[Test Data] \nRoot Mean squared error: %.2f" % rmse)
mae = mean_absolute_error(df_all[target_column_name], df_all['predicted'])
print('mean_absolute_error score: %.2f' % mae)
print('MAPE: %.2f' % MAPE(df_all[target_column_name], df_all['predicted']))
# Plot outputs
import matplotlib.pyplot as plt
%matplotlib inline
test_pred = plt.scatter(df_all[target_column_name], df_all['predicted'], color='b')
test_test = plt.scatter(y_test, y_test, color='g')
plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)
plt.show()
```
# Operationalize
_Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances (ACI) to host the model.
```
description = 'AutoML OJ forecaster'
tags = None
model = local_run.register_model(description = description, tags = tags)
print(local_run.model_id)
```
### Develop the scoring script
Serializing and deserializing complex data frames may be tricky. We first develop the `run()` function of the scoring script locally, then write it into a scoring script. It is much easier to debug any quirks of the scoring function without crossing two compute environments. For this exercise, we handle a common quirk of how pandas dataframes serialize time stamp values.
```
# this is where we test the run function of the scoring script interactively
# before putting it in the scoring script
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
{'X': X_test.to_json(), 'y': y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
return json.dumps({"forecast": forecast_as_list, # return the minimum over the wire:
"index": index_as_df.to_json() # no forecast and its featurized values
})
# test the run function here before putting in the scoring script
import json
test_sample = json.dumps({'X': X_test.to_json(), 'y' : y_query.tolist()})
response = run(test_sample, fitted_pipeline)
# unpack the response, dealing with the timestamp serialization again
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
y_fcst_all.head()
```
Now that the function works locally in the notebook, let's write it into the scoring script. The scoring script is authored by the data scientist. Adjust it to taste, adding inputs, outputs, and processing as needed.
```
%%writefile score_fcast.py
import pickle
import json
import numpy as np
import pandas as pd
import azureml.train.automl
from sklearn.externals import joblib
from azureml.core.model import Model
def init():
global model
model_path = Model.get_model_path(model_name = '<<modelid>>') # this name is model.id of model that we want to deploy
# deserialize the model file back into a sklearn model
model = joblib.load(model_path)
timestamp_columns = ['WeekStarting']
def run(rawdata, test_model = None):
"""
Intended to process 'rawdata' string produced by
{'X': X_test.to_json(), 'y': y_test.to_json()}
Don't convert the X payload to numpy.array, use it as pandas.DataFrame
"""
try:
# unpack the data frame with timestamp
rawobj = json.loads(rawdata) # rawobj is now a dict of strings
X_pred = pd.read_json(rawobj['X'], convert_dates=False) # load the pandas DF from a json string
for col in timestamp_columns: # fix timestamps
X_pred[col] = pd.to_datetime(X_pred[col], unit='ms')
y_pred = np.array(rawobj['y']) # reconstitute numpy array from serialized list
if test_model is None:
result = model.forecast(X_pred, y_pred) # use the global model from init function
else:
result = test_model.forecast(X_pred, y_pred) # use the model on which we are testing
except Exception as e:
result = str(e)
return json.dumps({"error": result})
# prepare to send over wire as json
forecast_as_list = result[0].tolist()
index_as_df = result[1].index.to_frame().reset_index(drop=True)
return json.dumps({"forecast": forecast_as_list, # return the minimum over the wire:
"index": index_as_df.to_json() # no forecast and its featurized values
})
# get the model
from azureml.train.automl.run import AutoMLRun
experiment = Experiment(ws, experiment_name)
ml_run = AutoMLRun(experiment = experiment, run_id = local_run.id)
best_iteration = int(str.split(best_run.id,'_')[-1]) # the iteration number is a postfix of the run ID.
# get the best model's dependencies and write them into this file
from azureml.core.conda_dependencies import CondaDependencies
conda_env_file_name = 'fcast_env.yml'
dependencies = ml_run.get_run_sdk_dependencies(iteration = best_iteration)
for p in ['azureml-train-automl', 'azureml-core']:
print('{}\t{}'.format(p, dependencies[p]))
myenv = CondaDependencies.create(conda_packages=['numpy','scikit-learn'], pip_packages=['azureml-train-automl'])
myenv.save_to_file('.', conda_env_file_name)
# this is the script file name we wrote a few cells above
script_file_name = 'score_fcast.py'
# Substitute the actual version number in the environment file.
# This is not strictly needed in this notebook because the model should have been generated using the current SDK version.
# However, we include this in case this code is used on an experiment from a previous SDK version.
with open(conda_env_file_name, 'r') as cefr:
content = cefr.read()
with open(conda_env_file_name, 'w') as cefw:
cefw.write(content.replace(azureml.core.VERSION, dependencies['azureml-train-automl']))
# Substitute the actual model id in the script file.
with open(script_file_name, 'r') as cefr:
content = cefr.read()
with open(script_file_name, 'w') as cefw:
cefw.write(content.replace('<<modelid>>', local_run.model_id))
```
### Create a Container Image
```
from azureml.core.image import Image, ContainerImage
image_config = ContainerImage.image_configuration(runtime= "python",
execution_script = script_file_name,
conda_file = conda_env_file_name,
tags = {'type': "automl-forecasting"},
description = "Image for automl forecasting sample")
image = Image.create(name = "automl-fcast-image",
# this is the model object
models = [model],
image_config = image_config,
workspace = ws)
image.wait_for_creation(show_output = True)
if image.creation_state == 'Failed':
print("Image build log at: " + image.image_build_log_uri)
```
### Deploy the Image as a Web Service on Azure Container Instance
```
from azureml.core.webservice import AciWebservice
aciconfig = AciWebservice.deploy_configuration(cpu_cores = 1,
memory_gb = 2,
tags = {'type': "automl-forecasting"},
description = "Automl forecasting sample service")
from azureml.core.webservice import Webservice
aci_service_name = 'automl-forecast-01'
print(aci_service_name)
aci_service = Webservice.deploy_from_image(deployment_config = aciconfig,
image = image,
name = aci_service_name,
workspace = ws)
aci_service.wait_for_deployment(True)
print(aci_service.state)
```
### Call the service
```
# we send the data to the service serialized into a json string
test_sample = json.dumps({'X':X_test.to_json(), 'y' : y_query.tolist()})
response = aci_service.run(input_data = test_sample)
# translate from networkese to datascientese
try:
res_dict = json.loads(response)
y_fcst_all = pd.read_json(res_dict['index'])
y_fcst_all[time_column_name] = pd.to_datetime(y_fcst_all[time_column_name], unit = 'ms')
y_fcst_all['forecast'] = res_dict['forecast']
except Exception:
print(response)
y_fcst_all.head()
```
### Delete the web service if desired
```
serv = Webservice(ws, 'automl-forecast-01')
# serv.delete() # don't do it accidentally
```
```
%matplotlib inline
!unzip top_bottom.zip
!ls -al
!rm -rf eyegaze
!unzip eyegaze_new.zip
from __future__ import print_function, division
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
import time
import os
import copy
plt.ion() # interactive mode
data_transforms = {
'train': transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'val': transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
data_dir = 'top_bottom'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
data_transforms[x])
for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=16,
shuffle=True, num_workers=4)
for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
def imshow(inp, title=None):
"""Imshow for Tensor."""
inp = inp.numpy().transpose((1, 2, 0))
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
inp = std * inp + mean
inp = np.clip(inp, 0, 1)
plt.imshow(inp)
if title is not None:
plt.title(title)
plt.pause(0.001)
inputs, classes = next(iter(dataloaders['train']))
out = torchvision.utils.make_grid(inputs)
imshow(out, title=[class_names[x] for x in classes])
def train_model(model, criterion, optimizer, scheduler, num_epochs=30):
since = time.time()
train_loss_list = list()
train_acc_list = list()
val_loss_list = list()
val_acc_list = list()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
for phase in ['train', 'val']:
if phase == 'train':
scheduler.step()
model.train()
else:
model.eval()
running_loss = 0.0
running_corrects = 0
for inputs, labels in dataloaders[phase]:
inputs = inputs.to(device)
labels = labels.to(device)
optimizer.zero_grad()
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
if phase == 'train':
loss.backward()
optimizer.step()
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
if phase == 'train':
train_loss_list.append(epoch_loss)
train_acc_list.append(epoch_acc.item())
else:
val_loss_list.append(epoch_loss)
val_acc_list.append(epoch_acc.item())
print(val_acc_list)
# deep copy the model
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
# load best model weights
model.load_state_dict(best_model_wts)
print(val_acc_list)
return model,train_loss_list,train_acc_list,val_loss_list, val_acc_list
def visualize_model(model, num_images=6):
was_training = model.training
model.eval()
images_so_far = 0
fig = plt.figure()
with torch.no_grad():
for i, (inputs, labels) in enumerate(dataloaders['val']):
inputs = inputs.to(device)
labels = labels.to(device)
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
for j in range(inputs.size()[0]):
images_so_far += 1
ax = plt.subplot(1, num_images, images_so_far)
ax.axis('off')
ax.set_title('predicted: {} True: {}'.format(class_names[preds[j]], class_names[labels.data[j]]))
imshow(inputs.cpu().data[j])
if images_so_far == num_images:
model.train(mode=was_training)
return
model.train(mode=was_training)
model_ft = models.mobilenet_v2(pretrained=True)
num_ftrs = model_ft.classifier[1].in_features
model_ft.classifier[1] = nn.Linear(num_ftrs, 2)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=10, gamma=0.1)
model_ft, train_loss_list,train_acc_list,val_loss_list, val_acc_list = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
num_epochs=30)
print(train_acc_list)
print(val_acc_list[0].item())
x = range(0, len(val_loss_list))
plt.subplot(2, 1, 1)
plt.plot(x, val_loss_list, color="green", label = "test loss")
plt.plot(x, train_loss_list, color="red", label = "train loss")
plt.title('loss vs. epochs')
plt.legend()
plt.subplot(2, 1, 2)
plt.plot(x, val_acc_list, color="green",label = "test acc")
plt.plot(x, train_acc_list, color="red",label = "train acc")
plt.xlabel('accuracy vs. epochs')
plt.legend()
plt.show()
visualize_model(model_ft)
```
```
# Copyright 2021 Google LLC
# Use of this source code is governed by an MIT-style
# license that can be found in the LICENSE file or at
# https://opensource.org/licenses/MIT.
# Notebook authors: Kevin P. Murphy (murphyk@gmail.com)
# and Mahmoud Soliman (mjs@aucegypt.edu)
# This notebook reproduces figures for chapter 3 from the book
# "Probabilistic Machine Learning: An Introduction"
# by Kevin Murphy (MIT Press, 2021).
# Book pdf is available from http://probml.ai
```
<a href="https://opensource.org/licenses/MIT" target="_parent"><img src="https://img.shields.io/github/license/probml/pyprobml"/></a>
<a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/figures/chapter3_probability_multivariate_models_figures.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Figure 3.1:<a name='3.1'></a> <a name='corrcoefWikipedia'></a>
Several sets of $(x, y)$ points, with the correlation coefficient of $x$ and $y$ for each set. Note that the correlation reflects the noisiness and direction of a linear relationship (top row), but not the slope of that relationship (middle), nor many aspects of nonlinear relationships (bottom). (Note: the figure in the center has a slope of 0 but in that case the correlation coefficient is undefined because the variance of $Y$ is zero.) From https://en.wikipedia.org/wiki/Pearson_correlation_coefficient . Used with kind permission of Wikipedia author Imagecreator.
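The caption's point, that the correlation coefficient captures direction and noisiness but not slope, is easy to check numerically. A small sketch with made-up data (not part of the figure's own code):

```python
import numpy as np

# Correlation is invariant to the slope of a noise-free linear
# relationship, but shrinks toward zero as noise is added.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)

r_steep = np.corrcoef(x, 2.0 * x)[0, 1]      # slope 2, no noise -> r = 1
r_shallow = np.corrcoef(x, 0.5 * x)[0, 1]    # slope 0.5, no noise -> r = 1
r_noisy = np.corrcoef(x, x + rng.normal(0, 0.5, x.size))[0, 1]  # |r| < 1

print(r_steep, r_shallow, r_noisy)
```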
```
#@title Click me to run setup { display-mode: "form" }
try:
    if PYPROBML_SETUP_ALREADY_RUN:
        print('skipping setup')
except:
    PYPROBML_SETUP_ALREADY_RUN = True
    print('running setup...')
    !git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
    %cd -q /pyprobml/scripts
    import pyprobml_utils as pml
    import colab_utils
    import os
    os.environ["PYPROBML"] = ".."  # one above current scripts directory
    import google.colab
    from google.colab.patches import cv2_imshow
    %reload_ext autoreload
    %autoreload 2
    def show_image(img_path, size=None, ratio=None):
        img = colab_utils.image_resize(img_path, size)
        cv2_imshow(img)
    print('finished!')
show_image("/pyprobml/book1/figures/images/Figure_3.1.png")
```
## Figure 3.2:<a name='3.2'></a> <a name='spuriousCorrelation'></a>
Examples of spurious correlation between causally unrelated time series. (a) Consumption of ice cream (red) and US murder rate (black) over time. From https://bit.ly/2zfbuvf . Used with kind permission of Karen Cyphers. (b) Divorce rate in Maine (red) strongly correlates with per-capita US consumption of margarine (black). From http://www.tylervigen.com/spurious-correlations . Used with kind permission of Tyler Vigen.
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
show_image("/pyprobml/book1/figures/images/Figure_3.2_A.png")
show_image("/pyprobml/book1/figures/images/Figure_3.2_B.png")
```
## Figure 3.3:<a name='3.3'></a> <a name='simpsonsParadoxGraph'></a>
Illustration of Simpson's paradox. (a) Overall, $y$ decreases with $x$. (b) Within each group, $y$ increases with $x$. From https://en.wikipedia.org/wiki/Simpson's_paradox . Used with kind permission of Wikipedia author Pace (svwiki).
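The paradox can be reproduced with a tiny synthetic dataset (my own illustrative numbers, not the figure's data): each group shows a positive trend, but pooling the groups flips the sign.

```python
import numpy as np

# Two groups in which y increases with x within each group...
x_a = np.array([0.0, 0.5, 1.0]); y_a = x_a + 3.0   # group A (high offset)
x_b = np.array([2.0, 2.5, 3.0]); y_b = x_b + 0.0   # group B (low offset)

r_a = np.corrcoef(x_a, y_a)[0, 1]                  # +1 within group A
r_b = np.corrcoef(x_b, y_b)[0, 1]                  # +1 within group B
# ...but the pooled data trends downward.
r_pooled = np.corrcoef(np.concatenate([x_a, x_b]),
                       np.concatenate([y_a, y_b]))[0, 1]
print(r_a, r_b, r_pooled)
```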
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
show_image("/pyprobml/book1/figures/images/Figure_3.3_A.png")
show_image("/pyprobml/book1/figures/images/Figure_3.3_B.png")
```
## Figure 3.4:<a name='3.4'></a> <a name='simpsonsCovid'></a>
Illustration of Simpson's paradox using COVID-19 data. (a) Case fatality rates (CFRs) in Italy and China by age group, and in aggregated form ("Total", last pair of bars), up to the time of reporting (see legend). (b) Proportion of all confirmed cases included in (a) within each age group by country. From Figure 1 of <a href='#VonKugelgen2020'>[JLB20]</a> . Used with kind permission of Julius von Kügelgen.
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
show_image("/pyprobml/book1/figures/images/Figure_3.4_A.png")
show_image("/pyprobml/book1/figures/images/Figure_3.4_B.png")
```
## Figure 3.5:<a name='3.5'></a> <a name='gaussPlot2dSurf'></a>
Visualization of a 2d Gaussian density as a surface plot. (a) Distribution using a full covariance matrix can be oriented at any angle. (b) Distribution using a diagonal covariance matrix must be parallel to the axis. (c) Distribution using a spherical covariance matrix must have a symmetric shape.
Figure(s) generated by [gauss_plot_2d.py](https://github.com/probml/pyprobml/blob/master/scripts/gauss_plot_2d.py)
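The three covariance types can be compared numerically. A minimal sketch (not the script's code) that evaluates the 2d Gaussian density by hand; the covariance matrices below are invented examples of each type:

```python
import numpy as np

def gauss2d_pdf(X, mu, Sigma):
    # Density of N(mu, Sigma) evaluated at each row of X (2d case).
    d = X - mu
    Sinv = np.linalg.inv(Sigma)
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(Sigma)))
    return norm * np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, Sinv, d))

mu = np.zeros(2)
full = np.array([[2.0, 1.8], [1.8, 2.0]])   # full: tilted ellipse
diag = np.diag([2.0, 0.5])                  # diagonal: axis-aligned ellipse
sphere = 1.5 * np.eye(2)                    # spherical: circular contours

X = np.array([[0.0, 0.0], [1.0, 1.0]])
print(gauss2d_pdf(X, mu, full))             # density peaks at the mean
```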
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
google.colab.files.view("./gauss_plot_2d.py")
%run gauss_plot_2d.py
```
## Figure 3.6:<a name='3.6'></a> <a name='gaussPlot2dContour'></a>
Visualization of a 2d Gaussian density in terms of level sets of constant probability density. (a) A full covariance matrix has elliptical contours. (b) A diagonal covariance matrix is an axis-aligned ellipse. (c) A spherical covariance matrix has a circular shape.
Figure(s) generated by [gauss_plot_2d.py](https://github.com/probml/pyprobml/blob/master/scripts/gauss_plot_2d.py)
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
google.colab.files.view("./gauss_plot_2d.py")
%run gauss_plot_2d.py
```
## Figure 3.7:<a name='3.7'></a> <a name='imputeMvn'></a>
Illustration of data imputation using an MVN. (a) Visualization of the data matrix. Blank entries are missing (not observed). Red are positive, green are negative. Area of the square is proportional to the value. (This is known as a Hinton diagram, named after Geoff Hinton, a famous ML researcher.) (b) True data matrix (hidden). (c) Mean of the posterior predictive distribution, based on partially observed data in that row, using the true model parameters.
Figure(s) generated by [gaussImputationDemo.py](https://github.com/probml/pyprobml/blob/master/scripts/gaussImputationDemo.py)
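The core of MVN-based imputation is Gaussian conditioning: the missing entries are filled with the conditional mean $\mu_m + \Sigma_{mo}\Sigma_{oo}^{-1}(y_o - \mu_o)$. A toy sketch with my own numbers (not the demo's parameters):

```python
import numpy as np

# Joint Gaussian over 3 variables; entries 0 and 2 observed, entry 1 missing.
mu = np.array([1.0, 2.0, 0.0])
Sigma = np.array([[1.0, 0.5, 0.2],
                  [0.5, 1.0, 0.3],
                  [0.2, 0.3, 1.0]])

obs_idx, mis_idx = [0, 2], [1]
y_obs = np.array([1.5, -0.5])

S_oo = Sigma[np.ix_(obs_idx, obs_idx)]
S_mo = Sigma[np.ix_(mis_idx, obs_idx)]
# Conditional (posterior predictive) mean of the missing entry
mu_cond = mu[mis_idx] + S_mo @ np.linalg.solve(S_oo, y_obs - mu[obs_idx])
print(mu_cond)
```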
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
google.colab.files.view("./gaussImputationDemo.py")
%run gaussImputationDemo.py
```
## Figure 3.8:<a name='3.8'></a> <a name='gauss2dupdate'></a>
Illustration of Bayesian inference for a 2d Gaussian random vector $\mathbf z $. (a) The data is generated from $\mathbf y _n \sim \mathcal N (\mathbf z ,\boldsymbol \Sigma _y)$, where $\mathbf z =[0.5, 0.5]^ \top $ and $\boldsymbol \Sigma _y=0.1 [2, 1; 1, 1]$. We assume the sensor noise covariance $\boldsymbol \Sigma _y$ is known but $\mathbf z $ is unknown. The black cross represents $\mathbf z $. (b) The prior is $p(\mathbf z ) = \mathcal N (\mathbf z |\boldsymbol 0 ,0.1 \mathbf I _2)$. (c) We show the posterior after 10 data points have been observed.
Figure(s) generated by [gaussInferParamsMean2d.m](https://github.com/probml/pmtk3/blob/master/demos/gaussInferParamsMean2d.m)
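The posterior in this figure follows the conjugate update for an unknown mean with known noise covariance: $\Sigma_N^{-1} = \Sigma_0^{-1} + N\Sigma_y^{-1}$ and $\mu_N = \Sigma_N(\Sigma_0^{-1}\mu_0 + \Sigma_y^{-1}\sum_n \mathbf y_n)$. A sketch using the caption's parameter values (the sampled data is of course my own draw):

```python
import numpy as np

rng = np.random.default_rng(0)
z_true = np.array([0.5, 0.5])
Sigma_y = 0.1 * np.array([[2.0, 1.0], [1.0, 1.0]])   # known sensor noise
Y = rng.multivariate_normal(z_true, Sigma_y, size=10)

mu0, Sigma0 = np.zeros(2), 0.1 * np.eye(2)            # prior N(0, 0.1 I)
Lam0, Lam_y = np.linalg.inv(Sigma0), np.linalg.inv(Sigma_y)
Sigma_N = np.linalg.inv(Lam0 + len(Y) * Lam_y)        # posterior covariance
mu_N = Sigma_N @ (Lam0 @ mu0 + Lam_y @ Y.sum(axis=0)) # posterior mean
print(mu_N)
```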
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
show_image("/pyprobml/book1/figures/images/Figure_3.8.png")
```
## Figure 3.9:<a name='3.9'></a> <a name='demoGaussBayes2dUnequal'></a>
We observe $\mathbf y _1=(0,-1)$ (red cross) and $\mathbf y _2=(1,0)$ (green cross) and estimate $\mathbb E \left [ \mathbf z |\mathbf y _1,\mathbf y _2 \right ]$ (black cross). (a) Equally reliable sensors, so the posterior mean estimate is in between the two circles. (b) Sensor 2 is more reliable, so the estimate shifts more towards the green circle. (c) Sensor 1 is more reliable in the vertical direction; Sensor 2 is more reliable in the horizontal direction. The estimate is an appropriate combination of the two measurements.
Figure(s) generated by [sensor_fusion_2d.py](https://github.com/probml/pyprobml/blob/master/scripts/sensor_fusion_2d.py)
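With a flat prior, the fused estimate is the precision-weighted average $\hat{\mathbf z} = (\Lambda_1 + \Lambda_2)^{-1}(\Lambda_1 \mathbf y_1 + \Lambda_2 \mathbf y_2)$. A sketch of panel (a) with invented (equal) sensor covariances:

```python
import numpy as np

y1, y2 = np.array([0.0, -1.0]), np.array([1.0, 0.0])

Sigma1 = 0.01 * np.eye(2)            # sensor 1 noise covariance (assumed)
Sigma2 = 0.01 * np.eye(2)            # sensor 2, equally reliable
Lam1, Lam2 = np.linalg.inv(Sigma1), np.linalg.inv(Sigma2)
# Precision-weighted combination; equal precisions give the midpoint.
z_hat = np.linalg.solve(Lam1 + Lam2, Lam1 @ y1 + Lam2 @ y2)
print(z_hat)
```

Making `Sigma2` smaller than `Sigma1` pulls the estimate toward $\mathbf y_2$, as in panel (b).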
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
google.colab.files.view("./sensor_fusion_2d.py")
%run sensor_fusion_2d.py
```
## Figure 3.10:<a name='3.10'></a> <a name='gmm3'></a>
A mixture of 3 Gaussians in 2d. (a) We show the contours of constant probability for each component in the mixture. (b) A surface plot of the overall density. Adapted from Figure 2.23 of <a href='#BishopBook'>[Bis06]</a> .
Figure(s) generated by [mixGaussPlotDemo.py](https://github.com/probml/pyprobml/blob/master/scripts/mixGaussPlotDemo.py)
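The surface in (b) is the mixture density $p(\mathbf y) = \sum_k \pi_k \, \mathcal N(\mathbf y|\boldsymbol\mu_k, \boldsymbol\Sigma_k)$. A sketch that evaluates such a density directly, with made-up parameters for three 2d components (not the demo's values):

```python
import numpy as np

def mvn_pdf(X, mu, Sigma):
    # 2d Gaussian density at each row of X.
    d = X - mu
    Sinv = np.linalg.inv(Sigma)
    norm = 1.0 / (2 * np.pi * np.sqrt(np.linalg.det(Sigma)))
    return norm * np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, Sinv, d))

pis = np.array([0.5, 0.3, 0.2])                       # mixing weights
mus = [np.array([0.0, 0.0]), np.array([3.0, 3.0]), np.array([-3.0, 2.0])]
Sigmas = [np.eye(2), np.diag([1.0, 0.5]), np.array([[1.0, 0.6], [0.6, 1.0]])]

X = np.array([[0.0, 0.0], [3.0, 3.0]])
# Mixture density: weighted sum of the component densities.
p = sum(pi * mvn_pdf(X, mu, S) for pi, mu, S in zip(pis, mus, Sigmas))
print(p)
```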
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
google.colab.files.view("./mixGaussPlotDemo.py")
%run mixGaussPlotDemo.py
```
## Figure 3.11:<a name='3.11'></a> <a name='gmm2d'></a>
(a) Some data in 2d. (b) A possible clustering using $K=3$ clusters computed using a GMM.
Figure(s) generated by [gmm_2d.py](https://github.com/probml/pyprobml/blob/master/scripts/gmm_2d.py)
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
google.colab.files.view("./gmm_2d.py")
%run gmm_2d.py
```
## Figure 3.12:<a name='3.12'></a> <a name='mixBernoulliMnist'></a>
We fit a mixture of 20 Bernoullis to the binarized MNIST digit data. We visualize the estimated cluster means $ \boldsymbol \mu _k$. The numbers on top of each image represent the estimated mixing weights $ \pi _k$. No labels were used when training the model.
Figure(s) generated by [mixBerMnistEM.py](https://github.com/probml/pyprobml/blob/master/scripts/mixBerMnistEM.py)
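The E-step of EM for a Bernoulli mixture computes responsibilities $r_{nk} \propto \pi_k \prod_d \mu_{kd}^{x_{nd}}(1-\mu_{kd})^{1-x_{nd}}$, done in log space for stability. A minimal sketch with random placeholder parameters (not the script's fitted model):

```python
import numpy as np

rng = np.random.default_rng(0)
K, D, N = 3, 8, 5
pis = np.full(K, 1.0 / K)                    # uniform mixing weights
mus = rng.uniform(0.05, 0.95, size=(K, D))   # Bernoulli means per cluster
X = rng.integers(0, 2, size=(N, D))          # binarized data

# Log joint log pi_k + sum_d [x log mu + (1-x) log(1-mu)] for each n, k
log_p = (np.log(pis)[None, :]
         + X @ np.log(mus).T + (1 - X) @ np.log(1 - mus).T)
# Normalize per row (log-sum-exp trick) to get responsibilities
r = np.exp(log_p - log_p.max(axis=1, keepdims=True))
r /= r.sum(axis=1, keepdims=True)
```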
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
google.colab.files.view("./mixBerMnistEM.py")
%run mixBerMnistEM.py
```
## Figure 3.13:<a name='3.13'></a> <a name='sprinkler'></a>
Water sprinkler PGM with corresponding binary CPTs. T and F stand for true and false.
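Inference in a small PGM like this can be done by brute-force enumeration over the CPTs. A sketch computing P(Rain | WetGrass) with illustrative placeholder probabilities (not necessarily the figure's numbers):

```python
# CPTs for Cloudy, Sprinkler|Cloudy, Rain|Cloudy, WetGrass|Sprinkler,Rain
P_c = {True: 0.5, False: 0.5}
P_s = {True: {True: 0.1, False: 0.9}, False: {True: 0.5, False: 0.5}}
P_r = {True: {True: 0.8, False: 0.2}, False: {True: 0.2, False: 0.8}}
P_w = {(True, True): 0.99, (True, False): 0.9,
       (False, True): 0.9, (False, False): 0.0}   # P(W=T | S, R)

def joint(c, s, r, w):
    # Chain rule for the DAG: P(C) P(S|C) P(R|C) P(W|S,R)
    pw = P_w[(s, r)] if w else 1 - P_w[(s, r)]
    return P_c[c] * P_s[c][s] * P_r[c][r] * pw

bools = (True, False)
num = sum(joint(c, s, True, True) for c in bools for s in bools)
den = sum(joint(c, s, r, True) for c in bools for s in bools for r in bools)
print(num / den)   # P(Rain=T | WetGrass=T)
```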
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
show_image("/pyprobml/book1/figures/images/Figure_3.13.png")
```
## Figure 3.14:<a name='3.14'></a> <a name='ARDGM'></a>
Illustration of first and second order autoregressive (Markov) models.
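Sampling from such models makes the Markov structure concrete: each value depends only on the previous one (first order) or previous two (second order). A sketch with my own illustrative coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200

# AR(1): x_t = 0.9 x_{t-1} + eps_t
x1 = np.zeros(T)
for t in range(1, T):
    x1[t] = 0.9 * x1[t - 1] + rng.normal(0, 1)

# AR(2): x_t = 0.5 x_{t-1} + 0.3 x_{t-2} + eps_t
x2 = np.zeros(T)
for t in range(2, T):
    x2[t] = 0.5 * x2[t - 1] + 0.3 * x2[t - 2] + rng.normal(0, 1)
```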
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
show_image("/pyprobml/book1/figures/images/Figure_3.14_A.png")
show_image("/pyprobml/book1/figures/images/Figure_3.14_B.png")
```
## Figure 3.15:<a name='3.15'></a> <a name='platesIID'></a>
Left: data points $\mathbf y _n$ are conditionally independent given $\boldsymbol \theta $. Right: Same model, using plate notation. This represents the same model as the one on the left, except the repeated $\mathbf y _n$ nodes are inside a box, known as a plate; the number in the lower right hand corner, $N$, specifies the number of repetitions of the $\mathbf y _n$ node.
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
show_image("/pyprobml/book1/figures/images/Figure_3.15.png")
```
## Figure 3.16:<a name='3.16'></a> <a name='gmmDgm'></a>
A Gaussian mixture model represented as a graphical model.
```
#@title Click me to run setup { display-mode: "form" }
try:
if PYPROBML_SETUP_ALREADY_RUN:
print('skipping setup')
except:
PYPROBML_SETUP_ALREADY_RUN = True
print('running setup...')
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null
%cd -q /pyprobml/scripts
import pyprobml_utils as pml
import colab_utils
import os
os.environ["PYPROBML"] = ".." # one above current scripts directory
import google.colab
from google.colab.patches import cv2_imshow
%reload_ext autoreload
%autoreload 2
def show_image(img_path,size=None,ratio=None):
img = colab_utils.image_resize(img_path, size)
cv2_imshow(img)
print('finished!')
show_image("/pyprobml/book1/figures/images/Figure_3.16.png")
```
## References:
<a name='BishopBook'>[Bis06]</a> C. Bishop "Pattern recognition and machine learning". (2006).
<a name='VonKugelgen2020'>[JLB20]</a> J. von Kügelgen, L. Gresele and B. Schölkopf. "Simpson's paradox in COVID-19 case fatality rates: a mediation analysis of age-related causal effects". arXiv: 2005.07180 (2020).
```
import boto3
from IPython.display import Image, display
from trp import Document
from PIL import Image as PImage, ImageDraw
import time
from IPython.display import IFrame
```
# In this section, we will take a deep dive into the Amazon Textract APIs and their features.
Amazon Textract includes simple, easy-to-use APIs that can analyze image files and PDF files.
The Amazon Textract APIs can be classified into synchronous APIs for real-time processing and asynchronous APIs for batch processing. We will look at each in turn:
- Synchronous APIs (real-time processing use cases)
- Asynchronous APIs (batch processing use cases)

Synchronous APIs (real-time processing use cases): there are two APIs that can help with real-time analysis:
- Detect Document Text API
- Analyze Document API
```
# Current AWS Region. Use this to choose the corresponding S3 bucket with sample content
mySession = boto3.session.Session()
awsRegion = mySession.region_name
# S3 bucket that contains sample documents. Download the sample documents, create an Amazon S3 bucket, and upload the documents to it
s3BucketName = "<enter amazon s3 bucket>"
# Amazon S3 client
s3 = boto3.client('s3')
# Amazon Textract client
textract = boto3.client('textract')
```
# 1. Detect text from image with
https://docs.aws.amazon.com/textract/latest/dg/API_DetectDocumentText.html
```
# Document
documentName = "sample-invoice.png"
display(Image(filename=documentName))
# Read document content
with open(documentName, 'rb') as document:
    imageBytes = bytearray(document.read())
# Call Amazon Textract
response = textract.detect_document_text(Document={'Bytes': imageBytes})
import json
print (json.dumps(response, indent=4, sort_keys=True))
```
# 2. Detect text from S3 object
https://docs.aws.amazon.com/textract/latest/dg/API_DetectDocumentText.html
## Lines and Words of Text - JSON Structure
https://docs.aws.amazon.com/textract/latest/dg/API_BoundingBox.html
https://docs.aws.amazon.com/textract/latest/dg/text-location.html
https://docs.aws.amazon.com/textract/latest/dg/how-it-works-lines-words.html
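To see the JSON structure the links above describe, here is a minimal sketch that walks the `Blocks` list of a response. The field names follow the documented Block structure; the response itself is a hand-made mock rather than a live API call:

```python
# LINE blocks carry the recognized text plus a bounding box in
# relative page coordinates; WORD blocks are the children of lines.
mock_response = {
    "Blocks": [
        {"BlockType": "PAGE", "Id": "p1"},
        {"BlockType": "LINE", "Text": "Hello world",
         "Confidence": 99.1,
         "Geometry": {"BoundingBox": {"Left": 0.1, "Top": 0.1,
                                      "Width": 0.3, "Height": 0.05}}},
        {"BlockType": "WORD", "Text": "Hello", "Confidence": 99.3},
    ]
}

lines = [(b["Text"], b["Geometry"]["BoundingBox"])
         for b in mock_response["Blocks"] if b["BlockType"] == "LINE"]
for text, box in lines:
    print(f"{text!r} at left={box['Left']:.2f}, top={box['Top']:.2f}")
```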
```
# Reading order
# Document
documentName = "two-column-image.jpg"
display(Image(url=s3.generate_presigned_url('get_object', Params={'Bucket': s3BucketName, 'Key': documentName})))
# Call Amazon Textract
response = textract.detect_document_text(
    Document={
        'S3Object': {
            'Bucket': s3BucketName,
            'Name': documentName
        }
    })
print(response)
# Use trp.py to parse the JSON into reading order
doc = Document(response)
for page in doc.pages:
    for line in page.getLinesInReadingOrder():
        print(line[1])
```
# Analyze Document API for tables and Forms: Key/Values
https://docs.aws.amazon.com/textract/latest/dg/API_AnalyzeDocument.html
```
# Document
documentName = "sample-invoice.png"
display(Image(url=s3.generate_presigned_url('get_object', Params={'Bucket': s3BucketName, 'Key': documentName})))
# Call Amazon Textract
response = textract.analyze_document(
    Document={
        'S3Object': {
            'Bucket': s3BucketName,
            'Name': documentName
        }
    },
    FeatureTypes=["FORMS", "TABLES"])
#print(response)

doc = Document(response)
for page in doc.pages:
    # Print fields
    print("Fields:")
    for field in page.form.fields:
        print("Key: {}, Value: {}".format(field.key, field.value))

    # Get field by key
    print("\nGet Field by Key:")
    key = "Phone Number:"
    field = page.form.getFieldByKey(key)
    if field:
        print("Key: {}, Value: {}".format(field.key, field.value))

    # Search fields by key
    print("\nSearch Fields:")
    key = "address"
    fields = page.form.searchFieldsByKey(key)
    for field in fields:
        print("Key: {}, Value: {}".format(field.key, field.value))

doc = Document(response)
for page in doc.pages:
    # Print tables
    for table in page.tables:
        for r, row in enumerate(table.rows):
            for c, cell in enumerate(row.cells):
                print("Table[{}][{}] = {}".format(r, c, cell.text))
```
# 12. PDF Processing
https://docs.aws.amazon.com/textract/latest/dg/API_StartDocumentTextDetection.html
https://docs.aws.amazon.com/textract/latest/dg/API_GetDocumentTextDetection.html
https://docs.aws.amazon.com/textract/latest/dg/API_StartDocumentAnalysis.html
https://docs.aws.amazon.com/textract/latest/dg/API_GetDocumentAnalysis.html
```
def startJob(s3BucketName, objectName):
    response = textract.start_document_text_detection(
        DocumentLocation={
            'S3Object': {
                'Bucket': s3BucketName,
                'Name': objectName
            }
        })
    return response["JobId"]

def isJobComplete(jobId):
    response = textract.get_document_text_detection(JobId=jobId)
    status = response["JobStatus"]
    print("Job status: {}".format(status))
    while status == "IN_PROGRESS":
        time.sleep(5)
        response = textract.get_document_text_detection(JobId=jobId)
        status = response["JobStatus"]
        print("Job status: {}".format(status))
    return status

def getJobResults(jobId):
    pages = []
    response = textract.get_document_text_detection(JobId=jobId)
    pages.append(response)
    print("Resultset page received: {}".format(len(pages)))
    nextToken = response.get('NextToken')
    while nextToken:
        response = textract.get_document_text_detection(JobId=jobId, NextToken=nextToken)
        pages.append(response)
        print("Resultset page received: {}".format(len(pages)))
        nextToken = response.get('NextToken')
    return pages

# Document
documentName = "job-application-form.pdf"

jobId = startJob(s3BucketName, documentName)
print("Started job with id: {}".format(jobId))
# Note: the job status can also be "FAILED", so check for success explicitly
if isJobComplete(jobId) == "SUCCEEDED":
    response = getJobResults(jobId)

#print(response)
doc = Document(response)

# Print detected text
for page in doc.pages:
    for line in page.getLinesInReadingOrder():
        print(line[1])
```
# Machine Learning Engineer Nanodegree
## Deep Learning
## Project: Build a Digit Recognition Program
In this notebook, a template is provided for you to implement, in stages, the functionality required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission. Sections that begin with **'Implementation'** in the header indicate where you should begin the implementation for your project. Note that some sections of the implementation are optional and will be marked with **'Optional'** in the header.
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited, typically by double-clicking the cell to enter edit mode.
----
## Step 1: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize sequences of digits. Train the model using synthetic data generated by concatenating character images from [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) or [MNIST](http://yann.lecun.com/exdb/mnist/). To produce a synthetic sequence of digits for testing, you can for example limit yourself to sequences up to five digits, and use five classifiers on top of your deep network. You would have to incorporate an additional ‘blank’ character to account for shorter number sequences.
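One concrete way to handle the extra 'blank' character mentioned above is to give every sequence a fixed number of label slots and pad the unused slots with a blank class. A small sketch of such an encoding (my own layout, one of several reasonable choices; the notebook's own implementation appears later):

```python
import numpy as np

# Fixed-length one-hot encoding for variable-length digit sequences:
# class index 0 is "blank", digits 0-9 occupy indices 1-10.
MAX_LEN, NUM_CLASSES = 5, 11

def encode_sequence(digits):
    labels = np.zeros((NUM_CLASSES, MAX_LEN), dtype=np.int8)
    for pos, d in enumerate(digits):
        labels[d + 1, pos] = 1          # shift digits past the blank class
    for pos in range(len(digits), MAX_LEN):
        labels[0, pos] = 1              # pad the tail with blanks
    return labels

lab = encode_sequence([3, 0, 7])        # sequence "307" padded to length 5
```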
There are various aspects to consider when thinking about this problem:
- Your model can be derived from a deep neural net or a convolutional network.
- You could experiment sharing or not the weights between the softmax classifiers.
- You can also use a recurrent network in your deep neural net to replace the classification layers and directly emit the sequence of digits one-at-a-time.
Here is an example of a [published baseline model on this problem](http://static.googleusercontent.com/media/research.google.com/en//pubs/archive/42241.pdf). ([video](https://www.youtube.com/watch?v=vGPI_JvLoN0))
### Implementation
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
```
from keras.datasets import mnist
import keras.models #import Model, load_model
from keras.layers import Dense, Dropout, Activation, Flatten, merge
from keras.layers import Input, Convolution2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.layers.local import LocallyConnected2D
from keras.utils.np_utils import to_categorical
import os
import sys
import tarfile
import numpy as np
from six.moves import cPickle as pickle
from six.moves.urllib.request import urlretrieve
from PIL import Image
from scipy.misc import imresize
from scipy import ndimage
import matplotlib.pyplot as plt
import h5py
import operator
%matplotlib inline
def getAccuracy(clf, dataset, sequences, labels, fiveDigit=True):
    pred = clf.predict(dataset)
    acc = 0
    for sample in range(len(sequences)):
        pred_length = pred[0][sample]
        index0, value0 = max(enumerate(pred_length), key=operator.itemgetter(1))
        pred_d1 = pred[1][sample]
        index1, value1 = max(enumerate(pred_d1), key=operator.itemgetter(1))
        pred_d2 = pred[2][sample]
        index2, value2 = max(enumerate(pred_d2), key=operator.itemgetter(1))
        pred_d3 = pred[3][sample]
        index3, value3 = max(enumerate(pred_d3), key=operator.itemgetter(1))
        pred_d4 = pred[4][sample]
        index4, value4 = max(enumerate(pred_d4), key=operator.itemgetter(1))
        if fiveDigit:
            pred_d5 = pred[5][sample]
            index5, value5 = max(enumerate(pred_d5), key=operator.itemgetter(1))
        if fiveDigit and (sequences[sample, index0] == 1 and labels[sample, index1, 0] == 1 and
                          labels[sample, index2, 1] == 1 and labels[sample, index3, 2] == 1 and
                          labels[sample, index4, 3] == 1 and labels[sample, index5, 4] == 1):
            acc += 1
        if not fiveDigit and (sequences[sample, index0] == 1 and labels[sample, index1, 0] == 1 and
                              labels[sample, index2, 1] == 1 and labels[sample, index3, 2] == 1 and
                              labels[sample, index4, 3] == 1):
            acc += 1
    return 1. * acc / len(sequences)
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
image = np.array(X_train[np.random.randint(X_train.shape[0]), :, :])
plt.figure()
plt.imshow(image)
# Don't run unless generating new data!
num_labels = 11 # 0-9 + blank
image_size = 28
new_image_size = 28
def seqLength(n):
    # Number of digits in n (e.g. seqLength(10) == 2); uses integer
    # division so the recursion also works under Python 3.
    s = 1
    if n >= 10:
        s += seqLength(n // 10)
    return s
def generateNewDataset(source_data, source_labels, sequences, max_sequence_length=5, option=1, insert_blanks=True):
    new_dataset = np.ndarray((sequences, 1, new_image_size, new_image_size), dtype=np.int8)
    new_labels = np.ndarray((sequences, num_labels, max_sequence_length), dtype=np.int8)
    sequence_lengths = np.ndarray((sequences, max_sequence_length), dtype=np.int8)
    # Support for option 2 below
    if option == 2:
        random_lengths = np.random.randint(1, max_sequence_length + 1, sequences)
    for sequence in range(sequences):
        # Option 1: Equal numbers of cases per class, except sequence_length
        if option == 1:
            number = np.random.randint(10 ** max_sequence_length)
            sequence_length = seqLength(number)
        # Option 2: Equal numbers of cases for each sequence_length - and each resized number
        if option == 2:
            sequence_length = random_lengths[sequence]
        # Save sequence length to return variable
        sequence_lengths[sequence, :] = 0
        sequence_lengths[sequence, sequence_length - 1] = 1
        # Randomly select samples from source data
        sample_indices = []
        for bar in range(sequence_length):
            sample_indices.append(np.random.randint(source_data.shape[0]))
        # Save label data from sources
        new_labels[sequence, :, :] = 0
        new_labels[sequence, 0, sequence_length:max_sequence_length] = 1  # Label "blank" classes
        for digit in range(sequence_length):
            new_labels[sequence, source_labels[sample_indices[digit]] + 1, digit] = 1
        # Pull the images from the original sources and concatenate
        sample = np.matrix(source_data[sample_indices[0], :, :])
        for character in sample_indices[1:]:
            sample = np.concatenate((sample, np.matrix(source_data[character, :, :])), axis=1)
        # Insert blanks to keep digits aligned
        if insert_blanks:
            for bar in range(max_sequence_length - sequence_length):
                sample = np.concatenate((sample, np.matrix([[0] * image_size] * image_size)), axis=1)
        # Resize sequence image to constant width
        new_image = imresize(sample, (new_image_size, new_image_size), interp='bilinear')
        # Append current sample to new data and label sets
        new_dataset[sequence, 0, :, :] = new_image
    return new_dataset, new_labels, sequence_lengths
new_train_dataset, new_train_labels, train_sequences = generateNewDataset(X_train, y_train, 5000000, 5, 1, True)
new_test_dataset, new_test_labels, test_sequences = generateNewDataset(X_test, y_test, 30000, 5, 1, True)
print("Generating sequence data complete.")
print(new_train_dataset.shape, new_train_labels.shape, train_sequences.shape)
print(new_test_dataset.shape, new_test_labels.shape, test_sequences.shape)
print(np.sum(new_train_labels, axis=0)) # Check distribution of classes
print("\n")
print(np.sum(train_sequences, axis=0)) # Check distribution of sequence lengths
image = np.array(new_train_dataset[np.random.randint(new_train_dataset.shape[0]), 0, :, :])
plt.figure()
plt.imshow(image)
```
### Question 1
_What approach did you take in coming up with a solution to this problem?_
**Answer:**
### Question 2
_What does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.)_
**Answer:**
### Question 3
_How did you train your model? How did you generate your synthetic dataset?_ Include examples of images from the synthetic data you constructed.
**Answer:**
----
## Step 2: Train a Model on a Realistic Dataset
Once you have settled on a good architecture, you can train your model on real data. In particular, the [Street View House Numbers (SVHN)](http://ufldl.stanford.edu/housenumbers/) dataset is a good large-scale dataset collected from house numbers in Google Street View. Training on this more challenging dataset, where the digits are not neatly lined-up and have various skews, fonts and colors, likely means you have to do some hyperparameter exploration to perform well.
### Implementation
Use the code cell (or multiple code cells, if necessary) to implement this step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
```
### Your code implementation goes here.
### Feel free to use as many code cells as needed.
```
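As one possible starting point for loading SVHN (a sketch, not part of the original template; the helper name and preprocessing choices are illustrative): the cropped-digit "Format 2" `.mat` files store images as `X` with shape `(32, 32, 3, N)` and labels `y` in `1..10`, where the label 10 encodes the digit 0.

```
import numpy as np

def prepare_svhn(X, y):
    """Reorder images to (N, 32, 32, 3), scale to [0, 1] and map label 10 -> digit 0."""
    X = np.transpose(X, (3, 0, 1, 2)).astype(np.float32) / 255.0
    y = y.flatten() % 10  # 10 -> 0, all other labels unchanged
    return X, y

# Usage (assumes train_32x32.mat was downloaded from the SVHN page):
# from scipy.io import loadmat
# mat = loadmat("train_32x32.mat")
# X_train, y_train = prepare_svhn(mat["X"], mat["y"])
```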
### Question 4
_Describe how you set up the training and testing data for your model. How does the model perform on a realistic dataset?_
**Answer:**
### Question 5
_What changes did you have to make, if any, to achieve "good" results? Were there any options you explored that made the results worse?_
**Answer:**
### Question 6
_What were your initial and final results with testing on a realistic dataset? Do you believe your model is doing a good enough job at classifying numbers correctly?_
**Answer:**
----
## Step 3: Test a Model on Newly-Captured Images
Take several pictures of numbers that you find around you (at least five), and run them through your classifier on your computer to produce example results. Alternatively (optionally), you can try using OpenCV / SimpleCV / Pygame to capture live images from a webcam and run those through your classifier.
### Implementation
Use the code cell (or multiple code cells, if necessary) to implement this step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
```
### Your code implementation goes here.
### Feel free to use as many code cells as needed.
```
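One illustrative helper for this step (a sketch, not from the original template; the function name and the 64×64 input size are assumptions, since the real size depends on your trained model) that converts a captured RGB photo into the grayscale, normalized square array a classifier typically expects:

```
import numpy as np

def preprocess_capture(rgb, size=64):
    """Grayscale + center-crop + naive resize for a captured photo (illustrative)."""
    # Luminance weights for RGB -> gray
    gray = rgb.astype(np.float32) @ np.array([0.299, 0.587, 0.114], dtype=np.float32)
    h, w = gray.shape
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    crop = gray[top:top + side, left:left + side]
    # Nearest-neighbour resampling keeps the sketch dependency-free
    idx = np.arange(size) * side // size
    return crop[np.ix_(idx, idx)] / 255.0
```

In practice a bilinear resize (e.g. via Pillow) would give smoother digits than the nearest-neighbour sampling used here.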
### Question 7
_Choose five candidate images of numbers you took from around you and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult?_
**Answer:**
### Question 8
_Is your model able to perform equally well on captured pictures or a live camera stream when compared to testing on the realistic dataset?_
**Answer:**
### Optional: Question 9
_If necessary, provide documentation for how an interface was built for your model to load and classify newly-acquired images._
**Answer:** Leave blank if you did not complete this part.
----
## Step 4: Explore an Improvement for a Model
There are many things you can do once you have the basic classifier in place. One example would be to also localize where the numbers are in the image. The SVHN dataset provides bounding boxes that you can use to train a localizer: train with a regression loss on the coordinates of the bounding box, and then test it.
### Implementation
Use the code cell (or multiple code cells, if necessary) to implement this step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
```
### Your code implementation goes here.
### Feel free to use as many code cells as needed.
```
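For evaluating a localizer like the one described above, intersection-over-union (IoU) is the standard measure of bounding-box overlap. A minimal sketch (the function name is illustrative; boxes are given as `(x1, y1, x2, y2)` corner coordinates):

```
def bbox_iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

An IoU threshold (commonly 0.5) then turns each predicted box into a hit or a miss when scoring the localizer on the test set.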
### Question 10
_How well does your model localize numbers on the testing set from the realistic dataset? Do your classification results change at all with localization included?_
**Answer:**
### Question 11
_Test the localization function on the images you captured in **Step 3**. Does the model accurately calculate a bounding box for the numbers in the images you found? If you did not use a graphical interface, you may need to investigate the bounding boxes by hand._ Provide an example of the localization created on a captured image.
**Answer:**
----
## Optional Step 5: Build an Application or Program for a Model
Take your project one step further. If you're interested, look to build an Android application or even a more robust Python program that can interface with input images and display the classified numbers and even the bounding boxes. You can for example try to build an augmented reality app by overlaying your answer on the image like the [Word Lens](https://en.wikipedia.org/wiki/Word_Lens) app does.
Loading a TensorFlow model into a camera app on Android is demonstrated in the [TensorFlow Android demo app](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android), which you can simply modify.
If you decide to explore this optional route, be sure to document your interface and implementation, along with significant results you find. You can see the additional rubric items that you could be evaluated on by [following this link](https://review.udacity.com/#!/rubrics/413/view).
### Optional Implementation
Use the code cell (or multiple code cells, if necessary) to implement this optional step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.
```
### Your optional code implementation goes here.
### Feel free to use as many code cells as needed.
```
### Documentation
Provide additional documentation sufficient for detailing the implementation of the Android application or Python program for visualizing the classification of numbers in images. It should be clear how the program or application works. Demonstrations should be provided.
_Write your documentation here._
> **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to
**File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
# Building a bulk system
Let's build an Fe bulk with a BCC structure using a Python script.
Import the required libraries
* [numpy](http://www.numpy.org/) handles numeric arrays and mathematical operations.
* [product](https://docs.python.org/3.7/library/itertools.html#itertools.product) returns the Cartesian product of input iterables.
* [matplotlib](https://matplotlib.org/) produces figures.
* [defaultdict](https://docs.python.org/3.7/library/collections.html#collections.defaultdict) is a dictionary that supplies a default *value* for missing keys.
* [Axes3D](https://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html) produces 3D graphics using matplotlib.
```
import numpy
from itertools import product
from matplotlib import pyplot
from collections import defaultdict
from mpl_toolkits.mplot3d import Axes3D
```
Define the edge length of the bulk (```L```) measured in magnetic unit cells (muc).
```
L = 5 # muc
```
Define the spin value (```spin```), its magnetic moment (```mu```), the anisotropy constant (```kan```) and the nearest neighbors exchange interaction (```jex```). These values were reported [elsewhere](http://iopscience.iop.org/article/10.1088/0953-8984/26/10/103202/meta). In order to use coherent units, ```kan``` and ```jex``` are given both in units of energy (meV), while ```mu``` is given in units of energy/magnetic field (meV/T). Then, the Boltzmann constant must be given in units of energy/temperature (meV/K) in the JSON input file. Finally, define the [spin update policy](https://pcm-ca.github.io/vegas/spin-update-policies/) for the direction of the magnetic moments.
```
mub = 5.788e-2 # meV/T
spin = 1.0
mu = 2.22 * mub
kan = 3.53e-3 # meV/atom
jex = 44.01 # meV/link
update_policy = "adaptive"
```
Create a list of sites.
```
sites = list()
for site in product(range(0, L), range(0, L), range(0, L)):
    sites.append(site)
    x, y, z = site
    sites.append((x + 0.5, y + 0.5, z + 0.5))
```
Convert the previous list to a numpy array.
```
positions = numpy.array(sites)
```
Generate a 3D graphic of the Fe bulk.
```
fig = pyplot.figure(figsize=(8, 8))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(positions[:, 0], positions[:, 1], positions[:, 2],
s=100, color="silver", edgecolor="black")
ax.grid()
ax.set_xlabel(r"$x$", fontsize=20)
ax.set_ylabel(r"$y$", fontsize=20)
ax.set_zlabel(r"$z$", fontsize=20)
pyplot.show()
```
Generate graphics of $xy$, $xz$ and $yz$ plane cross-sections to better visualize the BCC structure.
```
fig = pyplot.figure(figsize=(8, 8))
ax1 = fig.add_subplot(131)
ax1.scatter(positions[:, 0], positions[:, 1],
s=100, color="silver", edgecolor="black")
ax1.grid()
ax1.set_xlabel(r"$x$", fontsize=20)
ax1.set_ylabel(r"$y$", fontsize=20)
ax1.set_aspect("equal")
ax2 = fig.add_subplot(132)
ax2.scatter(positions[:, 0], positions[:, 2],
s=100, color="silver", edgecolor="black")
ax2.grid()
ax2.set_xlabel(r"$x$", fontsize=20)
ax2.set_ylabel(r"$z$", fontsize=20)
ax2.set_aspect("equal")
ax3 = fig.add_subplot(133)
ax3.scatter(positions[:, 1], positions[:, 2],
s=100, color="silver", edgecolor="black")
ax3.grid()
ax3.set_xlabel(r"$y$", fontsize=20)
ax3.set_ylabel(r"$z$", fontsize=20)
ax3.set_aspect("equal")
pyplot.tight_layout()
pyplot.show()
```
Identify the neighbors of each magnetic moment and store them in a dictionary. Periodic boundary conditions are established.
```
nhbs = defaultdict(list)
for site in sites:
    x, y, z = site
    for dx, dy, dz in [(0.5, 0.5, 0.5),
                       (0.5, 0.5, -0.5),
                       (0.5, -0.5, 0.5),
                       (0.5, -0.5, -0.5),
                       (-0.5, 0.5, 0.5),
                       (-0.5, 0.5, -0.5),
                       (-0.5, -0.5, 0.5),
                       (-0.5, -0.5, -0.5)]:
        nhb = ((x + dx) % L, (y + dy) % L, (z + dz) % L)
        if nhb in sites:
            nhbs[site].append(nhb)
```
Make some verifications: that each site has $8$ neighbors, that the neighbors of each site are $\dfrac{\sqrt{3}}{2}$ muc away, and that each site is in the neighbors list of each of its neighbors. If two sites are neighbors because of the periodic boundary conditions, they are not $\dfrac{\sqrt{3}}{2}$ muc away. However, one of the $27$ replicas must be this distance away.
```
for site in sites:
    assert len(nhbs[site]) == 8
    for nhb in nhbs[site]:
        assert site in nhbs[nhb]
        dists = list()
        for x in [-L, 0, L]:
            for y in [-L, 0, L]:
                for z in [-L, 0, L]:
                    dists.append(numpy.linalg.norm(numpy.array(site) + (x, y, z) - numpy.array(nhb)))
        assert (27 - numpy.count_nonzero(numpy.array(dists) - (numpy.sqrt(3) / 2)) == 1)
```
Create a dictionary to identify the type of each site, which in this case is Fe for all.
```
types = dict()
for site in sites:
    types[site] = "Fe"
```
Define the anisotropy axes. It is enough to just define two axes, the third one is computed to produce cubic anisotropy. Also, define the direction of the external magnetic field for each site, which in this case is the $+z$ direction for all the ions. The external magnetic field is multiplied by the magnetic moment of Fe. Then, the magnetic field values in the JSON input file must be given in Teslas.
```
anisotropy_axis = dict()
field_axis = dict()
for site in sites:
    anisotropy_axis[site] = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
    field_axis[site] = (0.0, 0.0, mu)
```
Count the number of interactions, which equals the sum over all sites of the number of neighbors, and the number of sites, which is the length of the list of sites.
```
num_interactions = 0
for site in sites:
    num_interactions += len(nhbs[site])
num_sites = len(sites)
```
Create the files to store the structural properties (```sample.dat```) and the anisotropy (```anisotropy.dat```) of the sample.
```
sample_file = open("sample.dat", mode="w")
anisotropy_file = open("anisotropy.dat", mode="w")
```
Write in the first line of ```sample_file``` the number of sites, interactions and types:
```
sample_file.write("{} {} {}\n".format(num_sites, num_interactions, len(set(types.values()))))
print(num_sites, num_interactions, len(set(types.values())))
```
Write each ion type on its own line.
```
for t in sorted(set(types.values())):
    sample_file.write("{}\n".format(t))
    print(t)
```
Write the parameters of each site according to the established [format](/vegas/system-building/).
```
for site in sites:
    i = sites.index(site)
    t = types[site]
    sample_file.write("{} {} {} {} {} {} {} {} {} {}\n".format(
        i, *site, spin, *field_axis[site], t, update_policy))
    anisotropy_file.write("{} {} {} {} {} {} {}\n".format(
        *anisotropy_axis[site][0],
        *anisotropy_axis[site][1],
        kan))
```
Write the exchange interactions between every pair of neighbors.
```
for site in sites:
    t = types[site]
    for nhb in nhbs[site]:
        nhb_t = types[nhb]
        sample_file.write("{} {} {}\n".format(
            sites.index(site), sites.index(nhb), jex))
```
Close the files.
```
sample_file.close()
anisotropy_file.close()
```
The result of this script is the creation of two files: ```sample.dat``` and ```anisotropy.dat```, which store the structural properties and the anisotropy of the Fe bulk system.
#### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Classification on imbalanced data
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/structured_data/imbalanced_data"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial demonstrates how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in another. You will work with the [Credit Card Fraud Detection](https://www.kaggle.com/mlg-ulb/creditcardfraud) dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use [Keras](../../guide/keras/overview.ipynb) to define the model and [class weights](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model) to help the model learn from the imbalanced data.
This tutorial contains complete code to:
* Load a CSV file using Pandas.
* Create train, validation, and test sets.
* Define and train a model using Keras (including setting class weights).
* Evaluate the model using various metrics (including precision and recall).
* Try common techniques for dealing with imbalanced data like:
* Class weighting
* Oversampling
## Setup
```
import tensorflow as tf
from tensorflow import keras
import os
import tempfile
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
```
## Data processing and exploration
### Download the Kaggle Credit Card Fraud data set
Pandas is a Python library with many helpful utilities for loading and working with structured data and can be used to download CSVs into a dataframe.
Note: This dataset has been collected and analysed during a research collaboration of Worldline and the [Machine Learning Group](http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available [here](https://www.researchgate.net/project/Fraud-detection-5) and the page of the [DefeatFraud](https://mlg.ulb.ac.be/wordpress/portfolio_page/defeatfraud-assessment-and-validation-of-deep-feature-engineering-and-learning-solutions-for-fraud-detection/) project
```
raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
raw_df.head()
raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()
```
### Examine the class label imbalance
Let's look at the dataset imbalance:
```
neg, pos = np.bincount(raw_df['Class'])
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format(
total, pos, 100 * pos / total))
```
This shows the small fraction of positive samples.
### Clean, split and normalize the data
The raw data has a few issues. First the `Time` and `Amount` columns are too variable to use directly. Drop the `Time` column (since it's not clear what it means) and take the log of the `Amount` column to reduce its range.
```
cleaned_df = raw_df.copy()
# You don't want the `Time` column.
cleaned_df.pop('Time')
# The `Amount` column covers a huge range. Convert to log-space.
eps=0.001 # 0 => 0.1¢
cleaned_df['Log Amount'] = np.log(cleaned_df.pop('Amount')+eps)
```
Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics, however the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where [overfitting](https://developers.google.com/machine-learning/crash-course/generalization/peril-of-overfitting) is a significant concern from the lack of training data.
```
# Use a utility from sklearn to split and shuffle our dataset.
train_df, test_df = train_test_split(cleaned_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('Class'))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop('Class'))
test_labels = np.array(test_df.pop('Class'))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
```
Normalize the input features using the sklearn StandardScaler.
This will set the mean to 0 and standard deviation to 1.
Note: The `StandardScaler` is only fit using the `train_features` to be sure the model is not peeking at the validation or test sets.
```
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)
print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)
```
Caution: If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way is to implement them as layers and attach them to your model before export.
### Look at the data distribution
Next compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are:
* Do these distributions make sense?
* Yes. You've normalized the input and these are mostly concentrated in the `+/- 2` range.
* Can you see the difference between the distributions?
* Yes the positive examples contain a much higher rate of extreme values.
```
pos_df = pd.DataFrame(train_features[ bool_train_labels], columns = train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns = train_df.columns)
sns.jointplot(pos_df['V5'], pos_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
plt.suptitle("Positive distribution")
sns.jointplot(neg_df['V5'], neg_df['V6'],
kind='hex', xlim = (-5,5), ylim = (-5,5))
_ = plt.suptitle("Negative distribution")
```
## Define the model and metrics
Define a function that creates a simple neural network with a densely connected hidden layer, a [dropout](https://developers.google.com/machine-learning/glossary/#dropout_regularization) layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent:
```
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
]
def make_model(metrics=METRICS, output_bias=None):
    if output_bias is not None:
        output_bias = tf.keras.initializers.Constant(output_bias)
    model = keras.Sequential([
        keras.layers.Dense(
            16, activation='relu',
            input_shape=(train_features.shape[-1],)),
        keras.layers.Dropout(0.5),
        keras.layers.Dense(1, activation='sigmoid',
                           bias_initializer=output_bias),
    ])
    model.compile(
        optimizer=keras.optimizers.Adam(lr=1e-3),
        loss=keras.losses.BinaryCrossentropy(),
        metrics=metrics)
    return model
```
### Understanding useful metrics
Notice that there are a few metrics defined above that can be computed by the model that will be helpful when evaluating the performance.
* **False** negatives and **false** positives are samples that were **incorrectly** classified
* **True** negatives and **true** positives are samples that were **correctly** classified
* **Accuracy** is the percentage of examples correctly classified
> $\frac{\text{correctly classified samples}}{\text{total samples}}$
* **Precision** is the percentage of **predicted** positives that were correctly classified
> $\frac{\text{true positives}}{\text{true positives + false positives}}$
* **Recall** is the percentage of **actual** positives that were correctly classified
> $\frac{\text{true positives}}{\text{true positives + false negatives}}$
* **AUC** refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample.
Note: Accuracy is not a helpful metric for this task. You can get 99.8%+ accuracy on this task by predicting False all the time.
Read more:
* [True vs. False and Positive vs. Negative](https://developers.google.com/machine-learning/crash-course/classification/true-false-positive-negative)
* [Accuracy](https://developers.google.com/machine-learning/crash-course/classification/accuracy)
* [Precision and Recall](https://developers.google.com/machine-learning/crash-course/classification/precision-and-recall)
* [ROC-AUC](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc)
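The definitions above can also be sketched directly in NumPy (an illustrative helper, not part of the tutorial; `keras.metrics` computes the same quantities during training):

```
import numpy as np

def binary_metrics(labels, predictions, threshold=0.5):
    """Accuracy, precision and recall from raw scores (illustrative)."""
    pred = predictions >= threshold
    tp = np.sum(pred & (labels == 1))   # true positives
    fp = np.sum(pred & (labels == 0))   # false positives
    fn = np.sum(~pred & (labels == 1))  # false negatives
    tn = np.sum(~pred & (labels == 0))  # true negatives
    return {
        "accuracy": (tp + tn) / labels.size,
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
    }
```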
## Baseline model
### Build the model
Now create and train your model using the function that was defined earlier. Notice that the model is fit using a larger-than-default batch size of 2048; this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size was too small, many batches would likely have no fraudulent transactions to learn from.
Note: this model will not handle the class imbalance well. You will improve it later in this tutorial.
```
EPOCHS = 100
BATCH_SIZE = 2048
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_auc',
verbose=1,
patience=10,
mode='max',
restore_best_weights=True)
model = make_model()
model.summary()
```
Test run the model:
```
model.predict(train_features[:10])
```
### Optional: Set the correct initial bias.
These initial guesses are not great. You know the dataset is imbalanced. Set the output layer's bias to reflect that (See: [A Recipe for Training Neural Networks: "init well"](http://karpathy.github.io/2019/04/25/recipe/#2-set-up-the-end-to-end-trainingevaluation-skeleton--get-dumb-baselines)). This can help with initial convergence.
With the default bias initialization the loss should be about `math.log(2) = 0.69314`
```
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
```
The correct bias to set can be derived from:
$$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$
$$ b_0 = -log_e(1/p_0 - 1) $$
$$ b_0 = log_e(pos/neg)$$
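A quick arithmetic check of this derivation (illustrative, using the dataset counts quoted in the introduction): applying the sigmoid to $b_0 = \log(pos/neg)$ recovers exactly $p_0$.

```
import math

# Counts quoted in the introduction: 492 fraudulent of 284,807 total
pos, neg = 492, 284807 - 492
b0 = math.log(pos / neg)
p0 = 1 / (1 + math.exp(-b0))
# sigmoid(log(pos/neg)) == pos / (pos + neg)
assert abs(p0 - pos / (pos + neg)) < 1e-12
```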
```
initial_bias = np.log([pos/neg])
initial_bias
```
Set that as the initial bias, and the model will give much more reasonable initial guesses.
It should be near: `pos/total = 0.0018`
```
model = make_model(output_bias = initial_bias)
model.predict(train_features[:10])
```
With this initialization the initial loss should be approximately:
$$-p_0log(p_0)-(1-p_0)log(1-p_0) = 0.01317$$
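The quoted value can be reproduced directly (an illustrative check, using $p_0 \approx 0.0018$ from above):

```
import math

p0 = 0.0018  # pos / total, as quoted above
expected = -p0 * math.log(p0) - (1 - p0) * math.log(1 - p0)
assert abs(expected - 0.01317) < 1e-4
```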
```
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
```
This initial loss is about 50 times less than it would have been with the naive initialization.
This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training.
### Checkpoint the initial weights
To make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training.
```
initial_weights = os.path.join(tempfile.mkdtemp(),'initial_weights')
model.save_weights(initial_weights)
```
### Confirm that the bias fix helps
Before moving on, quickly confirm that the careful bias initialization actually helped.
Train the model for 20 epochs, with and without this careful initialization, and compare the losses:
```
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
def plot_loss(history, label, n):
    # Use a log scale to show the wide range of values.
    plt.semilogy(history.epoch, history.history['loss'],
                 color=colors[n], label='Train ' + label)
    plt.semilogy(history.epoch, history.history['val_loss'],
                 color=colors[n], label='Val ' + label,
                 linestyle="--")
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.legend()
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
```
The above figure makes it clear: In terms of validation loss, on this problem, this careful initialization gives a clear advantage.
### Train the model
```
model = make_model()
model.load_weights(initial_weights)
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels))
```
### Check training history
In this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in this [tutorial](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit).
Additionally, you can produce these plots for any of the metrics you created above. False negatives are included as an example.
```
def plot_metrics(history):
    metrics = ['loss', 'auc', 'precision', 'recall']
    for n, metric in enumerate(metrics):
        name = metric.replace("_", " ").capitalize()
        plt.subplot(2, 2, n + 1)
        plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
        plt.plot(history.epoch, history.history['val_' + metric],
                 color=colors[0], linestyle="--", label='Val')
        plt.xlabel('Epoch')
        plt.ylabel(name)
        if metric == 'loss':
            plt.ylim([0, plt.ylim()[1]])
        elif metric == 'auc':
            plt.ylim([0.8, 1])
        else:
            plt.ylim([0, 1])
        plt.legend()
plot_metrics(baseline_history)
```
Note: The validation metrics generally look better than the training metrics. This is mainly caused by the fact that the dropout layer is not active when evaluating the model.
### Evaluate metrics
You can use a [confusion matrix](https://developers.google.com/machine-learning/glossary/#confusion_matrix) to summarize the actual vs. predicted labels where the X axis is the predicted label and the Y axis is the actual label.
```
train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)
def plot_cm(labels, predictions, p=0.5):
    cm = confusion_matrix(labels, predictions > p)
    plt.figure(figsize=(5, 5))
    sns.heatmap(cm, annot=True, fmt="d")
    plt.title('Confusion matrix @{:.2f}'.format(p))
    plt.ylabel('Actual label')
    plt.xlabel('Predicted label')

    print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
    print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
    print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
    print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
    print('Total Fraudulent Transactions: ', np.sum(cm[1]))
```
Evaluate your model on the test dataset and display the results for the metrics you created above.
```
baseline_results = model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
    print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_baseline)
```
If the model had predicted everything perfectly, this would be a [diagonal matrix](https://en.wikipedia.org/wiki/Diagonal_matrix) where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity.
### Plot the ROC
Now plot the [ROC](https://developers.google.com/machine-learning/glossary#ROC). This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.
```
def plot_roc(name, labels, predictions, **kwargs):
fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)
plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)
plt.xlabel('False positives [%]')
plt.ylabel('True positives [%]')
plt.xlim([-0.5,20])
plt.ylim([80,100.5])
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')
```
It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness.
## Class weights
### Calculate class weights
The goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want the classifier to heavily weight the few examples that are available. You can do this by passing Keras weights for each class through a parameter. These will cause the model to "pay more attention" to examples from an under-represented class.
```
# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
weight_for_0 = (1 / neg)*(total)/2.0
weight_for_1 = (1 / pos)*(total)/2.0
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
```
### Train a model with class weights
Now try re-training and evaluating the model with class weights to see how that affects the predictions.
Note: Using `class_weights` changes the range of the loss. This may affect the stability of the training depending on the optimizer. Optimizers whose step size is dependent on the magnitude of the gradient, like `optimizers.SGD`, may fail. The optimizer used here, `optimizers.Adam`, is unaffected by the scaling change. Also note that because of the weighting, the total losses are not comparable between the two models.
```
weighted_model = make_model()
weighted_model.load_weights(initial_weights)
weighted_history = weighted_model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks = [early_stopping],
validation_data=(val_features, val_labels),
# The class weights go here
class_weight=class_weight)
```
### Check training history
```
plot_metrics(weighted_history)
```
### Evaluate metrics
```
train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)
weighted_results = weighted_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(weighted_model.metrics_names, weighted_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_weighted)
```
Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade offs between these different types of errors for your application.
### Plot the ROC
```
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plt.legend(loc='lower right')
```
## Oversampling
### Oversample the minority class
A related approach would be to resample the dataset by oversampling the minority class.
```
pos_features = train_features[bool_train_labels]
neg_features = train_features[~bool_train_labels]
pos_labels = train_labels[bool_train_labels]
neg_labels = train_labels[~bool_train_labels]
```
#### Using NumPy
You can balance the dataset manually by choosing the right number of random
indices from the positive examples:
```
ids = np.arange(len(pos_features))
choices = np.random.choice(ids, len(neg_features))
res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]
res_pos_features.shape
resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)
order = np.arange(len(resampled_labels))
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]
resampled_features.shape
```
#### Using `tf.data`
If you're using `tf.data` the easiest way to produce balanced examples is to start with a `positive` and a `negative` dataset, and merge them. See [the tf.data guide](../../guide/data.ipynb) for more examples.
```
BUFFER_SIZE = 100000
def make_ds(features, labels):
ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()
ds = ds.shuffle(BUFFER_SIZE).repeat()
return ds
pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)
```
Each dataset provides `(feature, label)` pairs:
```
for features, label in pos_ds.take(1):
print("Features:\n", features.numpy())
print()
print("Label: ", label.numpy())
```
Merge the two together using `experimental.sample_from_datasets`:
```
resampled_ds = tf.data.experimental.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)
for features, label in resampled_ds.take(1):
print(label.numpy().mean())
```
To use this dataset, you'll need the number of steps per epoch.
The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once:
```
resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)
resampled_steps_per_epoch
```
### Train on the oversampled data
Now try training the model with the resampled data set instead of using class weights to see how these methods compare.
Note: Because the data was balanced by replicating the positive examples, the total dataset size is larger, and each epoch runs for more training steps.
```
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)
resampled_history = resampled_model.fit(
resampled_ds,
epochs=EPOCHS,
steps_per_epoch=resampled_steps_per_epoch,
callbacks = [early_stopping],
validation_data=val_ds)
```
If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting.
But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal: Instead of each positive example being shown in one batch with a large weight, they're shown in many different batches each time with a small weight.
This smoother gradient signal makes it easier to train the model.
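This full-batch equivalence can be sketched numerically. The following is a toy illustration (the per-example losses are made up, not drawn from the tutorial's dataset): over one full pass, the class-weighted total loss matches the unweighted total loss on the oversampled data.

```
import numpy as np

# Toy per-example losses: 8 negatives followed by 2 positives.
losses = np.array([0.1, 0.2, 0.1, 0.3, 0.2, 0.1, 0.2, 0.1, 0.9, 0.8])
labels = np.array([0]*8 + [1]*2)

# Class weighting: scale each positive loss by neg/pos = 4.
weights = np.where(labels == 1, 4.0, 1.0)
weighted_total = np.sum(weights * losses)

# Oversampling: replicate each positive example 4 times instead.
oversampled = np.concatenate([losses[labels == 0], np.repeat(losses[labels == 1], 4)])
oversampled_total = np.sum(oversampled)

print(weighted_total, oversampled_total)  # identical totals
```

Per batch, though, class weighting concentrates each positive example's contribution into whichever batch it lands in, while oversampling spreads many small copies across batches, which is the smoother signal described above.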
### Check training history
Note that the distributions of metrics will be different here, because the training data has a totally different distribution from the validation and test data.
```
plot_metrics(resampled_history)
```
### Re-train
Because training is easier on the balanced data, the above training procedure may overfit quickly.
So break up the epochs to give the `callbacks.EarlyStopping` finer control over when to stop training.
```
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
resampled_history = resampled_model.fit(
resampled_ds,
# These are not real epochs
steps_per_epoch = 20,
epochs=10*EPOCHS,
callbacks = [early_stopping],
validation_data=(val_ds))
```
### Re-check training history
```
plot_metrics(resampled_history)
```
### Evaluate metrics
```
train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)
resampled_results = resampled_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(resampled_model.metrics_names, resampled_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_resampled)
```
### Plot the ROC
```
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right')
```
## Applying this tutorial to your problem
Imbalanced data classification is an inherently difficult task since there are so few samples to learn from. You should always start with the data first and do your best to collect as many samples as possible and give substantial thought to what features may be relevant so the model can get the most out of your minority class. At some point your model may struggle to improve and yield the results you want, so it is important to keep in mind the context of your problem and the trade offs between different types of errors.
```
import numpy as np
from numpy.fft import fft2, ifft2, fftshift, ifftshift
from scipy import signal
from time import time
import matplotlib.pyplot as plt
%matplotlib inline
from skimage.io import imread
from skimage.filters import gaussian
img = imread('../pd.jpg')
img = np.array(img[:,:,0], dtype=float)
img[100:110,100:110]
plt.imshow(img, cmap='gray');
img = imread('../peggys-cove.jpg')
img = np.array(img, dtype=float)
img = img[25:1525, 825:2325, :]
plt.imshow(img/255.);
start = time()
blur = gaussian(img, 10)
duration = time() - start
print(str(duration)+' seconds')
plt.imshow(blur/255.);  # blur is an RGB float image in 0-255, so rescale for imshow
start = time()
Img = fft2(img, axes=(0, 1))  # transform the two spatial axes only, not the channel axis
duration = time() - start
print(str(duration)+' seconds')
plt.imshow(np.log(abs(fftshift(Img[:, :, 0]))), cmap='gray');
```
## Make a blur pyramid
```
sigmas = [1., 2, 3.5, 5., 20.]
def makeGaussian(size, fwhm = 3, center=None):
""" Make a square gaussian kernel.
size is the length of a side of the square
fwhm is full-width-half-maximum, which
can be thought of as an effective radius.
"""
x = np.arange(0, size, 1, float)
y = x[:,np.newaxis]
if center is None:
x0 = y0 = size // 2
else:
x0 = center[0]
y0 = center[1]
g = np.exp(-4*np.log(2) * ((x-x0)**2 + (y-y0)**2) / fwhm**2)
return g/np.sum(g.flatten())
def AutoCrop(img, relthresh=0.1):
    N = np.shape(img)[0]
    maximums = np.max(abs(img), axis=1) / np.max(abs(img))
    # use the relthresh parameter instead of a hardcoded 0.1
    idx_low = list(maximums > relthresh).index(True)
    if idx_low == 0:
        return img
    else:
        idx_high = N - 1 - list(reversed(maximums > relthresh)).index(True)
        return img[idx_low:idx_high, idx_low:idx_high]
def AutoCropFilter(N, fwhm=2., relthresh=0.1):
g = makeGaussian(N, fwhm=fwhm)
G = fftshift(fft2(ifftshift(g)))
return AutoCrop(G, relthresh=relthresh)
blah = AutoCropFilter(256, fwhm=25.)
print(np.shape(blah))
plt.imshow(abs(blah), cmap='gray');
def MakeImagePyramid(img, sigmas):
'''
pyramid = MakeImagePyramid(img, sigmas)
Construct a list of blurred and subsampled versions of an image.
Inputs:
img square image
sigmas list of standard deviations for the Gaussian blur kernels
Output:
pyramid list of images, varying in size
'''
f_pyramid = []
#F_pyramid = []
#G_pyramid = []
F = fftshift(fft2(img, axes=[0,1]), axes=[0,1])
N = np.shape(img)[0]
chans = np.shape(img)[2]
for s in sigmas:
G = AutoCropFilter(N, fwhm=s)
#G_pyramid.append(G)
sd = int( (np.shape(F)[0] - np.shape(G)[0])/2 )
if sd<=0:
sd = 0
Fc = F.copy()
else:
Fc = F[sd:-sd,sd:-sd,:].copy()
for c in range(chans):
Fc[:,:,c] *= G
Nnew = np.shape(G)[0]
#F_pyramid.append(Fc)
f_pyramid.append(np.real(ifft2(ifftshift(Fc, axes=[0,1]), axes=[0,1]))/N/N*Nnew**2)
return f_pyramid
pyr = MakeImagePyramid(img, sigmas)
plt.figure(figsize=[15,8])
blur_levels = len(sigmas)
for idx, f in enumerate(pyr):
plt.subplot(1,blur_levels,idx+1)
plt.imshow(f/255.);
plt.title(str(np.shape(f)[0])+'x'+str(np.shape(f)[1]))
```
```
#INCLUDE LIBRARIES
import numpy as np
import pandas as pd
import re
import itertools
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
import matplotlib.pyplot as plt
from nltk import sent_tokenize, word_tokenize
from nltk.tokenize import word_tokenize
text = "Hello everyone. You are reading NLP article."
word_tokenize(text)
#Reading the data
df=pd.read_csv('/home/femme_js/Hoaxify/news.csv')
df.head(10)
# check whether any column has NaN values
check_nan_in_df = df.isnull()
print(check_nan_in_df)
# the data doesn't contain any NaN values, so there is nothing to fill
#Getting the Labels
labels=df.label
labels.head()
# Combining important features into a single feature
df['total'] = df['title'] + ' ' + df['text']
df.head()
#PRE-PROCESSING THE DATA
stop_words = stopwords.words('english')
lemmatizer = WordNetLemmatizer()
for index, row in df.iterrows():
filter_sentence = ''
sentence = row['total']
# Cleaning the sentence with regex
sentence = re.sub(r'[^\w\s]', '', sentence)
# Tokenization
words = nltk.word_tokenize(sentence)
# Stopwords removal
words = [w for w in words if not w in stop_words]
# Lemmatization (avoid reusing the list name as the loop variable)
for word in words:
    filter_sentence = filter_sentence + ' ' + str(lemmatizer.lemmatize(word)).lower()
df.loc[index, 'total'] = filter_sentence
df.head()
df['total'].head()
print(type(df['label']))
df['label'].value_counts().plot(kind = 'bar')
df.label = df.label.astype(str)
df.label.unique()
df.label = df.label.str.strip()
label_map = {'REAL': '1', 'FAKE': '0'}  # avoid shadowing the built-in dict
df['label'] = df['label'].map(label_map)
df['label'].head()
x_df = df['total']
y_df = df['label']
x_df.head()
y_df.head()
#VECTORIZATION
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
count_vectorizer = CountVectorizer()
count_vectorizer.fit_transform(x_df)
freq_term_matrix = count_vectorizer.transform(x_df)
tfidf = TfidfTransformer(norm="l2")
tf_idf_matrix = tfidf.fit_transform(freq_term_matrix)  # fit and transform in one step
print(tf_idf_matrix)
#Splitting data into train and test data
x_train, x_test, y_train, y_test = train_test_split(tf_idf_matrix,
y_df, random_state=0)
#Implementing different models and checking accuracy
y_train.head()
#LOGISTIC REGRESSION
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(x_train, y_train)
Accuracy = logreg.score(x_test, y_test)
print(Accuracy*100)
#NAIVE BAYES
from sklearn.naive_bayes import MultinomialNB
NB = MultinomialNB()
NB.fit(x_train, y_train)
Accuracy = NB.score(x_test, y_test)
print(Accuracy*100)
# DECISION TREE
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
clf.fit(x_train, y_train)
Accuracy = clf.score(x_test, y_test)
print(Accuracy*100)
# PASSIVE-AGGRESSIVE CLASSIFIER
from sklearn.metrics import accuracy_score
from sklearn.linear_model import PassiveAggressiveClassifier
#DataFlair - Initialize a PassiveAggressiveClassifier
pac=PassiveAggressiveClassifier(max_iter=50)
pac.fit(x_train,y_train)
#DataFlair - Predict on the test set and calculate accuracy
y_pred=pac.predict(x_test)
score=accuracy_score(y_test,y_pred)
print(f'Accuracy: {round(score*100,2)}%')
# Linear classifiers with SGD(Stochastic Gradient Descent) training
from sklearn.linear_model import SGDClassifier
sgd = SGDClassifier(max_iter=1000, tol=1e-5)
sgd.fit(x_train,y_train)
y_pred=sgd.predict(x_test)
score=accuracy_score(y_test,y_pred)
print(f'Accuracy: {round(score*100,2)}%')
# Linear Support Vector Classification
from sklearn.svm import LinearSVC
clf = LinearSVC(random_state=0, tol=1e-5)
clf.fit(x_train,y_train)
y_pred=clf.predict(x_test)
score=accuracy_score(y_test,y_pred)
print(f'Accuracy: {round(score*100,2)}%')
```
```
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in scikit-learn 0.20
import tensorflow as tf
from matplotlib import animation
import matplotlib.pyplot as plt
from IPython.display import HTML
import seaborn as sns
sns.set()
df = pd.read_csv('Iris.csv')
df.head()
X = PCA(n_components=2).fit_transform(MinMaxScaler().fit_transform(df.iloc[:, 1:-1]))
Y = LabelEncoder().fit_transform(df.iloc[:, -1])
onehot_y = np.zeros((X.shape[0], np.unique(Y).shape[0]))
for k in range(X.shape[0]):
onehot_y[k, Y[k]] = 1.0
class Model:
def __init__(self, learning_rate, layer_size, optimizer):
self.X = tf.placeholder(tf.float32, (None, X.shape[1]))
self.Y = tf.placeholder(tf.float32, (None, np.unique(Y).shape[0]))
w1 = tf.Variable(tf.random_normal([X.shape[1], layer_size]))
b1 = tf.Variable(tf.random_normal([layer_size]))
w2 = tf.Variable(tf.random_normal([layer_size, layer_size]))
b2 = tf.Variable(tf.random_normal([layer_size]))
w3 = tf.Variable(tf.random_normal([layer_size, np.unique(Y).shape[0]]))
b3 = tf.Variable(tf.random_normal([np.unique(Y).shape[0]]))
self.logits = tf.nn.sigmoid(tf.matmul(self.X, w1) + b1)
self.logits = tf.nn.sigmoid(tf.matmul(self.logits, w2) + b2)
self.logits = tf.matmul(self.logits, w3) + b3
self.cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=self.Y, logits=self.logits))
self.optimizer = optimizer(learning_rate).minimize(self.cost)
correct_pred = tf.equal(tf.argmax(self.logits, 1), tf.argmax(self.Y, 1))
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model(0.1, 128, tf.train.GradientDescentOptimizer)
sess.run(tf.global_variables_initializer())
for i in range(100):
    cost, _ = sess.run([model.cost, model.optimizer], feed_dict={model.X: X, model.Y: onehot_y})
    acc = sess.run(model.accuracy, feed_dict={model.X: X, model.Y: onehot_y})
    if (i+1) % 10 == 0:
        print('epoch %d, Entropy: %f, Accuracy: %f'%(i+1, cost, acc))
fig = plt.figure(figsize=(15,10))
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = np.argmax(sess.run(model.logits, feed_dict={model.X: np.c_[xx.ravel(), yy.ravel()]}),axis=1)
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha =0.5)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Spectral)
plt.show()
tf.reset_default_graph()
first_graph = tf.Graph()
with first_graph.as_default():
gd = Model(0.1, 128, tf.train.GradientDescentOptimizer)
first_sess = tf.InteractiveSession()
first_sess.run(tf.global_variables_initializer())
second_graph = tf.Graph()
with second_graph.as_default():
adagrad = Model(0.1, 128, tf.train.AdagradOptimizer)
second_sess = tf.InteractiveSession()
second_sess.run(tf.global_variables_initializer())
third_graph = tf.Graph()
with third_graph.as_default():
rmsprop = Model(0.1, 128, tf.train.RMSPropOptimizer)
third_sess = tf.InteractiveSession()
third_sess.run(tf.global_variables_initializer())
fourth_graph = tf.Graph()
with fourth_graph.as_default():
adam = Model(0.1, 128, tf.train.AdamOptimizer)
fourth_sess = tf.InteractiveSession()
fourth_sess.run(tf.global_variables_initializer())
fig = plt.figure(figsize=(25,15))
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
h = 0.01
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
concated = np.c_[xx.ravel(), yy.ravel()]
plt.subplot(2, 2, 1)
Z = first_sess.run(gd.logits, feed_dict={gd.X:concated})
acc = first_sess.run(gd.accuracy, feed_dict={gd.X:X, gd.Y:onehot_y})
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha =0.5)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Spectral)
plt.title('GD epoch %d, acc %f'%(0, acc))
plt.subplot(2, 2, 2)
Z = second_sess.run(adagrad.logits, feed_dict={adagrad.X:concated})
acc = second_sess.run(adagrad.accuracy, feed_dict={adagrad.X:X, adagrad.Y:onehot_y})
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha =0.5)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Spectral)
plt.title('ADAGRAD epoch %d, acc %f'%(0, acc))
plt.subplot(2, 2, 3)
Z = third_sess.run(rmsprop.logits, feed_dict={rmsprop.X:concated})
acc = third_sess.run(rmsprop.accuracy, feed_dict={rmsprop.X:X, rmsprop.Y:onehot_y})
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha =0.5)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Spectral)
plt.title('RMSPROP epoch %d, acc %f'%(0, acc))
plt.subplot(2, 2, 4)
Z = fourth_sess.run(adam.logits, feed_dict={adam.X:concated})
acc = fourth_sess.run(adam.accuracy, feed_dict={adam.X:X, adam.Y:onehot_y})
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha =0.5)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Spectral)
plt.title('ADAM epoch %d, acc %f'%(0, acc))
def training(epoch):
plt.subplot(2, 2, 1)
first_sess.run(gd.optimizer, feed_dict={gd.X:X, gd.Y:onehot_y})
Z = first_sess.run(gd.logits, feed_dict={gd.X:concated})
acc = first_sess.run(gd.accuracy, feed_dict={gd.X:X, gd.Y:onehot_y})
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha =0.5)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Spectral)
plt.title('GD epoch %d, acc %f'%(epoch, acc))
plt.subplot(2, 2, 2)
second_sess.run(adagrad.optimizer, feed_dict={adagrad.X:X, adagrad.Y:onehot_y})
Z = second_sess.run(adagrad.logits, feed_dict={adagrad.X:concated})
acc = second_sess.run(adagrad.accuracy, feed_dict={adagrad.X:X, adagrad.Y:onehot_y})
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha =0.5)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Spectral)
plt.title('ADAGRAD epoch %d, acc %f'%(epoch, acc))
plt.subplot(2, 2, 3)
third_sess.run(rmsprop.optimizer, feed_dict={rmsprop.X:X, rmsprop.Y:onehot_y})
Z = third_sess.run(rmsprop.logits, feed_dict={rmsprop.X:concated})
acc = third_sess.run(rmsprop.accuracy, feed_dict={rmsprop.X:X, rmsprop.Y:onehot_y})
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha =0.5)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Spectral)
plt.title('RMSPROP epoch %d, acc %f'%(epoch, acc))
plt.subplot(2, 2, 4)
fourth_sess.run(adam.optimizer, feed_dict={adam.X:X, adam.Y:onehot_y})
Z = fourth_sess.run(adam.logits, feed_dict={adam.X:concated})
acc = fourth_sess.run(adam.accuracy, feed_dict={adam.X:X, adam.Y:onehot_y})
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Spectral, alpha =0.5)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Spectral)
cont = plt.title('ADAM epoch %d, acc %f'%(epoch, acc))
return cont
anim = animation.FuncAnimation(fig, training, frames=100, interval=200)
anim.save('animation-gradientcomparison-iris.gif', writer='imagemagick', fps=5)
```
# Exception Handling Basics
There's one more way that you can control the flow of code and that's with exception handling. Exception handling is the process of "catching" an error that would otherwise halt execution of your code. This allows you to potentially recover from a somewhat fatal situation.
Exception handling is a powerful tool, but it can sometimes be overused or used improperly. Remember these simple rules when doing exception handling:
- Only catch exceptions you can gracefully recover from.
- It's OK to have exceptions in your code; they're meant to be there for situations that you didn't account for.
- Don't try to account for every possible exception.
- If you have to catch all exceptions (for example, in an always-running script), log the exception.
- Keep your `try` blocks as small as possible.
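A minimal sketch of the last two rules (the function and messages here are illustrative, not from any particular codebase): only the risky call sits inside `try`, and a catch-all situation in a long-running loop logs what it caught.

```
import logging

logging.basicConfig(level=logging.INFO)

def process(item):
    # Keep the try block small: only the conversion can raise here.
    try:
        value = int(item)
    except ValueError:
        logging.warning("skipping non-integer item: %r", item)
        return None
    return value * 2  # the safe code stays outside the try

results = [process(x) for x in ["1", "two", "3"]]
print(results)  # [2, None, 6]
```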
## Types of Errors
- A full list of default errors can be found in the [official Python documentation](https://docs.python.org/3.8/library/exceptions.html).
- Some common ones are:
- **`ValueError`:** An operation encounters an invalid value.
- **`TypeError`:** An operation is performed on a value of the wrong type.
- **`NameError`:** A variable name was used that hasn't been defined.
- **`ZeroDivisionError`:** The divisor in a division operation is 0.
- **`OverflowError`:** The result of an arithmetic operation is too large.
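Each of these errors can be triggered and caught deliberately; a quick sketch:

```
import math

# Deliberately trigger each common exception type and catch it by name.
examples = [
    (ValueError, lambda: int("not a number")),   # invalid value
    (TypeError, lambda: "a" + 1),                # operation on the wrong type
    (NameError, lambda: undefined_variable),     # name that was never defined
    (ZeroDivisionError, lambda: 1 / 0),          # division by zero
    (OverflowError, lambda: math.exp(1000)),     # result too large to represent
]

caught = []
for exc_type, trigger in examples:
    try:
        trigger()
    except exc_type as e:
        caught.append(exc_type.__name__)
        print(exc_type.__name__, "->", e)
```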
## Catching Exceptions Using `try`/`except` Statements
To "catch" an exception, you use the `try` and `except` statements:
- **`try`:** Try the following indented code.
- **`except`:** If an exception is raised in the `try` block, run the following indented code.
```
# Let's try a small example where we want to make sure
# we get an integer value from the user
num_is_valid = False
while not num_is_valid:
number_str = input("Please enter a number: ")
try:
number = int(number_str)
num_is_valid = True
except ValueError:
print("That is not an integer!")
print(number)
# you can also chain exceptions
def safe_division(A, B):
try:
print(A / B)
except TypeError:
print("A and B must be numbers!")
except ZeroDivisionError:
print("B cannot be 0")
safe_division(10, '5')
safe_division(10, 0)
safe_division(10, 2)
```
## In-Class Assignment
- Write a program that prompts the user to enter a float and continues to do so until they enter a valid one. If the user enters an invalid value, catch the `ValueError` and print "Try Again".
- Write a program that prompts the user to enter a string and then an integer. Using the integer as an index, display the character at that position in the string. Some off-nominal use cases to consider are:
- Invalid integer entry
- An index value that's outside the scope of the string
- You may need to look at the Python documentation to find the correct Errors to catch
## Solution
```
while True:
prompt = input("Please enter a valid float: ")
try:
val = float(prompt)
print(f'Your value was: {val}')
break
except ValueError:
print('Try Again')
user_str = input("Please enter a string: ")
while True:
user_num = input("Please enter the location of the character to display: ")
try:
num = int(user_num)
print(user_str[num])
break
except ValueError:
print('Not an Integer!')
except IndexError:
print('location is invalid!')
```
# 18 Inheritance
Inheritance is the ability to define a new class that is a modified version of an existing class. In this section we will work with the game of poker, using classes that represent playing cards (the identifiers in the code are kept in French).
Reference: https://fr.wikipedia.org/wiki/Poker
## Playing-card objects
A deck contains 52 cards, each belonging to one of 4 **suits** (*couleurs*):
* spades — *pique* (3)
* hearts — *coeur* (2)
* diamonds — *carreau* (1)
* clubs — *trèfle* (0)
The **ranks** (*valeurs*) are:
* ace (1)
* 2, 3, .. 10
* jack (11)
* queen (12)
* king (13)
We will use the numbers in parentheses to encode the **suit** and the **rank** of a card.
```
class Cartes:
"""Représente une carte à jouer standard."""
def __init__(self, couleur=0, valeur=2):
self.couleur = couleur
self.valeur = valeur
```
To create a card, we call ``Cartes`` with a suit and a rank.
```
dame_de_carreau = Cartes(1, 12)
dame_de_carreau
```
## Class attributes
To display cards in a way that is easy for humans to read, we need a mapping between the codes and their names. We use lists, and we make them **class attributes**.
These class attributes are prefixed with the class name ``Cartes``, which distinguishes them from **instance attributes** such as ``self.couleur`` or ``self.valeur``.
```
class Cartes:
"""Représente une carte à jouer standard."""
couleurs = ['trèfle', 'carreau', 'coeur', 'pique']
valeurs = [None, 'as', '2', '3', '4', '5', '6', '7',
'8', '9', '10', 'valet', 'dame', 'roi']
def __init__(self, couleur=0, valeur=2):
self.couleur = couleur
self.valeur = valeur
def __str__(self):
return '{} de {}'.format(Cartes.valeurs[self.valeur],
Cartes.couleurs[self.couleur])
print(Cartes(1, 12))
```
The first element of the list ``Cartes.valeurs`` is ``None`` because there is no card with code 0.
```
for i in range(1, 14):
print(Cartes(1, i))
```
## Comparing cards
For the built-in types (int, float) the 6 standard comparison operators (``==``, ``!=``, ``<``, ``<=``, ``>=``, ``>``) are available. For user-defined classes, we have special methods such as ``__lt__`` (less than).
```
class Cartes:
"""Représente une carte à jouer standard."""
couleurs = ['trèfle', 'carreau', 'coeur', 'pique']
valeurs = [None, 'as', '2', '3', '4', '5', '6', '7',
'8', '9', '10', 'valet', 'dame', 'roi']
def __init__(self, couleur=0, valeur=2):
self.couleur = couleur
self.valeur = valeur
def __str__(self):
return '{} de {}'.format(Cartes.valeurs[self.valeur],
Cartes.couleurs[self.couleur])
def __lt__(self, other):
if self.couleur < other.couleur:
return True
elif self.couleur > other.couleur:
return False
return self.valeur < other.valeur
```
Let's test with two cards.
```
c1 = Cartes(2, 12)
c2 = Cartes(3, 13)
print(c1)
print(c2)
c1 < c2
```
## Deck of cards
The next step is to define a deck of cards.
```
class Paquet:
def __init__(self):
self.cartes = []
for couleur in range(4):
for valeur in range(1, 14):
carte = Cartes(couleur, valeur)
self.cartes.append(carte)
def affiche(self):
for i in self.cartes:
print(i)
```
Now let's create a deck and check that all 52 cards are present.
```
p = Paquet()
p.affiche()
len(p.cartes)
```
## Displaying the deck
```
class Paquet:
def __init__(self):
self.cartes = []
for couleur in range(4):
for valeur in range(1, 14):
carte = Cartes(couleur, valeur)
self.cartes.append(carte)
def __str__(self):
res = []
for carte in self.cartes:
res.append(str(carte))
return '\n'.join(res)
p = Paquet()
print(p)
import random
L = list(range(10))
print(L)
random.shuffle(L)
print(L)
L.append(99)
L.pop(5) #Position
print(L)
```
## Adding, removing, shuffling and sorting
To deal cards, we want a method that removes a card from the deck and returns it. The list method ``pop`` offers a convenient way to do this.
```
import random
class Paquet:
def __init__(self):
self.cartes = []
for couleur in range(4):
for valeur in range(1, 14):
carte = Cartes(couleur, valeur)
self.cartes.append(carte)
def __str__(self):
res = []
for carte in self.cartes:
res.append(str(carte))
return '\n'.join(res)
def pop(self):
return self.cartes.pop()
def add(self, carte):
self.cartes.append(carte)
def battre(self):
random.shuffle(self.cartes)
def deplacer(self, main, nombre):
for i in range(nombre):
main.add(self.pop())
```
Let's create a new deck, shuffle the cards, and draw some of them.
```
p = Paquet()
p.battre()
for i in range(10):
    print(p.pop())  # pop removes the last card by default
len(p.cartes)  # 10 cards have been removed
```
## Inheritance
```
class Main(Paquet):
    '''A hand in a card game.'''
def __init__(self, etiquette = ''):
self.cartes = []
self.etiquette = etiquette
p = Paquet()
m1 = Main('Bob')
m2 = Main('Alice')
carte = p.pop()
print(carte)
m1.add(carte)
print(m1)
m2.etiquette
```
## Exercise 2
Write a Paquet method called `distribue_mains` that takes two parameters, the number of hands to deal and the number of cards per hand. It should create the desired number of Main objects, deal the appropriate number of cards into each hand, and return a list of Mains.
```
class Paquet2(Paquet):
    '''Inherits all the methods of Paquet'''
    def distribue_mains(self, nbr_mains, nbr_cartes):
mains = []
return mains
```
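One possible solution sketch for the exercise. It re-declares minimal versions of `Cartes`, `Paquet`, and `Main` so the block is self-contained; in the notebook only the method body of `Paquet2` would be added.

```
import random

class Cartes:
    def __init__(self, couleur=0, valeur=2):
        self.couleur, self.valeur = couleur, valeur

class Paquet:
    def __init__(self):
        self.cartes = [Cartes(c, v) for c in range(4) for v in range(1, 14)]
    def pop(self):
        return self.cartes.pop()
    def battre(self):
        random.shuffle(self.cartes)

class Main(Paquet):
    def __init__(self, etiquette=''):
        self.cartes = []
        self.etiquette = etiquette
    def add(self, carte):
        self.cartes.append(carte)

class Paquet2(Paquet):
    '''Inherits all the methods of Paquet'''
    def distribue_mains(self, nbr_mains, nbr_cartes):
        mains = []
        for _ in range(nbr_mains):
            main = Main()
            for _ in range(nbr_cartes):
                main.add(self.pop())  # deal one card from the deck into the hand
            mains.append(main)
        return mains

p = Paquet2()
p.battre()
mains = p.distribue_mains(4, 5)
print(len(mains), [len(m.cartes) for m in mains], len(p.cartes))
# 4 [5, 5, 5, 5] 32
```

Dealing 4 hands of 5 cards leaves 52 − 20 = 32 cards in the deck.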
# Tensorflow MNIST
```
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
%matplotlib inline
mnist = input_data.read_data_sets('/tmp/data/', one_hot=True)
image = mnist.train.images[7].reshape([28, 28])
plt.gray()
plt.imshow(image)
print(mnist.train.images[7].shape)
print(mnist.train.labels[7].shape)
print(mnist.train.images[7][150:200])
print(mnist.train.labels[:10])
learning_rate = 0.1
epochs = 1000
batch_size = 128
n_hidden_1 = 256
n_hidden_2 = 256
num_input = 784 # 28 x 28
num_classes = 10
X = tf.placeholder('float', [None, num_input])
Y = tf.placeholder('float', [None, num_classes])
weights = {
'h1': tf.Variable(tf.random_normal([num_input, n_hidden_1])),
'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
'output': tf.Variable(tf.random_normal([n_hidden_2, num_classes]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'b2': tf.Variable(tf.random_normal([n_hidden_2])),
'output': tf.Variable(tf.random_normal([num_classes]))
}
def network(x):
    # hidden layers with ReLU activations (without a nonlinearity the stack would be purely linear)
    layer_1 = tf.nn.relu(tf.add(tf.matmul(x, weights['h1']), biases['b1']))
    layer_2 = tf.nn.relu(tf.add(tf.matmul(layer_1, weights['h2']), biases['b2']))
    output_layer = tf.matmul(layer_2, weights['output']) + biases['output']
    return output_layer
logits = network(X)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits_v2(
logits=logits, labels=Y
)
)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train = optimizer.minimize(loss)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
for epoch in range(epochs):
batch_x, batch_y = mnist.train.next_batch(batch_size)
sess.run(train, feed_dict={X: batch_x, Y: batch_y})
if epoch % 50 == 0:
train_accuracy = sess.run(
accuracy,
feed_dict={
X: mnist.train.images,
Y: mnist.train.labels
}
)
print('Epoch #{}: train accuracy = {}'.format(epoch, train_accuracy))
print('Test accuracy = {}'.format(
sess.run(
accuracy,
feed_dict={
X: mnist.test.images,
Y: mnist.test.labels
}
)
))
```
# Keras MNIST
```
import tensorflow as tf
tf.__version__
batch_size = 128
num_classes = 10
epochs = 2
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
y_train = tf.keras.utils.to_categorical(y_train, num_classes)
y_test = tf.keras.utils.to_categorical(y_test, num_classes)
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(512, activation='relu', input_shape=(784,)))
model.add(tf.keras.layers.Dropout(0.2))
model.add(tf.keras.layers.Dense(512, activation='relu'))
model.add(tf.keras.layers.Dropout(0.2))
model.add(tf.keras.layers.Dense(num_classes, activation='softmax'))
model.compile(
loss='categorical_crossentropy',
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy']
)
_ = model.fit(
x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test)
)
loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', loss)
print('Test accuracy:', accuracy)
```
## $k_\infty$ and Diffusion Length for a Water Balloon
$\textbf{(100 points)}$ Consider a water balloon that you fill with a homogeneous mixture of
heavy water and fissile material. You do not know the exact ratio of the
water molecules to fissile atoms. However, you can fill the balloon with
different amounts of the solution (thereby changing the radius of the
sphere). You are able to measure the multiplication factor,
$k_\mathrm{eff}$, at several different balloon radii by pulsing the
sphere with a neutron source and observing the subcritical
multiplication. You also astutely remember that the 1-group diffusion
theory relation between the multiplication factor, the infinite medium
multiplication factor $k_\infty$, and the diffusion length, $L$ is
$$k_\mathrm{eff} = \frac{k_\infty}{1+L^2 B_g^2},$$
where $B_g^2$ is the geometric buckling for the system. In this case we have
$$B_g^2 = \frac{\pi^2}{(R+d)^2},$$
where $d$ is the extrapolation length. If we assume that $d \ll R$, we can do a linear Taylor expansion around $d=0$ to write
$$B_g^2 = \frac{\pi^2}{R^2} - \frac{2 \pi^2}{R^3}d.$$
Given the measurement data below, infer the values of $k_\infty$, $L$, and $d$. Is the assumption on $d$ correct? What radius will make the reactor critical? Report $R^2$ for your model.
Hint: make your regression model have the dependent variable be $1/k_\mathrm{eff}$.
## Solution
We have an equation for the geometric buckling that can be substituted into the equation for the multiplication factor as follows
$$k_\mathrm{eff} = \frac{k_\infty}{1 + L^2\Big(\frac{\pi^2}{R^2} - \frac{2 \pi^2}{R^3}d\Big)}.$$
Taking the hint, we let $\frac{1}{k_\mathrm{eff}}$ be the dependent (experimental) variable. The equation above can then be rearranged as
$$\frac{1}{k_\mathrm{eff}} = \frac{1 + L^2\Big(\frac{\pi^2}{R^2} - \frac{2 \pi^2}{R^3}d\Big)}{k_\infty},$$
and is then re-organized
$$\frac{1}{k_\mathrm{eff}} = \frac{1}{k_\infty} + \frac{1}{R^2} \frac{L^2 \pi^2}{k_\infty} - \frac{2 L^2 d \pi^2}{k_\infty} \frac{1}{R^3}.$$
This is then simplified further as
$$\frac{1}{k_\mathrm{eff}} = a + \frac{1}{R^2} b + \frac{1}{R^3} c,$$
where
$$k_\infty = \frac{1}{a},$$
$$L = \sqrt{\frac{k_\infty b}{\pi^2}},$$
and
$$d = -\frac{k_\infty c}{2 L^2 \pi^2}.$$
Once these variables are all known, the critical radius, $R_\mathrm{crit}$ can be determined as follows:
$$R_\mathrm{crit} = \sqrt{\frac{L^2 \pi^2}{k_\infty - 1}} - d.$$
Knowing this, we first define our experimental data in $\texttt{Python}$.
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Import given data
R = np.array([10,12.5,15,20,25,35,36,40,45,50])
keff = np.array([0.16,0.23,0.31,0.46,0.60,0.80,0.82,0.87,0.93,0.98])
keffinv = np.power(keff,-1)
```
The numpy function $\tt{linalg.lstsq}$ is then used to perform a least-squares regression on the data. This procedure follows the "Nonlinear models" section in Chapter 11.
```
# Fill necessary A matrix
A = np.ones((R.size,3))
A[:,1] = np.power(R,-2)
A[:,2] = np.power(R,-3)
# Run least-squares regression
solution,residuals = np.linalg.lstsq(A, keffinv, rcond=None)[:2]
a = solution[0]
b = solution[1]
c = solution[2]
R2 = 1 - residuals/(keffinv.size*keffinv.var())
# Print
print('The solution is')
print('1/k_eff =',a,'+',b,'R^-2 +',c,'R^-3')
print('\nWith a R^2 value of',R2)
```
As seen, the result is a very reasonable $R^2$ value. Next, the solution is plotted as a simple verification (not required).
```
Rplot = np.linspace(10,50,1000)
plt.plot(Rplot,a + b/Rplot**2 + c/Rplot**3,label="Regression")
plt.scatter(R,keffinv,label='Experimental data')
plt.xlabel("$R$")
plt.ylabel("1/$k_\mathrm{eff}$")
plt.legend()
plt.show()
```
As expected, the agreement between the regression and the experimental data is excellent.
The desired values of $k_\infty$, $L$, $d$, and $R_\mathrm{crit}$ are then determined. Note that the values of $L$, $d$, and $R_\mathrm{crit}$ have arbitrary units of length.
```
# Determined desired values
kinf = 1/a
L = np.sqrt(kinf*b/np.pi**2)
d = -kinf*c/(2*L**2*np.pi**2)
R_crit = np.sqrt((L**2*np.pi**2)/(kinf-1)) - d
# Print
print('k_inf =',kinf)
print('L =',L)
print('d =',d)
print('R_crit =',R_crit)
```
From these values, it can be determined that the assumption $d \ll R$ is valid.
## Model without a constant
It does not always make sense to include a constant in a linear model. The constant indicates the value of the dependent variable when all the independent variables are zero. One example of a model where this is the case is the temperature coefficient of reactivity for a nuclear system. This coefficient, denoted by $\alpha$, is given by
$$\Delta \rho = \alpha \Delta T,$$
where $T$ is the temperature of the system and $\Delta T$ is the difference between the current temperature and the point where the reactivity $\rho$ is 0. The reactivity is given by
$$\rho = \frac{k-1}{k}.$$
Your task is to find $\alpha$ using the data below by fitting a model without a constant. To do this you have to modify the data matrix in the least squares procedure. Report $R^2$ for your model.
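Before working the real data, here is a minimal sketch (with made-up numbers) of a no-intercept fit: the design matrix is simply built without the column of ones.

```
import numpy as np

# Hypothetical data lying exactly on y = 2.5 * x, with no intercept
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.5 * x

A = x.reshape(-1, 1)  # single-column design matrix: no constant term
coef = np.linalg.lstsq(A, y, rcond=None)[0]
print(coef[0])
```

With the constant column removed, the fitted line is forced through the origin, which is exactly what the reactivity model requires.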
## Solution
The point at which the reactivity $\rho$ is 0 is at $k_\mathrm{eff} = 1$. We will use simple interpolation between the first two data points to find an approximate temperature at which this is true:
$$T\rvert_{\rho = 0} = T_0 + \frac{T_1 - T_0}{k_\mathrm{eff,1} - k_\mathrm{eff,0}} \Big(1 - k_\mathrm{eff,0}\Big).$$
```
import numpy as np
import matplotlib.pyplot as plt
# Define given data
keff = np.array([1.002145,0.999901,0.998032,0.996076,
0.994055,0.992329])
T = np.array([250,300,350,400,450,500])
# Interpolate temperature where reactivity is 0
T_rho0 = T[1] - (T[1]-T[0])/(keff[1]-keff[0])*(keff[1]-1)
print('T_rho0 =',T_rho0)
```
Once the temperature at which $\rho = 0$ is determined, the individual values of $\Delta \rho$ and $\Delta T$ are then determined. A least squares procedure is then used to determine an appropriate value of $\alpha$.
```
# Determine delta values and fill A matrix
drho = (keff-1)/keff
dT = T - T_rho0
A = np.ones((drho.size,2))
A[:,1] = dT
# Solve for solution and residual
solution,residuals = np.linalg.lstsq(A, drho, rcond=None)[:2]
R2 = 1 - residuals/(drho.size*drho.var())
alpha = solution[1]
# Print
print('alpha =',alpha,'K^-1')
print('R^2 =',R2)
```
Again, an appropriate value of $R^2$ is found.
The data is then plotted as an additional form of verification (not required).
```
# Plot regression and experimental data
dTplot = np.linspace(dT.min(),dT.max(),100)
plt.plot(dTplot,alpha*dTplot,label="Regression")
plt.scatter(dT,drho,label='Experimental data')
plt.xlabel(r"$\Delta T$")
plt.ylabel(r"$\Delta \rho$")
plt.legend()
plt.show()
```
```
# Data exploration
import pandas as pd
# Numerical
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
#import required Python scripts to access their functions
import NYC_GetCleaned_HistoricData
import data_utility
import NYC_GetCleaned_TotalPopulation
#import the functions from their corresponding files
from NYC_GetCleaned_HistoricData import getCleanedData
from NYC_GetCleaned_TotalPopulation import getMeanPopulation
#Get cleaned data from NYC_GetCleaned_HistoricData
crimes_original = getCleanedData()
crimes_original.head()
#Filter the data to fetch only crimes with status 'Completed'
from data_utility import filterData
completed_crimes = filterData(crimes_original,7,'COMPLETED')
#Drop unwanted columns
dropped_columns = completed_crimes.drop([0,1,2,3,4,6,11,13,14,5,7,10],axis=1)
#Rename the columns
dropped_columns.columns = ['TYPE OF CRIME','BOROUGHS','PREMISES']
dropped_columns.head()
#Since the population count is carried out once a decade, get the mean population of the years 2010 and 2020 for each borough
bronxPop = getMeanPopulation('Bronx', 1)
brookylnPop = getMeanPopulation('Brooklyn', 2)
manhattanPop = getMeanPopulation('Manhattan', 3)
queensPop = getMeanPopulation('Queens', 4)
statIslandPop = getMeanPopulation('Staten Island', 5)
total_population = bronxPop+brookylnPop+manhattanPop+queensPop+statIslandPop
#Set Type of crime to a variable
felony_val = 'FELONY'
mis_val = 'MISDEMEANOR'
vio_val = 'VIOLATION'
#Fetch crimes that are categorized under Felony
data = filterData(dropped_columns,'TYPE OF CRIME',felony_val)
data.head()
#Function to highlight the max value in a row
def highlight_max(s):
is_max = s == s.max()
return ['background-color: yellow' if v else '' for v in is_max]
#Function that returns data in the form of Pivot table for the corresponding type of crime
def createPivotTable(data,levelName):
#set level name for pivot table
    data.columns = [levelName, 'BOROUGHS', 'PREMISES']
#Convert the Felony dataset into a pivot table as per Boroughs and Premises
fel_pivot = data.pivot_table(values = [levelName], index = ['PREMISES'], columns = ['BOROUGHS'], aggfunc=np.size,fill_value=0,margins=True)
#Convert the values from int64 to float64
for i in range(6):
fel_pivot[fel_pivot.columns[0][0], fel_pivot.columns[i][1]] = fel_pivot[fel_pivot.columns[0][0],fel_pivot.columns[i][1]].astype('float64')
#Divide the number of crimes with the mean population in each premises for the corresponding Borough and round it upto 6 decimal digits
for idx, row in fel_pivot.iterrows():
row[0]=round(row[0]/bronxPop,6)
row[1]=round(row[1]/brookylnPop,6)
row[2]=round(row[2]/manhattanPop,6)
row[3]=round(row[3]/queensPop,6)
row[4]=round(row[4]/statIslandPop,6)
row[5]=round(row[5]/total_population,6)
#Sort the pivot table by All(Total Crime for each premise and borough)
sort_pivot = fel_pivot.reindex(fel_pivot[levelName].sort_values(by='All', ascending=False).index)
#Highlight max values for each of the Boroughs
temp = sort_pivot.drop(['All'])
sort_pivot_final = temp.style.apply(highlight_max)
return sort_pivot_final
#Create a pivot table for Felonious Crimes under Boroughs and Premises
createPivotTable(data,felony_val)
#Fetch crimes that are categorized under Misdemeanor
data_mis = filterData(dropped_columns,'TYPE OF CRIME',mis_val)
createPivotTable(data_mis,mis_val)
#Fetch crimes that are categorized under Violation
data_vio = filterData(dropped_columns,'TYPE OF CRIME',vio_val)
temp =createPivotTable(data_vio,vio_val)
temp.columns[5][1]
temp.index
temp.index.name
temp.columns.names[1]
```
<h1 style="direction:rtl;text-align:center;color:#ffffff;background-color:#cca3db;font-size:48p"><strong>سوال پنجم</strong> </h1>
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import random
```
<h3 style="text-align:left;color:#945aaf;background-color:#ffffff;font-size:48p"><strong> a) </strong></h3>
```
missing_values = ["NA","?"]
df = pd.read_csv('thyroid.csv',na_values=missing_values)
df.head()
df = pd.DataFrame(df[pd.notnull(df['Outcome'])])
df.head()
df.replace(np.NaN,df.mean(),inplace=True)
df.head()
```
<h4 style="direction:rtl;text-align:right;color:#945aaf;background-color:#ffffff;font-size:48p"><strong>شرح کد :</strong> </h4>
<p style="direction:rtl;text-align:right;">ابتدا سایر علائمی که علاوه بر مقادیر استاندارد بجای null استفاده شده اند را تاثیر دادیم و فایل را خواندیم. سپس رکورد هایی را که متغیر تارگت آنها یعنی Outcome نال بود حذف کردیم و سپس در سایر ستون ها بجای مقادیر null میانگین آن ستون را قرار دادیم</p>
<h3 style="text-align:left;color:#945aaf;background-color:#ffffff;font-size:48p"><strong> b) </strong></h3>
```
import seaborn as sns
corrMatrix = df.corr()
corrMatrix
sns.heatmap(corrMatrix, annot=True)
```
<h4 style="direction:rtl;text-align:right;color:#945aaf;background-color:#ffffff;font-size:48p"><strong>شرح کد :</strong> </h4>
<p style="direction:rtl;text-align:right;">ایتدا ماتریس ضزایب همبستگی را ایجاد کرده و سپس heatmap آن را رسم کردیم. هرچه اعداد به +-1 نزدیک تر باشند همبستگی بیشتر است</p>
<h3 style="text-align:left;color:#945aaf;background-color:#ffffff;font-size:48p"><strong> c) </strong></h3>
```
c = df.groupby('Outcome')['Outcome'].count()
c
print("Current Percentage of two/total:",(30/174)*100)
print("#Records Needed to Resample To Achieve 50% two/total: " , ((0.5*174)-30)/0.5) #x = (p(rare)-current)/1-p
to_resample = df.loc[df['Outcome'] == 2]
our_resample = to_resample.sample(n = 114, replace = True)
balanced_df = pd.concat([df, our_resample])
print(balanced_df.groupby('Outcome')['Outcome'].count())
print("Current Percentage of 2:",(144/(144+144))*100)
```
<h4 style="direction:rtl;text-align:right;color:#945aaf;background-color:#ffffff;font-size:48p"><strong>شرح کد :</strong> </h4>
<p style="direction:rtl;text-align:right;">در بخش اول دیدیم نسبت رکوردها با کلاس هدف 2 نسبت به کلاس 1به نسبت کمتر است و داده ها می توانند imbalanced تلقی شوند و در کل کلاس 2، 17 درصد کل را تشکیل داده است.
برای رساندن این درصد به 50% به 114 رکورد نیاز داریم. در بخش بعد داده های با کلاس 2 را که می خواهیم سمپل کنیم در to_resample ذخیره می کنیم . 114 تای دیگر مشابه آنها تواید و با دیتاست اصلی concat می کنیم. حال درصد هریک از کلاس ها به 50 رسیده است</p>
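The resampling step can be sketched in isolation with a small made-up frame (the counts here are hypothetical, not the thyroid data):

```
import pandas as pd

# Hypothetical imbalanced frame: 6 rows of class 1, 2 rows of class 2
df = pd.DataFrame({'Outcome': [1, 1, 1, 1, 1, 1, 2, 2]})

to_resample = df.loc[df['Outcome'] == 2]
# rows needed to reach proportion p of the rare class: x = (p*n - n_rare) / (1 - p)
n_needed = int((0.5 * len(df) - len(to_resample)) / 0.5)  # here: 4
our_resample = to_resample.sample(n=n_needed, replace=True, random_state=0)
balanced = pd.concat([df, our_resample])

print(sorted(balanced['Outcome'].value_counts().to_dict().items()))
```

After concatenation both classes have 6 rows, i.e. a 50/50 split.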
<h3 style="text-align:left;color:#945aaf;background-color:#ffffff;font-size:48p"><strong> d) </strong></h3>
```
from sklearn.model_selection import train_test_split
X = balanced_df.drop('Outcome', axis=1)
y = balanced_df['Outcome']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
```
<h4 style="direction:rtl;text-align:right;color:#945aaf;background-color:#ffffff;font-size:48p"><strong>شرح کد :</strong> </h4>
<p style="direction:rtl;text-align:right;">ابتدا X وy را جدا کرده و سپس X و y های آموزش و تست را با اندازه تست 0.2 می سازیم.</p>
<h3 style="text-align:left;color:#945aaf;background-color:#ffffff;font-size:48p"><strong> e) </strong></h3>
```
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
cart00 = DecisionTreeClassifier(criterion="gini",max_leaf_nodes=6).fit(X_train,y_train)
print(X_train.columns)
fig, axes = plt.subplots(1,1,figsize=(12,12))
tree.plot_tree(cart00,ax=axes)
```
<h4 style="direction:rtl;text-align:right;color:#945aaf;background-color:#ffffff;font-size:48p"><strong>شرح کد :</strong> </h4>
<p style="direction:rtl;text-align:right;">به کمک کتابخانه sklearn یک DecisionTreeClassifier با پارامتر gini ساختیم و با داده های x و y آموزشی آن را train کردیم و درخت ان را پلات کردیم.</p>
<h3 style="text-align:left;color:#945aaf;background-color:#ffffff;font-size:48p"><strong> f) </strong></h3>
```
from sklearn.metrics import classification_report, confusion_matrix
y_predicted = cart00.predict(X_test)
# cart01.score(y_predicted,y_test)
print("[+] confusion matrix\n")
print(confusion_matrix(y_test, y_predicted))
print("\n[+] classification report\n")
print(classification_report(y_test, y_predicted))
```
<h4 style="direction:rtl;text-align:right;color:#945aaf;background-color:#ffffff;font-size:48p"><strong>شرح کد :</strong> </h4>
<p style="direction:rtl;text-align:right;">X_test را به مدلcart00 که آموزش دادیم دادیم تا y آن را پیشبینی کند و آنها را در y_predicted ذخیره کردیم.
برای سنجش دقت confusion matrix را نمایش دادیم که بیانگر این است که در کل روی 58 داده تست شده است مدل و 1 نمونه به خطا ب بقیه صحیح پیش بینی شده اند و به دقت 98% رسیده ایم.</p>
<h3 style="text-align:left;color:#945aaf;background-color:#ffffff;font-size:48p"><strong> g) </strong></h3>
```
#max_depth = 1
cart01 = DecisionTreeClassifier(criterion="gini",max_leaf_nodes=6,max_depth=1).fit(X_train,y_train)
y_predicted01 = cart01.predict(X_test)
print("[+] confusion matrix\n")
print(confusion_matrix(y_test, y_predicted01))
print("\n[+] classification report\n")
print(classification_report(y_test, y_predicted01))
#max_depth = 2
cart02 = DecisionTreeClassifier(criterion="gini",max_leaf_nodes=6,max_depth=2).fit(X_train,y_train)
y_predicted02 = cart02.predict(X_test)
print("[+] confusion matrix\n")
print(confusion_matrix(y_test, y_predicted02))
print("\n[+] classification report\n")
print(classification_report(y_test, y_predicted02))
#max_depth = 3
cart03 = DecisionTreeClassifier(criterion="gini",max_leaf_nodes=6,max_depth=3).fit(X_train,y_train)
y_predicted03 = cart03.predict(X_test)
print("[+] confusion matrix\n")
print(confusion_matrix(y_test, y_predicted03))
print("\n[+] classification report\n")
print(classification_report(y_test, y_predicted03))
#max_depth = 4
cart04 = DecisionTreeClassifier(criterion="gini",max_leaf_nodes=6,max_depth=4).fit(X_train,y_train)
y_predicted04 = cart04.predict(X_test)
print("[+] confusion matrix\n")
print(confusion_matrix(y_test, y_predicted04))
print("\n[+] classification report\n")
print(classification_report(y_test, y_predicted04))
#max_depth = 5
cart05 = DecisionTreeClassifier(criterion="gini",max_leaf_nodes=6,max_depth=5).fit(X_train,y_train)
y_predicted05 = cart05.predict(X_test)
print("[+] confusion matrix\n")
print(confusion_matrix(y_test, y_predicted05))
print("\n[+] classification report\n")
print(classification_report(y_test, y_predicted05))
#max_depth = 6
cart06 = DecisionTreeClassifier(criterion="gini",max_leaf_nodes=6,max_depth=6).fit(X_train,y_train)
y_predicted06 = cart06.predict(X_test)
print("[+] confusion matrix\n")
print(confusion_matrix(y_test, y_predicted06))
print("\n[+] classification report\n")
print(classification_report(y_test, y_predicted06))
#max_depth = 7
cart07 = DecisionTreeClassifier(criterion="gini",max_leaf_nodes=6,max_depth=7).fit(X_train,y_train)
y_predicted07 = cart07.predict(X_test)
print("[+] confusion matrix\n")
print(confusion_matrix(y_test, y_predicted07))
print("\n[+] classification report\n")
print(classification_report(y_test, y_predicted07))
#max_depth = 8
cart08 = DecisionTreeClassifier(criterion="gini",max_leaf_nodes=6,max_depth=8).fit(X_train,y_train)
y_predicted08 = cart08.predict(X_test)
print("[+] confusion matrix\n")
print(confusion_matrix(y_test, y_predicted08))
print("\n[+] classification report\n")
print(classification_report(y_test, y_predicted08))
#max_depth = 9
cart09 = DecisionTreeClassifier(criterion="gini",max_leaf_nodes=6,max_depth=9).fit(X_train,y_train)
y_predicted09 = cart09.predict(X_test)
print("[+] confusion matrix\n")
print(confusion_matrix(y_test, y_predicted09))
print("\n[+] classification report\n")
print(classification_report(y_test, y_predicted09))
```
<h4 style="direction:rtl;text-align:right;font-family:Yekan, sans-serif;color:#945aaf;background-color:#ffffff;font-size:48p"><strong>شرح کد :</strong> </h4>
<h4 style="direction:rtl">برای بیش از 4 به 98% دقت رسیده و در هر مورد 1 خطا داشته ایم. </h4>
<h3 style="text-align:left;color:#945aaf;background-color:#ffffff;font-size:48p"><strong> h) </strong></h3>
```
importance = cart00.feature_importances_
for i,v in enumerate(importance):
print('Feature %0d (%s) : Score: %.5f' % (i,X_train.columns[i],v))
fig, axes = plt.subplots(1,1,figsize=(10,5))
plt.bar([x for x in X_train.columns], importance)
plt.show()
```
<h4 style="direction:rtl;text-align:right;color:#945aaf;background-color:#ffffff;font-size:48p"><strong>شرح کد :</strong> </h4>
<p style="direction:rtl;text-align:right;">Feature Importance به روش هایی گفته می شود که بر حسب اینکه یک متغیر یا فیچر چقدر در پیش بینی متغیر هدف مهم و اثرگدار است به آن یک ارزش داده می شود.</p>
<p style="direction:rtl;text-align:right;">در مدل های پیشگو اهمیت بالایی دارد و علاوه بر اینکه در مورد دیتا و مدل به ما دید می دهد مبنای کاهش ابعاد و انتخاب فیچر هم هست. به ما نشان می دهد که کدام فیچر ها به فیچر هدف مرتبط تر هستند و کدام دورتر. در این کار را متخصص بیزینس هم می تواند کمک کند تا بتوانیم داده های مناسب تری هم جمع آوری کنیم. همچنین چون عمدتا این مقدار پس از fit شدن مدل روی داده ها محاسبه می شود دردگاه خوبی هم در مورد مدل ما و فیچرهای مهم برای آن ارائه می دهد.به این صورت ما می توانیم فیچرهایی را که برای مدل ما مهم تر هستند نگه داریم و از بقیه صرف نظر کنیم. </p>
<p style="direction:rtl;text-align:right;">هرچه feature_importances_ بالاتر باشد ، ویژگی مهمتر است.به عنوان اهمیت جینی نیز شناخته می شود</p>
<p style="direction:rtl;text-align:right;">همانطور که از اعداد و نمودار فوق مشخص است serum_thyroxin مهم ترین فیچر برای پیش بینی Outcome است و سایر فیچر ها هم به ترتیب مشخض شده اند.</p>
<h3 style="text-align:left;color:#945aaf;background-color:#ffffff;font-size:48p"><strong> i) </strong></h3>
```
from sklearn.tree import export_graphviz
import pydotplus
import graphviz
export_graphviz(cart09, out_file="tree.dot", class_names=["Outcome1", "Outcome2"],feature_names=X_train.columns, impurity=True, filled=True)
with open("tree.dot") as f:
dot_graph = f.read()
graphviz.Source(dot_graph)
```
<h4 style="direction:rtl;text-align:right;color:#945aaf;background-color:#ffffff;font-size:48p"><strong>شرح کد :</strong> </h4>
<p style="direction:rtl;text-align:right;">درخت با max_depth=9 به 98درصد دقت می رسد پس آن برای رسم انتخاب شده است.درخت مربوط به آن را در فایلtree.dot با کلاس های متغیر هدف Outcome1و Outcome2 و اسامی فیچرهای معادل نام ستون هایX_train ذخیره کردایم. سپی فایل را خوانده و به کمک همان graphviz درخت را نمایش دادیم. در ادامه نیز به روش دیگری آن را نمایش خواهیم داد.
impurity=True باعث نمایش gini در نمودار می شود.</p>
<h3 style="text-align:left;color:#945aaf;background-color:#ffffff;font-size:48p"><strong> j) </strong></h3>
```
import collections
from IPython.display import Image
graph = pydotplus.graph_from_dot_data(dot_graph)
colors = ('turquoise', 'pink')
edges = collections.defaultdict(list)
for edge in graph.get_edge_list():
edges[edge.get_source()].append(int(edge.get_destination()))
for edge in edges:
edges[edge].sort()
for i in range(2):
dest = graph.get_node(str(edges[edge][i]))[0]
dest.set_fillcolor(colors[i])
graph.write_png('mytree.png')
Image(graph.create_png())
```
<h4 style="direction:rtl;text-align:right;color:#945aaf;background-color:#ffffff;font-size:48p"><strong>شرح کد :</strong> </h4>
<p style="direction:rtl;text-align:right;">ابتدا dot_graph ی را که در سوال قبل از فایلtree.dot خواندیم به شکل یک گراف در متغیرgraph ذخیره می کنیم.
سپس پالت رنگی برای رسم را تعیین می کنیم.برای یال هایی که در گرافgraph وجود دارد مبدا و مقصد آنها را یه هم وصل می کنیم و سپس گراف را مرتب و رنگ آمیزی می کنیم.درنهایت آن را به عنوان یک عکس ذخیره و با خواندن غکس نمایش می دهیم.</p>
<h4 style="direction:rtl;text-align:right;color:#945aaf;background-color:#ffffff;font-size:40p"><strong>توصیف نمودار :</strong> </h4>
<p style="direction:rtl;text-align:right;">
اولین متغیری که بررسی شده است و در ریشه درخت قرار گرفته است Serum_thyroxin است چون separator خوبی است و تمایز بیشتری ایجاد می کند.اگر مقدار آن کمتر از 11 ابشد بدون بررسی هیچ ویژگی دیگر با می توان گفت در کلاس Outcome1 است و با توجه به نمودار چون gini=0 کلاس خیلی خالص و ایده ال است.
سپس در سمت راست درخت با فرض متغیر قبل بزرگتر از 11 است متغیر بعدی را بررسی می کند که اگر کوچکترمساوی 1.55 باشد باز یک گره خالص از کلاس 1 را در خروجی خواهیم داشت اما اگر اینطور نباشد سراغ بررسی متغیرهای بعد می رویم و همینطور ادامه می دهیم تا درختی ساخته بشود که ترجیحا بالانس است و در برگ ها دسته بندی هایی صورت گرفته است که gini آنها حداقل است و دسته بندی مناسبی است. </p>
# Diplomatura en Ciencia de Datos, Aprendizaje Automático y sus Aplicaciones
## Programación Distribuida sobre Grandes Volúmenes de Datos
Damián Barsotti
### Facultad de Matemática Astronomía Física y Computación
## Universidad Nacional de Córdoba
<img src="http://program.ar/wp-content/uploads/2018/07/logo-UNC-FAMAF.png" alt="Drawing" style="width:80%;"/>
### Before starting
#### On the virtual machine
1. Launch a terminal
1. Update the repo:
```sh
cd diplodatos_bigdata
git pull
```
1. Launch [Zeppelin](http://zeppelin.apache.org/):
```sh
cd
cd spark/zeppelin-0.8.2-bin-all
./bin/zeppelin-daemon.sh start
```
1. In a browser, open [http://localhost:8080](http://localhost:8080)
1. Select `Import note`
1. Choose the json file at `diplodatos_bigdata/clases/04_dataframes_tablas/note.json`
1. Select `Clase 04 - Dataframes y Tablas`
```
import seaborn as sns
import matplotlib.pyplot as plt
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("04_dataframes_tablas").getOrCreate()
sc = spark.sparkContext
```
## Dataframes and Tables (example)
```
flightsInDF = spark.read.load("../inputs/ds/flights.csv",
format="csv", delimiter=",", header=True, inferSchema=True)
flightsInDF.count()
flightsInDF.limit(10).toPandas()
```
#### Column description
##### Some important columns
```
flightsInDF.select("UniqueCarrier", "FlightNum", "DepDelay", "ArrDelay", "Distance").show()
```
#### Schema
```
flightsInDF.printSchema()
```
Delayed flights (> 15 min)
```
delayedDF = flightsInDF.select("UniqueCarrier", "DepDelay").filter("DepDelay > 15").cache()
delayedDF.show(n=5)
```
Count
```
print("Cantidad de vuelos retrasados por más de 15': ", delayedDF.count())
```
### User Defined Functions (UDF)
```
from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType
#* Function that determines whether a flight is delayed:
#* returns 1 if the delay is > 15 minutes,
#* 0 otherwise.
#* If the delay is undefined ("NA") it returns 0.
# ********************************************************/
def isDelayed(time):
if time == "NA":
return 0
elif int(time) > 15:
return 1
else:
return 0
# UDF that determines whether a flight is delayed
isDelayedUDF = udf(isDelayed, IntegerType())
```
### New Dataframe using the UDF
New Dataframe with a subset of the columns plus a new one indicating whether the flight is delayed
```
flightsDF = flightsInDF.select(
"Year", "Month", "DayofMonth", "DayOfWeek", "CRSDepTime", "UniqueCarrier", "FlightNum",
"DepDelay", "ArrDelay", "Origin", "Dest", "TaxiIn", "TaxiOut", "Distance",
isDelayedUDF("DepDelay").alias("IsDepDelayed")).cache()
flightsDF.toPandas()
```
Percentage of delayed flights
```
from pyspark.sql.functions import sum, count
percDelayedDF = flightsDF.agg(
(sum("IsDepDelayed") * 100 / count("*"))\
.alias("Porcentaje de vuelos retrasados"))\
.cache()
firstRow = percDelayedDF.first()
porcentaje = firstRow.asDict()["Porcentaje de vuelos retrasados"]
print(f"Porcentaje de vuelos demorados: {porcentaje}%")
```
Average Taxi-out
```
from pyspark.sql.functions import avg
promTODF= flightsDF.select("Origin", "Dest", "TaxiOut") \
.groupBy("Origin", "Dest").agg(avg("TaxiOut").alias("AvgTaxiOut")) \
.orderBy("AvgTaxiOut", ascending=False)
promTODF.show()
```
### Exercise
Find the average Taxi-in per origin and destination.
```
flightsDF.select("Origin", "Dest", "TaxiIn") \
.groupBy("Origin", "Dest").agg(avg("TaxiIn").alias("AvgTaxiIn")) \
.orderBy("AvgTaxiIn", ascending=False).show()
```
## Plain SQL
---
Create a temporary table
```
flightsDF.createOrReplaceTempView("flightsTbl")
```
Flights table
```
resDF = spark.sql("SELECT * FROM flightsTbl LIMIT 10")
resDF.toPandas()
```
#### UDFs in plain SQL
We already defined `isDelayed`
Register the UDF for use with tables
```
spark.udf.register("isDelayedTabUDF", isDelayed, IntegerType())  # explicit return type (the default is string)
```
Using SQL
```sql
SELECT UniqueCarrier, SUM(isDelayedTabUDF(ArrDelay)) AS NumArrDelays
FROM flightsTbl GROUP BY UniqueCarrier
```
```sql
SELECT UniqueCarrier, AVG(ArrDelay) AS AvgTimeDelay
FROM flightsTbl WHERE isDelayedTabUDF(ArrDelay) == 1
GROUP BY UniqueCarrier
```
Programmatic SQL
```
flightsDF.filter(isDelayedUDF("ArrDelay") == 1) \
.groupBy("UniqueCarrier") \
.agg(avg("ArrDelay")) \
.show()
```
### Exercise
Find the average flight distance per carrier.
```
flightsDF.printSchema()
flightsDF.select("UniqueCarrier", "Distance")\
.groupBy("UniqueCarrier") \
.agg(avg("Distance")) \
.show()
```
### Exercise
Delayed and on-time flights by day of the week
```
from pyspark.sql.functions import when
retardos = flightsDF.select(
    "DayOfWeek", when(isDelayedUDF("ArrDelay") == 1, 'delayed').otherwise('ok')\
    .alias('Delay'))\
    .groupBy("DayOfWeek", 'Delay')\
    .agg(count("*").alias("count"))
retardos.show()
plt.figure(figsize=(12, 6))
sns.barplot(x='DayOfWeek', y='count', hue='Delay', data=retardos.toPandas())
plt.show()
```
### Exercise
Delayed vs. on-time flights by hour
```
retardos = flightsDF.select(
    (flightsDF.CRSDepTime / 100).cast("int").alias("Hour"),
    when(isDelayedUDF("ArrDelay") == 1, 'delayed').otherwise('ok')\
    .alias('Delay'))\
    .groupBy("Hour", 'Delay')\
    .agg(count("*").alias("count"))
retardos.show()
plt.figure(figsize=(12, 6))
sns.barplot(x='Hour', y='count', hue='Delay', data=retardos.toPandas())
plt.show()
```
## Reading/Writing
---
Saving as an ORC file
| Mode | Meaning |
|----|-------------|
| `"error"` (default) | Raises an exception if the output already exists. |
| `"append"` | Appends to the existing data. |
| `"overwrite"` | Deletes the existing data and creates it anew. |
| `"ignore"` | Does nothing if the output exists; similar to CREATE TABLE IF NOT EXISTS. |
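Putting the modes above to use, the ORC file loaded in the next cell can be produced like this (a sketch, assuming the `flightsDF` DataFrame from the cells above):

```python
# Write the DataFrame as ORC; "overwrite" replaces any existing output at the path
flightsDF.write.format("orc").mode("overwrite").save("flights.orc")
```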
Load the data back using the `orc` format:
``` python
testDF = spark.read.format("orc").load("flights.orc")
```
### Permanent tables
* `createOrReplaceTempView` creates temporary **tables** (they disappear when the `SparkSession` is destroyed).
* `saveAsTable`
  - stores DataFrames permanently in the *Hive metastore* and
  - allows running plain SQL against them.
Save as a table:
```
flightsDF.write.format("orc").mode("overwrite").saveAsTable("flightsPermTbl")
```
### Reading tables into a DataFrame
```
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
dfFromTbl = sqlContext.table("flightstbl")
dfFromTbl.filter("IsDepDelayed == 1").toPandas()
```
### Exercise
Compute the number of flights departing from each airport and
save the result in a permanent table.
```
srcPort = dfFromTbl.select("Origin")\
    .groupBy("Origin")\
    .agg(count("*").alias("count"))
srcPort.show()
# Persist the result as a permanent table, as the exercise asks
srcPort.write.mode("overwrite").saveAsTable("srcPortTbl")
```
#### Adding carrier names
The carrier names are in `carriers.csv`:
```
carriersDF = spark.read.load(
"../inputs/ds/carriers.csv", format="csv", header=True, inferSchema=True)
carriersDF.show()
```
#### join
```
from pyspark.sql.functions import col
flCarrDF = flightsDF.join(carriersDF, col("Code") == col("UniqueCarrier")) \
.withColumnRenamed("Description","CarrierName")
print(flCarrDF.count())
flCarrDF.toPandas()
```
#### Broadcast join
Broadcast joins are an important part of the Spark SQL execution engine.
When used, Spark performs the join between two relations by first
broadcasting the smaller one to all Spark executors and then evaluating
the join criteria against each executor's partitions of the other relation.
<img src="https://www.oreilly.com/library/view/high-performance-spark/9781491943199/assets/hpsp_0405.png" alt="Drawing" style="width:60%;"/>
```
from pyspark.sql.functions import col, broadcast
flCarrDF = flightsDF.join(broadcast(carriersDF), col("Code") == col("UniqueCarrier")) \
.withColumnRenamed("Description","CarrierName")
print(flCarrDF.count())
flCarrDF.toPandas()
```
#### Exercise
Using the information in the `airports.csv` file, replace the airport
codes with their names and add their coordinates.
Do it with programmatic SQL (using DataFrames).
##### Hint:
The `drop` method from the [Dataset API docs](http://spark.apache.org/docs/2.1.1/api/scala/index.html#org.apache.spark.sql.Dataset) may be useful.
```
!head ../inputs/ds/airports.csv
airportsDF = spark.read.load(
"../inputs/ds/airports.csv", format="csv", header=True, inferSchema=True)
airportsDF.toPandas()
flAirportDF = flCarrDF.join(broadcast(airportsDF.select("airport", "iata")), col("Origin") == col("iata")) \
.withColumnRenamed("airport","airport origin") \
.drop("Origin") \
.drop("iata") \
.join(broadcast(airportsDF.select("airport", "iata", "lat", "long")), col("Dest") == col("iata")) \
.withColumnRenamed("airport","airport dest") \
.drop("Dest") \
.drop("iata")
flAirportDF.toPandas()
```
End
## Model Layers
This module contains many layer classes that we might be interested in using in our models. These layers complement the default [Pytorch layers](https://pytorch.org/docs/stable/nn.html) which we can also use as predefined layers.
```
from fastai.vision import *
from fastai.gen_doc.nbdoc import *
```
## Custom fastai modules
```
show_doc(AdaptiveConcatPool2d, title_level=3)
from fastai.gen_doc.nbdoc import *
from fastai.layers import *
```
The output will be `2*sz`, or just 2 if `sz` is None.
The [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d) object uses adaptive average pooling and adaptive max pooling and concatenates them both. We use this because it provides the model with the information of both methods and improves performance. This technique is called `adaptive` because it allows us to decide on what output dimensions we want, instead of choosing the input's dimensions to fit a desired output size.
Let's try training with Adaptive Average Pooling first, then with Adaptive Max Pooling and finally with the concatenation of them both to see how they fare in performance.
We will first define a [`simple_cnn`](/layers.html#simple_cnn) using [Adaptive Max Pooling](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveMaxPool2d) by changing the source code a bit.
```
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
def simple_cnn_max(actns:Collection[int], kernel_szs:Collection[int]=None,
strides:Collection[int]=None) -> nn.Sequential:
"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`"
nl = len(actns)-1
kernel_szs = ifnone(kernel_szs, [3]*nl)
strides = ifnone(strides , [2]*nl)
layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])
for i in range(len(strides))]
layers.append(nn.Sequential(nn.AdaptiveMaxPool2d(1), Flatten()))
return nn.Sequential(*layers)
model = simple_cnn_max((3,16,16,2))
learner = Learner(data, model, metrics=[accuracy])
learner.fit(1)
```
Now let's try with [Adaptive Average Pooling](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveAvgPool2d).
```
def simple_cnn_avg(actns:Collection[int], kernel_szs:Collection[int]=None,
strides:Collection[int]=None) -> nn.Sequential:
"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`"
nl = len(actns)-1
kernel_szs = ifnone(kernel_szs, [3]*nl)
strides = ifnone(strides , [2]*nl)
layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])
for i in range(len(strides))]
layers.append(nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten()))
return nn.Sequential(*layers)
model = simple_cnn_avg((3,16,16,2))
learner = Learner(data, model, metrics=[accuracy])
learner.fit(1)
```
Finally we will try with the concatenation of them both [`AdaptiveConcatPool2d`](/layers.html#AdaptiveConcatPool2d). We will see that, in fact, it increases our accuracy and decreases our loss considerably!
```
def simple_cnn(actns:Collection[int], kernel_szs:Collection[int]=None,
strides:Collection[int]=None) -> nn.Sequential:
"CNN with `conv2d_relu` layers defined by `actns`, `kernel_szs` and `strides`"
nl = len(actns)-1
kernel_szs = ifnone(kernel_szs, [3]*nl)
strides = ifnone(strides , [2]*nl)
layers = [conv_layer(actns[i], actns[i+1], kernel_szs[i], stride=strides[i])
for i in range(len(strides))]
layers.append(nn.Sequential(AdaptiveConcatPool2d(1), Flatten()))
return nn.Sequential(*layers)
model = simple_cnn((3,16,16,2))
learner = Learner(data, model, metrics=[accuracy])
learner.fit(1)
show_doc(Lambda, title_level=3)
```
This is very useful to use functions as layers in our networks inside a [Sequential](https://pytorch.org/docs/stable/nn.html#torch.nn.Sequential) object. So, for example, say we want to apply a [log_softmax loss](https://pytorch.org/docs/stable/nn.html#torch.nn.functional.log_softmax) and we need to change the shape of our output batches to be able to use this loss. We can add a layer that applies the necessary change in shape by calling:
`Lambda(lambda x: x.view(x.size(0),-1))`
Let's see an example of how the shape of our output can change when we add this layer.
```
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
)
model.cuda()
for xb, yb in data.train_dl:
out = (model(*[xb]))
print(out.size())
break
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
Lambda(lambda x: x.view(x.size(0),-1))
)
model.cuda()
for xb, yb in data.train_dl:
out = (model(*[xb]))
print(out.size())
break
show_doc(Flatten)
```
The function we build above is actually implemented in our library as [`Flatten`](/layers.html#Flatten). We can see that it returns the same size when we run it.
```
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
Flatten(),
)
model.cuda()
for xb, yb in data.train_dl:
out = (model(*[xb]))
print(out.size())
break
show_doc(PoolFlatten)
```
We can combine these two final layers ([AdaptiveAvgPool2d](https://pytorch.org/docs/stable/nn.html#torch.nn.AdaptiveAvgPool2d) and [`Flatten`](/layers.html#Flatten)) by using [`PoolFlatten`](/layers.html#PoolFlatten).
```
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
PoolFlatten()
)
model.cuda()
for xb, yb in data.train_dl:
out = (model(*[xb]))
print(out.size())
break
```
Another use we give to the Lambda function is to resize batches with [`ResizeBatch`](/layers.html#ResizeBatch) when we have a layer that expects a different input than what comes from the previous one.
```
show_doc(ResizeBatch)
a = torch.tensor([[1., -1.], [1., -1.]])[None]
print(a)
out = ResizeBatch(4)
print(out(a))
show_doc(Debugger, title_level=3)
```
The debugger module allows us to peek inside a network while it's training and see in detail what is going on. We can see inputs, outputs and sizes at any point in the network.
For instance, if you run the following:
``` python
model = nn.Sequential(
nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
Debugger(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1), nn.ReLU(),
)
model.cuda()
learner = Learner(data, model, metrics=[accuracy])
learner.fit(5)
```
... you'll see something like this:
```
/home/ubuntu/fastai/fastai/layers.py(74)forward()
72 def forward(self,x:Tensor) -> Tensor:
73 set_trace()
---> 74 return x
75
76 class StdUpsample(nn.Module):
ipdb>
```
```
show_doc(PixelShuffle_ICNR, title_level=3)
show_doc(MergeLayer, title_level=3)
show_doc(PartialLayer, title_level=3)
show_doc(SigmoidRange, title_level=3)
show_doc(SequentialEx, title_level=3)
show_doc(SelfAttention, title_level=3)
show_doc(BatchNorm1dFlat, title_level=3)
```
## Loss functions
```
show_doc(FlattenedLoss, title_level=3)
```
Create an instance of `func` with `args` and `kwargs`. When passing an output and target, it
- puts `axis` first in output and target with a transpose
- casts the target to `float` if `floatify=True`
- squeezes the `output` to two dimensions if `is_2d`, otherwise one dimension, squeezes the target to one dimension
- applies the instance of `func`.
```
show_doc(BCEFlat)
show_doc(BCEWithLogitsFlat)
show_doc(CrossEntropyFlat)
show_doc(MSELossFlat)
show_doc(NoopLoss)
show_doc(WassersteinLoss)
```
## Helper functions to create modules
```
show_doc(bn_drop_lin, doc_string=False)
```
The [`bn_drop_lin`](/layers.html#bn_drop_lin) function returns a sequence of [batch normalization](https://arxiv.org/abs/1502.03167), [dropout](https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf) and a linear layer. This custom layer is usually used at the end of a model.
`n_in` represents the size of the input, `n_out` the size of the output, `bn` whether we want batch norm or not, `p` how much dropout to apply, and `actn` an optional activation function to add at the end.
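As a sketch of the sequence described above (assuming the fastai v1 behaviour; `bn_drop_lin_sketch` is illustrative, not the library implementation):

```python
import torch.nn as nn

def bn_drop_lin_sketch(n_in, n_out, bn=True, p=0.0, actn=None):
    # BatchNorm -> Dropout -> Linear (-> optional activation)
    layers = []
    if bn:
        layers.append(nn.BatchNorm1d(n_in))
    if p != 0:
        layers.append(nn.Dropout(p))
    layers.append(nn.Linear(n_in, n_out))
    if actn is not None:
        layers.append(actn)
    return layers

# e.g. head = nn.Sequential(*bn_drop_lin_sketch(512, 10, p=0.5, actn=nn.ReLU()))
```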
```
show_doc(conv2d)
show_doc(conv2d_trans)
show_doc(conv_layer, doc_string=False)
```
The [`conv_layer`](/layers.html#conv_layer) function returns a sequence of [nn.Conv2D](https://pytorch.org/docs/stable/nn.html#torch.nn.Conv2d), [BatchNorm](https://arxiv.org/abs/1502.03167) and a ReLU or [leaky RELU](https://ai.stanford.edu/~amaas/papers/relu_hybrid_icml2013_final.pdf) activation function.
`n_in` represents the size of the input, `n_out` the size of the output, `ks` the kernel size, and `stride` the stride with which we want to apply the convolutions. `bias` decides whether they have bias or not (if None, defaults to True unless using batchnorm). `norm_type` selects the type of normalization (or `None`). If `leaky` is None, the activation is a standard `ReLU`, otherwise it's a `LeakyReLU` of slope `leaky`. Finally, if `transpose=True`, the convolution is replaced by a `ConvTranspose2D`.
```
show_doc(embedding, doc_string=False)
```
Create an [embedding layer](https://arxiv.org/abs/1711.09160) with input size `ni` and output size `nf`.
```
show_doc(relu)
show_doc(res_block)
show_doc(sigmoid_range)
show_doc(simple_cnn)
```
## Initialization of modules
```
show_doc(batchnorm_2d)
show_doc(icnr)
show_doc(trunc_normal_)
show_doc(NormType)
```
## Undocumented Methods - Methods moved below this line will intentionally be hidden
```
show_doc(Debugger.forward)
show_doc(Lambda.forward)
show_doc(AdaptiveConcatPool2d.forward)
show_doc(NoopLoss.forward)
show_doc(PixelShuffle_ICNR.forward)
show_doc(WassersteinLoss.forward)
show_doc(MergeLayer.forward)
show_doc(SigmoidRange.forward)
show_doc(SelfAttention.forward)
show_doc(SequentialEx.forward)
show_doc(SequentialEx.append)
show_doc(SequentialEx.extend)
show_doc(SequentialEx.insert)
show_doc(PartialLayer.forward)
show_doc(BatchNorm1dFlat.forward)
show_doc(Flatten.forward)
```
## New Methods - Please document or move to the undocumented section
```
show_doc(View)
show_doc(ResizeBatch.forward)
show_doc(View.forward)
```
# Lists
## Definition
A collection of heterogeneous objects, separated by commas and delimited by square brackets:
```
collection = ["A Lannister", [32, "cheese"], "32"]
```
As with any sequence, the items of a list are ordered and can be accessed by their index:
```
print(collection[1])
print(collection[1][0])
```
Unlike strings, lists are mutable, meaning they can be modified after creation:
```
collection[2] = "32.0"
print(collection)
```
## Manipulation
To add an item at the end of a list, use the `append()` method:
```
words = ["A", "Lannister"]
words.append("always")
print(words)
```
The `extend()` method performs the same operation, except that it adds several items at once:
```
words.extend(["his", "."])
print(words)
```
Another possibility is the `insert()` method, which takes the index at which to insert the item:
```
words.insert(3, "pays")
print(words)
```
The same result can be achieved through *slicing*:
```
words[5:5] = ["debts"]
print(words)
```
Conversely, to remove an item from a list, use the `pop()` method with the index of the item in question. By default, the method removes the last item:
```
words.pop(2)
words.pop()
print(words)
```
The `remove()` method, for its part, deletes the first occurrence of the given item:
```
words.remove("Lannister")
print(words)
```
Finally, the `reverse()` method reverses the list in place. The same result can be obtained through *slicing*, with the difference that slicing makes a copy of the list.
```
# The original list is not affected by the slicing
print(words)
print(words[::-1])
print(words)
# Whereas the reverse() method reverses the list in place
words.reverse()
print(words)
```
### A note on list concatenation
It is possible to add items to a list with the `+` operator. In practice this is discouraged: concatenation creates a new object instead of taking advantage of the **mutable** nature of lists. Creating new objects every time an item is inserted is much more expensive in machine time, so the `append()` method is preferred.
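A quick illustration of the difference, as a minimal sketch: `append()` mutates the list in place, while `+` builds a brand-new object each time:

```python
words = ["A", "Lannister"]
same = words                 # a second name for the same list object
words.append("always")       # mutates in place: no new list is created
print(same)                  # ['A', 'Lannister', 'always']
rebuilt = words + ["pays"]   # concatenation allocates a new list
print(rebuilt is words)      # False
```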
## Common list techniques
### Sorting
The `sort()` method sorts in place, while the `sorted()` function returns a sorted copy of the list:
```
a = [2, 4, 1]
# sorted() returns a new list; the result is discarded here
sorted(a)
print(a)
# Sorted!
a.sort()
print(a)
```
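Both `sort()` and `sorted()` also accept a `key` argument to sort by a derived value, for example by word length:

```python
words = ["Lannister", "A", "always"]
# sorted() returns a new list ordered by the key
print(sorted(words, key=len))   # ['A', 'always', 'Lannister']
# sort() applies the same logic in place, here in descending order
words.sort(key=len, reverse=True)
print(words)                    # ['Lannister', 'always', 'A']
```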
### Computations
The `count()` method returns the number of occurrences of the item passed as an argument:
```
words = ["A", "good", "day", "is", "a", "day", "with", "coffee", "."]
words.count("day")
```
A few other useful functions:
- `len()`: returns the number of items
- `max()`: returns the maximum value
- `min()`: returns the minimum value
- `sum()`: returns the sum of the list's values
```
print(len(a))
print(max(a))
print(min(a))
print(sum(a))
```
### Copying
The `copy()` method makes a shallow copy of the list. It is equivalent to the slice `a[:]`:
```
b = a.copy()
c = a[:]
print(b, c)
```
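Since the copy is shallow, nested mutable objects are shared between the original and the copy; the `copy.deepcopy()` function avoids this:

```python
import copy

nested = [[1, 2], [3, 4]]
shallow = nested.copy()
shallow[0].append(99)      # the inner list is shared with the original
print(nested[0])           # [1, 2, 99]

deep = copy.deepcopy(nested)
deep[0].append(-1)         # the deep copy has its own inner lists
print(nested[0])           # still [1, 2, 99]
```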
## Iteration
A `for` loop goes through each item in insertion order:
```
words = ["A", "Lannister", "always", "pays", "his", "debts."]
for word in words:
print(word, end=" ")
```
The `enumerate()` function pairs each item with its index in a tuple:
```
for word in enumerate(words):
print(word, end=" ")
```
We can take advantage of this behaviour by unpacking the tuples (see the [tuples notebook](23.tuples.ipynb#Accéder-à-un-élément-du-tuple)):
```
for idx, word in enumerate(words):
print(idx, word)
```
## Filtering
The `filter()` function takes a function as a parameter and applies it to every item of a sequence. It is often easier to use than a `for` loop to reach the same result.
For example, to pick out the words longer than three characters in a list, the classic approach would be:
```
results = []
words = ["A", "Lannister", "always", "pays", "his", "debts."]
for word in words:
if len(word) > 3:
results.append(word)
print(results)
```
With the `filter()` function, the result is obtained more quickly:
```
words = ["A", "Lannister", "always", "pays", "his", "debts."]
results = filter(lambda word: len(word) > 3, words)
print(list(results))
```
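The same result can also be written as a list comprehension, which many consider more idiomatic than `filter()`:

```python
words = ["A", "Lannister", "always", "pays", "his", "debts."]
results = [word for word in words if len(word) > 3]
print(results)  # ['Lannister', 'always', 'pays', 'debts.']
```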
### Getting unique values
A list being a simple collection of heterogeneous items, nothing prevents redundant information. When that redundancy is not wanted, two ways of removing it are worth remembering:
- fill another list with the unique values;
- convert the list into a collection of unique items.
```
# A list with redundant information
ingredients = ['spam', 'spam', 'spam', 'egg', 'spam', 'cheese', 'egg']
```
The first solution relies on a condition:
```
unique_ingredients = []
for ingredient in ingredients:
if ingredient not in unique_ingredients:
unique_ingredients.append(ingredient)
unique_ingredients
```
The second solution uses a type conversion, from list to set (`set`):
```
set(ingredients)
```
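Note that converting to a `set` does not preserve insertion order. When order matters, `dict.fromkeys()` removes duplicates while keeping the first occurrence of each item:

```python
ingredients = ['spam', 'spam', 'spam', 'egg', 'spam', 'cheese', 'egg']
unique = list(dict.fromkeys(ingredients))  # dicts preserve insertion order
print(unique)  # ['spam', 'egg', 'cheese']
```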
### Picking a random value
There is never true randomness in computing, but some algorithms come close to it. In Python, that is the role of the `random` module. Among all the available functions, `choice()` picks an item from any sequence:
```
from random import choice
shifumi = ['pierre', 'papier', 'ciseaux']
choice(shifumi)
```
# Exporting high quality satellite images
* **Products used:**
[ls8_sr](https://explorer.digitalearth.africa/products/ls8_sr),
[ls7_sr](https://explorer.digitalearth.africa/products/ls7_sr),
[ls5_sr](https://explorer.digitalearth.africa/products/ls5_sr),
[s2_l2a](https://explorer.digitalearth.africa/products/s2_l2a)
## Background
Most of the case studies in this repository focus on quantitatively analysing satellite data to obtain insights into Africa's changing environment.
However, satellite imagery also represents a powerful tool for visualisation.
Images taken by satellites can help explain physical processes, highlight change over time, or provide valuable context to better understand the impacts of recent environmental events such as flooding or fire.
Satellite data can also be processed to create images of the landscape based on invisible wavelengths of light (e.g. false colour images), allowing us to obtain richer insights into features and processes that would otherwise be invisible to the human eye.
### Digital Earth Africa use case
**Digital Earth Africa** provides over three decades of satellite imagery across the entire continent of Africa. Satellite data from the [NASA/USGS Landsat program](https://www.usgs.gov/land-resources/nli/landsat) allow us to produce fortnightly images of Africa's diverse natural and artificial landscapes at any time since 1984. More recently, the [Copernicus Sentinel-2 mission](https://sentinel.esa.int/web/sentinel/missions/sentinel-2) has provided even higher resolution imagery as frequently as every 5 days since 2017.
## Description
This notebook provides an interactive tool for selecting, loading, processing and exporting satellite imagery as a high quality image file.
This can be used in combination with the interactive [Digital Earth Africa Maps](https://maps.digitalearth.africa/) platform to identify an image of interest, then download it using this notebook for use in other applications.
The tool supports Sentinel-2 and Landsat data, creating True and False colour images.
***
## Getting started
To run this analysis, run all the cells in the notebook, starting with the "Load packages" cell.
### Load packages
Import Python packages used for the analysis.
```
%matplotlib inline
from deafrica_tools.app.imageexport import select_region_app, export_image_app
```
## Select imagery
### Analysis parameters
The following cell sets important required parameters used to plot and select satellite imagery on the interactive map.
* `date`: The exact date used to display imagery on the map (e.g. `date='1988-01-01'`).
* `satellites`: The satellite data to be shown on the map.
Five options are currently supported:
| | |
|--------------------|--------------------------------------------------------------------------------------------------------------------------|
| **"Landsat-8"** | Data from the Landsat 8 optical satellite |
| **"Landsat-7"** | Data from the Landsat 7 optical satellite |
| **"Landsat-5"** | Data from the Landsat 5 optical satellite |
| **"Sentinel-2"** | Data from the Sentinel-2A and 2B optical satellites |
| **"Sentinel-2 geomedian"** | Data from the Sentinel-2 annual geomedian |
> **If running the notebook for the first time**, keep the default settings below. This will demonstrate how the analysis works and provide meaningful results.
> **If passing in `'Sentinel-2 geomedian'`, ensure the `date` looks like this: `'<year>-01-01'`**
```
# Required image selection parameters
date = '2020-01-30'
satellites = 'Sentinel-2'
```
### Select location on a map
Run the following cell to plot the interactive map that is used to select the area to load and export satellite imagery.
Select the `Draw a rectangle` or `Draw a polygon` tool on the left of the map, and draw a shape around the area you are interested in.
When you are ready, press the green `done` button on the top right of the map.
> To keep load times reasonable, select an area **smaller than 10000 square kilometers** in size (this limit can be overruled by supplying the `size_limit` parameter in the `select_region_app` function below).
```
selection = select_region_app(date=date,
satellites=satellites,
size_limit=10000)
```
## Export image
The optional parameters below allow you to fine-tune the appearance of your output image.
* `style`: The style used to produce the image.
Two options are currently supported:
| | |
|--------------------|--------------------------------------------------------------------------------------------------------------------------|
| **"True colour"** | Creates a true colour image using the red, green and blue satellite bands |
| **"False colour"** | Creates a false colour image using short-wave infrared, infrared and green satellite bands. |
* `resolution`: The spatial resolution to load data. By default, the tool will automatically set the best possible resolution depending on the satellites selected (i.e. 30 m for Landsat, 10 m for Sentinel-2).
Increasing this (e.g. to `resolution = (-100, 100)`) can be useful for loading large spatial extents.
* `vmin, vmax`: The minimum and maximum surface reflectance values used to clip the resulting imagery to enhance contrast.
* `percentile_stretch`: A tuple of two percentiles (i.e. between 0.00 and 1.00) that can be used to clip the imagery to optimise the brightness and contrast of the image.
If this parameter is used, `vmin` and `vmax` will have no effect.
* `power`: Raises imagery by a power to reduce bright features and enhance dark features.
This can add extra definition over areas with extremely bright features like snow, beaches or salt pans.
* `image_proc_funcs`: An optional list containing functions that will be applied to the output image. This can include image processing functions such as contrast enhancement, unsharp masking, saturation adjustment, etc. Each function should take *and* return a `numpy.ndarray` with shape `[y, x, bands]`.
* `standardise_name`: Whether to export the image file with a machine-readable file name (e.g. `sentinel-2a_2020-01-30_bitou-local-municipality-western-cape_true-colour-10-m-resolution.jpg`)
> **If running the notebook for the first time**, keep the default settings below. This will demonstrate how the analysis works and provide meaningful results.
```
# Optional image export parameters
style = 'True colour' # 'False colour'
resolution = None
vmin, vmax = (0, 2000)
percentile_stretch = None
power = None
image_proc_funcs = None
output_format = "jpg"
standardise_name = False
```
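As an illustration of the `image_proc_funcs` contract described above — a function that takes *and* returns a `[y, x, bands]` array — here is a hypothetical linear contrast-stretch function (`contrast_stretch` is not part of the tool):

```python
import numpy as np

def contrast_stretch(img):
    """Rescale each band of a float [y, x, bands] array to the 0-1 range."""
    lo = img.min(axis=(0, 1), keepdims=True)   # per-band minimum
    hi = img.max(axis=(0, 1), keepdims=True)   # per-band maximum
    return (img - lo) / (hi - lo + 1e-9)       # epsilon avoids division by zero

# It would then be passed as: image_proc_funcs = [contrast_stretch]
```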
Once you are happy with the parameters above, run the cell below to load the satellite data and export it as an image:
```
# Load data and export image
export_image_app(
style=style,
resolution=resolution,
vmin=vmin,
vmax=vmax,
percentile_stretch=percentile_stretch,
power=power,
image_proc_funcs=image_proc_funcs,
output_format=output_format,
standardise_name=standardise_name,
**selection
)
```
## Downloading exported image
The image export will be completed when `Finished exporting image` appears above, and a preview of your image is shown below the map.
The high resolution image file generated above will be saved to the same location you are running this notebook from (e.g. typically `Real_world_examples`).
In JupyterLab, use the file browser to locate the image file with a name in the following format:
`Sentinel-2A - 2020-01-30 - Bitou Local Municipality, Western Cape - True colour, 10 m resolution`
If you are using the **DE Africa Sandbox**, you can download the image to your PC by right clicking on the image file and selecting `Download`.
## Next steps
When you are done, return to the [Analysis parameters](#Analysis-parameters) section, modify some values and rerun the analysis.
For example, you could try:
* Change `satellites` to `"Landsat-8"` to export a Landsat image instead of Sentinel-2.
* Modify `style` to `"False colour"` to export a false colour view of the landscape that highlights growing vegetation and water.
* Specify a custom resolution, e.g. `resolution = (-1000, 1000)`.
* Experiment with the `vmin`, `vmax`, `percentile_stretch` and `power` parameters to alter the appearance of the resulting image.
---
## Additional information
**License:** The code in this notebook is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
Digital Earth Africa data is licensed under the [Creative Commons by Attribution 4.0](https://creativecommons.org/licenses/by/4.0/) license.
**Contact:** If you need assistance, please post a question on the [Open Data Cube Slack channel](http://slack.opendatacube.org/) or on the [GIS Stack Exchange](https://gis.stackexchange.com/questions/ask?tags=open-data-cube) using the `open-data-cube` tag (you can view previously asked questions [here](https://gis.stackexchange.com/questions/tagged/open-data-cube)).
If you would like to report an issue with this notebook, you can file one on [Github](https://github.com/digitalearthafrica/deafrica-sandbox-notebooks).
**Last Tested:**
```
from datetime import datetime
datetime.today().strftime('%Y-%m-%d')
```
```
# coding=utf-8
from __future__ import print_function
import os
from keras.callbacks import ModelCheckpoint, EarlyStopping
from keras.optimizers import SGD
from sklearn.metrics import confusion_matrix
from scipy.stats import spearmanr
import openslide as ops
import cv2
import numpy as np
import datetime
import math
import json
from ml_metrics import quadratic_weighted_kappa
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import load_img
import keras.backend as K
import scandir
from sklearn import cross_validation
import matplotlib.pyplot as plt
from keras.layers import merge
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.convolutional import Convolution2D
from keras.layers.convolutional import MaxPooling2D
from keras.layers.core import Dense, Activation, Flatten, MaxoutDense
from keras.layers.normalization import BatchNormalization
from keras.models import Model
from keras.layers import Input
import tensorflow as tf
os.environ['KERAS_BACKEND'] = 'tensorflow'
os.environ['CUDA_HOME'] = '/usr/local/cuda-7.5'
def read_img(img_path):
dim_ordering = K.image_dim_ordering()
img = load_img(img_path, target_size=(512, 512))
img = img_to_array(img, dim_ordering=dim_ordering)
if dim_ordering == 'th':
img = img[::-1, :, :]
else:
img = img[:, :, ::-1]
return img
def splitAndSqueeze(images, patches_per_slice=3):
result = []
for img in np.split(images, patches_per_slice):
result.append(np.squeeze(img))
return result
# featurewise standardization, normalization and augmentation (horizontal and vertical flips)
def augment_data(X, patches_per_slice=3):
X = np.asarray(X).swapaxes(1, 0)
mean = np.mean(X, axis=0)
X = np.subtract(X, mean)
std = np.std(X, axis=0)
X /= (std + 1e-7)
X = X.swapaxes(0, 1)
if np.random.random() < 0.5:
X = flip_axis(X, 2)
if np.random.random() < 0.5:
X = flip_axis(X, 3)
return splitAndSqueeze(X)
def flip_axis(X, axis):
X = np.asarray(X).swapaxes(axis, 0)
X = X[::-1, ...]
X = X.swapaxes(0, axis)
return X
# Decode predictions to one hot encoding
# Use ranking approach from https://github.com/ChenglongChen/Kaggle_CrowdFlower/blob/master/BlogPost/BlogPost.md
# The predicted values are mapped to classes based on the CDF of the ref values.
def getScore(pred, cdf, valid=False):
num = pred.shape[0]
output = np.asarray([3]*num, dtype=int)
rank = pred.argsort()
output[rank[:int(num*cdf[0]-1)]] = 1
output[rank[int(num*cdf[0]):int(num*cdf[1]-1)]] = 2
output[rank[int(num*cdf[1]):int(num*cdf[2]-1)]] = 3
if valid:
cutoff = [ pred[rank[int(num*cdf[i]-1)]] for i in range(3) ]
return output, cutoff
return output
def plotLearningCurve(history):
    plt.plot(history.history['mean_squared_error'])
    # plt.plot(history.history['val_mean_squared_error'])
    plt.title('Learning Curve')
    plt.ylabel('MSE')
    plt.xlabel('epoch')
    plt.legend(['train', 'validation'], loc='upper left')
    plt.savefig('learning_curve_{0}.jpg'.format(datetime.datetime.utcnow()))
# Since we are looking for very local features, we should need a short amount of layers(2?), followed by a FCN
def get_patchblock(inp, idx):
    if K.image_dim_ordering() == 'tf':
        inp_shape = (512, 512, 3)
        bn_axis = 3
    else:
        inp_shape = (3, 512, 512)
        bn_axis = 1
    dim_ordering = K.image_dim_ordering()
    out = Convolution2D(32, 7, 7, subsample=(2, 2),
                        init='he_normal', border_mode='same', dim_ordering=dim_ordering,
                        name='conv1_{0}'.format(idx), input_shape=inp_shape)(inp)
    out = BatchNormalization(axis=bn_axis, name='bn_conv1_{0}'.format(idx))(out)
    out = Activation('relu')(out)  # LeakyReLU(alpha=0.5)
    out = MaxPooling2D((3, 3), strides=(2, 2), dim_ordering=dim_ordering)(out)
    out = Convolution2D(32, 3, 3, subsample=(2, 2),
                        init='he_normal', border_mode='same', dim_ordering=dim_ordering,
                        name='conv2_{0}'.format(idx))(out)
    out = BatchNormalization(axis=bn_axis, name='bn_conv2_{0}'.format(idx))(out)
    out = Activation('relu')(out)  # LeakyReLU(alpha=0.5)
    out = Convolution2D(32, 3, 3, subsample=(2, 2),
                        init='he_normal', border_mode='same', dim_ordering=dim_ordering,
                        name='conv3_{0}'.format(idx))(out)
    out = BatchNormalization(axis=bn_axis, name='bn_conv3_{0}'.format(idx))(out)
    out = Activation('relu')(out)  # LeakyReLU(alpha=0.5)
    out = MaxPooling2D((3, 3), strides=(2, 2), dim_ordering=dim_ordering)(out)
    out = Flatten()(out)
    out = MaxoutDense(1, init='he_normal')(out)
    return out
def get_mergenet(patches_per_slice=3):
    print('Started to build model at {0}'.format(datetime.datetime.utcnow()))
    if K.image_dim_ordering() == 'tf':
        inp_shape = (512, 512, 3)
        concat_axis = 3
    else:
        inp_shape = (3, 512, 512)
        concat_axis = 1
    patch_nets = []
    input_list = []
    for i in range(patches_per_slice):
        inp = Input(inp_shape)
        model = get_patchblock(inp, i)
        patch_nets.append(model)
        input_list.append(inp)
    # average the per-patch outputs (old Keras 1.x merge API)
    out = merge(patch_nets, mode='ave')  # , concat_axis=concat_axis)
    out = MaxoutDense(1, init='he_normal')(out)
    print('Finished building model at {0}'.format(datetime.datetime.utcnow()))
    return Model(input_list, out)
def get_data(dir_path, res=512, limit=False):
    annotations = open('../annotations/training_ground_truth.csv', 'r')
    lines = annotations.readlines()
    images_train = []
    labels_train = []
    images_val = []
    labels_val = []
    rs = cross_validation.ShuffleSplit(len(lines), n_iter=1, test_size=0.2, random_state=17)
    val_idx = []
    count = 0
    for train_index, val_index in rs:
        val_idx.append(val_index)
    for subdir, _, files in scandir.walk(dir_path):
        for file in files:
            study_id = int(file[9:12].lstrip("0"))
            label = [float(lines[study_id - 1].split(',')[0]), float(lines[study_id - 1].split(',')[1])]
            imgs = get_images(file[9:12])
            if len(imgs) < 3:
                continue
            if study_id in val_idx[0]:
                images_val.append(np.asarray(imgs))
                labels_val.append(np.asarray(label))
            else:
                images_train.append(np.asarray(imgs))
                labels_train.append(np.asarray(label))
            count += 1
            if limit and count >= 10:
                break
        if limit and count >= 10:
            break
    annotations.close()
    images_train = splitAndSqueeze(np.swapaxes(np.asarray(images_train), 0, 1))
    images_val = splitAndSqueeze(np.swapaxes(np.asarray(images_val), 0, 1))
    return images_train, images_val, np.asarray(labels_train), np.asarray(labels_val)
def get_images(study_id, svs_dir='../../images/train/', roi_dir='../ROIs/'):
    rois = open(roi_dir + 'TUPAC-TR-{0}-ROI.csv'.format(study_id), 'r').readlines()
    try:
        slide = ops.OpenSlide(svs_dir + '{0}/{1}.svs'.format(study_id, study_id))
    except ops.OpenSlideUnsupportedFormatError:
        print('Not able to open svs file at: {0}'.format(svs_dir + '{0}/{1}.svs'.format(study_id, study_id)))
        return []
    imgs = []
    for roi in rois:
        roi = roi.split(',')
        centroid = [int(roi[0]) + int(math.ceil(int(roi[2]) / 2)), int(roi[1]) + int(math.ceil(int(roi[3]) / 2))]
        try:
            # NOTE: `level` is assigned but read_region is called at level 0 (full resolution)
            level = 2
            img = np.asarray(slide.read_region((centroid[0] - 512, centroid[1] - 512), 0, [1024, 1024]))
        except ops.OpenSlideError:
            print('Not able to open slide', study_id)
            return []
        res = 512
        img = cv2.resize(img, (res, res), interpolation=cv2.INTER_AREA)
        img = img[:, :, 0:3]
        dim_ordering = K.image_dim_ordering()
        img = img_to_array(img, dim_ordering=dim_ordering)
        # convert RGB -> BGR, matching the channel order used elsewhere
        if dim_ordering == 'th':
            img = img[::-1, :, :]
        else:
            img = img[:, :, ::-1]
        imgs.append(img)
    return imgs
patches_per_slice = 3
model = get_mergenet()
model.summary()
def spearmanloss(y_true, y_pred, batch_size=50):
    # Pearson-correlation-style loss, negated so that minimizing it maximizes
    # correlation. Numerator of Pearson's r is n*sum(xy) - sum(x)*sum(y);
    # the whole numerator must be negated (the original code only negated the first term).
    length = batch_size
    original_loss = -1 * (length * tf.reduce_sum(tf.mul(y_true, y_pred))
                          - tf.reduce_sum(y_true) * tf.reduce_sum(y_pred))
    divisor = tf.sqrt((length * tf.reduce_sum(tf.square(y_true)) - tf.square(tf.reduce_sum(y_true))) *
                      (length * tf.reduce_sum(tf.square(y_pred)) - tf.square(tf.reduce_sum(y_pred)))
                      )
    return tf.truediv(original_loss, divisor)
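# Sanity check of the correlation formula with plain NumPy on made-up data
# (an illustration, not part of the original notebook): the ratio
# (n*sum(xy) - sum(x)*sum(y)) / sqrt(...) is exactly Pearson's r, so its
# negation should match -np.corrcoef for any inputs.
xs = np.asarray([1.0, 2.0, 3.0, 4.0, 5.0])
ys = np.asarray([1.2, 1.9, 3.2, 3.8, 5.1])
n = len(xs)
num = -(n * np.sum(xs * ys) - np.sum(xs) * np.sum(ys))
den = np.sqrt((n * np.sum(xs ** 2) - np.sum(xs) ** 2) *
              (n * np.sum(ys ** 2) - np.sum(ys) ** 2))
# num / den should be close to -1 for this nearly-linear toy data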
opt = 'adam'#SGD(lr=0.1)
model.compile(optimizer=opt,loss='mse',metrics=['mse'])
images_train, images_val, labels_train, labels_val = get_data('../ROIs/')#, limit=True)
print(np.asarray(images_train).shape)
images_train = augment_data(images_train)
images_val = augment_data(images_val)
batch_size = 50
nb_epoch = 30
callbacks = [ModelCheckpoint('mergenet_weights{0}.hdf5'.format(datetime.datetime.utcnow()),
                             monitor='val_loss', verbose=1,
                             save_best_only=True, mode='auto')]
print('Started fitting at {0}'.format(datetime.datetime.utcnow()))
history = model.fit(images_train, labels_train[:, 1], batch_size,
                    nb_epoch=nb_epoch, verbose=1,
                    validation_data=(images_val, labels_val[:, 1]),
                    callbacks=callbacks, shuffle=True)
print('Finished fitting at {0}'.format(datetime.datetime.utcnow()))
with open('01_mergenet_training_history{0}.json'.format(datetime.datetime.utcnow()), 'w') as f_out:
    json.dump(history.history, f_out)
print('Metrics on validation set:\n------------------------------------------------------------\n')
pred = model.predict(images_val, verbose=0)
print("Spearman's Correlation Coefficient: ", spearmanr(pred, labels_val[:,1])[0])
plotLearningCurve(history)
%matplotlib inline
plt.hist(pred)
plt.hist(labels_val[:,1])
```
# Guideline on Eager Execution
* This code summarizes the [TensorFlow official guide on eager execution](https://www.tensorflow.org/guide/eager).
[Eager execution](https://www.tensorflow.org/guide/eager#build_a_model) is a flexible machine learning platform for research and experimentation, providing:
* **An intuitive interface**: Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.
* **Easier debugging**: Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.
* **Natural control flow**: Use Python control flow instead of graph control flow, simplifying the specification of dynamic models.
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import os
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
from tensorflow.keras import layers
tf.enable_eager_execution()
print(tf.VERSION)
```
## Guideline on building a model
* Use only [`tf.keras.layers`](https://www.tensorflow.org/api_docs/python/tf/keras/layers), do **NOT** use `tf.layers` or `tf.contrib.layers`.
### Example of a layer made myself
```
class MySimpleLayer(tf.keras.layers.Layer):
    def __init__(self, output_units):
        super(MySimpleLayer, self).__init__()
        self.output_units = output_units

    def build(self, input_shape):
        # The build method gets called the first time your layer is used.
        # Creating variables on build() allows you to make their shape depend
        # on the input shape and hence removes the need for the user to specify
        # full shapes. It is possible to create variables during __init__() if
        # you already know their full shapes.
        self.kernel = self.add_variable(
            "kernel", [input_shape[-1], self.output_units])

    def call(self, input):
        # Override call() instead of __call__ so we can perform some bookkeeping.
        return tf.matmul(input, self.kernel)
```
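The deferred `build()` idea can be mimicked in plain Python/NumPy. This is a minimal sketch of the pattern only; `LazyDense` is a hypothetical name, not part of Keras or TensorFlow:

```python
import numpy as np

class LazyDense:
    # Mimics Keras' build-on-first-use: the kernel shape is inferred
    # from the first input, so the user never specifies it up front.
    def __init__(self, output_units):
        self.output_units = output_units
        self.kernel = None

    def __call__(self, x):
        if self.kernel is None:  # "build" happens on the first call
            rng = np.random.default_rng(0)
            self.kernel = rng.normal(size=(x.shape[-1], self.output_units))
        return x @ self.kernel

layer = LazyDense(4)
out = layer(np.ones((2, 3)))  # kernel becomes (3, 4) only now
print(out.shape)  # (2, 4)
```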
### Simple method of building a model
* Use [`tf.keras.Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential)
* Must set input shape
```
model = tf.keras.Sequential([
    layers.Dense(10, input_shape=(784,)),  # must declare input shape
    layers.Dense(10)
])
```
### Another method of building a model
* Organize models in classes by inheriting from [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Model).
* It's not required to set an input shape for the [`tf.keras.Model`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Model) class since the parameters are set the first time input is passed to the layer.
* [`tf.keras.layers`](https://www.tensorflow.org/api_docs/python/tf/keras/layers) classes create and contain their own model variables that are tied to the lifetime of their layer objects. To share layer variables, share their objects.
```
class MNISTModel(tf.keras.Model):
    def __init__(self):
        super(MNISTModel, self).__init__()
        self.dense1 = layers.Dense(units=10)
        self.dense2 = layers.Dense(units=10)

    def call(self, input):
        """Run the model."""
        result = self.dense1(input)
        result = self.dense2(result)
        result = self.dense2(result)  # reuse variables from dense2 layer
        return result

model = MNISTModel()
```
## Eager Training
### Computing gradients
* Use [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape)
```
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
    loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
```
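The same derivative can be confirmed without TensorFlow via a central finite difference, a quick sketch (the step size here is an arbitrary choice):

```python
def loss_fn(w):
    return w * w

w = 1.0
eps = 1e-6
# central difference approximates d(w^2)/dw = 2w, i.e. 2.0 at w = 1
num_grad = (loss_fn(w + eps) - loss_fn(w - eps)) / (2 * eps)
print(num_grad)
```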
### Train a model
```
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.cast(mnist_images[..., tf.newaxis] / 255., tf.float32),  # [..., tf.newaxis] just expand_dims on the last axis
     tf.cast(mnist_labels, tf.int64)))
dataset = dataset.shuffle(1000).batch(32)

# Build the model
mnist_model = tf.keras.Sequential([
    layers.Conv2D(16, [3, 3], activation='relu'),
    layers.Conv2D(16, [3, 3], activation='relu'),
    layers.GlobalAveragePooling2D(),
    layers.Dense(10)
])

# without training, just inference a model in eager execution:
for images, labels in dataset.take(1):
    print("Logits: ", mnist_model(images[0:1]).numpy())

# Train the model
optimizer = tf.train.AdamOptimizer()
loss_history = []

for (batch, (images, labels)) in enumerate(dataset.take(400)):  # just 400 steps (iterations), NOT epochs
    if batch % 80 == 0:
        print()
    print('.', end='')
    with tf.GradientTape() as tape:
        logits = mnist_model(images, training=True)
        loss_value = tf.losses.sparse_softmax_cross_entropy(labels, logits)
    loss_history.append(loss_value.numpy())
    grads = tape.gradient(loss_value, mnist_model.variables)
    optimizer.apply_gradients(zip(grads, mnist_model.variables),
                              global_step=tf.train.get_or_create_global_step())
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
plt.show()
```
### Train a model made myself
```
class Model(tf.keras.Model):
    def __init__(self):
        super(Model, self).__init__()
        self.W = tf.Variable(5., name='weight')
        self.B = tf.Variable(10., name='bias')

    def call(self, inputs):
        return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random_normal([NUM_EXAMPLES])
noise = tf.random_normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
    error = model(inputs) - targets
    return tf.reduce_mean(tf.square(error))

def grad(model, inputs, targets):
    with tf.GradientTape() as tape:
        loss_value = loss(model, inputs, targets)
    return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
    grads = grad(model, training_inputs, training_outputs)
    optimizer.apply_gradients(zip(grads, [model.W, model.B]),
                              global_step=tf.train.get_or_create_global_step())
    if i % 20 == 0:
        print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
```
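The same toy regression can be solved with hand-written gradient descent in pure NumPy, which makes the gradient step explicit. A sketch (learning rate and step count are arbitrary choices, not taken from the guide):

```python
import numpy as np

rng = np.random.default_rng(17)
x = rng.normal(size=2000)
y = x * 3 + 2 + rng.normal(size=2000)  # same "3 * x + 2 + noise" toy data

W, B = 5.0, 10.0  # same deliberately-bad initial guesses
lr = 0.05
for _ in range(500):
    err = (x * W + B) - y
    # gradients of mean squared error with respect to W and B
    W -= lr * 2 * np.mean(err * x)
    B -= lr * 2 * np.mean(err)

print(abs(W - 3) < 0.2, abs(B - 2) < 0.2)  # True True
```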
### Save a model
```
model = tf.keras.Sequential([
    layers.Conv2D(16, [3, 3], activation='relu'),
    layers.GlobalAveragePooling2D(),
    layers.Dense(10)
])
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)

checkpoint_dir = './model_dir'
os.makedirs(checkpoint_dir, exist_ok=True)
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
                           model=model,
                           optimizer_step=tf.train.get_or_create_global_step())

# Save a model
root.save(checkpoint_prefix)

# Restore a model
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
```
# Demo Prophet Time Series Forecasting on Ray local
<b>Suggestion: Make a copy of this notebook. This way you will retain the original, executed notebook outputs. Make edits in the copied notebook. </b>
### Description:
This notebook goes along with the tutorial <a href="https://towardsdatascience.com/scaling-time-series-forecasting-with-ray-arima-and-prophet-e6c856e605ee">How to Train Faster Time Series Forecasting Using Ray, part 1 of 2</a>.
This notebook demonstrates Time Series Forecasting Prophet algorithm on Ray. Example data is NYC yellow taxi from: https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page <br>
Forecast goal: Given 6 months historical taxi trips data for NYC, your task is to predict #pickups at each location in NYC at monthly level for the next 2 months.
### Demo notes:
Output shows timings using SMALL dataset <br>
Both demo datasets are available in this github repo under data/ <br>
SMALL dataset contains original, actual 260 items "clean_taxi_monthly.parquet" <br>
MEDIUM dataset contains 2860 items with extra fakes "clean_taxi_monthly_fake_medium.parquet" <br>
```
# install open-source Ray if you haven't already
# !pip install "ray[default]"  # installs the latest version; otherwise pin a specific version:
# !pip install "ray[default]==1.9.0"
# install ARIMA library
# !pip install pmdarima
# install Prophet library
# !pip install kats
# install Anyscale to run Ray easily on a Cloud
# !pip install anyscale
###########
# Import libraries
###########
# Open-source libraries
import os # Python os functions
import logging #Python logging functions
import time # Python time functions
import warnings # Python warnings
warnings.filterwarnings('ignore')
import ray # Run distributed code
import numpy as np # Numerical processing
import pandas as pd # Dataframe (tabular data) processing
import matplotlib as mpl # Graph plotting
import matplotlib.pyplot as plt
%matplotlib inline
import pickle
# Open-source ARIMA forecasting libraries
import pmdarima as pm
from pmdarima.model_selection import train_test_split
# Open-source Prophet forecasting libraries
# Note: using kats since it looks more actively maintained than original prophet
import kats
from kats.consts import TimeSeriesData
from kats.models.prophet import ProphetModel, ProphetParams
!python --version
print(f"ray: {ray.__version__}")
print(f"numpy: {np.__version__}")
print(f"pandas: {pd.__version__}")
print(f"matplotlib: {mpl.__version__}")
AVAILABLE_LOCAL_CPU = os.cpu_count()
print(f"Found available CPU: {AVAILABLE_LOCAL_CPU}")
```
# Change how you want to run Ray below.
<b>Depending on whether you want to run Ray Local or Ray in a Cloud:</b>
<ul>
<li><b>To run Ray Local, change below variables, then continue running cells in the notebook</b>: <br>
RUN_RAY_LOCAL = True; RUN_RAY_ON_A_CLOUD = False</li>
<li><b>To run Ray in a Cloud, change below variables, then continue running cells in the notebook</b>: <br>
RUN_RAY_LOCAL = False; RUN_RAY_ON_A_CLOUD = True </li>
</ul>
```
###########
# CHANGE VARIABLES BELOW.
# To run Ray Local: RUN_RAY_LOCAL = True; RUN_RAY_ON_A_CLOUD = False
# To run Ray in a Cloud: RUN_RAY_LOCAL = False; RUN_RAY_ON_A_CLOUD = True
###########
RUN_RAY_LOCAL = True
RUN_RAY_ON_A_CLOUD = False
###########
# Run Ray Local on your laptop for testing purposes
# Dashboard doc: https://docs.ray.io/en/master/ray-dashboard.html#ray-dashboard
###########
if RUN_RAY_LOCAL:
    # num_cpus, num_gpus are optional parameters
    # by default Ray will detect and use all available
    NUM_CPU = AVAILABLE_LOCAL_CPU
    print(f"You are running Ray Local with {NUM_CPU} CPUs")

    # start up ray locally
    if ray.is_initialized():
        ray.shutdown()
    ray.init()
else:
    print("You are not running Ray Local")
###########
# Run Ray in the Cloud using Anyscale
# View your cluster on console.anyscale.com
###########
if RUN_RAY_ON_A_CLOUD:
    print("You are running Ray on a Cloud")
    # !pip install anyscale  # install anyscale if you haven't already
    import anyscale

    # You can specify more pip installs, clone github, or copy code/data here in the runtime env.
    # Everything in the runtime environment will override the cluster environment.
    # https://docs.anyscale.com/user-guide/configure/dependency-management/anyscale-environments
    my_env = {"working_dir": ".",
              "pip": ["pmdarima", "kats"],
              }

    # start up ray in any cloud
    if ray.is_initialized():
        ray.shutdown()
    ray.init(
        "anyscale://christy-forecast4",
        # runtime_env=my_env,
        # optionally put pip installs in the cluster config instead of runtime_env
        cluster_env="christy-forecast:4",
        # Add extra quiet settings, since Prophet is noisy!
        log_to_driver=False,  # disable ray workers from logging the output.
        configure_logging=True,
        logging_level=logging.ERROR,
    )
else:
    print("You are not running Ray on a Cloud")
```
# Read 8 months clean NYC taxi data
New York City Yellow Taxi ride volumes per location (8 months of historical data). <ul>
<li>Original source: https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page</li>
<li>Clean monthly source: https://github.com/christy/AnyscaleDemos/blob/main/forecasting_demos/data/clean_taxi_monthly.parquet?raw=true </li>
</ul>
Normally there is a data cleaning/prep step to convert raw data -> cleaned data. We'll dig into details of ETL later. <br>
For now, let's just start with cleaned, aggregated monthly data for ARIMA and Prophet, since those algorithms are typically for strategic-level forecasting, not typically for detailed-level forecasting.
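That cleaning/prep step can be sketched with pandas on a synthetic stand-in for the raw trip rows. The column names below are illustrative assumptions, not the real schema of the raw files:

```python
import pandas as pd

# synthetic stand-in for raw taxi rows (the real file has one row per trip)
raw = pd.DataFrame({
    "pulocationid": [1, 1, 2, 2, 1],
    "tpep_pickup_datetime": pd.to_datetime(
        ["2019-01-03", "2019-01-20", "2019-01-05", "2019-02-02", "2019-02-11"]),
})

# aggregate to one row per (location, month-start) with a trip count
monthly = (raw
           .groupby(["pulocationid",
                     pd.Grouper(key="tpep_pickup_datetime", freq="MS")])
           .size()
           .rename("trip_quantity")
           .reset_index())
print(monthly)
```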
```
###########
# Read pandas dataframe
# If you cloned this notebook from github the data should be in your data/ folder
###########
# read 8 months of clean, aggregated monthly taxi data
# filename = "https://github.com/christy/AnyscaleDemos/blob/main/forecasting_demos/data/clean_taxi_monthly.parquet?raw=true"
filename = "data/clean_taxi_monthly.parquet"
# filename = "data/clean_taxi_monthly_fake_medium.parquet"
g_month = pd.read_parquet(filename)
# rename "time" column, since prophet expects that, arima doesn't care
g_month.reset_index(inplace=True)
g_month.rename(columns={"pickup_monthly": "time"}, inplace=True)
display(g_month.head())
# Train a model per item_id
item_list = list(g_month["pulocationid"].unique())
print(f"Number unique items = {len(item_list)}")
```
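For a single ordered monthly series, the `train_size=6` split used below amounts to a head/tail slice: the first six months train, the rest evaluate. A sketch on synthetic data (not the notebook's taxi frame):

```python
import pandas as pd

df = pd.DataFrame({"time": pd.date_range("2019-01-01", periods=8, freq="MS"),
                   "trip_quantity": range(8)})

# equivalent in spirit to pmdarima's train_test_split(series, train_size=6)
train, test = df.iloc[:6], df.iloc[6:]
print(len(train), len(test))  # 6 2
```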
# Regular Python
```
###########
# Assume below is already-existing regular Python code.
###########
# define file handler, not appending, to avoid growing logs
file_handler = logging.FileHandler('training.log', mode='w')
formatter = logging.Formatter('%(asctime)s : %(levelname)s : %(name)s : %(message)s')
file_handler.setFormatter(formatter)
# prophet logger - also need Class below to stop the noisy PyStan messages
prophet_logger = logging.getLogger('fbprophet')
prophet_logger.setLevel(logging.ERROR)
prophet_logger.addHandler(file_handler)
# This class is to suppress the Pystan noisy messages coming from Prophet
# Thanks to https://github.com/facebook/prophet/issues/223#issuecomment-326455744
class suppress_stdout_stderr(object):
    '''
    A context manager for doing a "deep suppression" of stdout and stderr in
    Python, i.e. will suppress all print, even if the print originates in a
    compiled C/Fortran sub-function.
    This will not suppress raised exceptions, since exceptions are printed
    to stderr just before a script exits, and after the context manager has
    exited (at least, I think that is why it lets exceptions through).
    '''
    def __init__(self):
        # Open a pair of null files
        self.null_fds = [os.open(os.devnull, os.O_RDWR) for x in range(2)]
        # Save the actual stdout (1) and stderr (2) file descriptors.
        self.save_fds = [os.dup(1), os.dup(2)]

    def __enter__(self):
        # Assign the null pointers to stdout and stderr.
        os.dup2(self.null_fds[0], 1)
        os.dup2(self.null_fds[1], 2)

    def __exit__(self, *_):
        # Re-assign the real stdout/stderr back to (1) and (2)
        os.dup2(self.save_fds[0], 1)
        os.dup2(self.save_fds[1], 2)
        # Close the null files
        for fd in self.null_fds + self.save_fds:
            os.close(fd)
###########
# Prophet train_model function, default train on 6 months, inference 2
###########
def train_model_PROPHET(
    theDF: pd.DataFrame,
    item_col: str,
    item_value: str,
    target_col: str,
    train_size: int = 6,
) -> list:
    """This function trains a model using the Prophet algorithm.

    Args:
        theDF (pd.DataFrame): Input data. It must have a "time" column.
            theDF should not be indexed by "time".
        item_col (str): Name of the column containing item_id or SKU.
        item_value (str): Value of the item_id or SKU being forecasted.
        target_col (str): Name of the column containing the actual value.
        train_size (int, optional): Count of number of timestamps to use
            for training. Defaults to 6.

    Returns:
        list: [
            train (pd.DataFrame): Training data.
            test (pd.DataFrame): Test data for evaluation.
            model (kats.models.prophet.ProphetModel): Prophet model for inference.
        ]
    """
    # Set seed for reproducibility, use same seed in train AND inference
    np.random.seed(415)

    # split data into train/test
    train, test = train_test_split(
        theDF.loc[(theDF[item_col] == item_value), :], train_size=train_size
    )

    # convert pandas df to TimeSeriesData(df), with "time" column and any number of value columns.
    train_ts = TimeSeriesData(train[["time", target_col]])
    test_ts = TimeSeriesData(test[["time", target_col]])

    # create a prophet model param instance
    params = ProphetParams(
        seasonality_mode="multiplicative"
    )

    # create a prophet model instance
    model = ProphetModel(train_ts, params)

    # fit model
    with suppress_stdout_stderr():  # suppress pystan messages
        model.fit()

    # index train, test by time
    train.set_index("time", inplace=True)
    test.set_index("time", inplace=True)
    return [train, test, model]
###########
# Prophet inference_model function
###########
def inference_model_PROPHET(
    model: "kats.models.prophet.ProphetModel",
    test: pd.DataFrame,
    item_col: str,
    target_col: str,
) -> pd.DataFrame:
    """This function inferences a model using the Prophet algorithm. It uses
    the actual values, if known, in the test evaluation dataframe
    and concats them into the forecast output dataframe,
    for easier evaluation later.

    Args:
        model (kats.models.prophet.ProphetModel): Prophet model.
        test (pd.DataFrame): Test data for evaluation.
        item_col (str): Name of the column containing item_id or SKU.
        target_col (str): Name of the column containing the actual value.

    Returns:
        pd.DataFrame: forecast as pandas dataframe containing the forecast along
            with actual values.
    """
    # Set seed for reproducibility, use same seed in train AND inference
    np.random.seed(415)

    # Prophet inference on test data
    forecast = model.predict(steps=test.shape[0], freq="MS")

    # put both actual_value and predicted_value in forecast, for easier eval later
    forecast.fcst = forecast.fcst.astype(np.int32)
    forecast.fcst_lower = forecast.fcst_lower.astype(np.int32)
    forecast.fcst_upper = forecast.fcst_upper.astype(np.int32)
    forecast.columns = [
        "time",
        "fcst_prophet",
        "fcst_prophet_lower",
        "fcst_prophet_upper",
    ]
    forecast = pd.concat(
        [forecast, test.loc[:, target_col].reset_index(drop=True)], axis=1
    )
    forecast.set_index("time", inplace=True)
    return forecast
###########
# REGULAR PYTHON program flow to train and inference Prophet models
###########
# initialize objects
train = []
test = []
model = []
forecast = []
start = time.time()
# Train every model
train, test, model = map(
    list,
    zip(
        *(
            [
                train_model_PROPHET(
                    g_month.copy(),
                    item_col="pulocationid",
                    item_value=v,
                    target_col="trip_quantity",
                    train_size=6,
                )
                for p, v in enumerate(item_list)
            ]
        )
    ),
)

# Inference every model
forecast = [
    inference_model_PROPHET(
        model[p],
        test[p],
        item_col="pulocationid",
        target_col="trip_quantity",
    )
    for p in range(len(item_list))
]
time_regular_python = time.time() - start
print(
f"Done! Prophet on Regular Python finished in {time_regular_python} seconds"
)
###########
# inspect a few forecasts
###########
assert len(model) == len(item_list)
assert len(forecast) == len(item_list)
print(f"len(forecast): {len(forecast)}")
# plot first two forecasts
plt.figure(figsize=(8, 5))
for p, v in enumerate(item_list[0:2]):
    display(forecast[p])
    plt.plot(train[p]["trip_quantity"], label="Train")
    plt.plot(test[p]["trip_quantity"], label="Test")
    plt.plot(forecast[p]["fcst_prophet"], label="Forecast")
    plt.legend(loc="best")
```
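Before reaching for Ray, note that the per-item train/inference flow above is an embarrassingly parallel map: each location id is handled independently, then all results are gathered. The same fan-out/gather shape can be sketched with the standard library using a toy stand-in (not the real Prophet functions, and the item ids below are made up):

```python
from concurrent.futures import ThreadPoolExecutor

def train_one(item_value):
    # stand-in for train_model_PROPHET: pretend "training" returns a tiny result
    return {"item": item_value, "fitted": True}

items = [161, 236, 162]  # hypothetical location ids
with ThreadPoolExecutor(max_workers=4) as ex:
    futures = [ex.submit(train_one, v) for v in items]  # fan out (like .remote())
    results = [f.result() for f in futures]             # gather (like ray.get())
print(all(r["fitted"] for r in results))  # True
```

Ray follows the same shape but schedules the work across processes and machines, which is what makes it worthwhile for CPU-bound model fitting where threads would not help.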
# Ray distributed Python
```
%%timeit
###########
# Main Ray distributed program flow to train and inference Prophet models
###########
# Convert your previously-defined regular python functions to ray parallelized functions
train_model_PROPHET_remote = ray.remote(train_model_PROPHET).options(num_returns=3)
inference_model_PROPHET_remote = ray.remote(inference_model_PROPHET)
# initialize objects
train_obj_refs = []
test_obj_refs = []
model_obj_refs = []
forecast_obj_refs = []
# initialize data in ray object store on each cluster
input_data_ref = ray.put(g_month.copy())
start = time.time()
# Train every model
train_obj_refs, test_obj_refs, model_obj_refs = map(
    list,
    zip(
        *(
            [
                train_model_PROPHET_remote.remote(
                    # g_month,
                    input_data_ref,
                    item_col="pulocationid",
                    item_value=v,
                    target_col="trip_quantity",
                    train_size=6,
                )
                for p, v in enumerate(item_list)
            ]
        )
    ),
)

# Inference every model
forecast_obj_refs = [
    inference_model_PROPHET_remote.remote(
        model_obj_refs[p],
        test_obj_refs[p],
        item_col="pulocationid",
        target_col="trip_quantity",
    )
    for p in range(len(item_list))
]
# ray.get() means block until all objectIDs requested are available
forecast_ray = ray.get(forecast_obj_refs)
time_ray_local = time.time() - start
print(
f"Done! Prophet on Ray Local finished in {time_ray_local} seconds"
)
```
# Verify forecasts
```
# Run the Ray local code again to get the forecasts
###########
# Main Ray distributed program flow to train and inference Prophet models
###########
# Convert your previously-defined regular python functions to ray parallelized functions
train_model_PROPHET_remote = ray.remote(train_model_PROPHET).options(num_returns=3)
inference_model_PROPHET_remote = ray.remote(inference_model_PROPHET)
# initialize objects
train_obj_refs = []
test_obj_refs = []
model_obj_refs = []
forecast_obj_refs = []
# initialize data in ray object store on each cluster
input_data_ref = ray.put(g_month.copy())
start = time.time()
# Train every model
train_obj_refs, test_obj_refs, model_obj_refs = map(
    list,
    zip(
        *(
            [
                train_model_PROPHET_remote.remote(
                    # g_month,
                    input_data_ref,
                    item_col="pulocationid",
                    item_value=v,
                    target_col="trip_quantity",
                    train_size=6,
                )
                for p, v in enumerate(item_list)
            ]
        )
    ),
)

# Inference every model
forecast_obj_refs = [
    inference_model_PROPHET_remote.remote(
        model_obj_refs[p],
        test_obj_refs[p],
        item_col="pulocationid",
        target_col="trip_quantity",
    )
    for p in range(len(item_list))
]
# ray.get() means block until all objectIDs requested are available
forecast_ray = ray.get(forecast_obj_refs)
time_ray_local = time.time() - start
print(
f"Done! Prophet on Ray Local finished in {time_ray_local} seconds"
)
# Calculate speedup:
speedup = time_regular_python / time_ray_local
print(f"Speedup from running Ray parallel code on your laptop: {np.round(speedup, 0)}x"
f", or {(np.round(speedup, 0)-1) * 100}%")
# Verify ray forecast is same as regular Python forecast
assert len(forecast_ray) == len(forecast)
assert len(forecast_ray[0]) == len(forecast[0])
assert forecast_ray[0].equals(forecast[0])
###########
# inspect a few forecasts
###########
assert len(model) == len(item_list)
assert len(forecast) == len(item_list)
print(f"len(forecast): {len(forecast_ray)}")
# plot first two forecasts
train = ray.get(train_obj_refs)
test = ray.get(test_obj_refs)
plt.figure(figsize=(8, 5))
for p in range(len(item_list[0:2])):
    display(forecast_ray[p])
    plt.plot(train[p]["trip_quantity"], label="Train")
    plt.plot(test[p]["trip_quantity"], label="Test")
    plt.plot(forecast_ray[p]["fcst_prophet"], label="Forecast")
    plt.legend(loc="best")
# fancier plots
# plot first two forecasts
fig, axs = plt.subplots(2, 1, figsize=(8, 5), sharex=True)
for p, v in enumerate(item_list[0:2]):
    print(f"Forecast for item {v}:")
    display(forecast_ray[p])
    ax = axs[p]
    train[p].trip_quantity.plot(ax=ax, label="Train")
    test[p].trip_quantity.plot(ax=ax, label="Test")
    forecast_ray[p].fcst_prophet.plot(ax=ax, label="Forecast")
    ax.legend(loc="best")
    ax.set_title(f"item {v}")
```
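Beyond eyeballing the plots, forecast accuracy could be summarized numerically, for example with mean absolute percentage error. A sketch with made-up numbers, not the notebook's forecasts:

```python
import numpy as np

actual = np.array([120.0, 150.0, 90.0])  # hypothetical trip_quantity actuals
fcst = np.array([110.0, 160.0, 100.0])   # hypothetical fcst_prophet values

# MAPE: mean of |error / actual|, expressed as a percentage
mape = np.mean(np.abs((actual - fcst) / actual)) * 100
print(round(mape, 1))  # 8.7
```

One MAPE per location id would give a single comparable score across all 260 series.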
# Now run the same code as Ray Local, but this time run using Anyscale in any Cloud.
<b>
<ol>
<li>Go back to top of notebook </li>
<li>Change variables RUN_RAY_LOCAL = False; RUN_RAY_ON_A_CLOUD = True <br>
... And run the next 2 cells to properly shutdown/start Ray </li>
<li>Come back here to bottom of notebook <br>
Run cell below.</li>
</ol>
</b>
```
%%timeit
###########
# Main Ray distributed program flow to train and inference Prophet models
###########
# Convert your previously-defined regular python functions to ray parallelized functions
train_model_PROPHET_remote = ray.remote(train_model_PROPHET).options(num_returns=3)
inference_model_PROPHET_remote = ray.remote(inference_model_PROPHET)
# initialize objects
train_obj_refs = []
test_obj_refs = []
model_obj_refs = []
forecast_obj_refs = []
# initialize data in ray object store on each cluster
input_data_ref = ray.put(g_month.copy())
start = time.time()
# Train every model
train_obj_refs, test_obj_refs, model_obj_refs = map(
    list,
    zip(
        *(
            [
                train_model_PROPHET_remote.remote(
                    # g_month,
                    input_data_ref,
                    item_col="pulocationid",
                    item_value=v,
                    target_col="trip_quantity",
                    train_size=6,
                )
                for p, v in enumerate(item_list)
            ]
        )
    ),
)

# Inference every model
forecast_obj_refs = [
    inference_model_PROPHET_remote.remote(
        model_obj_refs[p],
        test_obj_refs[p],
        item_col="pulocationid",
        target_col="trip_quantity",
    )
    for p in range(len(item_list))
]
# ray.get() means block until all objectIDs requested are available
forecast_ray = ray.get(forecast_obj_refs)
time_ray_cloud = time.time() - start
print(
f"Done! Prophet on Ray in Cloud finished in {time_ray_cloud} seconds"
)
# hardcoded timing from a previous run, since variables inside a %%timeit cell do not persist
time_ray_cloud = 10.4
# Calculate speedup running parallel Python Ray in a Cloud:
speedup = time_regular_python / time_ray_cloud
print(f"Speedup from running Ray parallel code in a Cloud: {np.round(speedup, 0)}x"
f", or {(np.round(speedup, 0)-1) * 100}%")
ray.shutdown()
```
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D5_DimensionalityReduction/W1D5_Tutorial2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Neuromatch Academy: Week 1, Day 5, Tutorial 2
# Dimensionality Reduction: Principal component analysis
__Content creators:__ Alex Cayco Gajic, John Murray
__Content reviewers:__ Roozbeh Farhoudi, Matt Krause, Spiros Chavlis, Richard Gao, Michael Waskom
---
# Tutorial Objectives
In this notebook we'll learn how to perform PCA by projecting the data onto the eigenvectors of its covariance matrix.
Overview:
- Calculate the eigenvectors of the sample covariance matrix.
- Perform PCA by projecting data onto the eigenvectors of the covariance matrix.
- Plot and explore the eigenvalues.
To quickly refresh your knowledge of eigenvalues and eigenvectors, you can watch this [short video](https://www.youtube.com/watch?v=kwA3qM0rm7c) (4 minutes) for a geometrical explanation. For a deeper understanding, this [in-depth video](https://www.youtube.com/watch?v=PFDu9oVAE-g&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab&index=14) (17 minutes) provides an excellent basis and is beautifully illustrated.
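As a concrete reminder of the defining property $\bf A v = \lambda v$, here is a small NumPy check on an arbitrary symmetric example matrix:

```
import numpy as np

# For a symmetric matrix, np.linalg.eigh returns eigenvalues in ascending
# order and the corresponding eigenvectors as columns.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

evals, evectors = np.linalg.eigh(A)
print(evals)  # [1. 3.]

# Verify the defining property A v = lambda v for each eigenpair
for lam, v in zip(evals, evectors.T):
    assert np.allclose(A @ v, lam * v)
```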
```
# @title Video 1: PCA
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="-f6T9--oM0E", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
---
# Setup
Run these cells to get the tutorial started.
```
# Imports
import numpy as np
import matplotlib.pyplot as plt
# @title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def plot_eigenvalues(evals):
"""
Plots eigenvalues.
Args:
evals (numpy array of floats) : Vector of eigenvalues
Returns:
Nothing.
"""
plt.figure(figsize=(4, 4))
plt.plot(np.arange(1, len(evals) + 1), evals, 'o-k')
plt.xlabel('Component')
plt.ylabel('Eigenvalue')
plt.title('Scree plot')
plt.xticks(np.arange(1, len(evals) + 1))
plt.ylim([0, 2.5])
def sort_evals_descending(evals, evectors):
"""
Sorts eigenvalues and eigenvectors in decreasing order. Also aligns first two
eigenvectors to be in first two quadrants (if 2D).
Args:
evals (numpy array of floats) : Vector of eigenvalues
evectors (numpy array of floats) : Corresponding matrix of eigenvectors
each column corresponds to a different
eigenvalue
Returns:
(numpy array of floats) : Vector of eigenvalues after sorting
(numpy array of floats) : Matrix of eigenvectors after sorting
"""
index = np.flip(np.argsort(evals))
evals = evals[index]
evectors = evectors[:, index]
if evals.shape[0] == 2:
if np.arccos(np.matmul(evectors[:, 0],
1 / np.sqrt(2) * np.array([1, 1]))) > np.pi / 2:
evectors[:, 0] = -evectors[:, 0]
if np.arccos(np.matmul(evectors[:, 1],
1 / np.sqrt(2) * np.array([-1, 1]))) > np.pi / 2:
evectors[:, 1] = -evectors[:, 1]
return evals, evectors
def plot_data(X):
"""
Plots bivariate data. Includes a plot of each random variable, and a scatter
plot of their joint activity. The title indicates the sample
correlation calculated from the data.
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
Returns:
Nothing.
"""
fig = plt.figure(figsize=[8, 4])
gs = fig.add_gridspec(2, 2)
ax1 = fig.add_subplot(gs[0, 0])
ax1.plot(X[:, 0], color='k')
plt.ylabel('Neuron 1')
ax2 = fig.add_subplot(gs[1, 0])
ax2.plot(X[:, 1], color='k')
plt.xlabel('Sample Number (sorted)')
plt.ylabel('Neuron 2')
ax3 = fig.add_subplot(gs[:, 1])
ax3.plot(X[:, 0], X[:, 1], '.', markerfacecolor=[.5, .5, .5],
markeredgewidth=0)
ax3.axis('equal')
plt.xlabel('Neuron 1 activity')
plt.ylabel('Neuron 2 activity')
plt.title('Sample corr: {:.1f}'.format(np.corrcoef(X[:, 0], X[:, 1])[0, 1]))
plt.show()
def get_data(cov_matrix):
"""
Returns a matrix of 1000 samples from a bivariate, zero-mean Gaussian
Note that samples are sorted in ascending order for the first random
variable.
Args:
cov_matrix (numpy array of floats) : desired covariance matrix
Returns:
(numpy array of floats) : samples from the bivariate Gaussian,
with each column corresponding to a
different random variable
"""
mean = np.array([0, 0])
X = np.random.multivariate_normal(mean, cov_matrix, size=1000)
indices_for_sorting = np.argsort(X[:, 0])
X = X[indices_for_sorting, :]
return X
def calculate_cov_matrix(var_1, var_2, corr_coef):
"""
Calculates the covariance matrix based on the variances and
correlation coefficient.
Args:
var_1 (scalar) : variance of the first random variable
var_2 (scalar) : variance of the second random variable
corr_coef (scalar) : correlation coefficient
Returns:
(numpy array of floats) : covariance matrix
"""
cov = corr_coef * np.sqrt(var_1 * var_2)
cov_matrix = np.array([[var_1, cov], [cov, var_2]])
return cov_matrix
def define_orthonormal_basis(u):
"""
Calculates an orthonormal basis given an arbitrary vector u.
Args:
u (numpy array of floats) : arbitrary 2D vector used for new basis
Returns:
(numpy array of floats) : new orthonormal basis columns correspond to
basis vectors
"""
u = u / np.sqrt(u[0] ** 2 + u[1] ** 2)
w = np.array([-u[1], u[0]])
W = np.column_stack((u, w))
return W
def plot_data_new_basis(Y):
"""
Plots bivariate data after transformation to new bases. Similar to plot_data
but with colors corresponding to projections onto basis 1 (red) and
basis 2 (blue).
The title indicates the sample correlation calculated from the data.
Note that samples are re-sorted in ascending order for the first random
variable.
Args:
Y (numpy array of floats) : Data matrix in new basis each column
corresponds to a different random variable
Returns:
Nothing.
"""
fig = plt.figure(figsize=[8, 4])
gs = fig.add_gridspec(2, 2)
ax1 = fig.add_subplot(gs[0, 0])
ax1.plot(Y[:, 0], 'r')
plt.ylabel('Projection \n basis vector 1')
ax2 = fig.add_subplot(gs[1, 0])
ax2.plot(Y[:, 1], 'b')
plt.xlabel('Sample Number')
plt.ylabel('Projection \n basis vector 2')
ax3 = fig.add_subplot(gs[:, 1])
ax3.plot(Y[:, 0], Y[:, 1], '.', color=[.5, .5, .5])
ax3.axis('equal')
plt.xlabel('Projection basis vector 1')
plt.ylabel('Projection basis vector 2')
plt.title('Sample corr: {:.1f}'.format(np.corrcoef(Y[:, 0], Y[:, 1])[0, 1]))
plt.show()
def change_of_basis(X, W):
"""
Projects data onto a new basis.
Args:
X (numpy array of floats) : Data matrix each column corresponding to a
different random variable
W (numpy array of floats) : new orthonormal basis columns correspond to
basis vectors
Returns:
(numpy array of floats) : Data matrix expressed in new basis
"""
Y = np.matmul(X, W)
return Y
def plot_basis_vectors(X, W):
"""
Plots bivariate data as well as new basis vectors.
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
W (numpy array of floats) : Square matrix representing new orthonormal
basis each column represents a basis vector
Returns:
Nothing.
"""
plt.figure(figsize=[4, 4])
plt.plot(X[:, 0], X[:, 1], '.', color=[.5, .5, .5], label='Data')
plt.axis('equal')
plt.xlabel('Neuron 1 activity')
plt.ylabel('Neuron 2 activity')
plt.plot([0, W[0, 0]], [0, W[1, 0]], color='r', linewidth=3,
label='Basis vector 1')
plt.plot([0, W[0, 1]], [0, W[1, 1]], color='b', linewidth=3,
label='Basis vector 2')
plt.legend()
plt.show()
```
---
# Section 1: Calculate the eigenvectors of the sample covariance matrix
As we saw in the lecture, PCA represents data in a new orthonormal basis defined by the eigenvectors of the covariance matrix. Remember that in the previous tutorial, we generated bivariate normal data with a specified covariance matrix $\bf \Sigma$, whose $(i,j)$th element is:
\begin{equation}
\Sigma_{ij} = E[ x_i x_j ] - E[ x_i] E[ x_j ] .
\end{equation}
However, in real life we don't have access to this ground-truth covariance matrix. To get around this, we can use the sample covariance matrix, $\bf\hat\Sigma$, which is calculated directly from the data. The $(i,j)$th element of the sample covariance matrix is:
\begin{equation}
\hat \Sigma_{ij} = \frac{1}{N_\text{samples}}{\bf x}_i^T {\bf x}_j - \bar {\bf x}_i \bar{\bf x}_j ,
\end{equation}
where ${\bf x}_i = [ x_i(1), x_i(2), \dots,x_i(N_\text{samples})]^T$ is a column vector representing all measurements of neuron $i$, and $\bar {\bf x}_i$ is the mean of neuron $i$ across samples:
\begin{equation}
\bar {\bf x}_i = \frac{1}{N_\text{samples}} \sum_{k=1}^{N_\text{samples}} x_i(k).
\end{equation}
If we assume that the data has already been mean-subtracted, then we can write the sample covariance matrix in a much simpler matrix form:
\begin{align}
{\bf \hat \Sigma}
&= \frac{1}{N_\text{samples}} {\bf X}^T {\bf X}.
\end{align}
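As a sanity check on this matrix form, it can be compared against NumPy's built-in estimator; a minimal sketch with arbitrary random data (not the tutorial's):

```
import numpy as np

# For mean-centered data X (samples x variables), the matrix form
# Sigma_hat = X^T X / N_samples matches NumPy's biased covariance estimate.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
X = X - X.mean(axis=0)  # mean-subtract each column

sigma_hat = X.T @ X / X.shape[0]

# np.cov with bias=True also divides by N_samples
# (rowvar=False: columns are variables)
assert np.allclose(sigma_hat, np.cov(X, rowvar=False, bias=True))
```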
## Exercise 1: Calculation of the covariance matrix
Before calculating the eigenvectors, you must first calculate the sample covariance matrix.
**Steps**
* Complete the function `get_sample_cov_matrix` by first subtracting the sample mean of the data, then calculate $\bf \hat \Sigma$ using the equation above.
* Use `get_data` to generate bivariate normal data, and calculate the sample covariance matrix with your finished `get_sample_cov_matrix`. Compare this estimate to the true covariance matrix using `calculate_cov_matrix`.
```
help(get_data)
help(calculate_cov_matrix)
def get_sample_cov_matrix(X):
"""
Returns the sample covariance matrix of data X
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
Returns:
(numpy array of floats) : Covariance matrix
"""
#################################################
## TODO for students: calculate the covariance matrix
# Fill out function and remove
raise NotImplementedError("Student exercise: calculate the covariance matrix!")
#################################################
# Subtract the mean of X
X = ...
# Calculate the covariance matrix (hint: use np.matmul)
cov_matrix = ...
return cov_matrix
##########################################################
## TODO for students: generate bivariate Gaussian data
# with variances of 1 and a correlation coefficient of 0.8
# compare the true and sample covariance matrices
##########################################################
variance_1 = 1
variance_2 = 1
corr_coef = 0.8
np.random.seed(2020) # set random seed
# Uncomment below code to test your function
# cov_matrix = calculate_cov_matrix(variance_1, variance_2, corr_coef)
# print(cov_matrix)
# X = get_data(cov_matrix)
# sample_cov_matrix = get_sample_cov_matrix(X)
# print(sample_cov_matrix)
```
SAMPLE OUTPUT
```
[[1. 0.8]
[0.8 1. ]]
[[0.99315313 0.82347589]
[0.82347589 1.01281397]]
```
```
# to_remove solution
def get_sample_cov_matrix(X):
"""
Returns the sample covariance matrix of data X
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
Returns:
(numpy array of floats) : Covariance matrix
"""
# Subtract the mean of X
X = X - np.mean(X, 0)
# Calculate the covariance matrix (hint: use np.matmul)
cov_matrix = 1 / X.shape[0] * np.matmul(X.T, X)
return cov_matrix
variance_1 = 1
variance_2 = 1
corr_coef = 0.8
np.random.seed(2020) # set random seed
cov_matrix = calculate_cov_matrix(variance_1, variance_2, corr_coef)
print(cov_matrix)
X = get_data(cov_matrix)
sample_cov_matrix = get_sample_cov_matrix(X)
print(sample_cov_matrix)
```
## Exercise 2: Eigenvectors of the Covariance matrix
Next you will calculate the eigenvectors of the covariance matrix. Plot them along with the data to check that they align with the geometry of the data.
**Steps:**
* Calculate the eigenvalues and eigenvectors of the sample covariance matrix. (**Hint:** use `np.linalg.eigh`, which finds the eigenvalues of a symmetric matrix).
* Use the provided code to sort the eigenvalues in descending order.
* Plot the eigenvectors on a scatter plot of the data, using the function `plot_basis_vectors`.
```
help(sort_evals_descending)
help(plot_basis_vectors)
#################################################
## TO DO for students: Calculate and sort the eigenvalues in descending order
#################################################
# Calculate the eigenvalues and eigenvectors
# evals, evectors = ...
# Sort the eigenvalues in descending order
# evals, evectors = ...
# plot_basis_vectors(X, evectors)
# to_remove solution
# Calculate the eigenvalues and eigenvectors
evals, evectors = np.linalg.eigh(cov_matrix)
# Sort the eigenvalues in descending order
evals, evectors = sort_evals_descending(evals, evectors)
with plt.xkcd():
plot_basis_vectors(X, evectors)
```
---
# Section 2: Perform PCA by projecting data onto the eigenvectors
To perform PCA, we will project the data onto the eigenvectors of the covariance matrix, i.e.:
\begin{equation}
\bf S = X W
\end{equation}
where $\bf S$ is an $N_\text{samples} \times N$ matrix representing the projected data (also called *scores*), and $W$ is an $N\times N$ orthogonal matrix, each of whose columns represents the eigenvectors of the covariance matrix (also called *weights* or *loadings*).
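To see both the projection and the resulting scores concretely, here is a minimal sketch (random data with correlation 0.8, mirroring the tutorial's setup) showing that the scores are uncorrelated:

```
import numpy as np

# Project mean-centered data onto the eigenvectors of its sample
# covariance matrix; the resulting scores S = X W are uncorrelated.
rng = np.random.default_rng(1)
X = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=1000)
X = X - X.mean(axis=0)

cov = X.T @ X / X.shape[0]
evals, W = np.linalg.eigh(cov)   # W: eigenvectors as columns
S = X @ W                        # scores

# The covariance of the scores is diagonal: off-diagonal terms vanish
score_cov = S.T @ S / S.shape[0]
assert abs(score_cov[0, 1]) < 1e-10
```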
## Exercise 3: PCA implementation
You will now perform PCA on the data using the intuition you have developed so far. Fill in the function below to carry out the steps to perform PCA by projecting the data onto the eigenvectors of its covariance matrix.
**Steps:**
* First subtract the mean.
* Then calculate the sample covariance matrix.
* Then find the eigenvalues and eigenvectors and sort them in descending order.
* Finally project the mean-centered data onto the eigenvectors.
```
help(change_of_basis)
help(plot_data_new_basis)
def pca(X):
"""
Performs PCA on multivariate data.
Args:
X (numpy array of floats): Data matrix each column corresponds to a
different random variable
Returns:
(numpy array of floats) : Data projected onto the new basis
(numpy array of floats) : Vector of eigenvalues
(numpy array of floats) : Corresponding matrix of eigenvectors
"""
#################################################
## TODO for students: calculate the covariance matrix
# Fill out function and remove
raise NotImplementedError("Student exercise: sort eigenvalues/eigenvectors!")
#################################################
# Subtract the mean of X
X = ...
# Calculate the sample covariance matrix
cov_matrix = ...
# Calculate the eigenvalues and eigenvectors
evals, evectors = ...
# Sort the eigenvalues in descending order
evals, evectors = ...
# Project the data onto the new eigenvector basis
score = ...
return score, evectors, evals
#################################################
## TODO for students: Call the function to calculate the eigenvectors/eigenvalues
#################################################
# Perform PCA on the data matrix X
# score, evectors, evals = ...
# Plot the data projected into the new basis
# plot_data_new_basis(score)
# to_remove solution
def pca(X):
"""
Performs PCA on multivariate data.
Args:
X (numpy array of floats) : Data matrix each column corresponds to a
different random variable
Returns:
(numpy array of floats) : Data projected onto the new basis
(numpy array of floats) : Vector of eigenvalues
(numpy array of floats) : Corresponding matrix of eigenvectors
"""
# Subtract the mean of X
X = X - np.mean(X, axis=0)
# Calculate the sample covariance matrix
cov_matrix = get_sample_cov_matrix(X)
# Calculate the eigenvalues and eigenvectors
evals, evectors = np.linalg.eigh(cov_matrix)
# Sort the eigenvalues in descending order
evals, evectors = sort_evals_descending(evals, evectors)
# Project the data onto the new eigenvector basis
score = change_of_basis(X, evectors)
return score, evectors, evals
# Perform PCA on the data matrix X
score, evectors, evals = pca(X)
# Plot the data projected into the new basis
with plt.xkcd():
plot_data_new_basis(score)
```
## Plot and explore the eigenvalues
Finally, we will examine the eigenvalues of the covariance matrix. Remember that each eigenvalue describes the variance of the data projected onto its corresponding eigenvector. This is an important concept because it allows us to rank the PCA basis vectors based on how much variance each one can capture. First run the code below to plot the eigenvalues (sometimes called the "scree plot"). Which eigenvalue is larger?
```
plot_eigenvalues(evals)
```
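Since each eigenvalue is the variance captured along its eigenvector, a common summary is the fraction of total variance each component explains: the normalized eigenvalues. A minimal sketch, using the true eigenvalues that follow from variances of 1 and a correlation coefficient of 0.8 (they are $1 \pm 0.8$):

```
import numpy as np

# Fraction of total variance explained by each component.
# With variances of 1 and correlation 0.8, the true eigenvalues are
# 1.8 and 0.2, so the first component explains 90% of the variance.
evals = np.array([1.8, 0.2])
explained = evals / evals.sum()
print(explained)  # [0.9 0.1]
```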
## Interactive Demo: Exploration of the correlation coefficient
Run the following cell and use the slider to change the correlation coefficient in the data. You should see the scree plot and the plot of basis vectors update.
**Questions:**
* What happens to the eigenvalues as you change the correlation coefficient?
* Can you find a value for which both eigenvalues are equal?
* Can you find a value for which only one eigenvalue is nonzero?
```
# @title
# @markdown Make sure you execute this cell to enable the widget!
def refresh(corr_coef=.8):
cov_matrix = calculate_cov_matrix(variance_1, variance_2, corr_coef)
X = get_data(cov_matrix)
score, evectors, evals = pca(X)
plot_eigenvalues(evals)
plot_basis_vectors(X, evectors)
_ = widgets.interact(refresh, corr_coef=(-1, 1, .1))
```
---
# Summary
- In this tutorial, we learned that the goal of PCA is to find an orthonormal basis capturing the directions of maximum variance of the data. More precisely, the $i$th basis vector is the direction that maximizes the projected variance, while being orthogonal to all previous basis vectors. Mathematically, these basis vectors are the eigenvectors of the covariance matrix (also called *loadings*).
- PCA also has the useful property that the projected data (*scores*) are uncorrelated.
- The projected variance along each basis vector is given by its corresponding eigenvalue. This is important because it allows us to rank the "importance" of each basis vector in terms of how much of the data variability it explains. An eigenvalue of zero means there is no variation along that direction, so it can be dropped without losing any information about the original data.
- In the next tutorial, we will use this property to reduce the dimensionality of high-dimensional data.
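The zero-eigenvalue point can be demonstrated directly. A minimal sketch (synthetic rank-1 data, not the tutorial's) in which the second eigenvalue is zero and dropping that component loses nothing:

```
import numpy as np

# Perfectly correlated data lies on a line, so the second eigenvalue
# is zero and a single score column reconstructs X exactly.
x = np.linspace(-1, 1, 100)
X = np.column_stack([x, x])            # rank-1 data
X = X - X.mean(axis=0)

evals, W = np.linalg.eigh(X.T @ X / X.shape[0])
order = np.argsort(evals)[::-1]        # sort descending
evals, W = evals[order], W[:, order]

S = X @ W
X_recon = np.outer(S[:, 0], W[:, 0])   # keep only the first component
assert np.isclose(evals[1], 0)
assert np.allclose(X, X_recon)
```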
---
# Bonus: Mathematical basis of PCA properties
```
# @title Video 2: Properties of PCA
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="p56UrMRt6-U", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
# Nearest Lat/Lon Points in xarray
It is very handy to pluck points from an xarray dataset that are nearest a latitude/longitude point of interest. One example is comparing station observations to model data at that point.
For some background, read my post on [Stack Overflow: xarray select nearest lat/lon with multi-dimension coordinates](https://stackoverflow.com/questions/58758480/xarray-select-nearest-lat-lon-with-multi-dimension-coordinates).
My initial implementation was my `pluck_points` function. It finds the nearest grid point to a lat/lon point by minimizing the absolute difference between the point and the grid coordinates.
My new implementation relies on MetPy's ability to determine the x and y values based on the map projection. The benefit of this is that it enables us to use xarray's powerful `sel()` method to obtain the points.
The two methods most often match the same points, but sometimes the matches differ slightly.
This notebook shows a benchmark of the two methods.
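As a rough sketch of the naive search just described (the idea behind `pluck_points`, not its actual implementation), finding the grid cell that minimizes the combined absolute lat/lon difference looks like this:

```
import numpy as np

# A tiny 2x2 grid of lat/lon coordinates (illustrative values only)
lats = np.array([[40.0, 40.0], [41.0, 41.0]])
lons = np.array([[-112.0, -111.0], [-112.0, -111.0]])

target_lat, target_lon = 40.9, -111.2

# Sum of absolute differences between the target and every grid cell
dist = np.abs(lats - target_lat) + np.abs(lons - target_lon)
i, j = np.unravel_index(dist.argmin(), dist.shape)
print(lats[i, j], lons[i, j])  # 41.0 -111.0
```

This scan touches every grid cell for every target point, which is why it scales poorly compared to the projection-based `sel()` approach benchmarked below.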
```
# The two methods being tested
from herbie.tools import nearest_points
from toolbox.gridded_data import pluck_points
# Get model data
from herbie.archive import Herbie
# Get point data
from synoptic.services import stations_metadata
# Plotting
import matplotlib.pyplot as plt
from toolbox.cartopy_tools import common_features, pc
import numpy as np
from datetime import datetime
import warnings
warnings.filterwarnings("ignore")
# Get Model Data
H = Herbie('2021-10-8 12:00', model='HRRR', product='sfc').xarray('TMP:2 m')
# Get Point Data
a = stations_metadata(radius='UKBKB,100')
points = np.array(list(zip(a.loc['longitude'], a.loc['latitude'])))
names = a.loc['STID'].to_numpy()
print(f'{len(names):,} stations to match to grid')
```
## Match Points
First let's look at the speed for matching just ten points. It doesn't seem the two methods are much different.
```
%%timeit
ds_nearest = nearest_points(H, points[:10], names[:10])
%%timeit
ds_pluck = pluck_points(H, points[:10], names[:10])
```
But when we try to match more points, the `nearest_points` method is much faster:
```
%%timeit
ds_nearest = nearest_points(H, points[:100], names[:100])
%%timeit
ds_pluck = pluck_points(H, points[:100], names[:100])
```
## Scalability
Let's graph the time to compute by number of points to match
```
t_nearest = []
t_pluck = []
n = []
#num = range(1,10,3):
num = [1, 5, 10, 25, 50] + list(range(100,len(names),500))
for i in num[:]:
timer = datetime.now()
ds_nearest = nearest_points(H, points[:i], names[:i])
t_nearest.append((datetime.now()-timer).total_seconds())
timer = datetime.now()
ds_pluck = pluck_points(H, points[:i], names[:i])
t_pluck.append((datetime.now()-timer).total_seconds())
n.append(i)
plt.figure(figsize=[8,5], dpi=100)
plt.plot(n[:5], t_nearest[:5], marker='o', label='nearest_points')
plt.plot(n[:5], t_pluck[:5], marker='o', label='pluck_points')
plt.xticks(n[:5])
plt.legend()
plt.ylabel('Time (seconds)')
plt.xlabel('Number of matched points')
plt.figure(figsize=[8,5], dpi=100)
plt.plot(n, t_nearest, marker='o', label='nearest_points')
plt.plot(n, t_pluck, marker='o', label='pluck_points')
plt.xticks(n)
plt.legend()
plt.ylabel('Time (seconds)')
plt.xlabel('Number of matched points')
```
**Clearly, the `nearest_points` implementation scales much better.**
I could talk about the implementation of these algorithms with [Big O notation](https://en.wikipedia.org/wiki/Big_O_notation), but I'll save that for the computer scientists. For me, I'm happy just seeing that my new function scales much better.
# Update Stack Overflow Answer
This updated method requires that you know the map projection. If you have a NetCDF file that follows CF convention, you can use MetPy's `parse_cf` method to get the data's coordinate reference system.
```
import xarray
import metpy
import cartopy.crs as ccrs
from metpy.units import units # only needed for the example
lats = np.array([[21.138 , 21.14499, 21.15197, 21.15894, 21.16591],
[21.16287, 21.16986, 21.17684, 21.18382, 21.19079],
[21.18775, 21.19474, 21.20172, 21.2087 , 21.21568],
[21.21262, 21.21962, 21.22661, 21.23359, 21.24056],
[21.2375 , 21.2445 , 21.25149, 21.25848, 21.26545]])
lons = np.array([[-122.72 , -122.69333, -122.66666, -122.63999, -122.61331],
[-122.7275 , -122.70082, -122.67415, -122.64746, -122.62078],
[-122.735 , -122.70832, -122.68163, -122.65494, -122.62825],
[-122.7425 , -122.71582, -122.68912, -122.66243, -122.63573],
[-122.75001, -122.72332, -122.69662, -122.66992, -122.64321]])
speed = np.array([[10.934007, 10.941321, 10.991583, 11.063932, 11.159435],
[10.98778 , 10.975482, 10.990983, 11.042522, 11.131154],
[11.013505, 11.001573, 10.997754, 11.03566 , 11.123781],
[11.011163, 11.000227, 11.010223, 11.049 , 11.1449 ],
[11.015698, 11.026604, 11.030653, 11.076904, 11.201464]])
ds = xarray.Dataset({'SPEED':(('x', 'y'),speed)},
coords = {'latitude': (('x', 'y'), lats),
'longitude': (('x', 'y'), lons)},
attrs={'variable':'Wind Speed'})
ds
# For the purpose of the minimal example in the question,
# I happen to know that these lat/lons came from the HRRR model
# and I'll add the corresponding CF data.
cf_attrs = {
'semi_major_axis': 6371229.0,
'semi_minor_axis': 6371229.0,
'inverse_flattening': 0.0,
'reference_ellipsoid_name': 'unknown',
'longitude_of_prime_meridian': 0.0,
'prime_meridian_name': 'Greenwich',
'geographic_crs_name': 'unknown',
'horizontal_datum_name': 'unknown',
'projected_crs_name': 'unknown',
'grid_mapping_name': 'lambert_conformal_conic',
'standard_parallel': (38.5, 38.5),
'latitude_of_projection_origin': 38.5,
'longitude_of_central_meridian': 262.5,
'false_easting': 0.0,
'false_northing': 0.0,
'long_name': 'HRRR model grid projection'
}
ds = ds.metpy.assign_crs(cf_attrs)
ds
# Alternatively, you can use the `parse_cf` method if your Dataset already
# has the CF convention data.
#ds.metpy.parse_cf()
# Now we need to convert the x and y index to map projection values
# This is also done with the MetPy accessor
# (I only specify the tolerance to be much higher because the
# example data is truncated; this is not needed if the lat/lon data
# were not truncated)
ds_new = ds.metpy.assign_y_x(tolerance=13*units.km)
ds_new
# Convert lat/lon points you want to match to the data's coordinate reference system (crs).
# This is done with cartopy
crs = ds_new.metpy_crs.item().to_cartopy()
crs
# I want to find the speed at a certain lat/lon point.
lat = 21.22
lon = -122.68
points = np.array([(lat, lon)])#, (21.20, -122.68)])
lats = points[:,0]
lons = points[:,1]
transformed_data = crs.transform_points(ccrs.PlateCarree(), lons, lats)
xs = transformed_data[:,0]
ys = transformed_data[:,1]
# And these are the index we wish to find, in map coordinates
xs, ys
ds_new.sel(x=xs[0], method='nearest')
ds_new.x.plot()
ccrs.PlateCarree().transform_points(crs, ds_new.x.data, ds_new.y.data)
```
# Preface
The locations requiring configuration for your experiment are marked with comments in capital letters.
# Setup
**Installations**
```
!pip install apricot-select
!pip install sphinxcontrib-napoleon
!pip install sphinxcontrib-bibtex
!git clone https://github.com/decile-team/distil.git
!git clone https://github.com/circulosmeos/gdown.pl.git
!mv distil asdf
!mv asdf/distil .
```
**Experiment-Specific Imports**
```
from distil.utils.data_handler import DataHandler_ENTER_HERE, DataHandler_Points # IMPORT YOUR DATAHANDLER HERE
from distil.utils.models.resnet import ResNet18 # IMPORT YOUR MODEL HERE
```
**Imports, Training Class Definition, Experiment Procedure Definition**
Nothing in this code block needs to be modified unless the change pertains to the experimental procedure itself.
```
import pandas as pd
import numpy as np
import copy
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
from torch.utils.data import Subset
import torch.nn.functional as F
from torch import nn
from torchvision import transforms
from torchvision import datasets
from PIL import Image
import torch
import torch.optim as optim
from torch.autograd import Variable
import sys
sys.path.append('../')
import matplotlib.pyplot as plt
import time
import math
import random
import os
import pickle
from numpy.linalg import cond
from numpy.linalg import inv
from numpy.linalg import norm
from scipy import sparse as sp
from scipy.linalg import lstsq
from scipy.linalg import solve
from scipy.optimize import nnls
from distil.active_learning_strategies.badge import BADGE
from distil.active_learning_strategies.glister import GLISTER
from distil.active_learning_strategies.margin_sampling import MarginSampling
from distil.active_learning_strategies.entropy_sampling import EntropySampling
from distil.active_learning_strategies.random_sampling import RandomSampling
from distil.active_learning_strategies.gradmatch_active import GradMatchActive
from distil.active_learning_strategies.craig_active import CRAIGActive
from distil.active_learning_strategies.fass import FASS
from distil.active_learning_strategies.adversarial_bim import AdversarialBIM
from distil.active_learning_strategies.adversarial_deepfool import AdversarialDeepFool
from distil.active_learning_strategies.core_set import CoreSet
from distil.active_learning_strategies.least_confidence import LeastConfidence
from distil.active_learning_strategies.bayesian_active_learning_disagreement_dropout import BALDDropout
from distil.utils.dataset import get_dataset
from distil.utils.train_helper import data_train
from google.colab import drive
import warnings
warnings.filterwarnings("ignore")
class Checkpoint:
def __init__(self, acc_list=None, indices=None, state_dict=None, experiment_name=None, path=None):
# If a path is supplied, load a checkpoint from there.
if path is not None:
if experiment_name is not None:
self.load_checkpoint(path, experiment_name)
else:
raise ValueError("Checkpoint contains None value for experiment_name")
return
if acc_list is None:
raise ValueError("Checkpoint contains None value for acc_list")
if indices is None:
raise ValueError("Checkpoint contains None value for indices")
if state_dict is None:
raise ValueError("Checkpoint contains None value for state_dict")
if experiment_name is None:
raise ValueError("Checkpoint contains None value for experiment_name")
self.acc_list = acc_list
self.indices = indices
self.state_dict = state_dict
self.experiment_name = experiment_name
def __eq__(self, other):
# Check if the accuracy lists are equal
acc_lists_equal = self.acc_list == other.acc_list
# Check if the indices are equal
indices_equal = self.indices == other.indices
# Check if the experiment names are equal
experiment_names_equal = self.experiment_name == other.experiment_name
return acc_lists_equal and indices_equal and experiment_names_equal
def save_checkpoint(self, path):
# Get current time to use in file timestamp
timestamp = time.time_ns()
# Create the path supplied
os.makedirs(path, exist_ok=True)
# Name saved files using timestamp to add recency information
save_path = os.path.join(path, F"c{timestamp}1")
copy_save_path = os.path.join(path, F"c{timestamp}2")
# Write this checkpoint to the first save location
with open(save_path, 'wb') as save_file:
pickle.dump(self, save_file)
# Write this checkpoint to the second save location
with open(copy_save_path, 'wb') as copy_save_file:
pickle.dump(self, copy_save_file)
def load_checkpoint(self, path, experiment_name):
# Obtain a list of all files present at the path
timestamp_save_no = [f for f in os.listdir(path) if os.path.isfile(os.path.join(path, f))]
# If there are no such files, set values to None and return
if len(timestamp_save_no) == 0:
self.acc_list = None
self.indices = None
self.state_dict = None
return
# Sort the list of strings to get the most recent
timestamp_save_no.sort(reverse=True)
# Read in two files at a time, checking if they are equal to one another.
# If they are equal, then it means that the save operation finished correctly.
# If they are not, then it means that the save operation failed (could not be
# done atomically). Repeat this action until no possible pair can exist.
while len(timestamp_save_no) > 1:
# Pop a most recent checkpoint copy
first_file = timestamp_save_no.pop(0)
# Keep popping until two copies with equal timestamps are present
while True:
second_file = timestamp_save_no.pop(0)
# Timestamps match if the removal of the "1" or "2" results in equal numbers
if (second_file[:-1]) == (first_file[:-1]):
break
else:
first_file = second_file
# If there are no more checkpoints to examine, set to None and return
if len(timestamp_save_no) == 0:
self.acc_list = None
self.indices = None
self.state_dict = None
return
# Form the paths to the files
load_path = os.path.join(path, first_file)
copy_load_path = os.path.join(path, second_file)
# Load the two checkpoints
with open(load_path, 'rb') as load_file:
checkpoint = pickle.load(load_file)
with open(copy_load_path, 'rb') as copy_load_file:
checkpoint_copy = pickle.load(copy_load_file)
# Do not check this experiment if it is not the one we need to restore
if checkpoint.experiment_name != experiment_name:
continue
# Check if they are equal
if checkpoint == checkpoint_copy:
# This checkpoint will suffice. Populate this checkpoint's fields
# with the selected checkpoint's fields.
self.acc_list = checkpoint.acc_list
self.indices = checkpoint.indices
self.state_dict = checkpoint.state_dict
return
# Instantiate None values in acc_list, indices, and model
self.acc_list = None
self.indices = None
self.state_dict = None
def get_saved_values(self):
return (self.acc_list, self.indices, self.state_dict)
def delete_checkpoints(checkpoint_directory, experiment_name):
# Iteratively go through each checkpoint, deleting those whose experiment name matches.
timestamp_save_no = [f for f in os.listdir(checkpoint_directory) if os.path.isfile(os.path.join(checkpoint_directory, f))]
for file in timestamp_save_no:
delete_file = False
# Get file location
file_path = os.path.join(checkpoint_directory, file)
if not os.path.exists(file_path):
continue
# Unpickle the checkpoint and see if its experiment name matches
with open(file_path, "rb") as load_file:
checkpoint_copy = pickle.load(load_file)
if checkpoint_copy.experiment_name == experiment_name:
delete_file = True
# Delete this file only if the experiment name matched
if delete_file:
os.remove(file_path)
#Logs
def write_logs(logs, save_directory, rd, run):
file_path = save_directory + 'run_'+str(run)+'.txt'
with open(file_path, 'a') as f:
f.write('---------------------\n')
f.write('Round '+str(rd)+'\n')
f.write('---------------------\n')
for key, val in logs.items():
if key == 'Training':
f.write(str(key)+ '\n')
for epoch in val:
f.write(str(epoch)+'\n')
else:
f.write(str(key) + ' - '+ str(val) +'\n')
def train_one(X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, strategy, save_directory, run, checkpoint_directory, experiment_name):
# Define acc initially
acc = np.zeros(n_rounds+1)
initial_unlabeled_size = X_unlabeled.shape[0]
initial_round = 1
# Define an index map
index_map = np.arange(initial_unlabeled_size)
# Attempt to load a checkpoint. If one exists, then the experiment crashed.
training_checkpoint = Checkpoint(experiment_name=experiment_name, path=checkpoint_directory)
rec_acc, rec_indices, rec_state_dict = training_checkpoint.get_saved_values()
# Check if there are values to recover
if rec_acc is not None:
# Restore the accuracy list
for i in range(len(rec_acc)):
acc[i] = rec_acc[i]
# Restore the indices list and shift those unlabeled points to the labeled set.
index_map = np.delete(index_map, rec_indices)
# Record initial size of X_tr
initial_seed_size = X_tr.shape[0]
X_tr = np.concatenate((X_tr, X_unlabeled[rec_indices]), axis=0)
X_unlabeled = np.delete(X_unlabeled, rec_indices, axis = 0)
y_tr = np.concatenate((y_tr, y_unlabeled[rec_indices]), axis = 0)
y_unlabeled = np.delete(y_unlabeled, rec_indices, axis = 0)
# Restore the model
net.load_state_dict(rec_state_dict)
# Fix the initial round
initial_round = (X_tr.shape[0] - initial_seed_size) // budget + 1
# Ensure loaded model is moved to GPU
if torch.cuda.is_available():
net = net.cuda()
strategy.update_model(net)
strategy.update_data(X_tr, y_tr, X_unlabeled)
else:
if torch.cuda.is_available():
net = net.cuda()
acc[0] = dt.get_acc_on_set(X_test, y_test)
print('Initial Testing accuracy:', round(acc[0]*100, 2), flush=True)
logs = {}
logs['Training Points'] = X_tr.shape[0]
logs['Test Accuracy'] = str(round(acc[0]*100, 2))
write_logs(logs, save_directory, 0, run)
#Updating the trained model in strategy class
strategy.update_model(net)
##User Controlled Loop
for rd in range(initial_round, n_rounds+1):
print('-------------------------------------------------')
print('Round', rd)
print('-------------------------------------------------')
sel_time = time.time()
idx = strategy.select(budget)
sel_time = time.time() - sel_time
print("Selection Time:", sel_time)
#Saving state of model, since labeling new points might take time
# strategy.save_state()
#Adding new points to training set
X_tr = np.concatenate((X_tr, X_unlabeled[idx]), axis=0)
X_unlabeled = np.delete(X_unlabeled, idx, axis = 0)
#Human In Loop, Assuming user adds new labels here
y_tr = np.concatenate((y_tr, y_unlabeled[idx]), axis = 0)
y_unlabeled = np.delete(y_unlabeled, idx, axis = 0)
# Update the index map
index_map = np.delete(index_map, idx, axis = 0)
print('Number of training points -',X_tr.shape[0])
#Reload state and start training
# strategy.load_state()
strategy.update_data(X_tr, y_tr, X_unlabeled)
dt.update_data(X_tr, y_tr)
t1 = time.time()
clf, train_logs = dt.train(None)
t2 = time.time()
acc[rd] = dt.get_acc_on_set(X_test, y_test)
logs = {}
logs['Training Points'] = X_tr.shape[0]
logs['Test Accuracy'] = str(round(acc[rd]*100, 2))
logs['Selection Time'] = str(sel_time)
logs['Training Time'] = str(t2 - t1)
logs['Training'] = train_logs
write_logs(logs, save_directory, rd, run)
strategy.update_model(clf)
print('Testing accuracy:', round(acc[rd]*100, 2), flush=True)
# Create a checkpoint
used_indices = np.arange(initial_unlabeled_size)
used_indices = np.delete(used_indices, index_map).tolist()
round_checkpoint = Checkpoint(acc.tolist(), used_indices, clf.state_dict(), experiment_name=experiment_name)
round_checkpoint.save_checkpoint(checkpoint_directory)
print('Training Completed')
return acc
# Define a function to perform experiments in bulk and return the mean accuracies
def BADGE_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = BADGE(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def random_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = RandomSampling(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def entropy_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = EntropySampling(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def GLISTER_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'lr': args['lr'], 'device':args['device']}
strategy = GLISTER(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args, valid=False, typeOf='rand', lam=0.1)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def FASS_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = FASS(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def adversarial_bim_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = AdversarialBIM(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def adversarial_deepfool_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = AdversarialDeepFool(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def coreset_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = CoreSet(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def least_confidence_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = LeastConfidence(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def margin_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = MarginSampling(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
# Define a function to perform experiments in bulk and return the mean accuracies
def bald_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, net, n_rounds, budget, args, nclasses, save_directory, checkpoint_directory, experiment_name):
test_acc_list = list()
fig = plt.figure(figsize=(8,6), dpi=160)
x_axis = [np.shape(X_tr)[0] + budget * x for x in range(0, n_rounds + 1)]
for i in range(n_exp):
# Copy data and model to ensure that experiments do not override base versions
X_tr_copy = copy.deepcopy(X_tr)
y_tr_copy = copy.deepcopy(y_tr)
X_unlabeled_copy = copy.deepcopy(X_unlabeled)
y_unlabeled_copy = copy.deepcopy(y_unlabeled)
X_test_copy = copy.deepcopy(X_test)
y_test_copy = copy.deepcopy(y_test)
dt_copy = copy.deepcopy(dt)
clf_copy = copy.deepcopy(net)
#Initializing Strategy Class
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = BALDDropout(X_tr_copy, y_tr_copy, X_unlabeled_copy, clf_copy, handler, nclasses, strategy_args)
test_acc = train_one(X_tr_copy, y_tr_copy, X_test_copy, y_test_copy, X_unlabeled_copy, y_unlabeled_copy, dt_copy, clf_copy, n_rounds, budget, args, nclasses, strategy, save_directory, i, checkpoint_directory, experiment_name)
test_acc_list.append(test_acc)
plt.plot(x_axis, test_acc, label=str(i))
print("EXPERIMENT", i, test_acc)
# Experiment complete; delete all checkpoints related to this experiment
delete_checkpoints(checkpoint_directory, experiment_name)
mean_test_acc = np.zeros(n_rounds + 1)
for test_acc in test_acc_list:
mean_test_acc = mean_test_acc + test_acc
mean_test_acc = mean_test_acc / n_exp
plt.plot(x_axis, mean_test_acc, label="Mean")
plt.xlabel("Labeled Set Size")
plt.ylabel("Test Acc")
plt.legend()
plt.show()
print("MEAN TEST ACC", mean_test_acc)
return mean_test_acc
```
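The eight `*_experiment_batch` functions above are identical except for which strategy they construct. Under that assumption, a single generic driver parameterized by a strategy factory could replace them; the sketch below is hypothetical (`generic_experiment_batch`, `strategy_factory`, and `run_single` are illustrative names, not part of the notebook) and uses stub callables in place of `train_one` and the real strategy classes.

```python
import numpy as np

def generic_experiment_batch(strategy_factory, run_single, n_exp):
    """Run `run_single(strategy, run_idx)` n_exp times, each with a freshly
    built strategy, and return the mean per-round test accuracy."""
    accs = []
    for i in range(n_exp):
        strategy = strategy_factory()           # fresh strategy per experiment
        accs.append(np.asarray(run_single(strategy, i)))
    return np.mean(accs, axis=0)                # average over experiments

# Toy usage with stubs; real use would pass e.g. a BADGE-building lambda and a
# closure over train_one and the copied data.
mean_acc = generic_experiment_batch(
    strategy_factory=lambda: "stub-strategy",
    run_single=lambda strategy, i: [0.5, 0.6, 0.7],
    n_exp=4,
)
print(mean_acc)  # [0.5 0.6 0.7]
```

This also removes the duplicated plotting and checkpoint-deletion code, which would live once inside the driver.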
# DATASET NAME HERE
**Parameter Definitions**
Parameters related to the specific experiment are placed here. You should examine each and modify them as needed.
```
data_set_name = 'DATASET HERE'
download_path = '../downloaded_data/'
handler = # PUT DATAHANDLER HERE
net = # MODEL HERE
# MODIFY AS NECESSARY
logs_directory = '/content/gdrive/MyDrive/colab_storage/logs/'
checkpoint_directory = '/content/gdrive/MyDrive/colab_storage/check/'
initial_model = data_set_name
model_directory = "/content/gdrive/MyDrive/colab_storage/model/"
experiment_name = "EXPERIMENT NAME HERE"
initial_seed_size = # INIT SEED SIZE HERE
training_size_cap = # TRAIN SIZE CAP HERE
nclasses = # NUM CLASSES HERE
budget = # BUDGET HERE
# CHANGE ARGS AS NECESSARY
args = {'n_epoch':300, 'lr':float(0.01), 'batch_size':20, 'max_accuracy':float(0.99), 'num_classes':nclasses, 'islogs':True, 'isreset':True, 'isverbose':True, 'device':'DEVICE HERE'}
# Train on approximately the full dataset given the budget constraints
n_rounds = (training_size_cap - initial_seed_size) // budget
# SET N EXP TO RUN (>1 for repeat)
n_exp = 1
```
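With illustrative numbers (not the notebook's actual settings), the round-count formula above works out as follows:

```python
# Illustrative values only: 100 seed points, a cap of 1000 labeled points,
# and a budget of 30 newly labeled points per round.
initial_seed_size = 100
training_size_cap = 1000
budget = 30
n_rounds = (training_size_cap - initial_seed_size) // budget
print(n_rounds)  # 30 rounds, labeling 900 points in total
```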
**Initial Loading and Training**
You may choose to train a new initial model or to continue to load a specific model. If this notebook is being executed in Colab, you should consider whether or not you need the gdown line.
```
# Mount drive containing possible saved model and define file path.
colab_model_storage_mount = "/content/gdrive"
drive.mount(colab_model_storage_mount)
# Retrieve the model from Apurva's link and save it to the drive
os.makedirs(logs_directory, exist_ok = True)
os.makedirs(checkpoint_directory, exist_ok = True)
os.makedirs(model_directory, exist_ok = True)
model_directory = os.path.join(model_directory, data_set_name)
!/content/gdown.pl/gdown.pl "INSERT SHARABLE LINK HERE" "INSERT DOWNLOAD LOCATION HERE (ideally, same as model_directory)" # MAY NOT NEED THIS LINE IF NOT CLONING MODEL FROM COLAB
X, y, X_test, y_test = get_dataset(data_set_name, download_path)
dim = np.shape(X)[1:]
X_tr = X[:initial_seed_size]
y_tr = y[:initial_seed_size].numpy()
X_unlabeled = X[initial_seed_size:]
y_unlabeled = y[initial_seed_size:].numpy()
y_test = y_test.numpy()
# COMMENT OUT ONE OR THE OTHER IF YOU WANT TO TRAIN A NEW INITIAL MODEL
#load_model = False
load_model = True
# Load the saved model if load_model is True; otherwise train a new one and save it.
if load_model:
net.load_state_dict(torch.load(model_directory))
dt = data_train(X_tr, y_tr, net, handler, args)
clf = net
else:
dt = data_train(X_tr, y_tr, net, handler, args)
clf, _ = dt.train(None)
torch.save(clf.state_dict(), model_directory)
print("Training for", n_rounds, "rounds with budget", budget, "on unlabeled set size", training_size_cap)
```
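The cell above follows a load-or-train caching pattern: reuse a saved model if one exists, otherwise train and persist it. A minimal, framework-free sketch of the same idea (names are illustrative; `pickle` stands in for `torch.save`/`torch.load`):

```python
import os
import pickle
import tempfile

def load_or_compute(path, compute):
    """Return the cached artifact at `path`, computing and saving it on a miss."""
    if os.path.exists(path):
        with open(path, 'rb') as f:
            return pickle.load(f)
    result = compute()
    with open(path, 'wb') as f:
        pickle.dump(result, f)
    return result

path = os.path.join(tempfile.mkdtemp(), 'model.pkl')
first = load_or_compute(path, lambda: {'w': 1})   # computed and saved
second = load_or_compute(path, lambda: {'w': 2})  # served from disk, not recomputed
print(second)  # {'w': 1}
```

The notebook uses an explicit `load_model` flag instead of an existence check, which makes retraining a deliberate choice rather than an automatic one.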
**Random Sampling**
```
strat_logs = logs_directory+F'{data_set_name}/random_sampling/'
os.makedirs(strat_logs, exist_ok = True)
mean_test_acc_random = random_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, clf, n_rounds, budget, args, nclasses, strat_logs, checkpoint_directory, F"{experiment_name}_random")
```
**Entropy (Uncertainty) Sampling**
```
strat_logs = logs_directory+F'{data_set_name}/entropy_sampling/'
os.makedirs(strat_logs, exist_ok = True)
mean_test_acc_entropy = entropy_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, clf, n_rounds, budget, args, nclasses, strat_logs, checkpoint_directory, F"{experiment_name}_entropy")
```
**GLISTER**
```
strat_logs = logs_directory+F'{data_set_name}/glister/'
os.makedirs(strat_logs, exist_ok = True)
mean_test_acc_glister = GLISTER_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, clf, n_rounds, budget, args, nclasses, strat_logs, checkpoint_directory, F"{experiment_name}_glister")
```
**FASS**
```
strat_logs = logs_directory+F'{data_set_name}/fass/'
os.makedirs(strat_logs, exist_ok = True)
mean_test_acc_fass = FASS_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, clf, n_rounds, budget, args, nclasses, strat_logs, checkpoint_directory, F"{experiment_name}_fass")
```
**BADGE**
```
strat_logs = logs_directory+F'{data_set_name}/badge/'
os.makedirs(strat_logs, exist_ok = True)
mean_test_acc_badge = BADGE_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, clf, n_rounds, budget, args, nclasses, strat_logs, checkpoint_directory, F"{experiment_name}_badge")
```
**CoreSet**
```
strat_logs = logs_directory+F'{data_set_name}/coreset/'
os.makedirs(strat_logs, exist_ok = True)
mean_test_acc_coreset = coreset_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, clf, n_rounds, budget, args, nclasses, strat_logs, checkpoint_directory, F"{experiment_name}_coreset")
```
**Least Confidence**
```
strat_logs = logs_directory+F'{data_set_name}/least_confidence/'
os.makedirs(strat_logs, exist_ok = True)
mean_test_acc_least_confidence = least_confidence_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, clf, n_rounds, budget, args, nclasses, strat_logs, checkpoint_directory, F"{experiment_name}_least_conf")
```
**Margin**
```
strat_logs = logs_directory+F'{data_set_name}/margin_sampling/'
os.makedirs(strat_logs, exist_ok = True)
mean_test_acc_margin = margin_experiment_batch(n_exp, X_tr, y_tr, X_test, y_test, X_unlabeled, y_unlabeled, dt, clf, n_rounds, budget, args, nclasses, strat_logs, checkpoint_directory, F"{experiment_name}_margin")
```
```
import argparse
import time
import numpy as np
import scipy.sparse as sp
import torch
from torch import optim
import torch.autograd as autograd
from torch.autograd import Variable
from model import GCNModelAE, Regularizer
from optimizer import loss_function1
from utils import load_data, mask_test_edges, preprocess_graph, get_roc_score, load_data_with_labels
import matplotlib.pyplot as plt
```
# Hyper-parameter Setting
Here we use the same settings as in our paper, WARGA.
```
parser = argparse.ArgumentParser()
parser.add_argument('--seed', type=int, default=0, help='Random seed.')
parser.add_argument('--epochs', type=int, default=200, help='Number of epochs to train.')
parser.add_argument('--hidden1', type=int, default=32, help='Number of units in the first encoding layer.')
parser.add_argument('--hidden2', type=int, default=16, help='Number of units in the second embedding layer.')
parser.add_argument('--hidden3', type=int, default=16, help='Number of units in the first hidden layer of Regularizer.')
parser.add_argument('--hidden4', type=int, default=64, help='Number of units in the second hidden layer of Regularizer.')
parser.add_argument('--gp_lambda', type=float, default=10.0, help='lambda for gradient penalty.')
parser.add_argument('--lr', type=float, default=0.001, help='Initial learning rate for Generator.')
parser.add_argument('--reglr', type=float, default=0.001, help='Initial learning rate for Regularizer.')
parser.add_argument('--dropout', type=float, default=0., help='Dropout rate (1 - keep probability).')
parser.add_argument('--dataset-str', type=str, default='citeseer', help='type of dataset.')
args,unknown = parser.parse_known_args()
torch.manual_seed(args.seed)
np.random.seed(args.seed)
Tensor = torch.cuda.FloatTensor if torch.cuda.is_available() else torch.FloatTensor
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
```
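Using `parse_known_args` (rather than `parse_args`) is what lets the cell above run inside Jupyter, which injects its own kernel arguments onto the command line. A minimal illustration with a hypothetical argument list:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--seed', type=int, default=0)
# Jupyter typically appends a '-f <kernel-connection-file>' option that the
# parser does not know about; parse_known_args tolerates it and returns the
# leftovers, whereas parse_args would error out and exit.
args, unknown = parser.parse_known_args(['--seed', '42', '-f', 'kernel.json'])
print(args.seed, unknown)  # 42 ['-f', 'kernel.json']
```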
# Model
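The regularizer is trained WGAN-GP style: the penalty term is the mean squared deviation of per-sample gradient norms from a target Lipschitz constant k (the code below uses k = 0.01). As a small numeric sketch of the penalty term alone, with toy gradient norms and k = 1.0 purely for illustration:

```python
import numpy as np

# Given per-sample gradient norms of the critic at interpolated points,
# the gradient penalty is mean((||grad|| - k)^2).
norms = np.array([0.8, 1.0, 1.2])  # toy per-sample gradient norms
k = 1.0                            # target Lipschitz constant (illustrative)
penalty = np.mean((norms - k) ** 2)
print(penalty)  # 0.0266...
```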
```
# Cited from Improved Training of Wasserstein GANs
# https://github.com/igul222/improved_wgan_training
def compute_gradient_penalty(D, real_samples, fake_samples):
# Random weight term for interpolation between real and fake samples
alpha = Tensor(np.random.random((real_samples.size(0), 1)))
# Get random interpolation between real and fake samples
interpolates = (alpha * real_samples + ((1 - alpha) * fake_samples)).requires_grad_(True)
d_interpolates = D(interpolates)
fake = Variable(Tensor(real_samples.shape[0], 1).fill_(1.0), requires_grad=False)
# Get gradient w.r.t. interpolates
gradients = autograd.grad(outputs=d_interpolates,
inputs=interpolates,
grad_outputs=fake,
create_graph=True,
retain_graph=True,
only_inputs=True)[0]
gradients = gradients.view(gradients.size(0), -1)
gradient_penalty = ((gradients.norm(2, dim=1) - 0.01) ** 2).mean()  # NOTE: the cited WGAN-GP code penalizes deviation from a target norm of 1.0; 0.01 here departs from it
return gradient_penalty
def gae_for(args):
print("Using {} dataset".format(args.dataset_str))
adj, features,true_labels = load_data_with_labels(args.dataset_str)
n_nodes, feat_dim = features.shape
features = features.to(device)
# Store original adjacency matrix (without diagonal entries) for later
adj_orig = adj
adj_orig = adj_orig - sp.dia_matrix((adj_orig.diagonal()[np.newaxis, :], [0]), shape=adj_orig.shape)
adj_orig.eliminate_zeros()
# train-validation-test split
adj_train, train_edges, val_edges, val_edges_false, test_edges, test_edges_false = mask_test_edges(adj)
adj = adj_train
# Some preprocessing
adj_norm = preprocess_graph(adj)
adj_norm = adj_norm.to(device)
adj_label = adj_train + sp.eye(adj_train.shape[0])
adj_label = torch.FloatTensor(adj_label.toarray())
adj_label = adj_label.to(device)
pos_weight = float(adj.shape[0] * adj.shape[0] - adj.sum()) / adj.sum()
norm = adj.shape[0] * adj.shape[0] / float((adj.shape[0] * adj.shape[0] - adj.sum()) * 2)
# Models
model = GCNModelAE(feat_dim, args.hidden1, args.hidden2, args.dropout).to(device)
regularizer = Regularizer(args.hidden3, args.hidden2, args.hidden4).to(device)
optimizer = optim.Adam(model.parameters(), lr=args.lr)
regularizer_optimizer = optim.Adam(regularizer.parameters(), lr=args.reglr)
val_ap = []
train_loss = []
for epoch in range(args.epochs):
t = time.time()
model.train()
regularizer.train()
# Generate the embeddings
predicted_labels_prob, emb = model(features, adj_norm)
# Wasserstein Regularizer: one critic update per generator step (widen range(1) for more)
for i in range(1):
f_z = regularizer(emb).to(device)
r = torch.normal(0.0, 1.0, [n_nodes, args.hidden2]).to(device)
f_r = regularizer(r)
# add the gradient penalty to objective function
gradient_penalty = compute_gradient_penalty(regularizer, r, emb)
reg_loss = - f_r.mean() + f_z.mean() + args.gp_lambda * gradient_penalty
regularizer_optimizer.zero_grad()
reg_loss.backward(retain_graph=True)
regularizer_optimizer.step()
# Update the Generator
f_z = regularizer(emb)
generator_loss = -f_z.mean()
loss = loss_function1(preds=predicted_labels_prob, labels=adj_label,
norm=norm, pos_weight=torch.tensor(pos_weight))
loss = loss + generator_loss
optimizer.zero_grad()
loss.backward()
cur_loss = loss.item()
optimizer.step()
# Print validation results every 40 epochs
if epoch % 40 == 0:
hidden_emb = emb.cpu().data.numpy()
roc_curr, ap_curr = get_roc_score(hidden_emb, adj_orig, val_edges, val_edges_false)
print("Epoch:", '%04d' % (epoch + 1), "train_loss=", "{:.5f}".format(cur_loss))
print("val_ap=", "{:.5f}".format(ap_curr))
print("time=", "{:.5f}".format(time.time() - t))
val_ap.append(ap_curr)
train_loss.append(cur_loss)
print("Optimization Finished!")
# Plot the learning curve
fig = plt.figure(figsize=(30,10))
ax1 = fig.add_subplot(1,2,1)
ax2 = fig.add_subplot(1,2,2)
ax1.plot(train_loss, label='Training loss')
ax1.set_xlabel('Epoch')
ax1.set_ylabel('Loss')
ax1.legend(frameon=False)
ax2.plot(val_ap, label='Validation Average Precision Score',color='Red')
ax2.set_xlabel('Epoch')
ax2.set_ylabel('AP')
ax2.legend(frameon=False)
plt.show()
# Testing results
hidden_emb = emb.cpu().data.numpy()
roc_score, ap_score = get_roc_score(hidden_emb, adj_orig, test_edges, test_edges_false)
print("The Last Epoch's Results are:")
print('Test ROC score: ' + str(roc_score))
print('Test AP score: ' + str(ap_score))
return roc_score, ap_score
```
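The gradient-penalty term above needs autograd to run, but its effect is easy to verify on a toy critic. The sketch below (a hypothetical helper, not part of the model) uses a *linear* critic, whose gradient with respect to its input is just its weight vector, so the penalty can be computed in closed form with plain numpy; note the WGAN-GP paper's target norm of 1.0, rather than the 0.01 used in this notebook.

```python
import numpy as np

# For a linear critic D(x) = w @ x, the input gradient is w itself,
# so the WGAN-GP penalty collapses to (||w||_2 - target)^2.
def linear_critic_gradient_penalty(w, target=1.0):
    grad_norm = np.linalg.norm(w)
    return (grad_norm - target) ** 2

w = np.array([3.0, 4.0])  # ||w||_2 = 5
penalty = linear_critic_gradient_penalty(w)
print(penalty)  # (5 - 1)^2 = 16.0
```

A critic with unit-norm gradients (e.g. `w = [0.6, 0.8]`) incurs zero penalty, which is exactly the 1-Lipschitz behavior the penalty encourages.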
# Run
```
once = False
if __name__ == '__main__':
if once:
gae_for(args)
else:
test_roc = []
test_ap = []
# Run with 10 different random seeds
for seed in range(10):
print('Seed',seed)
args.seed = seed
torch.manual_seed(args.seed)
roc_score, ap_score = gae_for(args)
test_roc.append(roc_score)
test_ap.append(ap_score)
# show results by mean and std
print(test_roc)
print('mean test AUC is',np.mean(test_roc),' std ', np.std(test_roc))
print(test_ap)
print('mean test AP is ',np.mean(test_ap), ' std ', np.std(test_ap))
```
<a href="https://colab.research.google.com/github/claytonchagas/intpy_prod/blob/main/8_3_automatic_evaluation_dataone_tiny_gsgp_ast_only_DB.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!sudo apt-get update
!sudo apt-get install python3.9
!python3.9 -V
!which python3.9
```
#**i. Colab hardware and software specs:**
- n1-highmem-2 instance
- 2vCPU @ 2.3GHz
- 13GB RAM
- 100GB Free Space
- idle cut-off 90 minutes
- maximum lifetime 12 hours
```
# Colab hardware info (processor and memory):
# !cat /proc/cpuinfo
# !cat /proc/meminfo
# !lscpu
!lscpu | egrep 'Model name|Socket|Thread|NUMA|CPU\(s\)'
print("---------------------------------")
!free -m
# Colab SO structure and version
!ls -a
print("---------------------------------")
!ls -l /
print("---------------------------------")
!lsb_release -a
```
#**ii. Cloning IntPy repository:**
- https://github.com/claytonchagas/intpy_dev.git
```
!git clone https://github.com/claytonchagas/intpy_dev.git
!ls -a
print("---------------------------------")
%cd intpy_dev/
!git checkout d06ae3e
!ls -a
print("---------------------------------")
%cd GSGP3/
!ls -a
print("---------------------------------")
!git branch
print("---------------------------------")
#!git log --pretty=oneline --abbrev-commit
#!git log --all --decorate --oneline --graph
```
#**iii. GSGP experiments' evolutions and cutoff by approach**
- This evaluation does not make sense as the simulation parameters are fixed.
#**iv. GSGP experiments', three mixed trials**
- This evaluation does not make sense as the simulation parameters are fixed.
#**1. Fast execution, all versions (v0.1.x and from v0.2.1.x to v0.2.7.x)**
##**1.1 Fast execution: only intra-cache**
###**1.1.1 Fast execution: only intra-cache => experiment's executions**
```
!rm -rf .intpy;\
echo "IntPy only intra-cache";\
experimento=TINY_GSHCGP_4py3_no_orig_memo.py;\
echo "Experiment: $experimento";\
for i in "--no-cache" "v01x" "v021x" "v022x" "v023x" "v024x" "v025x" "v027x";\
do rm -rf output_intra_$i.dat;\
rm -rf .intpy;\
echo "---------------------------------";\
echo "IntPy version $i";\
for j in {1..5};\
do echo "Execution $j";\
rm -rf .intpy;\
if [ "$i" = "--no-cache" ]; then python3.9 $experimento $i >> output_intra_$i.dat;\
else python3.9 $experimento -v $i >> output_intra_$i.dat;\
fi;\
echo "Done execution $j";\
done;\
echo "Done IntPy distribution version $i";\
done;\
!ls -a
!echo "Statistics evaluation:";\
rm -rf stats_intra.dat;\
for k in "--no-cache" "v01x" "v021x" "v022x" "v023x" "v024x" "v025x" "v027x";\
do echo "Statistics version $k" >> stats_intra.dat;\
echo "Statistics version $k";\
python3.9 stats_colab_gsgp.py output_intra_$k.dat;\
python3.9 stats_colab_gsgp.py output_intra_$k.dat >> stats_intra.dat;\
echo "---------------------------------";\
done;\
```
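The script `stats_colab_gsgp.py` is not shown in this notebook; assuming it reads one wall-clock time per line of the `output_intra_*.dat` files, a minimal stand-in for its summary step could look like this (the function name and output format are illustrative, not the actual script):

```python
import statistics

def summarize_times(lines):
    # Parse one float per non-empty line (per-execution wall time)
    # and report the median, mean, and sample standard deviation.
    times = [float(x) for x in lines if x.strip()]
    return statistics.median(times), statistics.mean(times), statistics.stdev(times)

sample = ["1.20\n", "1.10\n", "1.30\n", "1.25\n", "1.15\n"]
med, mean, sd = summarize_times(sample)
print(round(med, 2), round(mean, 2))  # 1.2 1.2
```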
###**1.1.2 Fast execution: only intra-cache => charts generation**
```
%matplotlib inline
import matplotlib.pyplot as plt
versions = ['--no-cache', 'v01x', 'v021x', 'v022x', 'v023x', 'v024x', 'v025x', 'v027x']
colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:purple', 'tab:grey', 'tab:olive', 'tab:cyan', 'tab:pink']
filev = "f_intra_"
data = "data_intra_"
dataf = "dataf_intra_"
for i, j in zip(versions, colors):
filev_version = filev+i
data_version = data+i
dataf_version = dataf+i
file_intra = open("output_intra_"+i+".dat", "r")
data_intra = []
dataf_intra = []
for x in file_intra.readlines()[47::48]:
data_intra.append(float(x))
file_intra.close()
#print(data_intra)
for y in data_intra:
dataf_intra.append(round(y, 5))
print(i+": ",dataf_intra)
running1_1 = ['1st', '2nd', '3rd', '4th', '5th']
plt.figure(figsize = (10, 5))
plt.bar(running1_1, dataf_intra, color =j, width = 0.4)
plt.grid(axis='y')
for index, datas in enumerate(dataf_intra):
plt.text(x=index, y=datas, s=datas, ha = 'center', va = 'bottom', fontweight='bold')
plt.xlabel("Running only with intra cache "+i, fontweight='bold')
plt.ylabel("Time in seconds", fontweight='bold')
plt.title("Chart "+i+" intra - dataone_tiny_gsgp - with intra cache, no inter cache - IntPy "+i+" version", fontweight='bold')
plt.savefig("chart_intra_"+i+".png")
plt.close()
#plt.show()
import matplotlib.pyplot as plt
file_intra = open("stats_intra.dat", "r")
data_intra = []
for x in file_intra.readlines()[5::8]:
data_intra.append(round(float(x[8::]), 5))
file_intra.close()
print(data_intra)
versions = ["--no-cache", "0.1.x", "0.2.1.x", "0.2.2.x", "0.2.3.x", "0.2.4.x", "0.2.5.x", "0.2.7.x"]
#colors =['royalblue', 'forestgreen', 'orangered', 'purple', 'skyblue', 'lime', 'lightgrey', 'tan']
colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:purple', 'tab:grey', 'tab:olive', 'tab:cyan', 'tab:pink']
plt.figure(figsize = (10, 5))
plt.bar(versions, data_intra, color = colors, width = 0.7)
plt.grid(axis='y')
for index, datas in enumerate(data_intra):
plt.text(x=index, y=datas, s=datas, ha = 'center', va = 'bottom', fontweight='bold')
plt.xlabel("Median for 5 executions in each version, intra cache", fontweight='bold')
plt.ylabel("Time in seconds", fontweight='bold')
plt.title("dataone_tiny_gsgp, cache intra-running, comparison of all versions", fontweight='bold')
plt.savefig('compare_median_intra.png')
plt.close()
#plt.show()
```
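The `[47::48]` slice in the cell above assumes each execution's log is exactly 48 lines long, with the elapsed time on the last line. A quick illustration of what that slice keeps:

```python
# [47::48] keeps every 48th element starting at index 47 —
# i.e. the last line of each 48-line block.
lines = [f"line{i}" for i in range(96)]
picked = lines[47::48]
print(picked)  # ['line47', 'line95']
```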
##**1.2 Fast execution: full cache -> intra and inter-cache**
###**1.2.1 Fast execution: full cache -> intra and inter-cache => experiment's executions**
```
!rm -rf .intpy;\
echo "IntPy full cache -> intra and inter-cache";\
experimento=TINY_GSHCGP_4py3_no_orig_memo.py;\
echo "Experiment: $experimento";\
for i in "--no-cache" "v01x" "v021x" "v022x" "v023x" "v024x" "v025x" "v027x";\
do rm -rf output_full_$i.dat;\
rm -rf .intpy;\
echo "---------------------------------";\
echo "IntPy version $i";\
for j in {1..5};\
do echo "Execution $j";\
if [ "$i" = "--no-cache" ]; then python3.9 $experimento $i >> output_full_$i.dat;\
else python3.9 $experimento -v $i >> output_full_$i.dat;\
fi;\
echo "Done execution $j";\
done;\
echo "Done IntPy version $i";\
done;\
!echo "Statistics evaluation:";\
rm -rf stats_full.dat;\
for k in "--no-cache" "v01x" "v021x" "v022x" "v023x" "v024x" "v025x" "v027x";\
do echo "Statistics version $k" >> stats_full.dat;\
echo "Statistics version $k";\
python3.9 stats_colab_gsgp.py output_full_$k.dat;\
python3.9 stats_colab_gsgp.py output_full_$k.dat >> stats_full.dat;\
echo "---------------------------------";\
done;\
```
###**1.2.2 Fast execution: full cache -> intra and inter-cache => charts generation**
```
%matplotlib inline
import matplotlib.pyplot as plt
versions = ['--no-cache', 'v01x', 'v021x', 'v022x', 'v023x', 'v024x', 'v025x', 'v027x']
colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:purple', 'tab:grey', 'tab:olive', 'tab:cyan', 'tab:pink']
filev = "f_full_"
data = "data_full_"
dataf = "dataf_full_"
for i, j in zip(versions, colors):
filev_version = filev+i
data_version = data+i
dataf_version = dataf+i
file_full = open("output_full_"+i+".dat", "r")
data_full = []
dataf_full = []
for x in file_full.readlines()[47::48]:
data_full.append(float(x))
file_full.close()
for y in data_full:
dataf_full.append(round(y, 5))
print(i+": ",dataf_full)
running1_1 = ['1st', '2nd', '3rd', '4th', '5th']
plt.figure(figsize = (10, 5))
plt.bar(running1_1, dataf_full, color =j, width = 0.4)
plt.grid(axis='y')
for index, datas in enumerate(dataf_full):
plt.text(x=index, y=datas, s=datas, ha = 'center', va = 'bottom', fontweight='bold')
plt.xlabel("Running full cache "+i, fontweight='bold')
plt.ylabel("Time in seconds", fontweight='bold')
plt.title("Chart "+i+" full - dataone_tiny_gsgp - with intra and inter cache - IntPy "+i+" version", fontweight='bold')
plt.savefig("chart_full_"+i+".png")
plt.close()
#plt.show()
import matplotlib.pyplot as plt
file_full = open("stats_full.dat", "r")
data_full = []
for x in file_full.readlines()[5::8]:
data_full.append(round(float(x[8::]), 5))
file_full.close()
print(data_full)
versions = ["--no-cache", "0.1.x", "0.2.1.x", "0.2.2.x", "0.2.3.x", "0.2.4.x", "0.2.5.x", "0.2.7.x"]
#colors =['royalblue', 'forestgreen', 'orangered', 'purple', 'skyblue', 'lime', 'lightgrey', 'tan']
colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:purple', 'tab:grey', 'tab:olive', 'tab:cyan', 'tab:pink']
plt.figure(figsize = (10, 5))
plt.bar(versions, data_full, color = colors, width = 0.7)
plt.grid(axis='y')
for index, datas in enumerate(data_full):
plt.text(x=index, y=datas, s=datas, ha = 'center', va = 'bottom', fontweight='bold')
plt.xlabel("Median for 5 executions in each version, full cache", fontweight='bold')
plt.ylabel("Time in seconds", fontweight='bold')
plt.title("dataone_tiny_gsgp, cache intra and inter-running, all versions", fontweight='bold')
plt.savefig('compare_median_full.png')
plt.close()
#plt.show()
```
##**1.3 Displaying charts to all versions**
###**1.3.1 Only intra-cache charts**
```
versions = ['--no-cache', 'v01x', 'v021x', 'v022x', 'v023x', 'v024x', 'v025x', 'v027x']
from IPython.display import Image, display
for i in versions:
display(Image("chart_intra_"+i+".png"))
print("=====================================================================================")
```
###**1.3.2 Full cache charts -> intra and inter-cache**
```
versions = ['--no-cache', 'v01x', 'v021x', 'v022x', 'v023x', 'v024x', 'v025x', 'v027x']
from IPython.display import Image, display
for i in versions:
display(Image("chart_full_"+i+".png"))
print("=====================================================================================")
```
###**1.3.3 Only intra-cache: median comparison chart of all versions**
```
from IPython.display import Image, display
display(Image("compare_median_intra.png"))
```
###**1.3.4 Full cache -> intra and inter-cache: median comparison chart of all versions**
```
from IPython.display import Image, display
display(Image("compare_median_full.png"))
```
# Convolutional Neural Networks: Step by Step
Welcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation.
**Notation**:
- Superscript $[l]$ denotes an object of the $l^{th}$ layer.
- Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.
- Superscript $(i)$ denotes an object from the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example input.
- Subscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer.
- $n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$.
- $n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$.
We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started!
## 1 - Packages
Let's first import all the packages that you will need during this assignment.
- [numpy](https://www.numpy.org) is the fundamental package for scientific computing with Python.
- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
```
import numpy as np
import h5py
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
```
## 2 - Outline of the Assignment
You will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed:
- Convolution functions, including:
- Zero Padding
- Convolve window
- Convolution forward
- Convolution backward (optional)
- Pooling functions, including:
- Pooling forward
- Create mask
- Distribute value
- Pooling backward (optional)
This notebook will ask you to implement these functions from scratch in `numpy`. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model:
<img src="images/model.png" style="width:800px;height:300px;">
**Note** that for every forward function, there is its corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation.
## 3 - Convolutional Neural Networks
Although programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of different size, as shown below.
<img src="images/conv_nn.png" style="width:350px;height:200px;">
In this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself.
### 3.1 - Zero-Padding
Zero-padding adds zeros around the border of an image:
<img src="images/PAD.png" style="width:600px;height:400px;">
<caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **Zero-Padding**<br> Image (3 channels, RGB) with a padding of 2. </center></caption>
The main benefits of padding are the following:
- It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer.
- It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels at the edges of an image.
**Exercise**: Implement the following function, which pads all the images of a batch of examples X with zeros. [Use np.pad](https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html). Note if you want to pad the array "a" of shape $(5,5,5,5,5)$ with `pad = 1` for the 2nd dimension, `pad = 3` for the 4th dimension and `pad = 0` for the rest, you would do:
```python
a = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values = (..,..))
```
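As a quick sanity check of the pattern above on a small 2-D array:

```python
import numpy as np

# Pad a 2x2 array of ones with one row/column of zeros on every side.
a = np.ones((2, 2))
a_pad = np.pad(a, ((1, 1), (1, 1)), 'constant', constant_values=(0, 0))
print(a_pad.shape)               # (4, 4)
print(a_pad[0, 0], a_pad[1, 1])  # 0.0 1.0
```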
```
# GRADED FUNCTION: zero_pad
def zero_pad(X, pad):
"""
Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image,
as illustrated in Figure 1.
Argument:
X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
pad -- integer, amount of padding around each image on vertical and horizontal dimensions
Returns:
X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
"""
### START CODE HERE ### (≈ 1 line)
X_pad = np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)), 'constant', constant_values=((0, 0), (0, 0), (0, 0), (0, 0)))
### END CODE HERE ###
return X_pad
np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad(x, 2)
print ("x.shape =", x.shape)
print ("x_pad.shape =", x_pad.shape)
print ("x[1,1] =", x[1,1])
print ("x_pad[1,1] =", x_pad[1,1])
fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])
```
**Expected Output**:
<table>
<tr>
<td>
**x.shape**:
</td>
<td>
(4, 3, 3, 2)
</td>
</tr>
<tr>
<td>
**x_pad.shape**:
</td>
<td>
(4, 7, 7, 2)
</td>
</tr>
<tr>
<td>
**x[1,1]**:
</td>
<td>
[[ 0.90085595 -0.68372786]
[-0.12289023 -0.93576943]
[-0.26788808 0.53035547]]
</td>
</tr>
<tr>
<td>
**x_pad[1,1]**:
</td>
<td>
[[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]]
</td>
</tr>
</table>
### 3.2 - Single step of convolution
In this part, implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which:
- Takes an input volume
- Applies a filter at every position of the input
- Outputs another volume (usually of different size)
<img src="images/Convolution_schematic.gif" style="width:500px;height:300px;">
<caption><center> <u> <font color='purple'> **Figure 2** </u><font color='purple'> : **Convolution operation**<br> with a filter of 2x2 and a stride of 1 (stride = amount you move the window each time you slide) </center></caption>
In a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output.
Later in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation.
**Exercise**: Implement conv_single_step(). [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.sum.html).
```
# GRADED FUNCTION: conv_single_step
def conv_single_step(a_slice_prev, W, b):
"""
Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation
of the previous layer.
Arguments:
a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)
Returns:
Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
"""
### START CODE HERE ### (≈ 2 lines of code)
# Element-wise product between a_slice and W. Do not add the bias yet.
s = a_slice_prev * W
# Sum over all entries of the volume s.
Z = np.sum(s)
# Add bias b to Z and squeeze so that Z ends up a scalar value.
Z = np.squeeze(Z + b)
### END CODE HERE ###
return Z
np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)
Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)
```
**Expected Output**:
<table>
<tr>
<td>
**Z**
</td>
<td>
-6.99908945068
</td>
</tr>
</table>
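To see the arithmetic behind `conv_single_step`, here is the same computation carried out by hand on a tiny 2x2, single-channel slice (toy numbers chosen for readability, not the graded test case):

```python
import numpy as np

# One convolution step: element-wise product, sum, then add the bias.
a_slice = np.array([[1., 2.], [3., 4.]]).reshape(2, 2, 1)
W = np.array([[1., 0.], [0., 1.]]).reshape(2, 2, 1)  # picks out the diagonal
b = np.array([[[0.5]]])
Z = np.sum(a_slice * W) + b.item()
print(Z)  # 1 + 4 + 0.5 = 5.5
```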
### 3.3 - Convolutional Neural Networks - Forward pass
In the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume:
<center>
<video width="620" height="440" src="images/conv_kiank.mp4" type="video/mp4" controls>
</video>
</center>
**Exercise**: Implement the function below to convolve the filters W on an input activation A_prev. This function takes as input A_prev, the activations output by the previous layer (for a batch of m inputs), F filters/weights denoted by W, and a bias vector denoted by b, where each filter has its own (single) bias. Finally you also have access to the hyperparameters dictionary which contains the stride and the padding.
**Hint**:
1. To select a 2x2 slice at the upper left corner of a matrix "a_prev" (shape (5,5,3)), you would do:
```python
a_slice_prev = a_prev[0:2,0:2,:]
```
This will be useful when you will define `a_slice_prev` below, using the `start/end` indexes you will define.
2. To define a_slice you will need to first define its corners `vert_start`, `vert_end`, `horiz_start` and `horiz_end`. This figure may be helpful for you to find how each of the corner can be defined using h, w, f and s in the code below.
<img src="images/vert_horiz_kiank.png" style="width:400px;height:300px;">
<caption><center> <u> <font color='purple'> **Figure 3** </u><font color='purple'> : **Definition of a slice using vertical and horizontal start/end (with a 2x2 filter)** <br> This figure shows only a single channel. </center></caption>
**Reminder**:
The formulas relating the output shape of the convolution to the input shape are:
$$ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$
$$ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$
$$ n_C = \text{number of filters used in the convolution}$$
For this exercise, we won't worry about vectorization, and will just implement everything with for-loops.
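These formulas are worth sanity-checking before coding them. A small illustrative helper that mirrors them:

```python
import math

def conv_output_dim(n_prev, f, pad, stride):
    # floor((n_prev - f + 2*pad) / stride) + 1, as in the formulas above
    return math.floor((n_prev - f + 2 * pad) / stride) + 1

# 4x4 input, 2x2 filter, pad=2, stride=2 -> 4x4 output
print(conv_output_dim(4, 2, 2, 2))  # 4
```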
```
# GRADED FUNCTION: conv_forward
def conv_forward(A_prev, W, b, hparameters):
"""
Implements the forward propagation for a convolution function
Arguments:
A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
b -- Biases, numpy array of shape (1, 1, 1, n_C)
hparameters -- python dictionary containing "stride" and "pad"
Returns:
Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward() function
"""
### START CODE HERE ###
# Retrieve dimensions from A_prev's shape (≈1 line)
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve dimensions from W's shape (≈1 line)
(f, f, n_C_prev, n_C) = W.shape
# Retrieve information from "hparameters" (≈2 lines)
stride = hparameters['stride']
pad = hparameters['pad']
# Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines)
n_H = int((n_H_prev - f + pad*2)/stride) + 1
n_W = int((n_W_prev - f + pad*2)/stride) + 1
# Initialize the output volume Z with zeros. (≈1 line)
Z = np.zeros((m, n_H, n_W, n_C))
# Create A_prev_pad by padding A_prev
A_prev_pad = zero_pad(X=A_prev, pad=pad)
for i in range(m): # loop over the batch of training examples
a_prev_pad = A_prev_pad[i] # Select ith training example's padded activation
for h in range(n_H): # loop over vertical axis of the output volume
for w in range(n_W): # loop over horizontal axis of the output volume
for c in range(n_C): # loop over channels (= #filters) of the output volume
# Find the corners of the current "slice" (≈4 lines)
vert_start = h * stride
vert_end = vert_start + f
horiz_start = w * stride
horiz_end = horiz_start + f
# Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)
a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
# Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)
Z[i, h, w, c] = conv_single_step(a_slice_prev, W[:, :, :, c], b[:, :, :, c])
### END CODE HERE ###
# Making sure your output shape is correct
assert(Z.shape == (m, n_H, n_W, n_C))
# Save information in "cache" for the backprop
cache = (A_prev, W, b, hparameters)
return Z, cache
np.random.seed(1)
A_prev = np.random.randn(10,4,4,3)
W = np.random.randn(2,2,3,8)
b = np.random.randn(1,1,1,8)
hparameters = {"pad" : 2,
"stride": 2}
Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
print("Z's mean =", np.mean(Z))
print("Z[3,2,1] =", Z[3,2,1])
print("cache_conv[0][1][2][3] =", cache_conv[0][1][2][3])
```
**Expected Output**:
<table>
<tr>
<td>
**Z's mean**
</td>
<td>
0.0489952035289
</td>
</tr>
<tr>
<td>
**Z[3,2,1]**
</td>
<td>
[-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437
5.18531798 8.75898442]
</td>
</tr>
<tr>
<td>
**cache_conv[0][1][2][3]**
</td>
<td>
[-0.20075807 0.18656139 0.41005165]
</td>
</tr>
</table>
Finally, a CONV layer should also contain an activation, in which case we would add the following line of code:
```python
# Convolve the window to get back one output neuron
Z[i, h, w, c] = ...
# Apply activation
A[i, h, w, c] = activation(Z[i, h, w, c])
```
You don't need to do it here.
## 4 - Pooling layer
The pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, as well as helps make feature detectors more invariant to its position in the input. The two types of pooling layers are:
- Max-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output.
- Average-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output.
<table>
<td>
<img src="images/max_pool1.png" style="width:500px;height:300px;">
<td>
<td>
<img src="images/a_pool.png" style="width:500px;height:300px;">
<td>
</table>
These pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$, which specifies the height and width of the $f \times f$ window you would compute a max or average over.
### 4.1 - Forward Pooling
Now, you are going to implement MAX-POOL and AVG-POOL, in the same function.
**Exercise**: Implement the forward pass of the pooling layer. Follow the hints in the comments below.
**Reminder**:
As there's no padding, the formulas relating the output shape of the pooling layer to the input shape are:
$$ n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor +1 $$
$$ n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor +1 $$
$$ n_C = n_{C_{prev}}$$
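A quick check of these formulas with an illustrative helper:

```python
def pool_output_dim(n_prev, f, stride):
    # floor((n_prev - f) / stride) + 1, per the formulas above (no padding)
    return (n_prev - f) // stride + 1

# 4x4 input, f=3, stride=2 -> 1x1 output
print(pool_output_dim(4, 3, 2))  # 1
```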
```
# GRADED FUNCTION: pool_forward
def pool_forward(A_prev, hparameters, mode = "max"):
"""
Implements the forward pass of the pooling layer
Arguments:
A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
hparameters -- python dictionary containing "f" and "stride"
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)
cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters
"""
# Retrieve dimensions from the input shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve hyperparameters from "hparameters"
f = hparameters["f"]
stride = hparameters["stride"]
# Define the dimensions of the output
n_H = int(1 + (n_H_prev - f) / stride)
n_W = int(1 + (n_W_prev - f) / stride)
n_C = n_C_prev
# Initialize output matrix A
A = np.zeros((m, n_H, n_W, n_C))
### START CODE HERE ###
for i in range(m): # loop over the training examples
for h in range(n_H): # loop on the vertical axis of the output volume
for w in range(n_W): # loop on the horizontal axis of the output volume
for c in range (n_C): # loop over the channels of the output volume
# Find the corners of the current "slice" (≈4 lines)
vert_start = h*stride
vert_end = vert_start + f
horiz_start = w*stride
horiz_end = horiz_start + f
# Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)
a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]
# Compute the pooling operation on the slice. Use an if statement to differentiate the modes. Use np.max/np.mean.
if mode == "max":
A[i, h, w, c] = np.max(a_prev_slice)
elif mode == "average":
A[i, h, w, c] = np.mean(a_prev_slice)
### END CODE HERE ###
# Store the input and hparameters in "cache" for pool_backward()
cache = (A_prev, hparameters)
# Making sure your output shape is correct
assert(A.shape == (m, n_H, n_W, n_C))
return A, cache
np.random.seed(1)
A_prev = np.random.randn(2, 4, 4, 3)
hparameters = {"stride" : 2, "f": 3}
A, cache = pool_forward(A_prev, hparameters)
print("mode = max")
print("A =", A)
print()
A, cache = pool_forward(A_prev, hparameters, mode = "average")
print("mode = average")
print("A =", A)
```
**Expected Output:**
<table>
<tr>
<td>
A =
</td>
<td>
[[[[ 1.74481176 0.86540763 1.13376944]]]
[[[ 1.13162939 1.51981682 2.18557541]]]]
</td>
</tr>
<tr>
<td>
A =
</td>
<td>
[[[[ 0.02105773 -0.20328806 -0.40389855]]]
[[[-0.22154621 0.51716526 0.48155844]]]]
</td>
</tr>
</table>
Congratulations! You have now implemented the forward passes of all the layers of a convolutional network.
The remainder of this notebook is optional, and will not be graded.
## 5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED)
In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish, however, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like.
In an earlier course, when you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial and we did not derive them in lecture, but we present them briefly below.
### 5.1 - Convolutional layer backward pass
Let's start by implementing the backward pass for a CONV layer.
#### 5.1.1 - Computing dA:
This is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example:
$$ dA += \sum _{h=0} ^{n_H} \sum_{w=0} ^{n_W} W_c \times dZ_{hw} \tag{1}$$
Where $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that each time, we multiply the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed with a different a_slice. Therefore when computing the backprop for dA, we are just adding the gradients of all the a_slices.
In code, inside the appropriate for-loops, this formula translates into:
```python
da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]
```
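As a sanity check on the broadcasting in the line above, here is a minimal sketch (with made-up dimensions, not the assignment's values) showing that the scalar `dZ[i, h, w, c]` simply scales the whole `(f, f, n_C_prev)` filter slice, so the product can be added to the matching window of `da_prev_pad`:

```python
import numpy as np

# Hypothetical dimensions, chosen only to illustrate the broadcasting:
# a 2x2 filter, 3 input channels, 8 filters.
f, n_C_prev, n_C = 2, 3, 8
W = np.random.randn(f, f, n_C_prev, n_C)
dZ_entry = 0.5  # stands in for the scalar dZ[i, h, w, c]

# W[:, :, :, c] has shape (f, f, n_C_prev); multiplying by a scalar keeps
# that shape, so it lines up with the (f, f, n_C_prev) window being updated.
update = W[:, :, :, 0] * dZ_entry
print(update.shape)  # (2, 2, 3)
```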
#### 5.1.2 - Computing dW:
This is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss:
$$ dW_c += \sum _{h=0} ^{n_H} \sum_{w=0} ^ {n_W} a_{slice} \times dZ_{hw} \tag{2}$$
Where $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{hw}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$.
In code, inside the appropriate for-loops, this formula translates into:
```python
dW[:,:,:,c] += a_slice * dZ[i, h, w, c]
```
#### 5.1.3 - Computing db:
This is the formula for computing $db$ with respect to the cost for a certain filter $W_c$:
$$ db = \sum_h \sum_w dZ_{hw} \tag{3}$$
As you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost.
In code, inside the appropriate for-loops, this formula translates into:
```python
db[:,:,:,c] += dZ[i, h, w, c]
```
**Exercise**: Implement the `conv_backward` function below. You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above.
```
def conv_backward(dZ, cache):
"""
Implement the backward propagation for a convolution function
Arguments:
dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward(), output of conv_forward()
Returns:
dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),
numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
dW -- gradient of the cost with respect to the weights of the conv layer (W)
numpy array of shape (f, f, n_C_prev, n_C)
db -- gradient of the cost with respect to the biases of the conv layer (b)
numpy array of shape (1, 1, 1, n_C)
"""
### START CODE HERE ###
# Retrieve information from "cache"
(A_prev, W, b, hparameters) = cache
# Retrieve dimensions from A_prev's shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve dimensions from W's shape
(f, f, n_C_prev, n_C) = W.shape
# Retrieve information from "hparameters"
stride = hparameters['stride']
pad = hparameters['pad']
# Retrieve dimensions from dZ's shape
(m, n_H, n_W, n_C) = dZ.shape
# Initialize dA_prev, dW, db with the correct shapes
dA_prev = np.zeros_like(A_prev)
dW = np.zeros_like(W)
db = np.zeros_like(b)
# Pad A_prev and dA_prev
A_prev_pad = zero_pad(A_prev, pad)
dA_prev_pad = zero_pad(dA_prev, pad)
for i in range(m): # loop over the training examples
# select ith training example from A_prev_pad and dA_prev_pad
a_prev_pad = A_prev_pad[i]
da_prev_pad = dA_prev_pad[i]
for h in range(n_H): # loop over vertical axis of the output volume
for w in range(n_W): # loop over horizontal axis of the output volume
for c in range(n_C): # loop over the channels of the output volume
# Find the corners of the current "slice"
vert_start = h*stride
vert_end = vert_start+f
horiz_start = w*stride
horiz_end = horiz_start+f
# Use the corners to define the slice from a_prev_pad
a_slice = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end]
# Update gradients for the window and the filter's parameters using the code formulas given above
da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:, :, :, c] * dZ[i, h, w, c]
dW[:,:,:,c] += a_slice*dZ[i, h, w, c]
db[:,:,:,c] += dZ[i, h, w, c]
# Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])
dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad, :]
### END CODE HERE ###
# Making sure your output shape is correct
assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))
return dA_prev, dW, db
np.random.seed(1)
dA, dW, db = conv_backward(Z, cache_conv)
print("dA_mean =", np.mean(dA))
print("dW_mean =", np.mean(dW))
print("db_mean =", np.mean(db))
```
**Expected Output:**
<table>
<tr>
<td>
**dA_mean**
</td>
<td>
1.45243777754
</td>
</tr>
<tr>
<td>
**dW_mean**
</td>
<td>
1.72699145831
</td>
</tr>
<tr>
<td>
**db_mean**
</td>
<td>
7.83923256462
</td>
</tr>
</table>
### 5.2 - Pooling layer backward pass
Next, let's implement the backward pass for the pooling layer, starting with the MAX-POOL layer. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagate the gradient through the pooling layer in order to compute gradients for the layers that came before it.
### 5.2.1 Max pooling - backward pass
Before jumping into the backpropagation of the pooling layer, you are going to build a helper function called `create_mask_from_window()` which does the following:
$$ X = \begin{bmatrix}
1 && 3 \\
4 && 2
\end{bmatrix} \quad \rightarrow \quad M =\begin{bmatrix}
0 && 0 \\
1 && 0
\end{bmatrix}\tag{4}$$
As you can see, this function creates a "mask" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X, the other entries are False (0). You'll see later that the backward pass for average pooling will be similar to this but using a different mask.
**Exercise**: Implement `create_mask_from_window()`. This function will be helpful for pooling backward.
Hints:
- `np.max()` may be helpful. It computes the maximum of an array.
- If you have a matrix X and a scalar x: `A = (X == x)` will return a matrix A of the same size as X such that:
```
A[i,j] = True if X[i,j] == x
A[i,j] = False if X[i,j] != x
```
- Here, you don't need to consider cases where there are several maxima in a matrix.
```
def create_mask_from_window(x):
"""
Creates a mask from an input matrix x, to identify the max entry of x.
Arguments:
x -- Array of shape (f, f)
Returns:
mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.
"""
### START CODE HERE ### (≈1 line)
mask = (x == np.max(x))
### END CODE HERE ###
return mask
np.random.seed(1)
x = np.random.randn(2,3)
mask = create_mask_from_window(x)
print('x = ', x)
print("mask = ", mask)
```
**Expected Output:**
<table>
<tr>
<td>
**x =**
</td>
<td>
[[ 1.62434536 -0.61175641 -0.52817175] <br>
[-1.07296862 0.86540763 -2.3015387 ]]
</td>
</tr>
<tr>
<td>
**mask =**
</td>
<td>
[[ True False False] <br>
[False False False]]
</td>
</tr>
</table>
Why do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will "propagate" the gradient back to this particular input value that had influenced the cost.
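To make this routing concrete, here is a minimal sketch (toy values, not part of the graded assignment) of how the mask sends the upstream gradient only to the position of the max:

```python
import numpy as np

# A toy 2x2 window and the upstream gradient for its single pooled output;
# this mirrors what the max-pool backward pass does for one (i, h, w, c).
window = np.array([[1.0, 3.0],
                   [4.0, 2.0]])
dA_entry = 5.0  # gradient of the cost w.r.t. the pooled output

mask = (window == np.max(window))  # True only at the position of the max
d_window = mask * dA_entry         # the gradient flows only to the max

print(d_window)
# [[0. 0.]
#  [5. 0.]]
```

Every non-max entry of the window receives a zero gradient, because changing it slightly would not have changed the pooled output.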
### 5.2.2 - Average pooling - backward pass
In max pooling, for each input window, all the "influence" on the output came from a single input value--the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this.
For example if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like:
$$ dZ = 1 \quad \rightarrow \quad dZ =\begin{bmatrix}
1/4 && 1/4 \\
1/4 && 1/4
\end{bmatrix}\tag{5}$$
This implies that each position in the $dZ$ matrix contributes equally to the output, because in the forward pass we took an average.
**Exercise**: Implement the function below to equally distribute a value dz through a matrix of dimension shape. [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ones.html)
```
def distribute_value(dz, shape):
"""
Distributes the input value in the matrix of dimension shape
Arguments:
dz -- input scalar
shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz
Returns:
a -- Array of size (n_H, n_W) for which we distributed the value of dz
"""
### START CODE HERE ###
# Retrieve dimensions from shape (≈1 line)
(n_H, n_W) = shape
# Compute the value to distribute on the matrix (≈1 line)
average = dz/(n_H * n_W)
# Create a matrix where every entry is the "average" value (≈1 line)
a = average * np.ones((n_H, n_W))
### END CODE HERE ###
return a
a = distribute_value(2, (2,2))
print('distributed value =', a)
```
**Expected Output**:
<table>
<tr>
<td>
distributed_value =
</td>
<td>
[[ 0.5 0.5]
<br>
[ 0.5 0.5]]
</td>
</tr>
</table>
### 5.2.3 Putting it together: Pooling backward
You now have everything you need to compute backward propagation on a pooling layer.
**Exercise**: Implement the `pool_backward` function in both modes (`"max"` and `"average"`). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an `if/elif` statement to see if the mode is equal to `'max'` or `'average'`. If it is equal to 'average' you should use the `distribute_value()` function you implemented above to create a matrix of the same shape as `a_slice`. Otherwise, the mode is equal to '`max`', and you will create a mask with `create_mask_from_window()` and multiply it by the corresponding value of dZ.
```
def pool_backward(dA, cache, mode = "max"):
"""
Implements the backward pass of the pooling layer
Arguments:
dA -- gradient of cost with respect to the output of the pooling layer, same shape as A
cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev
"""
### START CODE HERE ###
# Retrieve information from cache (≈1 line)
(A_prev, hparameters) = cache
# Retrieve hyperparameters from "hparameters" (≈2 lines)
stride = hparameters['stride']
f = hparameters['f']
# Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)
m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape
m, n_H, n_W, n_C = dA.shape
# Initialize dA_prev with zeros (≈1 line)
dA_prev = np.zeros_like(A_prev)
for i in range(m): # loop over the training examples
# select training example from A_prev (≈1 line)
a_prev = A_prev[i]
for h in range(n_H): # loop on the vertical axis
for w in range(n_W): # loop on the horizontal axis
for c in range(n_C): # loop over the channels (depth)
# Find the corners of the current "slice" (≈4 lines)
vert_start = h*stride
vert_end = vert_start + f
horiz_start = w*stride
horiz_end = horiz_start + f
# Compute the backward propagation in both modes.
if mode == "max":
# Use the corners and "c" to define the current slice from a_prev (≈1 line)
a_prev_slice = a_prev[vert_start:vert_end, horiz_start:horiz_end, c]
# Create the mask from a_prev_slice (≈1 line)
mask = create_mask_from_window(a_prev_slice)
# Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += dA[i, h, w, c]* mask
elif mode == "average":
# Get the value a from dA (≈1 line)
da = dA[i, h, w, c]
# Define the shape of the filter as fxf (≈1 line)
shape = (f, f)
# Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += distribute_value(da, shape)
### END CODE ###
# Making sure your output shape is correct
assert(dA_prev.shape == A_prev.shape)
return dA_prev
np.random.seed(1)
A_prev = np.random.randn(5, 5, 3, 2)
hparameters = {"stride" : 1, "f": 2}
A, cache = pool_forward(A_prev, hparameters)
dA = np.random.randn(5, 4, 2, 2)
dA_prev = pool_backward(dA, cache, mode = "max")
print("mode = max")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
print()
dA_prev = pool_backward(dA, cache, mode = "average")
print("mode = average")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
```
**Expected Output**:
mode = max:
<table>
<tr>
<td>
**mean of dA =**
</td>
<td>
0.145713902729
</td>
</tr>
<tr>
<td>
**dA_prev[1,1] =**
</td>
<td>
[[ 0. 0. ] <br>
[ 5.05844394 -1.68282702] <br>
[ 0. 0. ]]
</td>
</tr>
</table>
mode = average
<table>
<tr>
<td>
**mean of dA =**
</td>
<td>
0.145713902729
</td>
</tr>
<tr>
<td>
**dA_prev[1,1] =**
</td>
<td>
[[ 0.08485462 0.2787552 ] <br>
[ 1.26461098 -0.25749373] <br>
[ 1.17975636 -0.53624893]]
</td>
</tr>
</table>
### Congratulations!
Congratulations on completing this assignment. You now understand how convolutional neural networks work. You have implemented all the building blocks of a neural network. In the next assignment you will implement a ConvNet using TensorFlow.
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
from utilities_namespace import *
%%capture
%load_ext rpy2.ipython
%R require(ggplot2)
from helpers.notebooks import notebooks_importer
%%capture
import Breast_cancer_data as data
```
## Previously reported cancer stratification
Using the example of the well-studied breast invasive carcinoma (BRCA) cohort from TCGA, as studied and reported by the TCGA PanCancer Atlas in 2018.
```
from data_sources.pan_cancer_atlas import PanCancerAtlas
```
### PAM 50
- based on mRNA expression
- applied to breast cancer samples by (The Cancer Genome Atlas Network, 2012)
```
pam50_subtypes = PanCancerAtlas().subtypes_curated()
pam50_brca = pam50_subtypes[pam50_subtypes.cancer_type == 'BRCA']
pam50_brca.dropna(axis=1).head()
```
For a detailed description see: https://www.bioconductor.org/packages/devel/bioc/vignettes/TCGAbiolinks/inst/doc/subtypes.html
The *Subtype_Selected* column is equivalent to mRNA for BRCA:
```
assert ((pam50_brca.cancer_type + '.' + pam50_brca.subtype_mrna) == pam50_brca.subtype_selected).all()
```
While *Subtype_protein* has no variance at all:
```
pam50_brca.subtype_protein.unique()
```
There are five subtypes in the PAM 50 model:
```
pam50_brca.subtype_mrna.value_counts()
```
#### Coverage of PAM 50
There are subtypes assigned for every sample in the BRCA cohort:
```
samples = set(data.brca_expression.columns)
pam_50_samples = set(pam50_brca.pan_samplesid)
len(samples & pam_50_samples) / len(samples)
df = pam50_brca.set_index('participant')[['subtype_mrna']].unstack().reset_index()
df.drop_duplicates()[0].value_counts()
```
#### PAM 50 allows considering cancer heterogeneity
Worth noting: in some cases there is more than one subtype per participant (which is kind of obvious when considering cancer heterogeneity):
```
heterogenous = pam50_brca.groupby('participant').filter(lambda row: len(set(row.subtype_mrna)) > 1)
# ignore multiple samples (just look at the participants)
heterogenous_participants = heterogenous.drop_duplicates(subset=['participant', 'subtype_mrna'])[['participant', 'subtype_mrna']]
h = len(heterogenous_participants.participant.unique())
a = len(pam50_brca.participant.unique())
print(f'There are {h} participants (out of {a}) with heterogeneity, so about {h/a*100:.2f}%')
%%R -i heterogenous_participants -w 950 -h 400 -u px
(
ggplot(heterogenous_participants, aes(x=participant, y=subtype_mrna, fill=subtype_mrna))
+ geom_tile()
+ theme(axis.text.x=element_text(angle=90))
)
heterogenous_groups = heterogenous_participants.groupby('participant').subtype_mrna.apply(lambda subtypes: ' and '.join(sorted(set(subtypes)))).to_dict()
heterogenous_participants['grouped'] = heterogenous_participants.participant.map(heterogenous_groups)
count_fraction = heterogenous_participants.groupby('participant').subtype_mrna.apply(lambda subtypes: 1/len(set(subtypes))).to_dict()
heterogenous_participants['Count'] = heterogenous_participants.participant.map(count_fraction)
heterogenous_participants = heterogenous_participants.sort_values('grouped')
%%R -i heterogenous_participants -w 700 -h 850 -u px
library(ggalluvial)
g = (
ggplot(heterogenous_participants, aes(axis1=subtype_mrna, axis2=grouped, y=Count))
+ ggalluvial::geom_alluvium(aes(fill=subtype_mrna))
+ ggalluvial::geom_stratum(width=1/3, fill='white', color = 'grey')
+ ggfittext::geom_fit_text(stat='stratum', label.strata=T, min.size=3, width=1/4)
+ scale_x_continuous(breaks=1:2, labels = c('Subtype', 'Subtypes combination'))
+ ylab('Participants count')
#+ ggtitle('BRCA heterogeneity in TCGA (PAM 50)')
+ labs(fill='PAM 50 subtype')
+ theme(
legend.position='bottom',
legend.margin=margin(t=-.2, unit='cm'),
)
)
ggsave(file="pam50_heterogenity.png", plot=g, width=7*1.4, height=8.5*1.4, dpi=140)
g
```
### iCluster
- based on copy number, DNA methylation, mRNA and miRNA
- applied to all cancer types (pan-cancer clustering) by (Hoadley et al., 2018).
```
icluster_subtypes = PanCancerAtlas().icluster()
icluster_subtypes.icluster = 'IC' + icluster_subtypes.icluster.astype(str)
participants_with_brca = {
barcode.participant
for barcode in data.brca_expression.barcodes
}
icluster_brca = icluster_subtypes[icluster_subtypes.participant.isin(participants_with_brca)]
icluster_brca.head()
```
Some subtypes are assigned to only a few (or even single) participants. These may be misclassifications or patients with metastases (or simply multiple cancers).
```
# upper row = clusters, lower row = participant counts
HorizontalNamespace(icluster_brca.icluster.value_counts())
```
For the analysis I will exclude clusters with fewer than 3 samples (the minimum number of samples required for differential expression with the signal-to-noise ratio and t-test metrics):
```
samples_by_participant = data.brca_expression.samples_by_participant()
def sampels_count_in_cluster(participants):
return sum([
len(samples_by_participant[participant])
for participant in participants
])
sampels_in_cluster = icluster_brca.groupby('icluster').participant.apply(sampels_count_in_cluster)
# upper row = clusters, lower row = sample counts
HorizontalNamespace(sampels_in_cluster)
selecetd_clusters = set(sampels_in_cluster[sampels_in_cluster > 3].index)
icluster_brca_selected = icluster_brca[icluster_brca.icluster.isin(selecetd_clusters)]
# upper row = clusters, lower row = participant counts
HorizontalNamespace(icluster_brca_selected.icluster.value_counts())
```
#### Coverage of iCluster subtypes
The coverage is high when all subtypes are included (94.24%):
```
participants_icluster = set(icluster_brca.participant)
len(participants_with_brca & participants_icluster) / len(participants_with_brca)
```
And remains comparable (93.96%) after exclusion of the three small clusters:
```
participants_icluster_selected = set(icluster_brca_selected.participant)
len(participants_with_brca & participants_icluster_selected) / len(participants_with_brca)
```
### PARADIGM
- based on mRNA expression, copy number and pathway interaction data (NCIPID, BioCarta, Reactome),
- "PARADIGM pathway analysis followed by unsupervised consensus clustering of pathway scores that clustered samples primarily by tissue type"
- applied to gynecologic and breast cancers by (Berger et al., 2018),
Methodology extracts (open access, [CC BY-NC-ND 4.0](https://creativecommons.org/licenses/by-nc-nd/4.0/), Berger et al):
> The PARADIGM algorithm infers an integrated pathway level (IPL) for each feature that reflects the log likelihood of the probability that it is activated (vs. inactivated). PARADIGM IPLs of the 19504 features within the SuperPathway is available on Synapse (syn6171376).
> We also computed the single sample gene set enrichment (ssGSEA) score, as described by Barbie et al (Barbie et al., 2009), of the constituent pathways forming the SuperPathway structure from the PARADIGM IPL data using the GSVA package in R (Hänzelmann et al., 2013). Of the 1524 pathways obtained, only 1387 have pathway members within the interconnected SuperPathway structure; and their ssGSEA scores are available on Synapse (syn10184122).
> Consensus Clustering based on PARADIGM Inferred Pathway Activation
> Consensus clustering based on the 4876 most varying features (i.e. IPLs with variance within the highest quartile) was used to identify Pan-Gynecological subtypes implicated from shared patterns of pathway inference. Consensus clustering was implemented with the ConsensusClusterPlus package in R (Wilkerson and Hayes, 2010).
```
brca_subtypes_integrative = PanCancerAtlas().subtypes_integrative('BRCA')
# just to avoid confusion with Pan-Gyn clusters names
brca_subtypes_integrative.paradigm_clusters = 'PR' + brca_subtypes_integrative.paradigm_clusters
paradigm_brca = brca_subtypes_integrative.dropna(subset=['paradigm_clusters'])
HorizontalNamespace(paradigm_brca.paradigm_clusters.value_counts())
```
Clusters with fewer than 3 samples are (again) excluded:
```
sampels_in_cluster = paradigm_brca.groupby('paradigm_clusters').participant.apply(sampels_count_in_cluster)
selecetd_clusters = set(sampels_in_cluster[sampels_in_cluster > 3].index)
paradigm_brca_selected = paradigm_brca[paradigm_brca.paradigm_clusters.isin(selecetd_clusters)]
HorizontalNamespace(paradigm_brca_selected.paradigm_clusters.value_counts())
```
#### Coverage of PARADIGM clustering
The coverage is, again, high: 96.07%
```
participants_paradigm = set(paradigm_brca_selected.participant)
len(participants_with_brca & participants_paradigm) / len(participants_with_brca)
```
### Cluster of Cluster Assignments (CoCA) integrative analysis
> We used cluster assignments from the six major TCGA platforms (mutations, SCNA, DNA methylation, mRNA, miRNA, and protein) to perform integrated clustering across the Pan-Gyn cohort using the CoCA algorithm.
> The resulting CoCA clusters were heavily dominated by tumor type because the intrinsic gene expression patterns were lineage dependent. The association with tumor type was especially prominent in the DNA methylation, mRNA, miRNA, and protein clusters.
> Therefore, we turned to an alternative method (described next) to define subtypes that would span the Pan-Gyn tumor types and emphasize high-level similarities among them.
It seems that the CoCA clusters are not included (which seems OK, given that the related figure only made it to the supplementary files), but there are clusterings for each of the molecular platforms! So I could re-use these (but looking at BRCA only) and possibly get something interesting. Actually, there are many interesting ways of doing the consensus clustering...
But let's skip it for now, as there are more pressing matters.
### Subtypes across the Pan-Gyn Tumors
> We present molecular subtypes that illuminate commonalities and distinguishing features across the Pan-Gyn tumor types, with the potential to inform future cross-tumor-type therapies. We first identified 16 features (listed in the STAR Methods) across 1,956 samples that were either (1) currently used in the clinic for at least 1 of the 5 tumor types, or (2) identified as informative in previous TCGA gynecologic and breast cancer studies
This clustering is less granular than PAM 50, but captures similarities on a higher level (as other gynecologic tumours were considered as well). Two examples, just to give an idea of the cluster meanings:
> SCNA load was the predominant feature and produced the first division. In the low-SCNA-load group, we found two clusters, non-hypermutator (C1) and hypermutator (C2). The non-hypermutator cluster had virtually no hypermutators but had high levels of ER+, PR+, and/or AR+ samples, indicating potential susceptibility to hormone therapies. C2, the hypermutator cluster, could be further subdivided into four subclusters (clusters C2A-C2D).
Sanity check: this clustering should have five clusters: C1-C5
```
pan_gyn_clusters_brca = brca_subtypes_integrative.dropna(subset=['pan_gyn_clusters'])
HorizontalNamespace(pan_gyn_clusters_brca.pan_gyn_clusters.value_counts())
```
#### Coverage of Pan-Gyn
For the pan-gyn clusters the coverage is relatively low (79.78% of participants):
```
participants_pan_gyn = set(pan_gyn_clusters_brca.participant)
len(participants_with_brca & participants_pan_gyn) / len(participants_with_brca)
```
## Summary
Only the curated PAM 50 subtypes are sample-specific. Clusters from the other papers are assigned to participants (patients), not to samples, which kind of makes sense: once we combine information from different omics, we no longer have a single sample (unless we could match e.g. DNA/expression samples). But then the heterogeneity information is lost as well, which means that for about 10% of participants the explanations will be incomplete.
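A minimal pandas sketch (with hypothetical participant/sample IDs, not real TCGA barcodes) of how broadcasting a participant-level cluster onto samples erases sample-level heterogeneity:

```python
import pandas as pd

# Hypothetical IDs: participant P1 has two samples with different PAM 50
# subtypes, i.e. sample-level heterogeneity.
sample_level = pd.DataFrame({
    'sample': ['P1-01', 'P1-06', 'P2-01'],
    'participant': ['P1', 'P1', 'P2'],
    'subtype': ['LumA', 'Basal', 'LumB'],
})
# A participant-level stratification can store only one label per patient.
participant_level = pd.DataFrame({
    'participant': ['P1', 'P2'],
    'cluster': ['IC1', 'IC2'],
})

# Broadcasting the participant-level label onto samples gives both of P1's
# samples the same cluster, regardless of their differing subtypes.
merged = sample_level.merge(participant_level, on='participant')
print(merged[['sample', 'subtype', 'cluster']])
```

Both of P1's samples end up in the same cluster, so any per-sample subtype difference is invisible at the participant level.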
### All stratifications together
#### Participant-level
```
pam50_adapted = pam50_brca[['participant', 'subtype_mrna']].drop_duplicates()
pam50_adapted.subtype_mrna = pam50_adapted.participant.map(heterogenous_groups).fillna(pam50_adapted.subtype_mrna)
pam50_adapted.head()
considered_stratifications = {
'subtype_mrna': pam50_adapted,
'icluster': icluster_brca_selected,
'paradigm_clusters': paradigm_brca_selected,
'pan_gyn_clusters': pan_gyn_clusters_brca,
}
all_stratifications = concat([
df.rename({custering_name: 'cluster'}, axis=1)[['participant', 'cluster']].drop_duplicates().assign(group=custering_name)
for custering_name, df in considered_stratifications.items()
]).sort_values(['group', 'cluster'])
from helpers.plots.alluvium import suggest_groups_ordering, determine_order_for_clusters_in_groups
ordering_ranking = suggest_groups_ordering(all_stratifications)
ordering_ranking.head()
```
I am choosing the first one (with the additional benefit of having the curated, well-established PAM 50 on the left side)
```
order = ordering_ranking.index[0]
# clusters in the PAM 50 groups should be ordered by similarity (when possible), so as not to separate clusters that are similar.
# Globally impossible, so let's evaluate combinations to eliminate the worst cases:
from itertools import combinations, permutations
pam_50 = all_stratifications[all_stratifications.group == 'subtype_mrna']
ranked_permutations = {}
for permutation in permutations(pam_50.cluster.unique()):
score = 0
for i, current in enumerate(permutation[1:], 1):
previous = permutation[i - 1]
if previous in current.split(' and ') or current in previous.split(' and '):
score += 1
ranked_permutations[permutation] = score
chosen_pam_order = Series(ranked_permutations).sort_values(ascending=False).index[0]
counts = all_stratifications.cluster.value_counts().reset_index().rename({'index': 'group'}, axis=1)
HorizontalNamespace(**counts.set_index('group').cluster.to_dict())
ordering_method = 'by_size'
all_group_orders = (
[
*chosen_pam_order,
*[
c.group
for c in counts.itertuples()
if c.group not in chosen_pam_order
]
]
if ordering_method == 'alternative' else
list(counts.group)
)
all_stratifications.group = pd.Categorical(all_stratifications.group, ordered=True, categories=order)
assert set(all_group_orders) == set(all_stratifications.cluster.unique())
all_stratifications.cluster = pd.Categorical(all_stratifications.cluster, ordered=True, categories=all_group_orders)
corresponding_pam50 = all_stratifications[all_stratifications.group=='subtype_mrna'].set_index('participant').cluster.to_dict()
all_stratifications['corresponding_pam50'] = all_stratifications.participant.map(corresponding_pam50)
all_stratifications.to_pickle('published_brca_stratifications.pickle')
all_stratifications
all_stratifications[all_stratifications.corresponding_pam50.isnull()]
pam50_brca[pam50_brca.participant.isin(['A5EI', 'A9FZ'])]
from data_sources.tcga import TCGA
tcga = TCGA()
clinical_brca_data = tcga.clinical('BRCA')
clinical_brca_data[clinical_brca_data.participant.isin(['A5EI', 'A9FZ'])]
all_stratifications.corresponding_pam50 = all_stratifications.corresponding_pam50.fillna('Unknown')
participant_to_sample = pam50_brca[['pan_samplesid', 'participant']]
```
#### Sample-level
```
stratifications_mapped_to_samples = {}
for name, stratification in considered_stratifications.items():
stratifications_mapped_to_samples[name] = (
stratification
.merge(participant_to_sample, on='participant')
)
stratifications_mapped_to_samples['subtype_mrna'] = pam50_brca[
['pan_samplesid', 'participant', 'subtype_mrna']
]
all_stratifications_samples = concat([
df.rename({custering_name: 'cluster'}, axis=1)[['pan_samplesid', 'participant', 'cluster']].drop_duplicates().assign(group=custering_name)
for custering_name, df in stratifications_mapped_to_samples.items()
]).sort_values(['group', 'cluster'])
corresponding_pam50 = all_stratifications_samples[
all_stratifications_samples.group=='subtype_mrna'
].set_index('pan_samplesid').cluster.to_dict()
all_stratifications_samples['corresponding_pam50'] = all_stratifications_samples.pan_samplesid.map(corresponding_pam50)
assert not all_stratifications_samples.isnull().any().any()
all_stratifications_samples
```
### Summary-table
```
stratifications_summary = DataFrame([
{
'id': 'icluster',
'name': 'iCluster',
'reference': '(Hoadley et al., 2018)',
'based_on': ['CNV', 'DNA methylation', 'mRNA', 'miRNA']
},
{
'id': 'subtype_mrna',
'name': 'PAM 50',
'reference': '(TCGA Network, 2012)',
'based_on': ['mRNA']
},
{
'id': 'paradigm_clusters',
'name': 'PARADIGM',
'reference': '(Berger et al., 2018)',
'based_on': ['CNV', 'mRNA', 'Pathways (NCIPID, BioCarta, Reactome)']
},
{
'id': 'pan_gyn_clusters',
'name': 'Pan-Gyn',
'reference': '(Berger et al., 2018)',
'based_on': []
}
]).set_index('id')
```
### Sample-level comparison with PAM 50
```
heatmap = copy(all_stratifications_samples).reset_index(drop=True)
heatmap['x'] = 0
heatmap['y'] = 0
width = 20
def cluster_size(c):
return -len(c[1])
start_by_group = {}
height_by_group = {}
cluster_names = []
annotations = []
y = 0
for group, data in heatmap.groupby('group'):
start_by_group[group] = y
x = 0
for cluster, cluster_d in sorted(
data.groupby('cluster'), key=cluster_size
):
cluster_y_start = y
for corresponding_pam50_group, d in sorted(
cluster_d.groupby('corresponding_pam50'),
key=cluster_size
):
last = False
for index, row in d.iterrows():
heatmap.loc[index, 'x'] = x
heatmap.loc[index, 'y'] = y
x += 1
if x == width:
y += 1
x = 0
last = True
else:
last = False
if last:
y -= 1
cluster_h = y - cluster_y_start
y_center = cluster_y_start + (cluster_h) / 2
cluster_names.append({
'y': y_center,
'name': cluster
})
samples_n = len(cluster_d)
patients_n = len(set(cluster_d.participant))
size = 1
if samples_n < width:
label = f'{samples_n}/{patients_n}'
size = 3.5/4
elif cluster_h > 3:
label = f'{samples_n} samples\n{patients_n} participants'
size = 1.1
y_center -= 0.5
else:
label = f'{samples_n} samples / {patients_n} participants'
size = 3.6/4
annotations.append({
'y': y_center,
'label': label,
'x': min(width, samples_n) /2 - 0.5,
'group': group,
'size': size
})
x = 0
y += 2
height_by_group[group] = y - start_by_group[group]
y += 50
cluster_names = DataFrame(cluster_names)
annotations = DataFrame(annotations)
annotations
max_height = max(height_by_group.values())
for group, h in height_by_group.items():
if max_height != h:
heatmap = heatmap.append(
{
'group': group,
'corresponding_pam50': 'None',
'x': 0,
'y': start_by_group[group] + max_height,
'cluster': 'None',
'pan_samplesid': 'None',
'participant': 'None'
},
ignore_index=True
)
heatmap
names = stratifications_summary.name.to_dict()
names
def to_ordered_categories(column, order, mapping):
return pd.Categorical(
column.apply(mapping.get),
ordered=True,
categories=map(mapping.get, order)
)
annotations['stratification'] = to_ordered_categories(annotations.group, order, names)
heatmap['stratification'] = to_ordered_categories(heatmap.group, order, names)
heatmap
%%R -i heatmap -w 800 -h 800 -u px -i cluster_names -i annotations
pam50 = unique(heatmap$corresponding_pam50[heatmap$corresponding_pam50 != 'None'])
heatmap$corresponding_pam50[heatmap$corresponding_pam50 == 'None'] <- NA
g = (
ggplot(heatmap, aes(x=x, y=y))
+ facet_wrap( ~ stratification, scales='free', ncol=4)
+ geom_tile(color='white', aes(fill=corresponding_pam50))
+ theme(
legend.position='bottom',
legend.margin=margin(t=-.5, unit='cm'),
axis.text.x=element_blank(),
axis.ticks.x=element_blank(),
panel.grid.minor=element_blank(),
panel.grid.major=element_blank()
)
+ shadowtext::geom_shadowtext(
data=annotations, aes(label=label, size=4 * size),
color='white',
hjust=0.5,
vjust=0.5,
bg.color='#666666',
bg.r=0.1
)
+ scale_size_identity()
+ ggthemes::scale_fill_tableau(breaks=pam50, name='Corresponding PAM50 subtype')
+ scale_y_reverse(breaks=cluster_names$y, labels=cluster_names$name)
+ xlab('')
+ ylab('Cluster / subtype')
+ coord_cartesian(expand=F)
)
ggsave(file="stratifications_composition.png", plot=g, width=8*1.3, height=8*1.3, dpi=150)
g
```
### Participant-level comparison
```
from sklearn.metrics.cluster import v_measure_score, adjusted_rand_score
all_stratifications['stratification'] = all_stratifications.group.apply(names.get)
all_pairs = []
v_measure = []
tested_pairs = set()
for group, data in all_stratifications.groupby('stratification'):
for group_2, data_2 in all_stratifications.groupby('stratification'):
if group == group_2:
continue
pair = frozenset({group, group_2})
if pair in tested_pairs:
continue
tested_pairs.add(pair)
common_participants = list(set(data.participant) & set(data_2.participant))
c1 = data.set_index('participant').loc[common_participants].cluster
c2 = data_2.set_index('participant').loc[common_participants].cluster
score = v_measure_score(c1, c2)
rand = adjusted_rand_score(c1, c2)
v_measure.append({
'score': score,
'comparison': f'{group} vs {group_2}',
'y': max(map(len, [set(data.participant), set(data_2.participant)])),
'label': f'V-measure score: {score * 100:.2f}%\nAdjusted Rand score: {rand * 100:.2f}%'
})
all_pairs.append(
concat([data, data_2]).assign(
comparison=f'{group} vs {group_2}',
group_1=group,
group_2=group_2
)
)
v_measure = DataFrame(v_measure)
all_pairs = concat(all_pairs)
len(all_pairs.participant.unique())
%store all_pairs
counts['rank'] = counts.cluster.rank(ascending=False)
colors_order = counts.set_index('group').loc[all_pairs.cluster.cat.categories]['rank']
%%R -w 800 -h 900 -u px -i v_measure -i all_pairs -i colors_order
pal = ggthemes::tableau_color_pal("Tableau 20")
palette = colorRampPalette(pal(20))
g = (
ggplot(all_pairs, aes(x=stratification, stratum=cluster, alluvium=participant, fill=cluster))
+ ggalluvial::geom_flow(stat="alluvium", lode.guidance="leftright")
+ ggalluvial::geom_stratum()
+ facet_wrap(~ comparison, scales='free', ncol=3)
+ theme()
+ ggfittext::geom_fit_text(stat='stratum', label.strata=T, min.size=2, width=1/4, color='white', alpha=.5)
+ ggfittext::geom_fit_text(stat='stratum', label.strata=T, min.size=2, width=1/4)
+ ylab('Participants count')
+ xlab('')
+ guides(fill=F)
+ geom_text(data=v_measure, aes(label=label, x=1.5, y=y+70), vjust=0.5, inherit.aes=F)
+ scale_fill_manual(values=palette(32)[colors_order])
)
ggsave(file="side_by_side_stratification_comparison.png", plot=g, width=8*1.3, height=9*1.3, dpi=150)
g
```
Stratifications were compared with adjusted Rand Index and V-measure (sklearn) using the common subset of participants. While high values were not expected given drastically different numbers of clusters, certain similarities are apparent.
By all measures Pan-Gyn differs the most from PAM50 stratification, while PARADIGM and Pan-Gyn share the most similarities.
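The two scores share a property worth keeping in mind when reading the comparison above, illustrated here on toy labels (not the notebook's data): both are invariant to how clusters are named, so an identical partition with permuted label names still scores 1.

```python
# Standalone illustration: V-measure and adjusted Rand index score the
# partition structure, not the label names, so a relabeled copy of the
# same clustering gets a perfect score.
from sklearn.metrics.cluster import v_measure_score, adjusted_rand_score

labels_a = [0, 0, 1, 1, 2, 2]
labels_b = [1, 1, 0, 0, 2, 2]  # same partition, relabeled

print(v_measure_score(labels_a, labels_b))      # 1.0
print(adjusted_rand_score(labels_a, labels_b))  # 1.0
```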
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
# Predicting fuel efficiency: regression
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/alpha/tutorials/keras/basic_regression"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />TensorFlow.org에서 보기</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/ko/r2/tutorials/keras/basic_regression.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />구글 코랩(Colab)에서 실행하기</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/ko/r2/tutorials/keras/basic_regression.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />깃허브(GitHub) 소스 보기</a>
</td>
</table>
Note: This document was translated by the TensorFlow community. Since community translations are best-effort, there is no guarantee that they exactly match the [official English documentation](https://www.tensorflow.org/?hl=en) or reflect its latest changes.
If you can improve this translation, please send a pull request to the
[tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository.
To volunteer to translate or review documentation, fill out [this form](https://bit.ly/tf-translate)
or email
[docs@tensorflow.org](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs).
In a *regression* problem, the goal is to predict a continuous output value, such as a price or a probability. In contrast, a *classification* problem aims to select one class from a list of classes (for example, recognizing which fruit is in a picture that contains an apple or an orange).
This notebook uses the [Auto MPG](https://archive.ics.uci.edu/ml/datasets/auto+mpg) dataset to build a model that predicts the fuel efficiency of late-1970s and early-1980s automobiles. To do this, we provide the model with descriptions of cars from that period, including attributes such as cylinder count, displacement, horsepower, and curb weight.
This example uses the `tf.keras` API; see the [Keras guide](https://www.tensorflow.org/guide/keras) for details.
```
# Install the seaborn package to draw the pairplot (scatterplot matrix)
!pip install seaborn
from __future__ import absolute_import, division, print_function, unicode_literals
import pathlib
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
!pip install tensorflow==2.0.0-alpha0
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
print(tf.__version__)
```
## The Auto MPG dataset
The dataset is available from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/).
### Get the data
First, download the dataset.
```
dataset_path = keras.utils.get_file("auto-mpg.data", "https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data")
dataset_path
```
Read the data using pandas.
```
column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',
'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(dataset_path, names=column_names,
na_values = "?", comment='\t',
sep=" ", skipinitialspace=True)
dataset = raw_dataset.copy()
dataset.tail()
```
### Clean the data
The dataset contains some missing values.
```
dataset.isna().sum()
```
To keep this tutorial simple, drop the rows with missing values.
```
dataset = dataset.dropna()
```
`"Origin"` 열은 수치형이 아니고 범주형이므로 원-핫 인코딩(one-hot encoding)으로 변환하겠습니다:
```
origin = dataset.pop('Origin')
dataset['USA'] = (origin == 1)*1.0
dataset['Europe'] = (origin == 2)*1.0
dataset['Japan'] = (origin == 3)*1.0
dataset.tail()
```
### Split the data into training and test sets
Now split the dataset into a training set and a test set.
The test set is used in the final evaluation of the model.
```
train_dataset = dataset.sample(frac=0.8,random_state=0)
test_dataset = dataset.drop(train_dataset.index)
```
### Inspect the data
Take a quick look at the training set by making a scatterplot matrix of a few columns.
```
sns.pairplot(train_dataset[["MPG", "Cylinders", "Displacement", "Weight"]], diag_kind="kde")
```
Also check the overall statistics:
```
train_stats = train_dataset.describe()
train_stats.pop("MPG")
train_stats = train_stats.transpose()
train_stats
```
### Separate features from labels
Separate the target value, or "label", from the features. This label is the value the model will be trained to predict.
```
train_labels = train_dataset.pop('MPG')
test_labels = test_dataset.pop('MPG')
```
### Normalize the data
Look again at the `train_stats` block above and note how different the range of each feature is.
It is good practice to normalize features that use different scales and ranges. Although the model *might* converge without normalization, it makes training harder and produces a model that depends on the units of the inputs.
Note: we intentionally computed these statistics from the training set only, and the same statistics are also used to normalize the test set. This projects the test set into the same distribution the model was trained on.
```
def norm(x):
return (x - train_stats['mean']) / train_stats['std']
normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
```
The normalized data is what we will use to train the model.
Caution: the statistics used to normalize the inputs here (the mean and standard deviation) must be applied to any other data fed to the model, just like the one-hot encoding. That includes the test set, as well as live data once the model is used in production.
## The model
### Build the model
Let's build our model. We will use a `Sequential` model with two densely connected hidden layers, and an output layer that returns a single continuous value. The model-building steps are wrapped in a `build_model` function so it is easy to create a second model later on.
```
def build_model():
model = keras.Sequential([
layers.Dense(64, activation='relu', input_shape=[len(train_dataset.keys())]),
layers.Dense(64, activation='relu'),
layers.Dense(1)
])
optimizer = tf.keras.optimizers.RMSprop(0.001)
model.compile(loss='mse',
optimizer=optimizer,
metrics=['mae', 'mse'])
return model
model = build_model()
```
### Inspect the model
Use the `.summary` method to print a simple description of the model.
```
model.summary()
```
Now try out the model. Take a batch of `10` examples from the training data and call `model.predict` on it.
```
example_batch = normed_train_data[:10]
example_result = model.predict(example_batch)
example_result
```
It seems to be working: the result has the expected shape and type.
### Train the model
Train the model for 1,000 epochs, recording the training and validation accuracy in the `history` object.
```
# Display training progress by printing a single dot (.) for each completed epoch
class PrintDot(keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs):
if epoch % 100 == 0: print('')
print('.', end='')
EPOCHS = 1000
history = model.fit(
normed_train_data, train_labels,
epochs=EPOCHS, validation_split = 0.2, verbose=0,
callbacks=[PrintDot()])
```
Visualize the model's training progress using the statistics stored in the `history` object.
```
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
import matplotlib.pyplot as plt
def plot_history(history):
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
plt.figure(figsize=(8,12))
plt.subplot(2,1,1)
plt.xlabel('Epoch')
plt.ylabel('Mean Abs Error [MPG]')
plt.plot(hist['epoch'], hist['mae'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mae'],
label = 'Val Error')
plt.ylim([0,5])
plt.legend()
plt.subplot(2,1,2)
plt.xlabel('Epoch')
plt.ylabel('Mean Square Error [$MPG^2$]')
plt.plot(hist['epoch'], hist['mse'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mse'],
label = 'Val Error')
plt.ylim([0,20])
plt.legend()
plt.show()
plot_history(history)
```
The graph shows little improvement in the model after a few hundred epochs. Let's update the `model.fit` call to automatically stop training when the validation score stops improving. We will use an *EarlyStopping callback* that tests the training condition at the end of every epoch. If a set number of epochs elapse without improvement, training stops automatically.
You can learn more about this callback [here](https://www.tensorflow.org/versions/master/api_docs/python/tf/keras/callbacks/EarlyStopping).
```
model = build_model()
# The patience parameter is the number of epochs to check for improvement
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
history = model.fit(normed_train_data, train_labels, epochs=EPOCHS,
validation_split = 0.2, verbose=0, callbacks=[early_stop, PrintDot()])
plot_history(history)
```
The graph shows that on the validation set the average error is about +/- 2 MPG. Is that good? We'll leave that decision up to you.
Let's see how well the model generalizes by using the **test set**, which we did not use when training the model. This tells us how well the model can be expected to predict when used in the real world:
```
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=0)
print("테스트 세트의 평균 절대 오차: {:5.2f} MPG".format(mae))
```
## Make predictions
Finally, predict MPG values using samples from the test set:
```
test_predictions = model.predict(normed_test_data).flatten()
plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values [MPG]')
plt.ylabel('Predictions [MPG]')
plt.axis('equal')
plt.axis('square')
plt.xlim([0,plt.xlim()[1]])
plt.ylim([0,plt.ylim()[1]])
_ = plt.plot([-100, 100], [-100, 100])
```
It looks like our model predicts reasonably well. Let's take a look at the error distribution.
```
error = test_predictions - test_labels
plt.hist(error, bins = 25)
plt.xlabel("Prediction Error [MPG]")
_ = plt.ylabel("Count")
```
It's not quite Gaussian, but that is probably because the number of training samples is very small.
## Conclusion
This notebook introduced a few techniques for handling regression problems.
* Mean squared error (MSE) is a common loss function for regression problems (different loss functions are used for classification problems).
* Similarly, the evaluation metrics used for regression differ from those used for classification. A common regression metric is mean absolute error (MAE).
* When numeric input features have values with different ranges, each feature should be scaled independently to the same range.
* If there is not much training data, prefer a small network with few hidden layers to avoid overfitting.
* Early stopping is a useful technique to prevent overfitting.
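As a minimal illustration of the two regression metrics listed above (toy numbers, unrelated to the Auto MPG model):

```python
# MSE squares the residuals, so it penalizes large errors more heavily;
# MAE weights all residuals linearly.
import numpy as np

y_true = np.array([3.0, 5.0, 2.5])
y_pred = np.array([2.5, 5.0, 4.0])

mse = np.mean((y_true - y_pred) ** 2)   # approx. 0.833
mae = np.mean(np.abs(y_true - y_pred))  # approx. 0.667
print(mse, mae)
```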
Check of 2x6 Wood Joist Design per O86-09
E.Durham - 16-Aug-2018
```
import pint
unit = pint.UnitRegistry(system='mks')
Q = unit.Quantity
# define synonyms for common units
inch = unit.inch; mm = unit.mm; m = unit.m; kPa = unit.kPa; MPa = unit.MPa;
psi = unit.psi; kN = unit.kN; N = unit.N; ksi = unit.ksi;
dimensionless = unit.dimensionless; s = unit.second; kg = unit.kg
# Dead Load
DL_2x4 = 50 * (N/m**2) + 30 * (N/m**2) # 400 and 600 on-centre, from Wood
# Design Manual Table 11.26
DL_2x6 = 50 * (N/m**2) # 600 on-centre, ditto
DL_19 = 80 * (N/m**2) # 19 mm S-P-F sheathing, from Wood Design Manual
# Table 11.27
DL = DL_2x4 + DL_2x6 + DL_19
DL.ito(kN/m**2)
print('DL =', round(DL,2))
# Load Case
L = 2.8 * m # span of 2x6 joist, per 629768-S-A2105-ANN-0211 Rev_01,
# verified 15-Aug-2018 by EWD
LL = 2 * (kN/m**2) # Given Live Load
Tributary_width = 0.6 * m
sf_DL = 1.25
sf_LL = 1.5
w = ((sf_DL * DL)+(sf_LL * LL)) * Tributary_width # factored uniform load
w.ito(kN/m)
print('w =', round(w,2))
# Bending
M_max = (w * L**2) / 8
M_max.ito(kN*m)
print('M_max =', round(M_max,2))
# Trial Member Properties from Wood Design Manual 2010 Table 11.5
# Try 2x6 S-P-F No.1/No.2
b = 38 * mm
d = 140 * mm
A = 5.32 * 10**3 * mm**2 # Area
S_x = 124 * 10**3 * mm**3 # Section Modulus
I_x = 8.66 * 10**6 * mm**4 # Second Moment of Area
```
O86-09 5.5.4 Bending moment resistance
5.5.4.1 General
The factored bending moment resistance, $ M_r $ of sawn members shall be taken as follows:
$ M_r = \phi F_b S K_{Zb} K_L $
where
$ \phi = 0.9 $
$ F_b = f_b(K_D K_H K_{Sb} K_T) $
where
$ f_b = $ specified strength in bending, MPa
$ K_{Zb} = $ size factor in bending
$ K_L = $ lateral stability factor
```
# Determine appropriate value for Lateral Stability Factor, K_L per 5.5.4.2
print('Depth to width ratio is: ', round(d/b,2) )
```
Per 5.5.4.2.1(a) depth to width ratio less than 4:1 do not require intermediate lateral supports and K_L may be taken as unity. Lateral restraint is required at beam ends.
```
K_H = 1.0 * dimensionless # System Factor per 5.4.4
K_L = 1.0 * dimensionless # Lateral stability factor per 5.5.4.2
K_D = 1.15 * dimensionless # Load duration factor per Table 4.3.2.2 short-term
K_Sb = 1.0 * dimensionless # Service condition factor per Table 5.4.2
K_T = 1.0 * dimensionless # Treatment factor per Table 5.4.3 untreated
K_Zb = 1.4 # from Table 5.4.5
f_b = 11.8 * MPa # S-P-F No.1/No.2 from Table 5.3.1A
F_b = f_b*(K_D*K_H*K_Sb*K_T)
F_b.ito(MPa)
print('F_b =', round(F_b,2))
phi = 0.9 * dimensionless
M_r = phi*F_b*S_x*K_Zb*K_L
M_r.ito(kN*m)
print('M_r =', round(M_r,2))
M_r > M_max
```
Factored bending resistance is greater than factored load effects therefore, OK for Bending
## Check Bearing per 5.5.7
The factored compressive resistance perpendicular to grain under the effect of all factored applied loads shall be taken as $ Q_r $ as follow:
$ Q_r = \phi F_{cp} A_b K_B K_{Zcp} $
where
$ \phi = 0.8 $
$ F_{cp} = f_{cp} (K_D K_{Scp} K_T) $
where
$ f_{cp} = $ specified strength in compression perpendicular to grain, MPa
$ A_b = $ bearing area, mm^2
$ K_B = $ length of bearing factor (Clause 5.5.7.6)
$ K_{Zcp} = $ size factor for bearing (Clause 5.5.7.5)
```
Q_f = (w*L)/2
Q_f.ito(kN)
print('Q_f =', round(Q_f,2))
A_b = (38 * mm) * (20 * mm) # ASSUMED, need to verify thickness of steel
# supports
phi = 0.8 * dimensionless
f_cp = 5.3 * MPa # S-P-F No.1/No.2 from Table 5.3.1A
K_D = 1.15 * dimensionless # Load duration factor per Table 4.3.2.2
# short-term
K_B = 1.0 * dimensionless # Length of bearing factor per Table 5.5.7.6 unity
# b/c < 75mm from end
K_Scp = 1.0 * dimensionless # Service condition factor per Table 5.4.2 for
# dry service conditions
K_Zcp = 1.0 * dimensionless # Size factor for bearing per 5.5.7.5
# 38/140 << 1.0
F_cp = f_cp*(K_D*K_Scp*K_T)
F_cp.ito(MPa)
print('F_cp =', round(F_cp,2))
Q_r = phi*F_cp*A_b*K_B*K_Zcp
Q_r.ito(kN)
print('Q_r =', round(Q_r,2))
Q_r > Q_f
```
Compressive resistance perpendicular to grain exceeds factored bearing forces; therefore, OK for Bearing
## Check Shear
5.5.5 Shear Resistance
5.5.5.1 General
The factored shear resistance, $ V_r $ of sawn members shall be taken as follows:
$ V_r = \phi F_v \frac{2 A_n}{3} K_{Zv} $
where
$ \phi = 0.9 $
$ F_v = f_v (K_D K_H K_{Sv} K_T) $
where
$ f_v = $ specified strength in shear, MPa (Clause 5.3)
$ A_n = $ net area of cross-section, mm^2 (Clause 4.3.8)
$ K_{Zv} = $ size factor in shear (Clause 5.4.5)
```
V_f = (w*L)/2
print('V_f =', round(V_f,2))
phi = 0.9
A_n = A
K_D = 1.15 * dimensionless # Load duration factor per Table 4.3.2.2
# short-term
K_H = 1.0 * dimensionless # System Factor per 5.4.4
K_Sv = 1.0 * dimensionless # Service condition factor per Table 5.4.2 dry
K_T = 1.0 * dimensionless # Treatment factor per Table 5.4.3 untreated
K_Zv = 1.4 * dimensionless # from Table 5.4.5
f_v = 1.5 * MPa # S-P-F No.1/No.2 from Table 5.3.1A
F_v = f_v*(K_D*K_H*K_Sv*K_T)
F_v.ito(MPa)
print('F_v =', round(F_v,2))
V_r = phi*F_v*((2*A_n)/3)*K_Zv
V_r.ito(kN)
print('V_r =', round(V_r,2))
V_r > V_f
```
Factored shear resistance is greater than factored shear; therefore, OK for Shear
## Check Deflection
Load combination for serviceability limit state 1.0DL + 1.0LL
```
w = (DL + LL) * Tributary_width
w.ito(kN/m)
print('w =', round(w,2))
delta_limit = L/180
delta_limit.ito(mm)
print('L/180 =', round(delta_limit,1))
E = 9500 * MPa # Modulus of Elasticity per Table 5.3.1A S-P-F No.1/No.2
K_SE = 1.0 * dimensionless # Service condition factor per Table 5.4.2 dry
K_T = 1.0 * dimensionless # Treatment factor per Table 5.4.3 untreated
E_S = E*(K_SE*K_T) # per 4.5.1
E_S.ito(MPa)
E_S
delta_max = (5*w*L**4)/(384*E_S*I_x)
delta_max.ito(mm)
print('Max deflection =', round(delta_max,1))
delta_limit > delta_max
```
Elastic deflection limit is greater than elastic deflection due to service loading; therefore, OK for Deflection
*Sebastian Raschka*
last modified: 03/31/2014
<hr>
I am really looking forward to your comments and suggestions to improve and extend this tutorial! Just send me a quick note
via Twitter: [@rasbt](https://twitter.com/rasbt)
or Email: [bluewoodtree@gmail.com](mailto:bluewoodtree@gmail.com)
<hr>
### Problem Category
- Statistical Pattern Recognition
- Supervised Learning
- Parametric Learning
- Bayes Decision Theory
- Univariate data
- 2-class problem
- equal variances
- different priors
- Gaussian model (2 parameters)
- No Risk function
<hr>
<p><a name="sections"></a>
<br></p>
# Sections
<p>• <a href="#given">Given information</a><br>
• <a href="#deriving_db">Deriving the decision boundary</a><br>
• <a href="#plotting_db">Plotting the class conditional densities, posterior probabilities, and decision boundary</a><br>
• <a href="#classify_rand">Classifying some random example data</a><br>
• <a href="#emp_err">Calculating the empirical error rate</a><br>
<hr>
<p><a name="given"></a>
<br></p>
## Given information:
[<a href="#sections">back to top</a>] <br>
#### Model: continuous univariate normal (Gaussian) model for the class-conditional densities
$ p(x | \omega_j) \sim N(\mu, \sigma^2) $
$ p(x | \omega_j) \sim \frac{1}{\sqrt{2\pi\sigma^2}} \exp{ \bigg[-\frac{1}{2}\bigg( \frac{x-\mu}{\sigma}\bigg)^2 \bigg] } $
#### Prior probabilities:
$ P(\omega_1) = \frac{2}{3}, \quad P(\omega_2) = \frac{1}{3} $
#### Variances of the sample distributions
$ \sigma_1^2 = \sigma_2^2 = 1 $
#### Means of the sample distributions
$ \mu_1 = 4, \quad \mu_2 = 10 $
<br>
<p><a name="deriving_db"></a>
<br></p>
## Deriving the decision boundary
[<a href="#sections">back to top</a>] <br>
### Bayes' Rule:
$ P(\omega_j|x) = \frac{p(x|\omega_j) * P(\omega_j)}{p(x)} $
### Bayes' Decision Rule:
Decide $ \omega_1 $ if $ P(\omega_1|x) > P(\omega_2|x) $ else decide $ \omega_2 $.
<br>
$ \Rightarrow \frac{p(x|\omega_1) * P(\omega_1)}{p(x)} > \frac{p(x|\omega_2) * P(\omega_2)}{p(x)} $
We can drop $ p(x) $ since it is just a scale factor.
$ \Rightarrow p(x|\omega_1) * P(\omega_1) > p(x|\omega_2) * P(\omega_2) $
$ \Rightarrow \frac{p(x|\omega_1)}{p(x|\omega_2)} > \frac{P(\omega_2)}{P(\omega_1)} $
$ \Rightarrow \frac{p(x|\omega_1)}{p(x|\omega_2)} > \Bigg(\frac{\frac{1}{3}}{\frac{2}{3}}\Bigg) $
$ \Rightarrow \frac{p(x|\omega_1)\cdot\frac{2}{3}}{p(x|\omega_2)\cdot\frac{1}{3}} > 1 $
$ \Rightarrow \frac{2}{3}\cdot\frac{1}{\sqrt{2\pi\sigma_1^2}} \exp{ \bigg[-\frac{1}{2}\bigg( \frac{x-\mu_1}{\sigma_1}\bigg)^2 \bigg] } > \frac{1}{3}\cdot\frac{1}{\sqrt{2\pi\sigma_2^2}} \exp{ \bigg[-\frac{1}{2}\bigg( \frac{x-\mu_2}{\sigma_2}\bigg)^2 \bigg] } $
Since we have equal variances, we can drop the first term completely.
$ \Rightarrow \frac{2}{3}\cdot\exp{ \bigg[-\frac{1}{2}\bigg( \frac{x-\mu_1}{\sigma_1}\bigg)^2 \bigg] } > \frac{1}{3}\cdot\exp{ \bigg[-\frac{1}{2}\bigg( \frac{x-\mu_2}{\sigma_2}\bigg)^2 \bigg] } \quad\quad \bigg| \;ln, \quad \mu_1 = 4, \quad \mu_2 = 10, \quad \sigma=1 $
$ \Rightarrow ln(2) - ln(3) -\frac{1}{2} (x-4)^2 > ln(1) - ln(3) -\frac{1}{2} (x-10)^2 \quad \bigg| \; + ln(3), \; \cdot \; (-2) $
$ \Rightarrow -2ln(2) + (x-4)^2 < (x-10)^2 $
$ \Rightarrow x^2 - 8x + 14.6137 < x^2 - 20x + 100 $
$ \Rightarrow 12x < 85.3863 $
$ \Rightarrow x < 7.1155 $
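The last step can be verified numerically from $ 12x = 84 + 2\ln(2) $:

```python
# Numeric check of the decision boundary derived above:
# 12x = 84 + 2*ln(2)  =>  x = (84 + 2*ln(2)) / 12
import math

x_boundary = (84 + 2 * math.log(2)) / 12
print(round(x_boundary, 4))  # 7.1155
```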
<p><a name="plotting_db"></a>
<br></p>
## Plotting the class conditional densities, posterior probabilities, and decision boundary
[<a href="#sections">back to top</a>] <br>
```
%pylab inline
import numpy as np
from matplotlib import pyplot as plt
def pdf(x, mu, sigma):
"""
Calculates the normal distribution's probability density
function (PDF).
"""
term1 = 1.0 / ( np.sqrt(2*np.pi) * sigma )
term2 = np.exp( -0.5 * ( (x-mu)/sigma )**2 )
return term1 * term2
# generating some sample data
x = np.arange(0, 100, 0.05)
# probability density functions
pdf1 = pdf(x, mu=4, sigma=1)
pdf2 = pdf(x, mu=10, sigma=1)
# Class conditional densities (likelihoods)
plt.plot(x, pdf1)
plt.plot(x, pdf2)
plt.title('Class conditional densities (likelihoods)')
plt.ylabel('p(x)')
plt.xlabel('random variable x')
plt.legend(['p(x|w_1) ~ N(4,1)', 'p(x|w_2) ~ N(10,1)'], loc='upper right')
plt.ylim([0,0.5])
plt.xlim([0,20])
plt.show()
def posterior(likelihood, prior):
"""
Calculates the posterior probability (after Bayes Rule) without
the scale factor p(x) (=evidence).
"""
return likelihood * prior
# probability density functions
posterior1 = posterior(pdf(x, mu=4, sigma=1), 2/3.0)
posterior2 = posterior(pdf(x, mu=10, sigma=1), 1/3.0)
# Class conditional densities (likelihoods)
plt.plot(x, posterior1)
plt.plot(x, posterior2)
plt.title('Posterior Probabilities w. Decision Boundary')
plt.ylabel('P(w)')
plt.xlabel('random variable x')
plt.legend(['P(w_1|x)', 'P(w_2|x)'], loc='upper right')
plt.ylim([0,0.5])
plt.xlim([0,20])
plt.axvline(7.1155, color='r', alpha=0.8, linestyle=':', linewidth=2)
plt.annotate('R1', xy=(4, 0.3), xytext=(4, 0.3))
plt.annotate('R2', xy=(10, 0.3), xytext=(10, 0.3))
plt.show()
```
<p><a name="classify_rand"></a>
<br></p>
## Classifying some random example data
[<a href="#sections">back to top</a>] <br>
```
# Parameters
mu_1 = 4
mu_2 = 10
sigma_1_sqr = 1
sigma_2_sqr = 1
# Generating 10 random samples drawn from a Normal Distribution for class 1 & 2
x1_samples = sigma_1_sqr**0.5 * np.random.randn(10) + mu_1
x2_samples = sigma_2_sqr**0.5 * np.random.randn(10) + mu_2
y = [0 for i in range(10)]
# Plotting sample data with a decision boundary
plt.scatter(x1_samples, y, marker='o', color='green', s=40, alpha=0.5)
plt.scatter(x2_samples, y, marker='^', color='blue', s=40, alpha=0.5)
plt.title('Classifying random example data from 2 classes')
plt.ylabel('P(x)')
plt.xlabel('random variable x')
plt.legend(['w_1', 'w_2'], loc='upper right')
plt.ylim([-0.1,0.1])
plt.xlim([0,20])
plt.axvline(7.115, color='r', alpha=0.8, linestyle=':', linewidth=2)
plt.annotate('R1', xy=(4, 0.05), xytext=(4, 0.05))
plt.annotate('R2', xy=(10, 0.05), xytext=(10, 0.05))
plt.show()
```
<p><a name="emp_err"></a>
<br></p>
## Calculating the empirical error rate
[<a href="#sections">back to top</a>] <br>
```
w1_as_w2, w2_as_w1 = 0, 0
for x1,x2 in zip(x1_samples, x2_samples):
if x1 >= 7.115:
w1_as_w2 += 1
if x2 < 7.115:
w2_as_w1 += 1
emp_err = (w1_as_w2 + w2_as_w1) / float(len(x1_samples) + len(x2_samples))
print('Empirical Error: {}%'.format(emp_err * 100))
```
# Water classification with radar from Sentinel 1
### Background
Over 40% of the world’s population lives within 100 km of the coastline. However, coastal environments are constantly changing, with erosion and coastal change presenting a major challenge to valuable coastal infrastructure and important ecological habitats. Up-to-date data on coastal change and erosion is essential for coastal managers to be able to identify and minimise the impacts of coastal change and erosion.
### The Problem
While coastlines can be detected using optical data (demonstrated in the [Coastal Change Notebook](Coastal_erosion_monitoring.ipynb)), these images can be strongly affected by the weather, especially through the presence of clouds, which obscure the land and water below.
### Digital Earth Australia use case
Radar observations are largely unaffected by cloud cover, so can take reliable measurements of areas in any weather. Radar data is readily available from the ESA/EC Copernicus program's Sentinel 1 satellites. The two satellites provide all-weather observations, with a revisit time of 6 days. By developing a process to classify the observed pixels as either water or land, it is possible to identify the shoreline from radar data.
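As a toy sketch of that idea (the backscatter values and the -20 dB threshold below are illustrative assumptions, not values derived in this notebook), a pixel classifier can be as simple as thresholding the backscatter, since open water typically returns a low signal:

```python
# Toy threshold-based water classification on backscatter values in dB.
# Both the sample values and the -20 dB threshold are made up for
# illustration; a real threshold would be derived from the data.
import numpy as np

vh_db = np.array([-22.0, -12.5, -24.1, -11.0])
threshold = -20.0
is_water = vh_db < threshold
print(is_water)  # [ True False  True False]
```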
In this example, we use data from the Sentinel 1 satellites to build a classifier that can determine whether a pixel is water or land in radar data. The worked example takes users through the code required to:
1. Pick a study area along the coast.
1. Explore available data products and load Sentinel 1 data.
1. Visualise the returned data.
1. Perform pre-processing steps on the Sentinel 1 bands.
1. Design a classifier to distinguish land and water.
1. Apply the classifier to the study area and interpret the results.
1. Investigate how to identify change in the coastline.
### Technical details
* Products used: `s1_gamma0_geotif_scene`
* Instrument used: `C-SAR` (`VV` and `VH` polarisation). You can read more about the instrument [here](https://sentinel.esa.int/web/sentinel/technical-guides/sentinel-1-sar/sar-instrument).
* Analyses used: compositing, speckle filtering, classification, change detection
**To run this analysis, run all the cells in the notebook. When you finished the analysis, you can return to the start, modify some values (e.g. choose a different location) and re-run the analysis. Throughout the notebook, you will need to add some code of your own to complete the analysis.**
## Picking the study area
If you're running this notebook for the first time, we recommend you keep the default settings below. This will allow you to understand how the analysis works.
The example we've selected looks at part of the coastline of Melville Island, which sits off the coast of the Northern Territory, Australia. The study area also contains an additional small island, which will be useful for assessing how well radar data distinguishes between land and water.
Run the following two cells to set the latitude and longitude range, and then view the area.
```
latitude = (-11.287611, -11.085876)
longitude = (130.324262, 130.452652)
from utils.display import display_map
display_map(latitude=latitude, longitude=longitude)
```
## Loading available data
Before loading the data, we'll need to import the Open Data Cube library and load the `Datacube` class.
```
import datacube
dc = datacube.Datacube(app='sentinel-1-water-classifier')
```
### Specify product information
You'll also need to specify the data product we want to load. We'll be working with the `s1_gamma0_geotif_scene` product. The relevant information can be stored in a Python dictionary, which we'll pass to the `dc.load()` function later.
```
product_information = {
'product': "s1_gamma0_geotif_scene",
'output_crs': "EPSG:4326",
'resolution': (0.00013557119,0.00013557119)
}
```
### Specify latitude and longitude information
We can specify the latitude and longitude bounds of our area using the variables we defined earlier in the notebook.
```
area_information = {
'latitude': latitude,
'longitude': longitude
}
```
### Load Data
You might have noticed that we defined the product and area information a little differently than we did in other notebooks. Above, we specified the information in two dictionaries, which the `dc.load()` function can access by including `**` before the name of each dictionary, as demonstrated in the next cell.
```
dataset = dc.load(**product_information, **area_information)
```
If the load was successful, running the next cell should return the `xarray` summary of the dataset. Make a note of the dimensions and data variables, as you'll need these during the data preparation and analysis.
```
print(dataset)
```
## Visualise loaded data
Sentinel 1 data has two observations, *VV* and *VH*, which correspond to the polarisation of the light sent and received by the satellite. *VV* refers to the satellite sending out vertically-polarised light and receiving vertically-polarised light back, whereas *VH* refers to the satellite sending out vertically-polarised light and receiving horizontally-polarised light back. These two bands can tell us different information about the area we're studying.
Before running any plotting commands, we'll load the *matplotlib* library in the cell below, along with the *numpy* library. We'll also make use of the in-built plotting functions from *xarray*.
*Note that we take the base-10 logarithm of the bands before plotting them such that we work in units of decibels (dB) rather than digital number (DN).*
```
import matplotlib.pyplot as plt
import numpy as np
```
### Visualise VH bands
```
# Plot all VH observations for the year
converted_vh = np.log10(dataset.vh) # Scale to plot data in decibels
converted_vh.plot(cmap="Greys_r", col="time", col_wrap=5)
plt.show()
# Plot the average of all VH observations
mean_converted_vh = converted_vh.mean(dim="time")
fig = plt.figure(figsize=(7,9))
mean_converted_vh.plot(cmap="Greys_r")
plt.title("Average VH")
plt.show()
```
What key differences do you notice between each individual observation and the mean?
### Visualise VV bands
We've provided two empty cells for you to perform the same analysis as above, but now for the *VV* band. Try and type the code out -- it will help you get better at using the Open Data Cube library!
*Hint: You'll want to perform the same steps, but change the data variable. We've already used the `vh` variable, so you can go back and check the* `xarray` *summary to find the variable name for the VV observation.*
```
# Plot all VV observations for the year
# Plot the average of all VV observations
```
What key differences do you notice between each individual observation and the mean? What about differences between the average *VH* and *VV* bands?
Take a look back at the map image to remind yourself of the shape of the land and water of our study area. In both bands, what distinguishes the land and the water?
## Preprocessing the data through filtering
### Speckle Filtering using Lee Filter
You may have noticed that the water in the individual *VV* and *VH* images isn't a consistent colour. The distortion you're seeing is a type of noise known as speckle, which gives the images a grainy appearance. If we want to be able to easily decide whether any particular pixel is water or land, we need to reduce the chance of misinterpreting a water pixel as a land pixel due to the noise.
Speckle can be removed through filtering. If interested, you can find a technical introduction to speckle filtering [here](https://earth.esa.int/documents/653194/656796/Speckle_Filtering.pdf). For now, it is enough to know that we can filter the data using the python function defined in the next cell.
```
# Adapted from https://stackoverflow.com/questions/39785970/speckle-lee-filter-in-python
from scipy.ndimage import uniform_filter, variance
def lee_filter(da, size):
    img = da.values
    img_mean = uniform_filter(img, (size, size))
    img_sqr_mean = uniform_filter(img**2, (size, size))
    img_variance = img_sqr_mean - img_mean**2
    overall_variance = variance(img)
    img_weights = img_variance / (img_variance + overall_variance)
    img_output = img_mean + img_weights * (img - img_mean)
    return img_output
```
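To see what the filter does outside the datacube, here's a self-contained sketch (synthetic multiplicative noise on a flat scene, not the real Sentinel-1 bands) that applies the same Lee filter formulation to a plain numpy array. The filtered image should come out with noticeably lower variance:

```python
import numpy as np
from scipy.ndimage import uniform_filter, variance

def lee_filter(img, size):
    # Same formulation as above, but on a plain numpy array instead of an xarray
    img_mean = uniform_filter(img, (size, size))
    img_sqr_mean = uniform_filter(img**2, (size, size))
    img_variance = img_sqr_mean - img_mean**2
    overall_variance = variance(img)
    img_weights = img_variance / (img_variance + overall_variance)
    return img_mean + img_weights * (img - img_mean)

rng = np.random.default_rng(42)
flat_scene = np.full((100, 100), 5.0)
# Speckle is multiplicative: scale each pixel by gamma-distributed noise with mean 1
speckled = flat_scene * rng.gamma(shape=10.0, scale=0.1, size=flat_scene.shape)
smoothed = lee_filter(speckled, size=7)
print(speckled.var(), smoothed.var())  # variance drops after filtering
```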
Now that we've defined the filter, we can run it on the *VV* and *VH* data. You might have noticed that the function takes a `size` argument. This will change how blurred the image becomes after smoothing. We've picked a default value for this analysis, but you can experiment with this if you're interested.
```
# Set any null values to 0 before applying the filter to prevent issues
dataset_zero_filled = dataset.where(~dataset.isnull(), 0)
# Create a new entry in dataset corresponding to filtered VV and VH data
dataset["filtered_vv"] = dataset_zero_filled.vv.groupby('time').apply(lee_filter, size=7)
dataset["filtered_vh"] = dataset_zero_filled.vh.groupby('time').apply(lee_filter, size=7)
```
### Visualise Filtered VH bands
```
# Plot all filtered VH observations for the year
converted_filtered_vh = np.log10(dataset.filtered_vh) # Scale to plot data in decibels
converted_filtered_vh.plot(cmap="Greys_r", col="time", col_wrap=5)
plt.show()
# Plot the average of all filtered VH observations
mean_converted_filtered_vh = converted_filtered_vh.mean(dim="time")
fig = plt.figure(figsize=(7,9))
mean_converted_filtered_vh.plot(cmap="Greys_r")
plt.title("Average filtered VH")
plt.show()
```
### Visualise Filtered VV bands
```
# Plot all filtered VV observations for the year
# Plot the average of all filtered VV observations
```
Now that you've finished filtering the data, compare the plots before and after; you should be able to notice the impact of the filtering. If you're having trouble spotting it, it's more noticeable in the VH band.
### Plotting VH and VV histograms
Another way to observe the impact of filtering is to view histograms of the pixel values before and after filtering. Try running the next two cells to view the histograms for *VH* and *VV*.
```
fig = plt.figure(figsize=(15,3))
_ = np.log10(dataset.filtered_vh).plot.hist(bins = 1000, label="VH filtered")
_ = np.log10(dataset.vh).plot.hist(bins=1000, label="VH", alpha=.5)
plt.legend()
plt.title("Comparison of filtered VH bands to original")
plt.show()
fig = plt.figure(figsize=(15,3))
_ = np.log10(dataset.filtered_vv).plot.hist(bins=1000, label="VV filtered")
_ = np.log10(dataset.vv).plot.hist(bins=1000, label="VV", alpha=.5)
plt.legend()
plt.title("Comparison of filtered VV bands to original")
plt.show()
```
You may have noticed that both the original and filtered bands show two peaks in the histogram, which we can classify as a bimodal distribution. Looking back at the band images, it's clear that the water pixels generally have lower *VH* and *VV* values than the land pixels. This lets us conclude that the lower distribution corresponds to water pixels and the higher distribution corresponds to land pixels. Importantly, the act of filtering has made it clear that the two distributions can be separated, which is especially obvious in the *VH* histogram. This allows us to confidently say that pixel values below a certain threshold are water, and pixel values above it are land. This will form the basis for our classifier in the next section.
# Designing a threshold-based water classifier
Given that the distinction between the `land` and `water` pixel value distributions is strongest in the *VH* band, we'll base our classifier on this distribution. To separate them, we can choose a threshold: pixels with values below the threshold are water, and pixels with values above the threshold are not water (land).
There are a number of ways to determine the threshold; one is to estimate it by looking at the *VH* histogram. From this, we might guess that $\text{threshold} = -2.0$ is a reasonable value. Run the cell below to set the threshold.
```
threshold = -2.0
```
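Eyeballing the histogram works, but if you'd like a data-driven alternative (not used in this notebook), Otsu's method picks the cut that maximizes the between-class variance of a bimodal distribution. A minimal numpy sketch, run on synthetic dB-like values standing in for the two histogram peaks:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Return the cut maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(values, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    w = hist / hist.sum()
    w0 = np.cumsum(w)            # weight of the lower class at each candidate cut
    w1 = 1 - w0                  # weight of the upper class
    mu = np.cumsum(w * centers)  # cumulative (unnormalized) mean
    denom = np.where(w0 * w1 > 0, w0 * w1, np.inf)  # guard against empty classes
    between = (mu[-1] * w0 - mu) ** 2 / denom
    return centers[np.argmax(between)]

# Two synthetic populations standing in for the water and land peaks
rng = np.random.default_rng(0)
values = np.concatenate([rng.normal(-3.0, 0.3, 5000), rng.normal(-1.0, 0.3, 5000)])
print(otsu_threshold(values))  # lands between the peaks, close to -2
```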
The classifier separates data into two classes: data above the threshold and data below the threshold. In doing this, we assume that values of both segments correspond to the same `water` and `not water` distinctions we make visually. This can be represented with a step function:
$$ \text{water}(VH) = \left\{
\begin{array}{lr}
\text{True} & : VH < \text{threshold}\\
\text{False} & : VH \geq \text{threshold}
\end{array}
\right.\\ $$
<br>
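In code, the step function above is just a comparison on the dB-scaled values. A toy example with made-up numbers:

```python
import numpy as np

threshold = -2.0
vh_db = np.array([-3.1, -2.4, -1.8, -0.9])  # hypothetical log10-scaled VH values
water = vh_db < threshold                   # True exactly where VH < threshold
print(water)
```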
### Visualise threshold
To check if our chosen threshold reasonably divides the two distributions, we can add the threshold to the histogram plots we made earlier. Run the next two cells to view two different visualisations of this.
```
fig = plt.figure(figsize=(15,3))
plt.axvline(x=threshold, label='Threshold at {}'.format(threshold), color="red")
_ = np.log10(dataset.filtered_vh).plot.hist(bins=1000, label="VH filtered")
_ = np.log10(dataset.vh).plot.hist(bins=1000, label="VH", alpha=.5)
plt.legend()
plt.title("Histogram Comparison of filtered VH bands to original")
plt.show()
fig, ax = plt.subplots(figsize=(15,3))
_ = np.log10(dataset.filtered_vh).plot.hist(bins=1000, label="VH filtered")
ax.axvspan(xmin=threshold, xmax=-.5, alpha=0.25, color='red', label="Not Water")
ax.axvspan(xmin=-3.7, xmax=threshold, alpha=0.25, color='green', label="Water")
plt.legend()
plt.title("Effect of the classifier")
plt.show()
```
If you're curious about how changing the threshold impacts the classifier, try changing the threshold value and running the previous two cells again.
## Build and apply the classifier
Now that we know the threshold, we can write a function to only return the pixels that are classified as water. The basic steps that the function will perform are:
1. Check that the data set has a *VH* band to classify.
1. Clean the data by applying the speckle filter.
1. Convert the *VH* band measurements from digital number (DN) to decibels (dB) by taking the base-10 logarithm.
1. Find all pixels that have filtered dB values lower than the threshold; these are the `water` pixels.
1. Return a data set containing the `water` pixels.
These steps correspond to the actions taken in the function below. See if you can determine which parts of the function map to each step before you continue.
```
import numpy as np
import xarray as xr
def s1_water_classifier(ds: xr.Dataset, threshold=-2) -> xr.Dataset:
    assert "vh" in ds.data_vars, "This classifier expects a variable named `vh` expressed in DN, not dB values"
    filtered = ds.vh.groupby('time').apply(lee_filter, size=7)
    water_data_array = np.log10(filtered) < threshold
    return water_data_array.to_dataset(name="s1_water")
```
Now that we have defined the classifier function, we can apply it to the data. After you run the classifier, you'll be able to view the classified data product by running `print(dataset.s1_water)`. Try adding this line to the cell below, or add a new cell and run it there.
```
dataset["s1_water"] = s1_water_classifier(dataset).s1_water
```
### Validation with mean
We can now view the image with our classification. The classifier returns either `True` or `False` for each pixel. To detect the shoreline, we want to check which pixels are always water and which are always land. Conveniently, Python encodes `True = 1` and `False = 0`. If we plot the average classified pixel value, pixels that are always water will have an average value of `1` and pixels that are always land will have an average of `0`. Pixels that are sometimes water and sometimes land will have an average between these values.
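The averaging logic can be sketched with a tiny boolean stack (hypothetical time series for two pixels):

```python
import numpy as np

# 3 time steps x 2 pixels: the first pixel is always water, the second only once
s1_water = np.array([[True, False],
                     [True, True],
                     [True, False]])
print(s1_water.mean(axis=0))  # always-water -> 1.0, sometimes-water -> 0.33...
```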
The following cell plots the average classified pixel value over time. How might you classify the shoreline from the average classification value?
```
# Plot the mean of each classified pixel value
plt.figure(figsize=(15,12))
dataset.s1_water.mean(dim="time").plot(cmap="RdBu")
plt.title("Average classified pixel value")
plt.show()
```
#### Interpreting the mean classification
From the image above, you should be able to see that the shoreline takes on a mix of values between `0` and `1`. You can also see that our threshold has done a good job of separating the water pixels (in blue) and land pixels (in red).
### Validation with standard deviation
Given that we've identified the shoreline as the pixels that are classified sometimes as land and sometimes as water, we can also see if the standard deviation of each pixel in time is a reasonable way to determine if a pixel is shoreline or not. Similar to how we calculated and plotted the mean above, you can calculate and plot the standard deviation by using the `std` function in place of the `mean` function. Try writing the code in the next cell.
*Hint: the only things you'll need to change from the code above are the function you use and the title of the plot.*
If you'd like to see the results using a different colour-scheme, you can also try substituting `cmap = "Greys"` or `cmap = "Blues"` in place of `cmap = "RdBu"` from the previous plot.
```
# Plot the standard deviation of each classified pixel value
```
#### Interpreting the standard deviation of the classification
From the image above, you should be able to see that the land and water pixels almost always have a standard deviation of `0`, meaning they didn't change over the time we sampled. With further investigation, you could potentially turn this statistic into a new classifier to extract shoreline pixels. If you're after a challenge, have a think about how you might approach this.
An important thing to recognise is that the standard deviation might not be able to detect the difference between noise and ongoing change, since a pixel that frequently alternates between land and water (noise) could have the same standard deviation as a pixel that is land for some time, then becomes water for the remaining time (ongoing change). Consider how you might distinguish between these two different cases with the data and tools you have.
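The ambiguity is easy to demonstrate with two made-up pixel histories: a flickering (noisy) pixel and a single-transition (ongoing change) pixel can have identical standard deviations, even though only one represents real change:

```python
import numpy as np

noisy = np.array([0, 1, 0, 1, 0, 1, 0, 1])    # alternates land/water every step
changed = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # land for half the series, then water
print(noisy.std(), changed.std())  # identical standard deviations
# Counting transitions does separate them: 7 for the noisy pixel, 1 for the changed one
print(np.count_nonzero(np.diff(noisy)), np.count_nonzero(np.diff(changed)))
```

Counting transitions (or requiring runs of consecutive identical classifications) is one simple way to tell the two cases apart.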
## Detecting coastal change
The standard deviation we calculated before gives us an idea of how much the pixel has changed over the entire period of time that we looked at. It might also be interesting to look at which pixels have changed between any two particular times in our sample.
In the next cell, we choose the images to compare. Printing the dataset should show you that there are 27 time-steps, so the first has an index value of `0`, and the last has an index value of `26`. You can change these to be any numbers in between, as long as the start is earlier than the end.
```
start_time_index = 0
end_time_index = 26
```
Next, we can define the change as the difference in the classified pixel value at each point. Land becoming water will have a value of `-1` and water becoming land will have a value of `1`.
```
change = np.subtract(dataset.s1_water.isel(time=start_time_index), dataset.s1_water.isel(time=end_time_index), dtype=np.float32)
change = change.where(change != 0) # set all '0' entries to NaN, which prevents them from displaying in the plot.
dataset["change"] = change
```
Now that we've added change to the data set, you should be able to plot it below to look at which pixels changed. You can also plot the original mean *VH* composite to see how well the change matches our understanding of the shoreline location.
```
plt.figure(figsize=(15,12))
dataset.filtered_vh.mean(dim="time").plot(cmap="Blues")
dataset.change.plot(cmap="RdBu", levels=2)
plt.title('Change in pixel value between time={} and time={}'.format(start_time_index, end_time_index))
plt.show()
```
## Drawing conclusions
Here are some questions to think about:
* What are the benefits and drawbacks of the possible classification options we explored?
* How could you extend the analysis to extract a shape for the coastline?
* How reliable is our classifier?
* Is there anything you can think of that would improve it?
## Next steps
When you are done, you can return to the start of the notebook and change the latitude and longitude if you're interested in rerunning the analysis for a new location. If you're going to change the location, you'll need to make sure Sentinel 1 data is available for the new location, which you can check at the [DEA Dashboard](https://dashboard.dea-sandbox.test.frontiersi.io/s1_gamma0_geotif_scene). Once you've found an area on the dashboard, navigate to it on the interactive map at the start of the notebook, and click on the area to get the latitude and longitude. You can then define a new area from these values and re-run the map to check that you're covering the area you're interested in.
# Getting Started with TensorRT
TensorRT is an SDK for optimizing trained deep learning models to enable high-performance inference. TensorRT contains a deep learning inference __optimizer__ for trained deep learning models and an optimized __runtime__ for execution. After you have trained your deep learning model in a framework of your choice, TensorRT enables you to run it with higher throughput and lower latency.
The TensorRT ecosystem broadly breaks down into two parts:
<br><br><br>

<br><br><br>
Essentially,
1. The various paths users can follow to convert their models to optimized TensorRT engines
2. The various runtimes users can target with TensorRT when deploying their optimized TensorRT engines
If you have a model in Tensorflow or PyTorch and want to run inference as efficiently as possible - with low latency, high throughput, and less memory consumption - this guide will help you achieve just that!
## How Do I Use TensorRT:
TensorRT is a large and flexible project. It can handle a variety of workflows, and which workflow is best for you will depend on your specific use-case and problem setting. Abstractly, the process for deploying a model from a deep learning framework to TensorRT looks like this:

To help you get there, this guide will help you answer five key questions:
1. __What format should I save my model in?__
2. __What batch size(s) am I running inference at?__
3. __What precision am I running inference at?__
4. __What TensorRT path am I using to convert my model?__
5. __What runtime am I targeting?__
This guide will walk you broadly through all of these decision points while giving you an overview of your options at each step.
We could talk about these points in isolation, but they are best understood in the context of an actual end-to-end workflow. Let's get started on a simple one here, using a TensorRT API wrapper written for this guide. Once you understand the basic workflow, you can dive into the more in-depth notebooks on the TF-TRT and ONNX converters!
## Simple TensorRT Demonstration through ONNX:
There are several ways of approaching TensorRT conversion and deployment. Here, we will take a pretrained ResNet50 model, convert it to an optimized TensorRT engine, and run it in the TensorRT runtime.
For this simple demonstration we will focus on the ONNX path - one of the two main automatic approaches for TensorRT conversion. We will then run the model in the TensorRT Python API using a simplified wrapper written for this guide. Essentially, we will follow this path to convert and deploy our model:

We will follow the five questions above. For a more in-depth discussion, the section following this demonstration will cover the options available at these steps in more detail.
__IMPORTANT NOTE:__ Please __shut down all other notebooks and Tensorflow/PyTorch processes__ before running these steps. TensorRT and Tensorflow/PyTorch cannot be loaded into your Python process at the same time.
#### 1. What format should I save my model in?
The two main automatic conversion paths for TensorRT require different model formats to successfully convert a model. TF-TRT uses Tensorflow SavedModels, and the ONNX path requires models be saved in ONNX. Here, we will use ONNX.
We are going to use ResNet50 - a basic backbone vision model that can be used for a variety of purposes. For the sake of demonstration, here we will perform classification using a __pretrained ResNet50 ONNX__ model included with the [ONNX model zoo](https://github.com/onnx/models).
We can download a pretrained ResNet50 from the ONNX model zoo and untar it by doing the following:
```
!wget https://s3.amazonaws.com/download.onnx/models/opset_8/resnet50.tar.gz -O resnet50.tar.gz
!tar xzf resnet50.tar.gz
```
See how to export ONNX models that will work with this same trtexec command in the [Tensorflow through ONNX notebook](./3.%20Using%20Tensorflow%202%20through%20ONNX.ipynb), and in the [PyTorch through ONNX notebook](./4.%20Using%20PyTorch%20through%20ONNX.ipynb).
#### 2. Which batch size(s) will I use?
Batch size can have a large effect on the optimizations TensorRT performs on our model. When using ONNX, we need to tell TensorRT what batch size to expect. Additionally, we need to tell TensorRT whether to expect a fixed batch size, or a range of batch sizes.
TensorRT is capable of handling the batch size dynamically if you don’t know until runtime what exact batch size you will need. That said, a fixed batch size allows TensorRT to make additional optimizations. For this example workflow, we use a fixed batch size of 32.
We set the batch size when we save our model (see [the Tensorflow through ONNX notebook](./3.%20Using%20Tensorflow%202%20through%20ONNX.ipynb)), and we tell TensorRT to expect a fixed batch size by setting the _--explicitBatch_ flag in our __trtexec__ command when converting our model below.
```
import numpy as np
BATCH_SIZE=32
```
#### 3. What precision will I use?
Inference typically requires less numeric precision than training. With some care, lower precision can give you faster computation and lower memory consumption without sacrificing any meaningful accuracy. TensorRT supports TF32, FP32, FP16, and INT8 precisions.
FP32 is the default training precision of most frameworks, so we will start by using FP32 for inference here. Let's create a "dummy" batch to work with in order to test our model. TensorRT will use the precision of the input batch throughout the rest of the network by default.
```
PRECISION = np.float32
dummy_input_batch = np.zeros((BATCH_SIZE, 224, 224, 3), dtype=PRECISION)
```
#### 4. What TensorRT path am I using to convert my model?
The ONNX conversion path is one of the most universal and performant paths for automatic TensorRT conversion. It works for Tensorflow, PyTorch, and many other frameworks. There are several tools to help users convert models from ONNX to a TensorRT engine.
One common approach is to use trtexec - a command line tool included with TensorRT that can, among other things, convert ONNX models to TensorRT engines and profile them.
```
!trtexec --onnx=resnet50/model.onnx --saveEngine=resnet_engine_intro.trt --explicitBatch
```
__Notes on the flags above:__
Tell trtexec where to find our ONNX model:
--onnx=resnet50/model.onnx
Tell trtexec where to save our optimized TensorRT engine:
--saveEngine=resnet_engine_intro.trt
Tell trtexec to expect a fixed batch size when optimizing (the exact value of this batch size will be inferred from the ONNX file)
--explicitBatch
#### 5. What runtime will I use?
After we have our TensorRT engine created successfully, we need to decide how to run it with TensorRT.
There are two types of TensorRT runtimes: a standalone runtime which has C++ and Python bindings, and a native integration into TensorFlow. In this section, we will use a simplified wrapper (ONNXClassifierWrapper) which calls the standalone runtime.
```
# If you get an error in this cell, restart your notebook (possibly your whole machine) and do not run anything that imports/uses Tensorflow/PyTorch
from onnx_helper import ONNXClassifierWrapper
trt_model = ONNXClassifierWrapper("resnet_engine_intro.trt", [BATCH_SIZE, 1000], target_dtype = PRECISION)
```
__Note__: If this conversion fails, please restart your Jupyter notebook kernel (in menu bar Kernel->Restart Kernel) and run steps 3 to 5 again. If you get an error like 'TypeError: pybind11::init(): factory function returned nullptr' there is likely some dangling process on the GPU - restart your machine and try again.
We will feed our batch of randomized dummy data into our ONNXClassifierWrapper to run inference on that batch:
```
# Warm up:
trt_model.predict(dummy_input_batch)[0][:10] # softmax probability predictions for the first 10 classes of the first sample
```
We can get a rough sense of performance using %%timeit:
```
%%timeit
trt_model.predict(dummy_input_batch)[0][:10]
```
## Applying TensorRT to Your Model:
This is a simple example applied to a single model, but how should you go about answering these questions for your workload?
First and foremost, it is a good idea to get an understanding of what your options are, and where you can learn more about them!
### __Compatible Models:__ MLP/CNN/RNN/Transformer/Embedding/Etc
TensorRT is compatible with models consisting of [these layers](https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#layers-matrix). Using only supported layers ensures optimal performance without having to write any custom plugin code.
In terms of framework, TensorRT is integrated directly with Tensorflow - and most other major deep learning frameworks, such as PyTorch, are supported by first converting to ONNX format.
### __Conversion Methods:__ ONNX/TF-TRT/TensorRT API
The __ONNX__ path is the most performant and framework-agnostic automatic way of converting models. Its main disadvantage is that it must convert networks completely - if a network has an unsupported layer, ONNX can't convert it unless you write a custom plugin.
You can see an example of how to use TensorRT with ONNX:
- [Here](./3.%20Using%20Tensorflow%202%20through%20ONNX.ipynb) in this guide for Tensorflow
- [Here](./4.%20Using%20PyTorch%20through%20ONNX.ipynb) in this guide for PyTorch
__TF-TRT__ is a high level API for automatically converting Tensorflow models. It contains a parser, and runs inside the default Tensorflow runtime. Its ease of use and flexibility are its biggest advantages. TF-TRT can convert Tensorflow networks with unsupported layers in them - it will optimize whatever operations it can, and will leave the rest of the network alone.
You can find an example included with this guide of using TF-TRT to convert and run a model [here](./2.%20Using%20the%20Tensorflow%20TensorRT%20Integration.ipynb).
Last, there is the __TensorRT API__. The TensorRT ONNX path and TF-TRT integration both automatically convert models to TensorRT engines for you. Sometimes, however, we want to convert something complex, or have the maximum amount of control over how our TensorRT engine is created. This lets us do things like use dynamic batch dimensions outside of TF-TRT, or create custom plugins for layers that TensorRT doesn't support.
When using this approach, we create the TensorRT engine manually, operation by operation, using the TensorRT APIs available in Python and C++. This process involves building a network identical in structure to your target network using the TensorRT API, and then loading in the weights directly in the proper format. You can find more details on this [in the TensorRT documentation](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#c_topics).
### __Batch Size:__ Prioritize Latency/Prioritize Throughput, Fixed Batch Size/Dynamic Batch Size
Batch size determination is usually based on the tradeoff between throughput and latency. If you need low latency, use a low batch size. If you prefer high throughput and can accept higher latency, you can use a large batch size instead.
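As a purely illustrative model (the constants are made up, not TensorRT measurements): if each batch costs a fixed launch overhead plus a per-sample cost, latency grows linearly with batch size while throughput climbs toward an asymptote - which is exactly the tradeoff described above:

```python
# Hypothetical cost model: latency(B) = overhead + per_sample * B
overhead_ms, per_sample_ms = 2.0, 0.25

for batch in (1, 8, 32, 128):
    latency_ms = overhead_ms + per_sample_ms * batch
    throughput = batch / latency_ms * 1000  # samples per second
    print(f"batch={batch:4d}  latency={latency_ms:6.2f} ms  throughput={throughput:8.1f}/s")
```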
TensorRT has two batch size modes: __explicit__ and __dynamic__.
__Explicit batch networks__ accept a fixed predetermined batch size. Explicit batch mode is useful if you know exactly what batch size you expect - as it lets you skip the added step of specifying an optimization profile. This mode is required when converting networks through the ONNX path, as opposed to TF-TRT and the TensorRT API.
You can see an example of setting an explicit batch size in either of the ONNX notebooks listed above.
__Dynamic shape networks__ can accept a range of batch sizes. You must provide an '__optimization profile__' when using dynamic shapes in order to specify the possible range of batch sizes you expect to receive. This is required because TensorRT does a lot of batch-size-specific optimizations.
For more information on best practices regarding batching, see the [TensorRT best practices guide](https://docs.nvidia.com/deeplearning/tensorrt/best-practices/index.html#batching).
### __Precision:__ TF32/FP32/FP16/INT8
TensorRT feature support - such as precision - for NVIDIA GPUs is determined by their __compute capability__. You can check the compute capability of your card on the [NVIDIA website](https://developer.nvidia.com/cuda-gpus).
TensorRT supports different precisions depending on said compute capability. You can check what features are supported by your compute capability in the [TensorRT documentation](https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#hardware-precision-matrix).
__TF32__ is the default training precision on cards with compute capability 8.0 and higher (e.g. NVIDIA A100 and later) - use when you want to replicate your original model performance as closely as possible on cards with compute capability of 8.0 or higher.
TF32 is a precision designed to preserve the range of FP32 with the precision of FP16. In practice, this means that TF32 models train faster than FP32 models while still converging to the same accuracy. This feature is only available on newer GPUs.
__FP32__ is the default training precision on cards with compute capability of less than 8.0 (e.g. pre-NVIDIA A100) - use when you want to replicate your original model performance as closely as possible on cards with compute capability of less than 8.0.
__FP16__ is an inference focused reduced precision. It gives up some accuracy for faster models with lower latency and lower memory footprint. In practice, the accuracy loss is generally negligible in FP16 - so FP16 is a fairly safe bet in most cases for inference. Cards that are focused on deep learning training often have strong FP16 capabilities, making FP16 a great choice for GPUs that are expected to be used for both training and inference.
__INT8__ is an inference focused reduced precision. It further reduces memory requirements and latency compared to FP16. INT8 has the potential to lose more accuracy than FP16 - but TensorRT provides tools to help you quantize your network's INT8 weights to avoid this as much as possible. INT8 requires the extra step of calibrating how TensorRT should quantize your weights to integers - requiring some sample data. With careful tuning and a good calibration dataset, accuracy loss from INT8 is often minimal. This makes INT8 a great precision for lower-power environments such as those using T4 GPUs or AGX Jetson modules - both of which have strong INT8 capabilities.
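To build intuition for what INT8 quantization involves (this is a generic symmetric-scaling sketch, not TensorRT's calibration algorithm): pick a scale that maps the observed value range onto [-127, 127], round, and dequantize by multiplying back:

```python
import numpy as np

def quantize_int8(x):
    # Symmetric quantization: one scale for the whole tensor (an illustrative choice)
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.1, 1000).astype(np.float32)
q, scale = quantize_int8(weights)
dequantized = q.astype(np.float32) * scale
print(np.abs(weights - dequantized).max())  # rounding error is at most ~scale/2
```

TensorRT's calibrator instead chooses scales from sample activations to minimize information loss, which is why INT8 needs a calibration dataset.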
### __Runtime:__ TF-TRT/Python API/C++ API/TRITON
For a more in-depth discussion of these options and how they compare, see [this notebook on TensorRT Runtimes!](./Intro_Notebooks/3.Runtimes.ipynb)
## What do I do if I run into issues with conversion?
Here are several steps you can try if your model is not converting to TensorRT properly:
1. Check the logs - if you are using a tool such as trtexec to convert your model, it will tell you which layer is problematic
2. Write a custom plugin - you can find more information on it [here]().
3. Alternatively, if you are using ONNX and Tensorflow try switching to TF-TRT - it can support partial Tensorflow graph optimizations
4. Use alternative implementations of the layers or operations in question in your network definition - for example, it can be easier to use the padding argument in your convolutional layers instead of adding an explicit padding layer to the network.
5. TF-TRT can be harder to debug, but tools like graph surgeon (https://docs.nvidia.com/deeplearning/tensorrt/api/python_api/graphsurgeon/graphsurgeon.html) can help you patch specific nodes in your graph, or pull the graph apart for analysis
6. Ask on the [NVIDIA developer forums](https://forums.developer.nvidia.com/c/ai-data-science/deep-learning/tensorrt) - we have many active TensorRT experts at NVIDIA who browse the forums and can help
7. Post an issue on the [TensorRT OSS Github](https://github.com/NVIDIA/TensorRT)
## Next Steps:
You have now taken a model saved in ONNX format, converted it to an optimized TensorRT engine, and deployed it using the Python runtime. This is a great first step towards getting better performance out of your deep learning models at inference time!
Now, you can check out the remaining notebooks in this guide. See:
- [2. Using the TF-TRT Tensorflow Integration](./2.%20Using%20the%20Tensorflow%20TensorRT%20Integration.ipynb)
- [3. Using Tensorflow 2 through ONNX.ipynb](./3.%20Using%20Tensorflow%202%20through%20ONNX.ipynb)
- [4. Using PyTorch through ONNX.ipynb](./4.%20Using%20PyTorch%20through%20ONNX.ipynb)
- [5. Understanding TensorRT Runtimes.ipynb](./5.%20Understanding%20TensorRT%20Runtimes.ipynb)
<h4> Profiling </h4>
This is a great next step for further optimizing and debugging models you are working on productionizing.
You can find it here: https://docs.nvidia.com/deeplearning/tensorrt/best-practices/index.html
<h4> TRT Dev Docs </h4>
Main documentation page for the ONNX, layer builder, C++, and legacy APIs
You can find it here: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html
<h4> TRT OSS GitHub </h4>
Contains OSS TRT components, sample applications, and plugin examples
You can find it here: https://github.com/NVIDIA/TensorRT
```
import numba as nb
import numpy as np
import awkward as ak
print(f"{nb.__version__=}")
print(f"{ak.__version__=}")
@nb.njit
def make0(n):
r = np.empty((n, 4))
for i in range(n):
# simulate some work
x = np.random.rand()
y = np.random.rand()
z = np.random.rand()
t = np.random.rand()
r[i,0] = x
r[i,1] = y
r[i,2] = z
r[i,3] = t
return r
make0(3)
%timeit make0(1000)
@nb.njit
def make1(n):
r = []
for i in range(n):
# simulate some work
x = np.random.rand()
y = np.random.rand()
z = np.random.rand()
t = np.random.rand()
r.append((x, y, z, t))
return np.asarray(r)
make1(3)
%timeit make1(1000)
@nb.njit
def make2(n):
r = []
for i in range(n):
ri = []
# simulate some work
x = np.random.rand()
y = np.random.rand()
z = np.random.rand()
t = np.random.rand()
ri.append(x)
ri.append(y)
ri.append(z)
ri.append(t)
r.append(ri)
return np.asarray(r)
make2(3)
%timeit make2(1000)
@nb.njit
def make3(n):
r = []
for i in range(n):
# simulate some work
x = np.random.rand()
y = np.random.rand()
z = np.random.rand()
t = np.random.rand()
r.append(x)
r.append(y)
r.append(z)
r.append(t)
return np.array(r).reshape((n, 4))
make3(3)
%timeit make3(1000)
@nb.njit
def _make4(b, n):
for i in range(n):
x = np.random.rand()
y = np.random.rand()
z = np.random.rand()
t = np.random.rand()
b.begin_tuple(4)
b.index(0).append(x)
b.index(1).append(y)
b.index(2).append(z)
b.index(3).append(t)
b.end_tuple()
return b
def make4(n):
return _make4(ak.ArrayBuilder(), n).snapshot()
make4(3)
%timeit make4(1000)
@nb.njit
def _make5(b, n):
for i in range(n):
x = np.random.rand()
y = np.random.rand()
z = np.random.rand()
t = np.random.rand()
b.begin_list()
b.append(x)
b.append(y)
b.append(z)
b.append(t)
b.end_list()
return b
def make5(n):
return _make5(ak.ArrayBuilder(), n).snapshot()
make5(3)
%timeit make5(1000)
import timeit
import matplotlib.pyplot as plt
n = np.geomspace(1, 1000000, 20).astype(int)
ys = []
for fn in (make0, make1, make2, make3, make4, make5):
y = []
for ni in n:
m, t = timeit.Timer(lambda : fn(ni)).autorange()
y.append(t/m)
ys.append(y)
for y, label in zip(ys, ("baseline",
"list+tuple+asarray",
"list+list+asarray",
"list+array+reshape",
"ArrayBuilder+tuple",
"ArrayBuilder+list")):
plt.plot(n, np.divide(y, 1e-6), label=label)
plt.legend()
plt.xlabel("array length")
plt.ylabel("time / μs")
plt.loglog();
```
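As a sanity check on the timings above: when the per-element "work" really is just drawing random numbers, the whole array can also be produced in one vectorized call outside of numba. The function name below is mine, not part of the benchmark:

```python
import numpy as np

def make_vectorized(n):
    # Draw all n*4 values in a single call instead of looping per element
    return np.random.rand(n, 4)

r = make_vectorized(1000)
print(r.shape)  # (1000, 4)
```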
# Getting Started with Redis: Strings, Lists, and Sets
## Connecting to Redis with Python
### Basic syntax
```
import redis
client = redis.Redis()
```
## Strings
### Basic syntax
```
# Write a value to a string key
client.set(key, value)
# Read the value of a string key
client.get(key)
# Set an expiry time on a string key
client.set(key, value, ex=30)  # ex is in seconds
# If the value is a number, it can be incremented and decremented
client.incr(key, n)  # increment by n
client.decr(key, n)  # decrement by n
```
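The `ex` option makes strings a natural fit for a simple cache. Below is a minimal sketch of the cache-aside pattern; a stub class stands in for `redis.Redis()` so the example runs without a server (the stub only mimics `get`/`set` and ignores expiry, which a real server would honor):

```python
# StubClient is a stand-in for redis.Redis(); it stores values as bytes,
# matching what a real client returns, but ignores the ex (expiry) argument.
class StubClient:
    def __init__(self):
        self._data = {}
    def set(self, key, value, ex=None):
        self._data[key] = str(value).encode()
    def get(self, key):
        return self._data.get(key)

def cached_get(client, key, compute):
    value = client.get(key)
    if value is not None:
        return value.decode()          # cache hit
    result = compute()                 # cache miss: do the expensive work
    client.set(key, result, ex=30)     # cache the result for 30 seconds
    return str(result)

client = StubClient()
print(cached_get(client, 'answer', lambda: 42))  # computed, then cached
print(cached_get(client, 'answer', lambda: 99))  # served from cache: still '42'
```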
## Add a string to Redis: key kingname, value set to the current time
```
import datetime
now = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
client.set('kingname', now)
```
## Read the value of the string with key kingname
```
target_time = client.get('kingname')
# Let's check the type of the data we read back
print(f'The type of the value read back is: {type(target_time)}')
# Decode the bytes into a string
target_time_str = target_time.decode()
print(f'The value of the string is: {target_time_str}')
```
## What happens if the key does not exist?
```
not_exists_value = client.get('Iamhero')
print(f'Reading a non-existent key returns: {not_exists_value}')
# This is guaranteed to raise an error, because None has no decode()
not_exists_value.decode()
```
## Set an expiry time on a key
```
import time
now = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
client.set('kingname', now, ex=30)
value = client.get('kingname').decode()
print(f'Read immediately after inserting: {value}')
time.sleep(31)
value = client.get('kingname')  # note: do not call decode() here
print(f'Read again after 30 seconds: {value}')
```
## Increment a string whose value is a number
```
client.set('num_value', 10)  # set a string with key num_value and value 10
value = client.get('num_value').decode()
print(f'When you write a number to Redis and read it back, its type becomes: {type(value)}')
client.incr('num_value')
value = client.get('num_value').decode()
print(f'Reading again, the value is now: {value}')
client.incr('num_value', 10)
value = client.get('num_value').decode()
print(f'Reading again, the value is now: {value}')
```
## Decrement a string whose value is a number
```
client.decr('num_value')
value = client.get('num_value').decode()
print(f'Reading again, the value is now: {value}')
client.decr('num_value', 10)
value = client.get('num_value').decode()
print(f'Reading again, the value is now: {value}')
```
# Getting Started with Redis: Strings, Lists, and Sets
## Basic operations on Redis lists
### Basic syntax
```
# Push one value onto the left end of a list
client.lpush(key, 'value1')
# Push several values onto the left end
client.lpush(key, 'value1', 'value2')
# Push one or more values onto the right end
client.rpush(key, 'value1', 'value2')
# Pop a value from the left end
client.lpop(key)
# Pop a value from the right end
client.rpop(key)
# Get the length of the list
client.llen(key)
# Read values from the list without removing them (both indices are inclusive)
client.lrange(key, 1, 5)
```
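One detail worth internalizing before the examples: `lpush` combined with `rpop` behaves as a FIFO queue, while `lpush` combined with `lpop` behaves as a stack. A quick sketch with a stub client (a plain Python list stands in for the server, mirroring Redis list semantics):

```python
# Stub list client: lpush prepends each value in turn, lpop/rpop pop from
# the two ends, returning bytes (or None when empty) like a real client.
class StubListClient:
    def __init__(self):
        self._lists = {}
    def lpush(self, key, *values):
        self._lists.setdefault(key, [])
        for v in values:
            self._lists[key].insert(0, str(v).encode())
    def lpop(self, key):
        items = self._lists.get(key)
        return items.pop(0) if items else None
    def rpop(self, key):
        items = self._lists.get(key)
        return items.pop() if items else None

c = StubListClient()
c.lpush('q', 'first', 'second', 'third')  # list is now [third, second, first]
print(c.rpop('q'))  # b'first'  -> FIFO: the oldest item comes out
print(c.lpop('q'))  # b'third'  -> LIFO: the newest item comes out
```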
## Add data to a list
```
# Push one item
client.lpush('users', 'kingname')
# Push several items
client.lpush('users', 'one', 'two', 'three')
# Another way to push several items
data = ['four', 5, 'VI', '七']
client.lpush('users', *data)
# Push one item onto the right end
client.rpush('users', 8)
# Push several items onto the right end
client.rpush('users', 9, 10, 11)
```
## Pop data from a list
```
# Pop from the left end
data = client.lpop('users')
print(f'The type of the popped data is: {type(data)}')
data_str = data.decode()
print(f'The popped data is: {data_str}')
# Pop from the right end
data = client.rpop('users')
data_str = data.decode()
print(f'The popped data is: {data_str}')
```
## Get the length of a list
```
length = client.llen('users')
print(f'The list currently holds {length} items')
```
## Read data from a list without removing it
```
# Read the first 4 items (lrange indices are inclusive on both ends)
first_four_data = client.lrange('users', 0, 3)
print(f'The raw returned data is: {first_four_data}')
for data in first_four_data:
    print(data.decode())
# Read all items
all_data = client.lrange('users', 0, -1)
for data in all_data:
    print(data.decode())
# Read from the 6th-from-last to the 2nd-from-last item
last_few_data = client.lrange('users', -6, -2)
for data in last_few_data:
    print(data.decode())
```
## Basic operations on Redis sets
```
# Add data to a set
client.sadd(key, 'value1', 'value2')
# List every member of a set (use with caution on large sets)
client.smembers(key)
# Pop a random member from a set
client.spop(key)
# Check whether a value is in a set
client.sismember(key, 'value')
# Intersection of two sets
client.sinter(key1, key2)
# Union of two sets
client.sunion(key1, key2)
# Difference of two sets
client.sdiff(key1, key2)
```
## Add data to a set
```
# Add one item
result = client.sadd('account', 'kingname')
print(f'The return value of sadd is: {result}')
print(f'The type of the return value is: {type(result)}')
# Add several items as separate arguments
result = client.sadd('account', '王小二', '张小三', '李小四')
print(f'The return value of sadd is: {result}')
# Add several items from a list
account = ['Bob', 'Alice', 'Dilen']
result = client.sadd('account', *account)
print(f'The return value of sadd is: {result}')
# Add a duplicate item
result = client.sadd('account', 'kingname')
print(f'The return value of sadd is: {result}')
```
## List every member of a set
```
# List every member of the set
all_data = client.smembers('account')
print(f'The returned data: {all_data}')
for data in all_data:
print(data.decode())
```
## Pop data from a set
```
# Pop a random member from the set
data = client.spop('account')
print(f'The type of the popped data is: {type(data)}')
print(f'The popped data is: {data.decode()}')
```
## Check whether a value is in a set
```
# The value is in the set
exists = client.sismember('account', 'kingname')
print(f'If the value is in the set, this returns: {exists}')
# The value is not in the set
not_exists = client.sismember('account', 'asdfasdfadf')
print(f'If the value is not in the set, this returns: {not_exists}')
```
## Intersection, union, and difference
```
# Generate test data
data_1 = [1, 2, 3, 4, 5]
data_2 = [4, 5, 6, 7]
client.sadd('data_1', *data_1)
client.sadd('data_2', *data_2)
# Intersection
client.sinter('data_1', 'data_2')
# Union
client.sunion('data_1', 'data_2')
# Difference: data_1 - data_2
client.sdiff('data_1', 'data_2')
# Difference: data_2 - data_1
client.sdiff('data_2', 'data_1')
```
# Redis in Practice: Example Use Cases for Strings, Lists, and Sets
## Strings: a simple key-value mapping
> Note: never abuse strings for everything!
```
# Generate initial data
import redis
data = {
1001: '王一',
1002: '马二',
1003: '张三',
1004: '李四',
1005: '赵五',
1006: '朱六',
1007: '卓七',
1008: '钱八',
1009: '孙九',
1010: '周十'
}
client = redis.Redis()
for num, name in data.items():
client.set(num, name)
# Query: what is the name of the person with ID 1006?
name = client.get(1006)
print(f'The name of the person with ID 1006 is: {name.decode()}')
```
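For this kind of ID-to-name mapping, redis-py's `mget` can fetch many keys in one round trip instead of calling `get` in a loop. A sketch of the pattern, again with a dict-backed stub in place of a real server (real `mget` behaves the same way, returning `None` for missing keys; the names below are illustrative):

```python
# StubStringClient stands in for redis.Redis(); values come back as bytes,
# and missing keys come back as None, matching the real client.
class StubStringClient:
    def __init__(self, data):
        self._data = {str(k): str(v).encode() for k, v in data.items()}
    def mget(self, keys):
        return [self._data.get(str(k)) for k in keys]

client = StubStringClient({1001: 'Wang Yi', 1002: 'Ma Er', 1003: 'Zhang San'})
values = client.mget([1001, 1003, 9999])
names = [v.decode() if v is not None else None for v in values]
print(names)  # ['Wang Yi', 'Zhang San', None]
```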
## Lists: used as a queue
```
import time
import json
import random

def send_sms(phone_number):
    print(f'Sending an SMS to {phone_number}; the whole process takes about 1-2 seconds')
    time.sleep(random.randint(1, 2))
    return random.randint(0, 9) % 2 == 0  # draw a random integer from 0 to 9; an even number means the send succeeded

def read_wait_queue():
    while True:
        phone_info_bytes = client.lpop('phone_queue')
        if not phone_info_bytes:
            print('No SMS jobs in the queue; waiting')
            time.sleep(10)
            continue
        phone_info = json.loads(phone_info_bytes.decode())
        retry_times = phone_info.get('retry_times', 0)
        phone_number = phone_info['phone_number']
        result = send_sms(phone_number)
        if result:
            print(f'Phone number {phone_number}: SMS sent successfully!')
            continue
        print(f'Phone number {phone_number}: SMS failed, queueing for retry!')
        if retry_times >= 3:
            print(f'Already retried 3 times, giving up on phone number {phone_number}')
            continue
        next_phone_info = {'phone_number': phone_number, 'retry_times': retry_times + 1}
        client.rpush('phone_queue', json.dumps(next_phone_info))

read_wait_queue()
```
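The consumer above only drains `phone_queue`; a producer is assumed to exist elsewhere. A minimal sketch of what that producer might look like (the phone numbers are hypothetical; each job uses the same JSON shape the consumer expects, with `retry_times` starting at 0 implicitly):

```python
import json

# Sketch of a producer for the phone_queue consumed above.
# It rpush-es one JSON job per phone number, matching the consumer's lpop.
def enqueue_sms_jobs(client, phone_numbers):
    for number in phone_numbers:
        job = {'phone_number': number}
        client.rpush('phone_queue', json.dumps(job))

# Stub client so the sketch runs without a Redis server
class StubQueueClient:
    def __init__(self):
        self.items = []
    def rpush(self, key, value):
        self.items.append(value.encode())

stub = StubQueueClient()
enqueue_sms_jobs(stub, ['13800000000', '13900000000'])
print(len(stub.items))  # 2
print(json.loads(stub.items[0].decode())['phone_number'])  # 13800000000
```

Because the producer pushes on the right and the consumer pops on the left, jobs are processed first-in, first-out, and failed jobs re-enqueued with `rpush` go to the back of the line.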
## Sets: deduplication and multi-condition queries
```
def register(account, password):
    if client.sadd('web:account', account):
        print('Registration successful!')
        return
    print('This account has already been registered!')

register('kingname', 123456)
register('boy', 987654)
register('kingname', 5534354)
# Multi-condition filtering with sets
# Initialize the data
math = [
'王晓一',
'张小二',
'刘小三',
'钱小七',
'李小八',
'周小九'
]
computer = [
'王晓一',
'刘小三',
'马小时',
'朱小五',
'李小八',
'周小九'
]
history = [
'王晓一',
'刘小三',
'马小时',
'朱小五',
'钱小七',
'李小八',
]
english = [
'张小二',
'刘小三',
'马小时',
'朱小五',
'李小八',
'周小九'
]
client.sadd('math', *math)
client.sadd('english', *english)
client.sadd('history', *history)
client.sadd('computer', *computer)
# Students who take both math and history
student_list = client.sinter('math', 'history')
for student in student_list:
    print(student.decode())
# Students who take math, computer science, history, and English
student_list = client.sinter('math', 'history', 'computer', 'english')
for student in student_list:
    print(student.decode())
# Students who take math but not computer science
student_list = client.sdiff('math', 'computer')
for student in student_list:
    print(student.decode())
```
## Requirements
A [pip requirements file](https://pip.pypa.io/en/stable/user_guide/#requirements-files) can be found at: [/sashimdig/requirements.txt](../requirements.txt)
Notable requirements
|package |version |
|---- |----- |
|tensorflow | 0.10.0 |
| tflearn | 0.2.1 |
----
### [TFLearn installation instructions](http://tflearn.org/installation/)
Must install older tensorflow version 0.10 (NOT the latest 1.0) to work w/ `tflearn`
```
# Mac OS X, CPU only, Python 3.4 or 3.5:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/cpu/tensorflow-0.10.0-py3-none-any.whl
# Mac OS X, GPU enabled, Python 3.4 or 3.5:
$ export TF_BINARY_URL=https://storage.googleapis.com/tensorflow/mac/gpu/tensorflow-0.10.0-py3-none-any.whl
sudo -H pip3 install --upgrade $TF_BINARY_URL --ignore-installed
```
```
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib
import seaborn as sns
import matplotlib.pyplot as plt
import os
from os import getcwd
from os import listdir
from os.path import isfile, join, isdir
import skimage
from skimage import measure
from skimage import io
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss
from sklearn.preprocessing import LabelEncoder
from skimage.transform import resize
import tensorflow as tf
import tflearn
from tflearn.data_utils import shuffle
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.normalization import batch_normalization
from tflearn.layers.estimator import regression
from tflearn.data_preprocessing import ImagePreprocessing
from tflearn.data_augmentation import ImageAugmentation
def get_paths(foldNames):
paths = dict.fromkeys(foldNames)
for idx,g in enumerate(foldNames):
fileNames = [f for f in listdir(join(trainPath,g)) if isfile(join(trainPath,g, f))]
for i,f in enumerate(fileNames):
fileNames[i] = join(trainPath,g,f)
paths[g] = fileNames
return paths
def read_image(src):
"""Read and resize individual images"""
im = io.imread(src)
im = resize(im, (ROWS, COLS))
return im
```
# Setup
```
trainPath = '../data/raw/train'
testPath = '../data/raw/test_stg1'
rawdataPath = '../data/raw'
fish_classes = [f for f in listdir(trainPath) if isdir(join(trainPath, f))]
groupData = pd.DataFrame({'group': fish_classes})
fish_paths = get_paths(fish_classes)
```
# Model Parameters
```
subsample_amnt = 50 # len(files) = 3777
downsample_amnt = 2
ROWS = 128 # int(720 / downsample_amnt)
COLS = 128 # int(1280 / downsample_amnt)
CHANNELS = 3
NUM_CATEGORIES = len(fish_classes)
```
# Build x and y arrays
```
%%time
for idx,fish in enumerate(fish_classes):
    groupData.loc[idx,'num files'] = int(len(fish_paths[fish]))
files = []
Y_all = []
for fish in fish_classes:
fish_files = fish_paths[fish]
files.extend(fish_files)
y_fish = np.tile(fish, len(fish_files))
Y_all.extend(y_fish)
Y_all = np.array(Y_all)
print('Y_all: Raw')
print('Shape:', Y_all.shape)
print(Y_all, '\n\n')
# One Hot Encoding Labels
# Transform the categorical array Y_all into matrix of the same height,
# but with a boolean column for each category.
Y_all = LabelEncoder().fit_transform(Y_all)
Y_all = tflearn.data_utils.to_categorical(Y_all, NUM_CATEGORIES)
print('Y_all: One Hot Encoded')
print('Shape:', Y_all.shape)
print(Y_all, '\n\n')
```
## Sub-sample the training set
```
if isinstance(subsample_amnt, int):
from sklearn.utils import resample
files, Y_all = resample(files, Y_all,
n_samples = subsample_amnt,
replace = False)
```
## Load all training images into `X_all`
```
%%time
X_all = np.ndarray((len(files), ROWS, COLS, CHANNELS))
for i, f in enumerate(files):
im = read_image(f)
X_all[i] = im
if i%500 == 0: print('Processed {} of {}'.format(i, len(files)))
```
### View resampled image
### Test: Ensure that loaded images are correctly stored in `X_all`
```
def show_images(images,titles=None):
"""Display a list of images"""
n_ims = len(images)
if titles is None: titles = ['(%d)' % i for i in range(1,n_ims + 1)]
fig = plt.figure(figsize=(10,10))
n = 1
for image,title in zip(images,titles):
a = fig.add_subplot(1,n_ims,n) # Make subplot
plt.imshow(image, cmap='gray', interpolation='nearest')
plt.axis('off')
a.set_title(title)
# print(submission.iloc[jImage,:])
n += 1
fig.set_size_inches(np.array(fig.get_size_inches()) * n_ims)
plt.show()
# Show 5 images randomly sample from `X_all`
num_images = 5
start_idx = np.random.randint(0,len(X_all)-num_images)
end_idx = start_idx + num_images
show_images(images=X_all[start_idx:end_idx])
# # Test image with `im`
# show_images(images=[X_all[i],
# read_image(f),
# X_all[i]-im],
# titles=['X_all',
# 'read_image(f)',
# 'X_all[i]-read_image(f)'])
```
# Training data: `Y_all`, `X_train`, `Y_train`
* Split data
```
%%time
# test_size: between 0 and 1. proportion of the dataset to include in the test split
# random_state: Pseudo-random number generator state used for random sampling. How to choose this?
# stratify: this is ensuring that the split datasets are balanced, i.e. contains the same
# percentage of classes
X_train, X_valid, Y_train, Y_valid = train_test_split(X_all, Y_all,
test_size = 0.2,
random_state = 23,
stratify = Y_all)
```
# TFLEARN
## Define the model
```
# %%time
# def dnn_test1():
# #needed to run this tensorflow operation in order to build the network and subsequently
# #create the model, multiple times. Rebuilding without resetting the tf.Graph object produces
# #errors. Could also get around this issue by restarting kernel, but that's annoying.
# with tf.Graph().as_default():
# # # Real-time data preprocessing
# # img_prep = ImagePreprocessing()
# # img_prep.add_featurewise_zero_center()
# # # Convolutional network building
# # network = input_data(shape=[None, ROWS, COLS, CHANNELS],
# # data_preprocessing=img_prep)
# # input layer
# network = input_data(shape=[None, ROWS, COLS, CHANNELS])
# # hidden layers
# network = conv_2d(network, 32, 3, activation='relu', regularizer='L2')
# network = max_pool_2d(network, 2)
# network = conv_2d(network, 64, 3, activation='relu', regularizer='L2')
# network = conv_2d(network, 64, 3, activation='relu', regularizer='L2')
# network = max_pool_2d(network, 2)
# network = fully_connected(network, 512, activation='relu', regularizer='L2')
# network = dropout(network, 0.5)
# # output layer
# network = fully_connected(network, NUM_CATEGORIES, activation='softmax', regularizer='L2')
# network = regression(network,
# loss='categorical_crossentropy',
# learning_rate=0.01)
# return tflearn.DNN(network,
# tensorboard_verbose=0)
# # Define model
# model = dnn_test1()
# # Start training (apply gradient descent algorithm). Will want to specify multiple epochs
# # typically unless just testing
# # Train using classifier
# model.fit(X_train, Y_train,
# n_epoch = 20,
# shuffle = True,
# validation_set = (X_valid, Y_valid),
# show_metric = True,
# batch_size = 96)
```
### Padding
- ['Same' vs 'Valid' Padding | StackOverflow](http://stackoverflow.com/questions/37674306/what-is-the-difference-between-same-and-valid-padding-in-tf-nn-max-pool-of-t)
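The arithmetic behind the two schemes can be sketched directly: with 'valid' padding the output size is ceil((n - f + 1) / s), and with 'same' padding it is ceil(n / s), where n is the input size, f the filter size, and s the stride (function name is mine, for illustration):

```python
import math

def conv_output_size(n, f, s, padding):
    # n: input size, f: filter size, s: stride
    if padding == 'valid':
        return math.ceil((n - f + 1) / s)
    if padding == 'same':
        return math.ceil(n / s)
    raise ValueError(padding)

# A 128x128 input through a 3x3 conv, stride 1:
print(conv_output_size(128, 3, 1, 'valid'))  # 126
print(conv_output_size(128, 3, 1, 'same'))   # 128
# Followed by 2x2 max pooling, stride 2, 'same' padding:
print(conv_output_size(128, 2, 2, 'same'))   # 64
```

This is why the model below can stack conv layers with `padding='same'` without shrinking the feature maps between pooling steps.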
```
conv_2d?
tflearn.DNN?
def dnn_test():
#needed to run this tensorflow operation in order to build the network and subsequently
#create the model, multiple times. Rebuilding without resetting the tf.Graph object produces
#errors. Could also get around this issue by restarting kernel, but that's annoying.
with tf.Graph().as_default():
# Real-time data preprocessing
# img_prep = ImagePreprocessing()
# img_prep.add_featurewise_zero_center()
# Convolutional network building
# network = input_data(shape=[None, ROWS, COLS, CHANNELS],
# data_preprocessing=img_prep,
# dtype=tf.float32)
# input layer
network = input_data(shape=[None, ROWS, COLS, CHANNELS])
# hidden layers
network = conv_2d(network,
32, 3,
activation='relu', regularizer='L2')
        network = batch_normalization(network)
network = conv_2d(network, 32, 3,
activation='relu', regularizer='L2', padding='same',
weights_init='Xavier')
        network = batch_normalization(network)
network = max_pool_2d(network, 2)
network = conv_2d(network, 32, 3, activation='relu', regularizer='L2', padding='same')
        network = batch_normalization(network)
network = conv_2d(network, 32, 3, activation='relu', regularizer='L2', padding='same')
        network = batch_normalization(network)
network = max_pool_2d(network, 2)
network = conv_2d(network, 32, 3, activation='relu', regularizer='L2', padding='same')
        network = batch_normalization(network)
network = conv_2d(network, 32, 3, activation='relu', regularizer='L2', padding='same')
        network = batch_normalization(network)
network = max_pool_2d(network, 2)
network = fully_connected(network, 512, activation='relu', regularizer='L2')
network = dropout(network, 0.5)
# output layer
network = fully_connected(network, NUM_CATEGORIES, activation='softmax', regularizer='L2')
network = regression(network,
loss = 'categorical_crossentropy',
learning_rate = 0.01)
return tflearn.DNN(network,
tensorboard_verbose=0)
# Define model
model = dnn_test()
# Train using classifier
model.fit(X_train, Y_train,
n_epoch = 2,
shuffle = True,
validation_set = (X_valid, Y_valid),
show_metric = True,
batch_size = 96)
```
## Predict & save to submission file
### Load Test Data
```
%%time
# read in test photo set
test_files = [im for im in os.listdir(testPath)]
X_submit = np.ndarray((len(test_files), ROWS, COLS, CHANNELS))
for i, im in enumerate(test_files):
X_submit[i] = read_image(join(testPath,im))
#model predict
test_preds1 = model.predict(X_submit)
%%time
submissionPath = join(rawdataPath,'jfa-2.0-submission.csv')
submission = pd.DataFrame(test_preds1, columns=fish_classes)
submission.insert(0, 'image', test_files)
submission.to_csv(submissionPath, index=False)
submission.head()
def sample_prediction(jImage):
im = read_image(join(testPath, submission.image[jImage]))
plt.figure(figsize=(5, 5))
plt.imshow(im, cmap='gray', interpolation='nearest')
plt.axis('off')
plt.tight_layout()
plt.show()
foo = submission.iloc[jImage,1:]
print(foo.sort_values(ascending=False))
num_samples = 20
for i in range(num_samples):
sample_prediction(np.random.randint(low=0, high=1000))
```
## Visualization
Using [tensorboard & tflearn](http://tflearn.org/getting_started/#visualization) to visualize the loss and accuracy functions
```
# %%bash
# tensorboard --logdir='/tmp/tflearn_logs'
```
----
## HDF5 sandbox
Using HDF5 with TFLearn to save the full dataset. HDF5 is a data model, library, and file format for storing and managing data; it can handle large datasets that do not fit entirely in RAM. Note that this example just gives a quick compatibility demonstration - in practice, there is no real need to use HDF5 for a small dataset (e.g. CIFAR-10).
```
def save_h5f(h5f_filename, X_all, Y_all, X_submit):
    """Create an hdf5 dataset from the fish numpy arrays"""
    import h5py
    h5f = h5py.File(h5f_filename, 'w')
    h5f.create_dataset('X', data=X_all)
    h5f.create_dataset('Y', data=Y_all)
    h5f.create_dataset('fish_X_submit', data=X_submit)
    h5f.close()
def load_h5f(h5f_filename):
    """Load the fish hdf5 data"""
    import h5py
    h5f = h5py.File(h5f_filename, 'r')
    X_all = h5f['X'][()]
    Y_all = h5f['Y'][()]
    X_submit = h5f['fish_X_submit'][()]
    h5f.close()
    return X_all, Y_all, X_submit
fish_h5f_filename = join(rawdataPath, 'fish_data.h5')
save_h5f(fish_h5f_filename, X_all, Y_all, X_submit)
fish_h5f_filename = join(rawdataPath, 'fish_data.h5')
foo, bar, baz = load_h5f(fish_h5f_filename)
X_train, X_valid, Y_train, Y_valid = train_test_split(X_all, Y_all,
test_size =0.2,
random_state =23,
stratify =Y_all)
```
## `build_hdf5_image_dataset`
```
build_hdf5_image_dataset?
%%time
BUILD_HDF5_DATASET = False
IMAGE_SIZE = 128
VALIDATION_SPLIT = True
output_path = join(rawdataPath, 'fish_dataset_{}x{}.h5'.format(IMAGE_SIZE, IMAGE_SIZE))
input_path = join(rawdataPath, 'train')
if BUILD_HDF5_DATASET:
# Build a HDF5 dataset (only required once)
from tflearn.data_utils import build_hdf5_image_dataset
build_hdf5_image_dataset(target_path =input_path,
image_shape =(IMAGE_SIZE, IMAGE_SIZE),
mode ='folder',
output_path =output_path,
categorical_labels =True,
normalize =True)
%%time
# Load HDF5 dataset
import h5py
h5f = h5py.File(output_path, 'r')
X_all = h5f['X'][()]
Y_all = h5f['Y'][()]
# Split into
if VALIDATION_SPLIT:
X_train, X_valid, Y_train, Y_valid = train_test_split(X_all, Y_all,
test_size =0.2,
random_state =23,
stratify =Y_all)
```
## Layer Visualization
```
import tflearn
import numpy as np
import matplotlib.pyplot as plt
import six
def display_convolutions(model, layer, padding=4, filename=''):
if isinstance(layer, six.string_types):
vars = tflearn.get_layer_variables_by_name(layer)
variable = vars[0]
else:
variable = layer.W
data = model.get_weights(variable)
# N is the total number of convolutions
N = data.shape[2] * data.shape[3]
# Ensure the resulting image is square
filters_per_row = int(np.ceil(np.sqrt(N)))
# Assume the filters are square
filter_size = data.shape[0]
# Size of the result image including padding
result_size = filters_per_row * (filter_size + padding) - padding
# Initialize result image to all zeros
result = np.zeros((result_size, result_size))
# Tile the filters into the result image
filter_x = 0
filter_y = 0
for n in range(data.shape[3]):
for c in range(data.shape[2]):
if filter_x == filters_per_row:
filter_y += 1
filter_x = 0
for i in range(filter_size):
for j in range(filter_size):
result[filter_y * (filter_size + padding) + i, filter_x * (filter_size + padding) + j] = \
data[i, j, c, n]
filter_x += 1
# Normalize image to 0-1
    result_min = result.min()
    result_max = result.max()
    result = (result - result_min) / (result_max - result_min)
# Plot figure
plt.figure(figsize=(10, 10))
plt.axis('off')
plt.imshow(result, cmap='gray', interpolation='nearest')
# Save plot if filename is set
if filename != '':
plt.savefig(filename, bbox_inches='tight', pad_inches=0)
plt.show()
model
```
# Exploratory Data Analysis with Titanic dataset
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
plt.style.use('fivethirtyeight')
import warnings
warnings.filterwarnings('ignore')
# we need the line below to display graphics inline (in a notebook)
%matplotlib inline
train_data=pd.read_csv('train.csv')
test_data=pd.read_csv('test.csv')
train_data.head(10)
train_data.shape
test_data.shape
train_data.describe()
test_data.head()
train_data.isnull().sum()
sb.countplot('Survived',data=train_data)
plt.show()
```
From the above graph it is clear that not many people survived: out of the 891 passengers in the training dataset, only 342 (38.4%) survived. We will get more insight by exploring the data further.
Here we'll explore the features.
```
train_data.groupby(['Sex', 'Survived'])['Survived'].count()
```
It is clear that 233 of the 314 females survived, while only 109 of the 577 males survived. The survival ratio for females is much higher than for males, as can be seen clearly in the following graph.
```
train_data[['Sex','Survived']].groupby(['Sex']).mean().plot.bar()
sb.countplot('Sex',hue='Survived',data=train_data,)
plt.show()
```
'Sex' is a very interesting feature, isn't it? Let's explore more features.
```
# Pclass is a column name and we color by another column 'Survived'
sb.countplot('Pclass', hue='Survived', data=train_data)
plt.title('Pclass: Survived vs Dead')
plt.show()
```
Wow, that looks amazing. It is usually said that money can't buy everything, but it is clearly seen that passengers in Class 1 were given high priority during the rescue. There are more passengers in Class 3 than in Class 1 and Class 2, but only about 25% of Class 3 survived. In Class 2 the survival and non-survival rates are approximately 49% and 51%, while in Class 1 almost 68% of the passengers survived. So money and status matter here.
Let's dive into the data again to check for more interesting observations.
```
# cmap is a color map setting
pd.crosstab([train_data.Sex,train_data.Survived],train_data.Pclass,margins=True).style.background_gradient(cmap='summer_r')
sb.factorplot('Pclass', 'Survived', hue='Sex', data=train_data)
plt.show()
```
I use a factor plot and a crosstab here because they make categorical variables easy to visualize. Looking at them, it is clear that the survival rate for women in Class 1 is about 95-96%, as only 3 out of 94 women died. So it is now clearer that, irrespective of class, women were given first priority during the rescue, because the survival rate for men is very low even in Class 1.
From this, Pclass is also an important feature.
```
print('Oldest passenger was aged:', train_data['Age'].max())
print('Youngest passenger was aged:', train_data['Age'].min())
print('Average passenger age was:', train_data['Age'].mean())
f,ax=plt.subplots(1,2,figsize=(18,8))
sb.violinplot('Pclass','Age',hue='Survived',data=train_data,split=True,ax=ax[0])
ax[0].set_title('PClass and Age vs Survived')
ax[0].set_yticks(range(0,110,10))
sb.violinplot("Sex","Age", hue="Survived", data=train_data,split=True,ax=ax[1])
ax[1].set_title('Sex and Age vs Survived')
ax[1].set_yticks(range(0,110,10))
plt.show()
```
From the violin plots above, the following observations are clear:
1) The number of children increases from Class 1 to Class 3; Class 3 has more children than the other two.
2) The survival rate for children aged 10 and below is good, irrespective of class.
3) The survival rate between ages 20 and 30 is decent, and is noticeably better for women.
Now, the Age feature has 177 null values (NaN). We have to deal with them, but we can't simply fill every NaN with the mean age, because the mean is 29 and we cannot assign 29 to a child or to an elderly man. So we have to find something better.
Let's do something more interesting with the dataset by exploring further. What if I look at the 'Name' feature? It looks interesting. Let's check it.
```
train_data['Initial'] = train_data.Name.str.extract('([A-Za-z]+)\.')  # extract the title (Mr, Mrs, ...) from Name
train_data['First'] = train_data.Name.str.extract('[A-Za-z]+\. ([A-Za-z]+)')  # extract the first name
train_data.head()
train_data['First'].isnull().sum()
train_data[train_data.First.isnull()]
train_data.head()
pd.crosstab(train_data.Initial,train_data.Sex).T.style.background_gradient(cmap='summer_r')
```
There are many titles that are rare or redundant, like Mlle, Mme, Dr, etc. So I will replace them with a few common ones (Mr, Mrs, Miss, Other):
```
train_data.groupby('Initial')['Age'].mean()
train_data['Initial'].replace(['Mlle','Mme','Ms','Dr','Major','Lady','Countess',
'Jonkheer','Col','Rev','Capt','Sir','Don'],['Miss',
'Miss','Miss','Mr','Mr','Mrs','Mrs','Other','Other','Other','Mr','Mr','Mr'],inplace=True)
train_data.groupby('Initial')['Age'].mean()
train_data.loc[(train_data.Age.isnull()) & (train_data.Initial=='Mr'),'Age']=33
train_data.loc[(train_data.Age.isnull()) & (train_data.Initial=='Mrs'),'Age']=36
train_data.loc[(train_data.Age.isnull()) & (train_data.Initial=='Master'),'Age']=5
train_data.loc[(train_data.Age.isnull()) & (train_data.Initial=='Miss'),'Age']=22
train_data.loc[(train_data.Age.isnull()) & (train_data.Initial=='Other'),'Age']=46
train_data.Age.isnull().any()
f,ax=plt.subplots(1,2,figsize=(20,20))
train_data[train_data['Survived']==0].Age.plot.hist(ax=ax[0],bins=20,edgecolor='black',color='red')
ax[0].set_title('Survived = 0')
x1=list(range(0,85,5))
ax[0].set_xticks(x1)
train_data[train_data['Survived']==1].Age.plot.hist(ax=ax[1],bins=20,edgecolor='black',color='green')
x2=list(range(0,85,5))
ax[1].set_xticks(x2)
ax[1].set_title('Survived = 1')
plt.show()
```
From the above plots, I make the following observations:
(1) First priority during the rescue was given to children and women, as passengers under 5 were saved in large numbers.
(2) The oldest passenger saved was 80.
(3) Most deaths occurred in the 30-40 age range.
```
sb.factorplot('Pclass','Survived',col='Initial',data=train_data)
plt.show()
```
From the above factor plots it is clearly seen that women and children were saved irrespective of Pclass.
Let's explore some more.
# Feature: SibSp
The SibSp feature indicates whether a person was alone or with family: siblings (brother, sister, etc.)
and spouse (husband, wife).
```
pd.crosstab([train_data.SibSp],train_data.Survived).style.background_gradient('summer_r')
f,ax=plt.subplots(1,2,figsize=(20,8))
sb.barplot('SibSp','Survived', data=train_data,ax=ax[0])
ax[0].set_title('SipSp vs Survived in BarPlot')
sb.factorplot('SibSp','Survived', data=train_data,ax=ax[1])
ax[1].set_title('SibSp vs Survived in FactorPlot')
plt.close(2)
plt.show()
pd.crosstab(train_data.SibSp,train_data.Pclass).style.background_gradient('summer_r')
```
This feature holds many interesting facts. The bar plot and factor plot show that if a passenger was alone on the ship with no siblings, the survival rate is 34.5%, and the rate decreases as the number of siblings increases. This is interesting: if I had my family on board, I would try to save them instead of saving myself. But something stands out - the survival rate for families with 5-8 members is 0%. Is this because of Pclass?
Yes, it is. The crosstab shows that passengers with SibSp > 3 were all in Pclass 3, so it is evident that all the large families (> 3) in Pclass 3 died.
Those are some of the interesting facts we have observed in the Titanic dataset.
<a href="https://colab.research.google.com/github/syamkakarla98/DataScience_Head_Start/blob/master/Student_Preformance.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Student Performance in Exams
This notebook provides an in-depth analysis of the [student performance in exams at public schools](http://roycekimmons.com/tools/generated_data/exams) dataset.
## Importing Libraries
```
#!/usr/bin/env python -W ignore::DeprecationWarning
# Data Handling
import pandas as pd
import numpy as np
from itertools import combinations
# Visualization
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
from IPython.display import HTML
plt.rcParams['figure.figsize'] = (14, 8)
sns.set_style('whitegrid')
# Changing directory
%cd '/content/drive/My Drive/DataScience/Data_Science_Head_Start'
```
## Reading Data
```
df = pd.read_csv('data/StudentsPerformance.csv')
df.shape
df.info()
df.describe()
```
## Viewing data
```
df.reset_index()
df.head(10)
(df.head(20)
.style
.hide_index()
.bar(color='#70A1D7', vmin=0, subset=['math score'])
.bar(color='#FF6F61', vmin=0, subset=['reading score'])
.bar(color='mediumspringgreen', vmin=0, subset=['writing score'])
.set_caption(''))
```
### Analysis of categorical attributes
### A Bar Plot & Count Plot w.r.t **Gender**, **Race/ethnicity**, **Parental Level of Education**, **Lunch** and **Test Preparation Course**.
```
for attribute in ['gender', 'race/ethnicity', 'parental level of education', 'lunch', 'test preparation course']:
f, ax = plt.subplots(1,2)
data = df[attribute].value_counts().sort_index()
bar = sns.barplot(x = data.index, y = data, ax = ax[0], palette="Set2",)
for item in bar.get_xticklabels():
item.set_rotation(45)
ax[1].pie(data.values.tolist() , labels= [i.title() for i in data.index.tolist()], autopct='%1.1f%%',shadow=True, startangle=90);
plt.show()
```
### Distribution Plots of Numeric Attributes **Math Score**, **Reading Score** and **Writing Score**
```
for lab, col in zip(['math score', 'reading score', 'writing score'], ['tomato', 'mediumspringgreen', 'blue']):
sns.distplot(df[lab], label=lab.title(), color = col, ).set(xlabel=lab.title(), ylabel='Count')
plt.show()
```
### Relationship Between Numerical Attributes
```
for attr, col in zip(list(combinations(['math score', 'reading score', 'writing score'], 2)), ['#77DF79', '#82B3FF', '#F47C7C']):
sns.jointplot(df[attr[0]], df[attr[1]], color = col)
plt.show()
```
### Barplot between **Parent Level of education** and the student score in **Math**, **Reading** and **Writing**
```
df.groupby('parental level of education')[['math score', 'reading score', 'writing score']].mean().plot(kind = 'bar');  # list selection avoids deprecated tuple indexing
```
### Scatter plots of **Reading** vs **Writing** score by **Parental Level of Education** and **Gender**
```
cond_plot = sns.FacetGrid(data=df, col='parental level of education', hue='gender', col_wrap=3, height = 5)
cond_plot.map(sns.scatterplot, 'reading score', 'writing score' );
```
### Bar graph between **Race/Ethnicity** and **Test Preparation Course**
```
df.groupby('race/ethnicity')['test preparation course'].value_counts().plot(kind = 'bar', colormap='Set2')
plt.ylabel('Count');
```
| github_jupyter |
```
import torch
import torch.nn as nn
from torch import optim
import torch.nn.functional as F
from torch.utils.data import DataLoader,Dataset
import torchvision
import torchvision.models as tvm
from torchvision import transforms
from torchvision.datasets.folder import DatasetFolder,ImageFolder
import numpy as np
from glob import glob
from PIL import Image
import pandas as pd
import os,time,gc
from pathlib import Path
from tqdm import tqdm_notebook as tqdm
import datetime,random,string
ngpu=torch.cuda.device_count()
device = torch.device("cuda" if (torch.cuda.is_available() and ngpu > 0) else "cpu")
print("Using Pytorch Version : {} and Torchvision Version : {}. Using Device {}".format(torch.__version__,torchvision.__version__,device))
from torch import nn
ngf=128
nz= latent_dim=10
e_lim = 10
nc=3 # Number of Channels
# Fixed Architecture: Weights will be updated by Backprop.
class AdveraryGenerator(nn.Module):
def __init__(self,e_lim):
super(AdveraryGenerator, self).__init__()
self.e_lim = e_lim
self.main = nn.Sequential(
nn.ConvTranspose2d( in_channels=nz,out_channels= 1024, kernel_size=4, stride=1, padding=0, bias=False),
nn.BatchNorm2d(1024),
nn.ReLU(True),
# state size. (ngf*8) x 4 x 4
nn.ConvTranspose2d(1024, 512, 4, 2, 1, bias=False),
nn.BatchNorm2d(512),
nn.ReLU(True),
# state size. (ngf*4) x 8 x 8
nn.ConvTranspose2d( 512, 256, 4, 2, 1, bias=False),
nn.BatchNorm2d(256),
nn.ReLU(True),
# state size. (ngf*2) x 16 x 16
nn.ConvTranspose2d(256, 128, 4, 2, 2, bias=False),
nn.BatchNorm2d(128),
nn.ReLU(True),
# state size. (ngf) x 32 x 32
nn.ConvTranspose2d( 128, 64, 4, 2, 2, bias=False),
nn.BatchNorm2d(64),
nn.ReLU(True),
# state size. (nc) x 64 x 64
nn.ConvTranspose2d( 64, 3, 4, 4,4, bias=False),
nn.BatchNorm2d(3),
nn.ReLU(True),
nn.Tanh()
)
def forward(self, x):
return self.e_lim * self.main(x) # Scaling of ε
adversarygen=AdveraryGenerator(e_lim).to(device)
PATH='/home/ubuntu/data/VGG16_Results/GeneratorW_vgg16_19_EEggLg.pth'
# Load Weights
adversarygen.load_state_dict(torch.load(PATH))
adversarygen.eval()
num_adv = 640
# latent_seed = 2 * torch.zeros(num_adv, nz, 1, 1, device=device,requires_grad=True) -1 # (r1 - r2) * torch.rand(a, b) + r2
latent_seed = torch.zeros(num_adv, nz, 1, 1, device=device,requires_grad=True)
latent_seed.shape[1]
spaced=np.linspace(-1, 1, num=num_adv, endpoint=False)
for dim in range(latent_seed.shape[1]):
latent_seed = torch.zeros(num_adv, nz, 1, 1, device=device,requires_grad=True)
latent_seed[:,dim,0,0]= torch.from_numpy(spaced)
noise = adversarygen(latent_seed).permute(0,2,3,1).cpu().detach().numpy() *255/10
create_video_frm_frames(noise,filename=f'Interpolating_Dimension-{dim}.mp4')
noise.shape, noise.mean(),noise.std()
import matplotlib.pyplot as plt
%matplotlib inline
import cv2
def create_video_frm_frames(frames,filename='video.mp4'):
height, width, layers = frames[0].shape
size = (width,height)
out = cv2.VideoWriter(filename,cv2.VideoWriter_fourcc(*'DIVX'), 15, size)
for i in range(len(frames)):
out.write(frames[i].astype(np.uint8))
out.release()
i=1
create_video_frm_frames(noise,filename=f'dim_{i}.mp4')
fig, ax = plt.subplots(nrows=4, ncols=4,figsize=(15, 7), dpi=80)
n=0
for row in ax:
for col in row:
img = noise[n,:]
n+=1
col.imshow(img)
```
| github_jupyter |
```
from google.colab import drive
drive.mount('/content/drive')
path = '/content/drive/MyDrive/Research/AAAI/dataset1/first_layer_with_entropy/k_001/'
import numpy as np
import pandas as pd
import torch
import torchvision
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from matplotlib import pyplot as plt
%matplotlib inline
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
```
# Generate dataset
```
np.random.seed(12)
y = np.random.randint(0,10,5000)
idx= []
for i in range(10):
print(i,sum(y==i))
idx.append(y==i)
x = np.zeros((5000,2))
np.random.seed(12)
x[idx[0],:] = np.random.multivariate_normal(mean = [4,6.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[0]))
x[idx[1],:] = np.random.multivariate_normal(mean = [5.5,6],cov=[[0.01,0],[0,0.01]],size=sum(idx[1]))
x[idx[2],:] = np.random.multivariate_normal(mean = [4.5,4.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[2]))
x[idx[3],:] = np.random.multivariate_normal(mean = [3,3.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[3]))
x[idx[4],:] = np.random.multivariate_normal(mean = [2.5,5.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[4]))
x[idx[5],:] = np.random.multivariate_normal(mean = [3.5,8],cov=[[0.01,0],[0,0.01]],size=sum(idx[5]))
x[idx[6],:] = np.random.multivariate_normal(mean = [5.5,8],cov=[[0.01,0],[0,0.01]],size=sum(idx[6]))
x[idx[7],:] = np.random.multivariate_normal(mean = [7,6.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[7]))
x[idx[8],:] = np.random.multivariate_normal(mean = [6.5,4.5],cov=[[0.01,0],[0,0.01]],size=sum(idx[8]))
x[idx[9],:] = np.random.multivariate_normal(mean = [5,3],cov=[[0.01,0],[0,0.01]],size=sum(idx[9]))
color = ['#1F77B4','orange', 'g','brown']
name = [1,2,3,0]
for i in range(10):
if i==3:
plt.scatter(x[idx[i],0],x[idx[i],1],c=color[3],label="D_"+str(name[i]))
elif i>=4:
plt.scatter(x[idx[i],0],x[idx[i],1],c=color[3])
else:
plt.scatter(x[idx[i],0],x[idx[i],1],c=color[i],label="D_"+str(name[i]))
plt.legend()
x[idx[0]][0], x[idx[5]][5]
desired_num = 6000
mosaic_list_of_images =[]
mosaic_label = []
fore_idx=[]
for j in range(desired_num):
np.random.seed(j)
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,9)
a = []
for i in range(9):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
mosaic_list_of_images.append(a)
mosaic_label.append(fg_class)
fore_idx.append(fg_idx)
len(mosaic_list_of_images), mosaic_list_of_images[0]
```
# load mosaic data
```
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list, mosaic_label,fore_idx):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] , self.fore_idx[idx]
batch = 250
msd1 = MosaicDataset(mosaic_list_of_images[0:3000], mosaic_label[0:3000] , fore_idx[0:3000])
train_loader = DataLoader( msd1 ,batch_size= batch ,shuffle=True)
batch = 250
msd2 = MosaicDataset(mosaic_list_of_images[3000:6000], mosaic_label[3000:6000] , fore_idx[3000:6000])
test_loader = DataLoader( msd2 ,batch_size= batch ,shuffle=True)
```
# models
```
class Focus_deep(nn.Module):
'''
deep focus network averaged at zeroth layer
input : elemental data
'''
def __init__(self,inputs,output,K,d):
super(Focus_deep,self).__init__()
self.inputs = inputs
self.output = output
self.K = K
self.d = d
self.linear1 = nn.Linear(self.inputs,50, bias=False) #,self.output)
self.linear2 = nn.Linear(50,50 , bias=False)
self.linear3 = nn.Linear(50,self.output, bias=False)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.xavier_normal_(self.linear2.weight)
torch.nn.init.xavier_normal_(self.linear3.weight)
def forward(self,z):
batch = z.shape[0]
x = torch.zeros([batch,self.K],dtype=torch.float64)
y = torch.zeros([batch,50], dtype=torch.float64)
features = torch.zeros([batch,self.K,50],dtype=torch.float64)
x,y = x.to("cuda"),y.to("cuda")
features = features.to("cuda")
for i in range(self.K):
alp,ftrs = self.helper(z[:,i] ) # self.d*i:self.d*i+self.d
x[:,i] = alp[:,0]
features[:,i] = ftrs
log_x = F.log_softmax(x,dim=1)
x = F.softmax(x,dim=1) # alphas
for i in range(self.K):
x1 = x[:,i]
y = y+torch.mul(x1[:,None],features[:,i]) # self.d*i:self.d*i+self.d
return y , x , log_x
def helper(self,x):
x = self.linear1(x)
x1 = torch.tanh(x)  # F.tanh is deprecated; torch.tanh is equivalent
x = F.relu(x)
x = F.relu(self.linear2(x))
x = self.linear3(x)
#print(x1.shape)
return x,x1
class Classification_deep(nn.Module):
'''
input : elemental data
deep classification module data averaged at zeroth layer
'''
def __init__(self,inputs,output):
super(Classification_deep,self).__init__()
self.inputs = inputs
self.output = output
self.linear1 = nn.Linear(self.inputs,50)
#self.linear2 = nn.Linear(6,12)
self.linear2 = nn.Linear(50,self.output)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.zeros_(self.linear1.bias)
torch.nn.init.xavier_normal_(self.linear2.weight)
torch.nn.init.zeros_(self.linear2.bias)
def forward(self,x):
x = F.relu(self.linear1(x))
#x = F.relu(self.linear2(x))
x = self.linear2(x)
return x
# torch.manual_seed(12)
# focus_net = Focus_deep(2,1,9,2).double()
# focus_net = focus_net.to("cuda")
# focus_net.linear2.weight.shape,focus_net.linear3.weight.shape
# focus_net.linear2.weight.data[25:,:] = focus_net.linear2.weight.data[:25,:] #torch.nn.Parameter(torch.tensor([last_layer]) )
# (focus_net.linear2.weight[:25,:]== focus_net.linear2.weight[25:,:] )
# focus_net.linear3.weight.data[:,25:] = -focus_net.linear3.weight.data[:,:25] #torch.nn.Parameter(torch.tensor([last_layer]) )
# focus_net.linear3.weight
# focus_net.helper( torch.randn((5,2,2)).double().to("cuda") )
criterion = nn.CrossEntropyLoss()
def my_cross_entropy(x, y,alpha,log_alpha,k):
# log_prob = -1.0 * F.log_softmax(x, 1)
# loss = log_prob.gather(1, y.unsqueeze(1))
# loss = loss.mean()
loss = criterion(x,y)
#alpha = torch.clamp(alpha,min=1e-10)
b = -1.0* alpha * log_alpha
b = torch.mean(torch.sum(b,dim=1))
closs = loss
entropy = b
loss = (1-k)*loss + ((k)*b)
return loss,closs,entropy
def calculate_attn_loss(dataloader,what,where,criter,k):
what.eval()
where.eval()
r_loss = 0
cc_loss = 0
cc_entropy = 0
alphas = []
lbls = []
pred = []
fidices = []
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels,fidx = data
lbls.append(labels)
fidices.append(fidx)
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
avg,alpha,log_alpha = where(inputs)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
alphas.append(alpha.cpu().numpy())
#ent = np.sum(entropy(alpha.cpu().detach().numpy(), base=2, axis=1))/batch
# mx,_ = torch.max(alpha,1)
# entropy = np.mean(-np.log2(mx.cpu().detach().numpy()))
# print("entropy of batch", entropy)
#loss = (1-k)*criter(outputs, labels) + k*ent
loss,closs,entropy = my_cross_entropy(outputs,labels,alpha,log_alpha,k)
r_loss += loss.item()
cc_loss += closs.item()
cc_entropy += entropy.item()
alphas = np.concatenate(alphas,axis=0)
pred = np.concatenate(pred,axis=0)
lbls = np.concatenate(lbls,axis=0)
fidices = np.concatenate(fidices,axis=0)
#print(alphas.shape,pred.shape,lbls.shape,fidices.shape)
analysis = analyse_data(alphas,lbls,pred,fidices)
return r_loss/i,cc_loss/i,cc_entropy/i,analysis
def analyse_data(alphas,lbls,predicted,f_idx):
'''
analysis data is created here
'''
batch = len(predicted)
amth,alth,ftpt,ffpt,ftpf,ffpf = 0,0,0,0,0,0
for j in range (batch):
focus = np.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
amth +=1
else:
alth +=1
if(focus == f_idx[j] and predicted[j] == lbls[j]):
ftpt += 1
elif(focus != f_idx[j] and predicted[j] == lbls[j]):
ffpt +=1
elif(focus == f_idx[j] and predicted[j] != lbls[j]):
ftpf +=1
elif(focus != f_idx[j] and predicted[j] != lbls[j]):
ffpf +=1
#print(sum(predicted==lbls),ftpt+ffpt)
return [ftpt,ffpt,ftpf,ffpf,amth,alth]
```
# training
```
number_runs = 10
full_analysis =[]
FTPT_analysis = pd.DataFrame(columns = ["FTPT","FFPT", "FTPF","FFPF"])
k = 0.001
for n in range(number_runs):
print("--"*40)
# instantiate focus and classification Model
torch.manual_seed(n)
where = Focus_deep(2,1,9,2).double()
where.linear2.weight.data[25:,:] = where.linear2.weight.data[:25,:]
where.linear3.weight.data[:,25:] = -where.linear3.weight.data[:,:25]
where = where.double().to("cuda")
ex,_ = where.helper( torch.randn((5,2,2)).double().to("cuda"))
print(ex)
torch.manual_seed(n)
what = Classification_deep(50,3).double()
where = where.to("cuda")
what = what.to("cuda")
# instantiate optimizer
optimizer_where = optim.Adam(where.parameters(),lr =0.0005)
optimizer_what = optim.Adam(what.parameters(), lr=0.0005)
#criterion = nn.CrossEntropyLoss()
acti = []
analysis_data = []
loss_curi = []
epochs = 4000
# calculate zeroth epoch loss and FTPT values
running_loss ,_,_,anlys_data= calculate_attn_loss(train_loader,what,where,criterion,k)
loss_curi.append(running_loss)
analysis_data.append(anlys_data)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
# training starts
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
what.train()
where.train()
for i, data in enumerate(train_loader, 0):
# get the inputs
inputs, labels,_ = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_where.zero_grad()
optimizer_what.zero_grad()
# forward + backward + optimize
avg, alpha,log_alpha = where(inputs)
outputs = what(avg)
my_loss,_,_ = my_cross_entropy(outputs,labels,alpha,log_alpha,k)
# print statistics
running_loss += my_loss.item()
my_loss.backward()
optimizer_where.step()
optimizer_what.step()
#break
running_loss,ccloss,ccentropy,anls_data = calculate_attn_loss(train_loader,what,where,criterion,k)
analysis_data.append(anls_data)
if(epoch % 200==0):
print('epoch: [%d] loss: %.3f celoss: %.3f entropy: %.3f' %(epoch + 1,running_loss,ccloss,ccentropy))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.01:
break
print('Finished Training run ' +str(n))
#break
analysis_data = np.array(analysis_data)
FTPT_analysis.loc[n] = analysis_data[-1,:4]/30
full_analysis.append((epoch, analysis_data))
correct = 0
total = 0
with torch.no_grad():
for data in test_loader:
images, labels,_ = data
images = images.double()
images, labels = images.to("cuda"), labels.to("cuda")
avg, alpha,log_alpha = where(images)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 3000 test images: %f %%' % ( 100 * correct / total))
print(np.mean(np.array(FTPT_analysis),axis=0)) #[7.42700000e+01 2.44100000e+01 7.33333333e-02 1.24666667e+00]
FTPT_analysis
cnt=1
for epoch, analysis_data in full_analysis:
analysis_data = np.array(analysis_data)
# print("="*20+"run ",cnt,"="*20)
plt.figure(figsize=(6,5))
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,0]/30,label="FTPT")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,1]/30,label="FFPT")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,2]/30,label="FTPF")
plt.plot(np.arange(0,epoch+2,1),analysis_data[:,3]/30,label="FFPF")
plt.title("Training trends for run "+str(cnt))
plt.grid()
# plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.legend()
plt.xlabel("epochs", fontsize=14, fontweight = 'bold')
plt.ylabel("percentage train data", fontsize=14, fontweight = 'bold')
# plt.savefig(path + "run"+str(cnt)+".png",bbox_inches="tight")
# plt.savefig(path + "run"+str(cnt)+".pdf",bbox_inches="tight")
cnt+=1
# FTPT_analysis.to_csv(path+"synthetic_zeroth.csv",index=False)
```
| github_jupyter |
```
import pandas as pd
df = pd.read_csv('processed.csv.gz')
df.head()
df.info()
df = df.drop(columns=df.columns[0])
df.head()
df.groupby('vaderSentimentLabel').size()
import matplotlib.pyplot as plt
df.groupby('vaderSentimentLabel').count().plot.bar()
plt.show()
df.groupby('ratingSentimentLabel').size()
df.groupby('ratingSentimentLabel').count().plot.bar()
plt.show()
df.groupby('ratingSentiment').size()
positive_vader_sentiments = df[df.ratingSentiment == 2]
positive_string = []
for s in positive_vader_sentiments.cleanReview:
positive_string.append(s)
positive_string = pd.Series(positive_string).str.cat(sep=' ')
from wordcloud import WordCloud
wordcloud = WordCloud(width=2000,height=1000,max_font_size=200).generate(positive_string)
plt.imshow(wordcloud,interpolation='bilinear')
plt.show()
for s in positive_vader_sentiments.cleanReview[:20]:
if 'side effect' in s:
print(s)
negative_vader_sentiments = df[df.ratingSentiment == 1]
negative_string = []
for s in negative_vader_sentiments.cleanReview:
negative_string.append(s)
negative_string = pd.Series(negative_string).str.cat(sep=' ')
from wordcloud import WordCloud
wordcloud = WordCloud(width=2000,height=1000,max_font_size=200).generate(negative_string)
plt.imshow(wordcloud,interpolation='bilinear')
plt.axis('off')
plt.show()
neutral_vader_sentiments = df[df.ratingSentiment == 0]
neutral_string = []
for s in neutral_vader_sentiments.cleanReview:
neutral_string.append(s)
neutral_string = pd.Series(neutral_string).str.cat(sep=' ')
from wordcloud import WordCloud
wordcloud = WordCloud(width=2000,height=1000,max_font_size=200).generate(neutral_string)
plt.imshow(wordcloud,interpolation='bilinear')
plt.axis('off')
plt.show()
for s in neutral_vader_sentiments.cleanReview[:20]:
if 'side effect' in s:
print(s)
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(stop_words='english',ngram_range=(1,2))
features = tfidf.fit_transform(df.cleanReview)
labels = df.vaderSentiment
features.shape
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
x_train,x_test,y_train,y_test = train_test_split(df['cleanReview'],df['ratingSentimentLabel'],random_state=0)
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
models = [RandomForestClassifier(n_estimators=200,max_depth=3,random_state=0),LinearSVC(),MultinomialNB(),LogisticRegression(random_state=0,solver='lbfgs',max_iter=2000,multi_class='auto')]
CV = 5
cv_df = pd.DataFrame(index=range(CV * len(models)))
entries = []
for model in models:
model_name = model.__class__.__name__
accuracies = cross_val_score(model,features,labels,scoring='accuracy',cv=CV)
for fold_idx,accuracy in enumerate(accuracies):
entries.append((model_name,fold_idx,accuracy))
cv_df = pd.DataFrame(entries,columns=['model_name','fold_idx','accuracy'])
cv_df
cv_df.groupby('model_name').accuracy.mean()
from sklearn.preprocessing import Normalizer
model = LinearSVC('l2')
x_train,x_test,y_train,y_test = train_test_split(features,labels,test_size=0.25,random_state=0)
normalize = Normalizer()
x_train = normalize.fit_transform(x_train)
x_test = normalize.transform(x_test)
model.fit(x_train,y_train)
y_pred = model.predict(x_test)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_pred))
from sklearn.metrics import confusion_matrix
conf_mat = confusion_matrix(y_test,y_pred)
conf_mat
from mlxtend.plotting import plot_confusion_matrix
fig,ax = plot_confusion_matrix(conf_mat=conf_mat,colorbar=True,show_absolute=True,cmap='viridis')
from sklearn.metrics import classification_report
print(classification_report(y_test,y_pred,target_names= df['ratingSentimentLabel'].unique()))
df.head()
df.info()
y0 = df['vaderSentimentLabel']
y0
```
| github_jupyter |
```
from keras.datasets import mnist
from keras.models import Model
from keras.layers import Conv2D, MaxPool2D, UpSampling2D, Input
import cv2
import os
import numpy as np
import tensorflow as tf
devices = tf.config.experimental.get_visible_devices('GPU')
tf.config.experimental.set_memory_growth(device=devices[0], enable = True)
# dataset from keras
(X_train, y_train),(X_test, y_test) = mnist.load_data()
# load your dataset here
def load_data(path):
images = [os.path.join(path, i) for i in os.listdir(path) if i.endswith(".jpg") or i.endswith(".png")]
images = [cv2.imread(image_path) for image_path in images]
images = [cv2.cvtColor(image, cv2.COLOR_BGR2RGB) for image in images]
images = [cv2.resize(image, (28, 28)) for image in images]
return np.array(images)
#X_train = load_data('H:/Datasets/Face dataset/WIDER_val/images/0--Parade')
X_train.shape
import matplotlib.pyplot as plt
plt.imshow(X_train[0], cmap = 'gray')
X_train = X_train / 255.0
X_test = X_test / 255.0
X_train = X_train.reshape(X_train.shape[0], 28,28,1)
X_test = X_test.reshape(X_test.shape[0], 28,28,1)
X_train.shape
```
# Model
```
input_layer = Input(shape = (28,28,1))
# encoder
x = Conv2D(16, (3,3), activation = 'relu', padding = 'same')(input_layer)
x = MaxPool2D(pool_size = (2,2), padding = 'same')(x)
x = Conv2D(8, (3,3), activation = 'relu', padding = 'same')(x)
x = MaxPool2D(pool_size = (2,2), padding = 'same')(x)
x = Conv2D(8, (3,3), activation = 'relu', padding = 'same')(x)
encoded = MaxPool2D(pool_size = (2,2), padding = 'same')(x)
# decoder
x = Conv2D(8, (3,3), activation = 'relu', padding = 'same')(encoded)
x = UpSampling2D((2,2))(x)
x = Conv2D(8, (3,3), activation = 'relu', padding = 'same')(x)
x = UpSampling2D((2,2))(x)
x = Conv2D(16, (3,3), activation = 'relu')(x)
x = UpSampling2D((2,2))(x)
decoded = Conv2D(1, (3,3), activation = 'relu', padding = 'same')(x)
autoencoder = Model(input_layer, decoded)
autoencoder.summary()
# compile
autoencoder.compile(loss = 'binary_crossentropy', optimizer = 'adam')
history = autoencoder.fit(X_train, X_train, epochs=50, batch_size = 128, validation_data=(X_test,X_test))
autoencoder.summary()
encoder = Model(input_layer, encoded)
encoder.summary()
decoder_layer = Input(shape = (4,4,8))
decoder = autoencoder.layers[7](decoder_layer)
for layer in autoencoder.layers[8:]:
decoder = layer(decoder)
decoder = Model(decoder_layer, decoder, name = 'Decoder')
decoder.summary()
# encoder images for testing
encoded_images = encoder.predict(X_test, verbose = 1)
encoded_images.shape
decoded_images = decoder.predict(encoded_images, verbose=1)
decoded_images.shape
decoded_images = decoded_images.reshape(decoded_images.shape[0], 28,28)
decoded_images.shape
plt.imshow(decoded_images[1], cmap = 'gray')
X_test = X_test.reshape(X_test.shape[0], 28,28)
plt.imshow(X_test[1], cmap = 'gray')
```
| github_jupyter |
# APA Calling
## Aim
The purpose of this notebook is to call APA-based information (PDUI) based on [DAPARS2 method](https://github.com/3UTR/DaPars2).
## Methods
```
%preview ../../images/apa_calling.png
```
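For orientation, the PDUI statistic this pipeline ultimately reports is, conceptually, the fraction of transcripts using the distal polyA site, i.e. long-3'UTR-isoform abundance over total abundance. The sketch below is a toy illustration of that ratio (not DaPars2 code; the function name and inputs are ours):

```python
def pdui(long_isoform, short_isoform):
    """Percentage of Distal polyA site Usage Index:
    abundance of the long 3'UTR isoform over total abundance."""
    total = long_isoform + short_isoform
    if total == 0:
        return float("nan")  # undefined when the gene has no coverage
    return long_isoform / total

# A gene where 30 units of coverage use the distal site and 70 the proximal one:
print(pdui(30, 70))  # 0.3
```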
### 3'UTR Reference
* _gtf2bed12.py_ : Convert gtf to bed format (sourced from in-house code from Li Lab: https://github.com/Xu-Dong/Exon_Intron_Extractor/blob/main/scripts/gtf2bed12.py)
* _DaPars_Extract_Anno.py_ : Extract the 3'UTR regions in bed format from the whole-genome bed (sourced from DaPars2: https://github.com/3UTR/DaPars2/blob/master/src/DaPars_Extract_Anno.py)
### Call WIG data from transcriptome BAM files
Using bedtools, or rsem-bam2wig for RSEM-based alignments
### Config files Generation
* _Python 3_ loops that read the wig files line by line to sum the read coverage over all chromosomes.
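A simplified sketch of what such a loop does, assuming bedGraph-style 4-column lines (`chrom start end depth`) after a header line; this is illustrative only, not the pipeline code:

```python
def chrom_coverage_sums(lines):
    """Sum (interval length x depth) per chromosome from bedGraph-style lines.
    Skips the first header/track line."""
    totals = {}
    for line in lines[1:]:
        chrom, start, end, depth = line.split()[:4]
        totals[chrom] = totals.get(chrom, 0) + (int(end) - int(start)) * float(depth)
    return totals

wig = ["track type=bedGraph", "chr1\t0\t100\t2", "chr1\t100\t150\t4", "chr2\t0\t50\t1"]
print(chrom_coverage_sums(wig))  # {'chr1': 400.0, 'chr2': 50.0}
```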
### Dapars2 Main Function
* _Dapars2_Multi_Sample.py_: use the least squares method to calculate the usage of long isoforms (https://github.com/3UTR/DaPars2/blob/master/src/Dapars2_Multi_Sample.py)
Note: this part of the code has been modified from the source to handle some formatting discrepancies in the wig file
### Impute missing values in Dapars result
KNN using `impute` R package.
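The actual imputation uses `impute.knn` from the R `impute` package; purely to illustrate the idea, a minimal numpy sketch of row-wise KNN imputation might look like this (`knn_impute` is a hypothetical helper, not part of the pipeline):

```python
import numpy as np

def knn_impute(X, k=2):
    """Fill each NaN in row r with the mean of that column over the k rows
    closest to r (Euclidean distance on their mutually observed columns)."""
    X = X.astype(float)
    filled = X.copy()
    for r in range(X.shape[0]):
        miss = np.isnan(X[r])
        if not miss.any():
            continue
        dists = []
        for o in range(X.shape[0]):
            if o == r:
                continue
            both = ~np.isnan(X[r]) & ~np.isnan(X[o])
            if both.any():
                dists.append((np.sqrt(np.mean((X[r, both] - X[o, both]) ** 2)), o))
        dists.sort()
        for c in np.where(miss)[0]:
            vals = [X[o, c] for _, o in dists[:k] if not np.isnan(X[o, c])]
            if vals:
                filled[r, c] = np.mean(vals)
    return filled

M = np.array([[1.0, 2.0, 3.0],
              [1.1, np.nan, 3.1],
              [5.0, 6.0, 7.0]])
print(knn_impute(M, k=1))  # the NaN is filled from the nearest row (row 0)
```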
## Input
- A list of transcriptome level BAM files, eg generated by RSEM
- The 3'UTR annotation reference file
If you do not have 3'UTR annotation file, please generate it first. Input to this step is the transcriptome level gene feature file in `GTF` format that [we previously prepared](../../data_preprocessing/reference_data.html).
## Output
* Dapars config files
* PDUI (Raw) information saved in txt
* PDUI (Imputed) information saved in txt. This is recommended for further analysis.
## Minimal working example
To generate 3'UTR reference data,
```
sos run apa_calling.ipynb UTR_reference \
--cwd output/apa \
--hg-gtf reference_data/Homo_sapiens.GRCh38.103.chr.reformatted.ERCC.gtf \
--container /mnt/mfs/statgen/ls3751/container/dapars2_final.sif
```
## Command interface
```
sos run apa_calling.ipynb -h
```
## Workflow implementation
```
[global]
parameter: walltime = '400h'
parameter: mem = '200G'
parameter: ncore = 16
# the output directory for generated files
parameter: cwd = path
# path to GTF file
parameter: thread = 8
parameter: job_size = 1
parameter: container = ''
```
### Step 0: Generate 3UTR regions based on GTF
The 3UTR regions (saved in bed format) can be reused __repeatedly__ for different samples; they only serve as the reference regions. You may not need to run this step if generated hg19/hg38 3UTR regions are already provided.
```
# Generate the 3UTR region according to the gtf file
[UTR_reference]
# gtf file
parameter: hg_gtf = path
input: hg_gtf
output: f'{cwd}/{_input:bn}.bed',
f'{cwd}/{_input:bn}.transcript_to_geneName.txt',
f'{cwd}/{_input:bn}_3UTR.bed'
bash: expand = '${ }', container = container
gtf2bed12.py --gtf ${_input} --out ${cwd}
mv ${cwd}/gene_annotation.bed ${_output[0]}
mv ${cwd}/transcript_to_geneName.txt ${_output[1]}
DaPars_Extract_Anno.py -b ${_output[0]} -s ${_output[1]} -o ${_output[2]}
```
### Step 1: Generate WIG calls and flagstat files from BAM files
Generating WIG from BAM data via `bedtools` is recommended by Dapars authors. However, our transcriptome level calls are made using RSEM, which in fact contains a program called `rsem-bam2wig` with this one additional feature:
```
--no-fractional-weight : If this is set, RSEM will not look for "ZW" tag and each alignment appeared in the BAM file has weight 1. Set this if your BAM file is not generated by RSEM. Please note that this option must be at the end of the command line
```
Here we stick to `bedtools` because of its popularity and most generic BAM files may not have the `ZW` tag anyways.
```
[bam2tools]
parameter: n = 9
n = [x for x in range(n)]
input: for_each = 'n'
task: trunk_workers = 1, trunk_size = job_size, walltime = walltime, mem = mem, cores = ncore
python: expand = True, container = container
import glob
import os
import subprocess
path = "/mnt/mfs/ctcn/datasets/rosmap/rnaseq/dlpfcTissue/batch{_n}/STAR_aligned"
name = glob.glob(path + "/**/*Aligned.sortedByCoord.out.bam", recursive = True)
wigpath = "/home/ls3751/project/ls3751/wig/batch{_n}/"
if not os.path.exists(wigpath):
os.makedirs(os.path.dirname(wigpath))
for i in name:
id = i.split("/")[-2]
filedir = path + "/" + id + "/" + id + ".bam"
out = wigpath + id + ".wig"
new_cmd = "bedtools genomecov -ibam " + filedir + " -bga -split -trackline" + " > " + out
os.system(new_cmd)
out_2 = wigpath + id + ".flagstat"
new_cmd_2 = "samtools flagstat --thread 8 " + filedir + " > " + out_2
os.system(new_cmd_2)
```
### Step 2: Generating config files and calculating sample depth
#### Notes on input file format
For the input file, it has the following format. Additional notes are:
* The first line is the track/header line of the file. If your file does not have one, please add any content as the first line
* The file name must end with ".wig". It will not cause any problem if you simply rename it from ".bedgraph"
* If the first column of your input wig file does not have the __"chr"__ prefix, please set `no_chr_prefix = T`
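These checks can be scripted before launching DaPars2; `check_wig` below is a hypothetical helper for illustration, not part of the pipeline:

```python
def check_wig(lines, filename):
    """Sanity-check a wig file against the notes above: the filename must end
    in '.wig'; returns True when data rows lack the 'chr' prefix (meaning the
    workflow should be run with no_chr_prefix = T)."""
    if not filename.endswith(".wig"):
        raise ValueError(filename + ": DaPars2 expects a '.wig' extension")
    _header, *data = lines  # first line is assumed to be the track/header line
    return bool(data) and not data[0].split()[0].startswith("chr")

sample = ["track type=bedGraph", "1\t0\t100\t5", "1\t100\t200\t7"]
print(check_wig(sample, "sample1.wig"))  # True -> set no_chr_prefix = T
```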
```
head -n 10 /mnt/mfs/statgen/ls3751/MWE_dapars2/sample1.wig
# Generate configuration file
[APAconfig]
parameter: bfile = path
parameter: annotation = path
parameter: job_size = 1
# Default parameters for Dapars2:
parameter: least_pass_coverage_percentage = 0.3
parameter: coverage_threshold = 10
output: [f'{cwd}/sample_mapping_files.txt',f'{cwd}/sample_configuration_file.txt']
task: trunk_workers = 1, trunk_size = 1, walltime = walltime, mem = mem, cores = ncore
python3: expand = "${ }", container = container
import re
import os
target_all_sample = os.listdir("${bfile}")
target_all_sample = list(filter(lambda v: re.match('.*wig$', v), target_all_sample))
target_all_sample = ["${bfile}" + "/" + w for w in target_all_sample]
def extract_total_reads(input_flagstat_file):
num_line = 0
total_reads = '-1'
#print input_flagstat_file
for line in open(input_flagstat_file,'r'):
num_line += 1
if num_line == 5:
total_reads = line.strip().split(' ')[0]
break
return total_reads
#print(target_all_sample)
print("INFO: Total",len(target_all_sample),"samples found in provided dirctory!")
# Total depth file:
mapping_file = open("${_output[0]}", "w")
for current_sample in target_all_sample:
flag = current_sample.split(".")[0] + ".flagstat"
current_sample_total_depth = extract_total_reads(flag)
field_out = [current_sample, str(current_sample_total_depth)]
mapping_file.writelines('\t'.join(field_out) + '\n')
print("Coverage of sample ", current_sample, ": ", current_sample_total_depth)
mapping_file.close()
# Configuration file:
config_file = open(${_output[1]:r},"w")
config_file.writelines(f"Annotated_3UTR=${annotation}\n")
config_file.writelines( "Aligned_Wig_files=%s\n" % ",".join(target_all_sample))
config_file.writelines(f"Output_directory=${cwd}/apa \n")
config_file.writelines(f"Output_result_file=Dapars_result\n")
config_file.writelines(f"Least_pass_coverage_percentage=${least_pass_coverage_percentage}\n")
config_file.writelines( "Coverage_threshold=${coverage_threshold}\n")
config_file.writelines( "Num_Threads=${thread}\n")
config_file.writelines(f"sequencing_depth_file=${_output[0]}")
config_file.close()
```
### Step 3: Run Dapars2 main to calculate PDUIs
The default input handling of `Dapars2_Multi_Sample.py` does not consider the situation where the first column lacks the "chr" prefix (shown in _Step 2_). We added a new argument `no_chr_prefix` (default is FALSE).
```
# Call Dapars2 multi_chromosome
[APAmain]
parameter: chr_prefix = False
parameter: chrlist = list
input: for_each = 'chrlist'
output: [f'{cwd}/apa_{x}/Dapars_result_result_temp.{x}.txt' for x in chrlist], group_by = 1
task: trunk_workers = 1, trunk_size = job_size, walltime = walltime, mem = mem, cores = ncore
bash: expand = True, container = container
python2 /mnt/mfs/statgen/ls3751/github/xqtl-pipeline/code/Dapars2_Multi_Sample.py {cwd}/sample_configuration_file.txt {_chrlist} {"F" if chr_prefix else "T"}
```
## Analysis demo
### Step 0: 3UTR generation
```
sos run /mnt/mfs/statgen/ls3751/github/xqtl-pipeline/code/molecular_phenotypes/calling/apa_calling.ipynb UTR_reference \
--cwd /mnt/mfs/statgen/ls3751/MWE_dapars2/Output \
--hg_gtf /mnt/mfs/statgen/ls3751/MWE_dapars2/gencode.v39.annotation.gtf \
--container /mnt/mfs/statgen/ls3751/container/dapars2_final.sif
tree /mnt/mfs/statgen/ls3751/MWE_dapars2/Output
```
### Step 1: Bam to wig process and extracting reads depth file
```
sos run /mnt/mfs/statgen/ls3751/github/xqtl-pipeline/code/molecular_phenotypes/calling/apa_calling.ipynb bam2tools \
--n 0 1 2 3 4 5 6 7 8 \
--container /mnt/mfs/statgen/ls3751/container/dapars2_final.sif
```
### Step 2: Generating config files
```
sos run /mnt/mfs/statgen/ls3751/github/xqtl-pipeline/code/molecular_phenotypes/calling/apa_calling.ipynb APAconfig \
--cwd /mnt/mfs/statgen/ls3751/rosmap/dlpfcTissue/batch0 \
--bfile /mnt/mfs/statgen/ls3751/rosmap/dlpfcTissue/batch0 \
--annotation /mnt/mfs/statgen/ls3751/MWE_dapars2/Output/gencode.v39.annotation_3UTR.bed \
--container /mnt/mfs/statgen/ls3751/container/dapars2_final.sif
tree /mnt/mfs/statgen/ls3751/MWE_dapars2/Output
```
### Step 3: Dapars2 Main
Note: the example uses a truncated dataset, which only has coverage on chr1, chr11 and chr12
```
sos run /mnt/mfs/statgen/ls3751/github/xqtl-pipeline/code/molecular_phenotypes/calling/apa_calling.ipynb APAmain \
--cwd /mnt/mfs/statgen/ls3751/rosmap/dlpfcTissue/batch0 \
--chrlist chr21 chr14 chr1 \
--container /mnt/mfs/statgen/ls3751/container/dapars2_final.sif
tree /mnt/mfs/statgen/ls3751/MWE_dapars2/Output
```
# [Module 2.1] Write Preprocess Code
For the role of preprocessing.py, please refer to the link --> [here](https://github.com/gonsoomoon-ml/churn-prediction-workshop/blob/master/9.1.Understand-Preprocess.py.ipynb)<br>
The code below explains the preprocessing logic (algorithm).
## Feature Transformer (the preprocessing model) - the preprocessing.py file
- Numerical data is normalized with <a href=https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html>StandardScaler</a>.
* z = (x - u) / s
* (z: the standardized value, used during training; x: each data value; u: the mean of the column; s: the standard deviation of the column)
- All of the columns below, from Account Length through CustServ Calls, are preprocessed this way.
- The imputer below fills missing values with the median of the column.
```python
numeric_features = list([
'Account Length',
'VMail Message',
'Day Mins',
'Day Calls',
'Eve Mins',
'Eve Calls',
'Night Mins',
'Night Calls',
'Intl Mins',
'Intl Calls',
'CustServ Calls'])
numeric_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='median')),
('scaler', StandardScaler())])
```
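A quick numeric check of the z = (x - u) / s formula above, on made-up values (not taken from the actual dataset):

```python
# Standardize a single hypothetical column and verify z = (x - u) / s.
import numpy as np
from sklearn.preprocessing import StandardScaler

x = np.array([[100.0], [110.0], [120.0], [130.0]])  # made-up values
z = StandardScaler().fit_transform(x)
# u = 115, s = sqrt(125) ~= 11.18, so z ~= [-1.342, -0.447, 0.447, 1.342]
print(z.ravel())
```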
- Categorical data is preprocessed with one-hot encoding. (For example, with male:0 and female:1, male becomes (1,0) and female becomes (0,1).)
- Applied to State, Area Code, Int'l Plan, and VMail Plan
```python
categorical_features = ['State','Area Code',"Int'l Plan",'VMail Plan']
categorical_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
('onehot', OneHotEncoder(handle_unknown='ignore'))])
```
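The male/female example above looks like this with scikit-learn's `OneHotEncoder` (toy data, not from the dataset); `handle_unknown='ignore'` maps categories unseen at fit time to an all-zero row instead of raising:

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

enc = OneHotEncoder(handle_unknown='ignore')
enc.fit(np.array([['male'], ['female']]))
# Categories are sorted, so the output columns are ['female', 'male'].
out = enc.transform(np.array([['male'], ['female'], ['other']])).toarray()
print(out)  # male -> [0, 1], female -> [1, 0], unseen 'other' -> [0, 0]
```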
- Finally, the numerical and categorical transformers are combined into a single transformer, which is fitted, and the fitted transformer model is uploaded to S3.
```python
preprocessor = ColumnTransformer(
transformers=[
('num', numeric_transformer, numeric_features),
('cat', categorical_transformer, categorical_features)],
remainder="drop")
preprocessor.fit(concat_data)
joblib.dump(preprocessor, os.path.join(args.model_dir, "model.joblib"))
```
- The Phone column is excluded from the preprocessing above; since it is a number unique to each user, it is unlikely to be meaningful as a feature.
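The fit-then-dump step described above can be exercised end to end on a toy frame. This sketch uses the standalone `joblib` package (modern scikit-learn no longer ships `sklearn.externals.joblib`) and made-up column values:

```python
import os
import tempfile

import joblib
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy frame: one numeric column, one categorical column, and a 'Phone'
# column that remainder="drop" discards, as described above.
df = pd.DataFrame({'Day Mins': [120.5, 200.1, 75.3],
                   'State': ['KS', 'OH', 'NJ'],
                   'Phone': ['382-4657', '371-7191', '358-1921']})

preprocessor = ColumnTransformer(
    transformers=[
        ('num', Pipeline([('imputer', SimpleImputer(strategy='median')),
                          ('scaler', StandardScaler())]), ['Day Mins']),
        ('cat', Pipeline([('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
                          ('onehot', OneHotEncoder(handle_unknown='ignore'))]), ['State'])],
    remainder="drop")
preprocessor.fit(df)

# Dump, reload, and transform -- the same round trip the training job does
# via S3, here done with a local temp file.
model_path = os.path.join(tempfile.mkdtemp(), 'model.joblib')
joblib.dump(preprocessor, model_path)
features = joblib.load(model_path).transform(df)
print(features.shape)  # 1 scaled numeric column + 3 one-hot State columns -> (3, 4)
```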
```
%%writefile preprocessing.py
from __future__ import print_function
import time
import sys
from io import StringIO
import os
import shutil
import argparse
import csv
import json
import numpy as np
import pandas as pd
import logging
from sklearn.compose import ColumnTransformer
from sklearn.externals import joblib
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Binarizer, StandardScaler, OneHotEncoder
from sagemaker_containers.beta.framework import (
content_types, encoders, env, modules, transformer, worker)
# Since we get a headerless CSV file we specify the column names here.
feature_columns_names = [
'State',
'Account Length',
'Area Code',
'Phone',
"Int'l Plan",
'VMail Plan',
'VMail Message',
'Day Mins',
'Day Calls',
'Day Charge',
'Eve Mins',
'Eve Calls',
'Eve Charge',
'Night Mins',
'Night Calls',
'Night Charge',
'Intl Mins',
'Intl Calls',
'Intl Charge',
'CustServ Calls']
label_column = 'Churn?'
feature_columns_dtype = {
'State' : str,
'Account Length' : np.int64,
'Area Code' : str,
'Phone' : str,
"Int'l Plan" : str,
'VMail Plan' : str,
'VMail Message' : np.int64,
'Day Mins' : np.float64,
'Day Calls' : np.int64,
'Day Charge' : np.float64,
'Eve Mins' : np.float64,
'Eve Calls' : np.int64,
'Eve Charge' : np.float64,
'Night Mins' : np.float64,
'Night Calls' : np.int64,
'Night Charge' : np.float64,
'Intl Mins' : np.float64,
'Intl Calls' : np.int64,
'Intl Charge' : np.float64,
'CustServ Calls' : np.int64}
label_column_dtype = {'Churn?': str}
def merge_two_dicts(x, y):
z = x.copy() # start with x's keys and values
z.update(y) # modifies z with y's keys and values & returns None
return z
def _is_inverse_label_transform():
"""Returns True if it's running in inverse label transform mode."""
return os.getenv('TRANSFORM_MODE') == 'inverse-label-transform'
def _is_feature_transform():
"""Returns True if it's running in feature transform mode."""
return os.getenv('TRANSFORM_MODE') == 'feature-transform'
if __name__ == '__main__':
parser = argparse.ArgumentParser()
# Sagemaker specific arguments. Defaults are set in the environment variables.
parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])
parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])
args = parser.parse_args()
input_files = [ os.path.join(args.train, file) for file in os.listdir(args.train) ]
if len(input_files) == 0:
raise ValueError(('There are no files in {}.\n' +
'This usually indicates that the channel ({}) was incorrectly specified,\n' +
'the data specification in S3 was incorrectly specified or the role specified\n' +
'does not have permission to access the data.').format(args.train, "train"))
raw_data = [ pd.read_csv(
file,
header=None,
names=feature_columns_names + [label_column],
dtype=merge_two_dicts(feature_columns_dtype, label_column_dtype)) for file in input_files ]
concat_data = pd.concat(raw_data)
print(concat_data.head(5))
numeric_features = list([
'Account Length',
'VMail Message',
'Day Mins',
'Day Calls',
'Eve Mins',
'Eve Calls',
'Night Mins',
'Night Calls',
'Intl Mins',
'Intl Calls',
'CustServ Calls'])
numeric_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='median')),
('scaler', StandardScaler())])
categorical_features = ['State','Area Code',"Int'l Plan",'VMail Plan']
categorical_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
('onehot', OneHotEncoder(handle_unknown='ignore'))])
preprocessor = ColumnTransformer(
transformers=[
('num', numeric_transformer, numeric_features),
('cat', categorical_transformer, categorical_features)],
remainder="drop")
preprocessor.fit(concat_data)
joblib.dump(preprocessor, os.path.join(args.model_dir, "model.joblib"))
print("saved model!")
def input_fn(input_data, request_content_type):
"""Parse input data payload
We currently only take csv input. Since we need to process both labelled
and unlabelled data we first determine whether the label column is present
by looking at how many columns were provided.
"""
print("input_fn-request_content_type: ", request_content_type)
print("input_fn-type of input_data: ", type(input_data))
content_type = request_content_type.lower() if request_content_type else "text/csv"
content_type = content_type.split(";")[0].strip()
if isinstance(input_data, str):
str_buffer = input_data
else:
str_buffer = str(input_data,'utf-8')
if _is_feature_transform():
logging.info(f"Input_fn, Mode: feature_transform")
if content_type == 'text/csv':
# Read the raw input data as CSV.
df = pd.read_csv(StringIO(input_data), header=None)
if len(df.columns) == len(feature_columns_names) + 1:
# This is a labelled example, includes the label
df.columns = feature_columns_names + [label_column]
elif len(df.columns) == len(feature_columns_names):
# This is an unlabelled example.
df.columns = feature_columns_names
return df
else:
raise ValueError("{} not supported by script!".format(content_type))
if _is_inverse_label_transform():
if (content_type == 'text/csv' or content_type == 'text/csv; charset=utf-8'):
# Read the raw input data as CSV.
df = pd.read_csv(StringIO(str_buffer), header=None)
logging.info(f"input_fn, Mode: inverse_label_transform")
logging.info(f"Shape of the requested data: '{df.shape}'")
return df
else:
raise ValueError("{} not supported by script!".format(content_type))
def output_fn(prediction, accept):
"""Format prediction output
The default accept/content-type between containers for serial inference is JSON.
We also want to set the ContentType or mimetype as the same value as accept so the next
container can read the response payload correctly.
"""
logging.info(f"Output_fn: prediction - '{prediction}' ")
# Set to text/csv
accept = 'text/csv'
if type(prediction) is not np.ndarray:
prediction=prediction.toarray()
print("output_fn-type of prediction: ", type(prediction))
if accept == "application/json": # Code in the case of future use
instances = []
for row in prediction.tolist():
instances.append({"features": row})
json_output = {"instances": instances}
return worker.Response(json.dumps(json_output), mimetype=accept)
elif accept == 'text/csv':
return worker.Response(encoders.encode(prediction, accept), mimetype=accept)
else:
raise RuntimeError("{} accept type is not supported by this script.".format(accept))
def predict_fn(input_data, model):
"""Preprocess input data
We implement this because the default predict_fn uses .predict(), but our model is a preprocessor
so we want to use .transform().
The output is returned in the following order:
rest of features either one hot encoded or standardized
"""
if _is_feature_transform():
logging.info(f"predict_fn, Mode: feature_transform")
features = model.transform(input_data)
print("After transformation")
print(features[0:2])
if label_column in input_data:
# Return the label (as the first column) and the set of features.
label_features = np.insert(features.toarray(), 0, pd.get_dummies(input_data[label_column])['True.'], axis=1)
print("After inserting a label")
print(label_features[0:2])
return label_features
else:
# Return only the set of features
return features
if _is_inverse_label_transform():
logging.info(f"predict_fn, Mode: inverse_transform - input_data: '{input_data}'")
features = input_data.iloc[:,0]>0.5
features = features.values
logging.info(f"predict_fn, Mode: inverse_transform - features after transformation: '{features}'")
return features
def model_fn(model_dir):
"""Deserialize fitted model
"""
if _is_feature_transform():
logging.info(f"model_fn, Mode: feature_transform")
preprocessor = joblib.load(os.path.join(model_dir, "model.joblib"))
return preprocessor
if _is_inverse_label_transform():
logging.info(f"model_fn, Mode: inverse_transform")
```
BTW strings can be converted to the following formats via the `output_format` parameter:
* `compact`: only number strings without any separators or whitespace, like "004495445B01"
* `standard`: BTW strings with proper whitespace in the proper places. Note that in the case of BTW, the compact format is the same as the standard one.
Invalid parsing is handled with the `errors` parameter:
* `coerce` (default): invalid parsing will be set to NaN
* `ignore`: invalid parsing will return the input
* `raise`: invalid parsing will raise an exception
The following sections demonstrate the functionality of `clean_nl_btw()` and `validate_nl_btw()`.
### An example dataset containing BTW strings
```
import pandas as pd
import numpy as np
df = pd.DataFrame(
{
"btw": [
'004495445B01',
'123456789B90',
'BE 428759497',
'BE431150351',
"002 724 334",
"hello",
np.nan,
"NULL",
],
"address": [
"123 Pine Ave.",
"main st",
"1234 west main heights 57033",
"apt 1 789 s maple rd manhattan",
"robie house, 789 north main street",
"1111 S Figueroa St, Los Angeles, CA 90015",
"(staples center) 1111 S Figueroa St, Los Angeles",
"hello",
]
}
)
df
```
## 1. Default `clean_nl_btw`
By default, `clean_nl_btw` cleans BTW strings and outputs them in the standard format with proper separators.
```
from dataprep.clean import clean_nl_btw
clean_nl_btw(df, column = "btw")
```
## 2. Output formats
This section demonstrates the output parameter.
### `standard` (default)
```
clean_nl_btw(df, column = "btw", output_format="standard")
```
### `compact`
```
clean_nl_btw(df, column = "btw", output_format="compact")
```
## 3. `inplace` parameter
This deletes the given column from the returned DataFrame.
A new column containing cleaned BTW strings is added with a title in the format `"{original title}_clean"`.
```
clean_nl_btw(df, column="btw", inplace=True)
```
## 4. `errors` parameter
### `coerce` (default)
```
clean_nl_btw(df, "btw", errors="coerce")
```
### `ignore`
```
clean_nl_btw(df, "btw", errors="ignore")
```
## 5. `validate_nl_btw()`
`validate_nl_btw()` returns `True` when the input is a valid BTW. Otherwise it returns `False`.
The input of `validate_nl_btw()` can be a string, a pandas Series, a Dask Series, a pandas DataFrame or a Dask DataFrame.
When the input is a string or a Series, no column name needs to be specified.
When the input is a DataFrame, the user may optionally specify a column name. If a column name is given, `validate_nl_btw()` only returns the validation result for that column; otherwise it returns the validation result for the whole DataFrame.
```
from dataprep.clean import validate_nl_btw
print(validate_nl_btw("004495445B01"))
print(validate_nl_btw("123456789B90"))
print(validate_nl_btw('BE 428759497'))
print(validate_nl_btw('BE431150351'))
print(validate_nl_btw("004085616"))
print(validate_nl_btw("hello"))
print(validate_nl_btw(np.nan))
print(validate_nl_btw("NULL"))
```
### Series
```
validate_nl_btw(df["btw"])
```
### DataFrame + Specify Column
```
validate_nl_btw(df, column="btw")
```
### Only DataFrame
```
validate_nl_btw(df)
```
```
import sys # required for relative imports in jupyter lab
sys.path.insert(0, '../')
from cosmosis.model import FFNet
from cosmosis.learning import Learn, Selector
from cosmosis.dataset import SKDS
from dataset import QM7, QM7b, QM7X, QM9, ANI1x
from torch.optim import Adam
from torch.nn import MSELoss, L1Loss
from torch.optim.lr_scheduler import ReduceLROnPlateau
model_params = {'D_in': 128,
'H': 512,
'D_out': 1,
'model_name': 'funnel'}
ds_params = {'train_params': {'features': ['X'],
'targets': ['y'],
'features_dtype': 'float32',
'targets_dtype': 'float32',
'make': 'make_regression',
'transform': [],
'target_transform': [],
'sk_params': {'n_samples': 10000,
'n_features': 128}}}
metrics_params = {'report_interval': 10}
opt_params = {'lr': 0.01}
crit_params = {'reduction': 'sum'}
sample_params = {'set_seed': 88,
'splits': (.7, .15)}
sched_params = {'factor': .5,
'patience': 2,
'cooldown': 1}
l = Learn([SKDS], FFNet, Selector,
Optimizer=Adam, Scheduler=ReduceLROnPlateau, Criterion=MSELoss,
model_params=model_params, ds_params=ds_params, sample_params=sample_params,
opt_params=opt_params, sched_params=sched_params, crit_params=crit_params,
metrics_params=metrics_params,
adapt=False, load_model=False, load_embed=False, save_model=False,
batch_size=256, epochs=20)
model_params = {'D_in': 23*23+23*32,
'H': 4096,
'D_out': 1,
'model_name': 'funnel',
'embed_params': [('atoms',7,32,None,True)]}
ds_params = {'train_params': {'features': ['coulomb'],
'targets': ['ae'],
'embeds': ['atoms'],
'in_file': './data/qm7/qm7.mat',
'flatten': True}}
metrics_params = {'report_interval': 10}
crit_params = {'reduction': 'sum'}
sample_params = {'set_seed': 88,
'splits': (.7,.15)}
sched_params = {'factor': .5,
'patience': 2,
'cooldown': 1}
opt_params = {'lr': 0.01}
l = Learn([QM7], FFNet, Selector,
Optimizer=Adam, Scheduler=ReduceLROnPlateau, Criterion=L1Loss,
model_params=model_params, ds_params=ds_params, sample_params=sample_params,
opt_params=opt_params, sched_params=sched_params, crit_params=crit_params,
metrics_params=metrics_params,
adapt=False, load_model=False, load_embed=False, save_model=False,
batch_size=256, epochs=20)
model_params = {'D_in': 23*23,
'H': 2048,
'D_out': 1,
'model_name': 'funnel'}
ds_params = {'train_params': {'features': ['coulomb'],
'targets': ['E'],
'in_file': './data/qm7b/qm7b.mat',
'flatten': True}}
metrics_params = {'report_interval': 10}
crit_params = {'reduction': 'sum'}
sample_params = {'set_seed': 88,
'splits': (.7,.15)}
sched_params = {'factor': .5,
'patience': 5,
'cooldown': 2}
opt_params = {'lr': 0.01}
l = Learn([QM7b], FFNet, Selector,
Optimizer=Adam, Scheduler=ReduceLROnPlateau, Criterion=L1Loss,
model_params=model_params, ds_params=ds_params, sample_params=sample_params,
opt_params=opt_params, sched_params=sched_params, crit_params=crit_params,
metrics_params=metrics_params,
adapt=False, load_model=False, load_embed=False, save_model=False,
batch_size=256, epochs=50)
#find the longest molecule
ds_params = {'train_params': {'features': ['atNUM'],
'pad': None,
'targets': [],
'embeds': [],
'selector': ['opt']}}
qm7x = QM7X(**ds_params['train_params'])
l = 0
for i in qm7x.ds_idx:
s = qm7x[i][0].shape[0]
if s > l:
l = s
print('longest molecule length: ', l)
qm7x[1]
model_params = {'D_in': 23*23+23*64,
'H': 4096,
'D_out': 1,
'model_name': 'funnel',
'embed_params': [('atNUM',9,64,None,True)]}
ds_params = {'train_params': {'features': ['distance'],
'pad': 23,
'targets': ['eAT'],
'embeds': ['atNUM'],
'selector': ['opt'],
'flatten': True}}
metrics_params = {'report_interval': 10}
crit_params = {'reduction': 'sum'}
sample_params = {'set_seed': 88,
'splits': (.7,.15)}
sched_params = {'factor': .5,
'patience': 5,
'cooldown': 2}
opt_params = {'lr': 0.01}
l = Learn([QM7X], FFNet, Selector,
Optimizer=Adam, Scheduler=ReduceLROnPlateau, Criterion=L1Loss,
model_params=model_params, ds_params=ds_params, sample_params=sample_params,
opt_params=opt_params, sched_params=sched_params, crit_params=crit_params,
metrics_params=metrics_params,
adapt=False, load_model=False, load_embed=False, save_model=False,
batch_size=256, epochs=50)
model_params = {'D_in': 29*29,
'H': 4096,
'D_out': 1,
'model_name': 'funnel'}
ds_params = {'train_params': {#'n': 10000,
'features': ['coulomb'],
'embeds': [],
'targets': ['U0'],
'pad': 29,
'filter_on': None,
'use_pickle': 'qm9.p',
'flatten': True}}
metrics_params = {'report_interval': 10}
crit_params = {'reduction': 'sum'}
sample_params = {'set_seed': 88,
'splits': (.7,.15)}
sched_params = {'factor': .5,
'patience': 5,
'cooldown': 2}
opt_params = {'lr': 0.01}
l = Learn([QM9], FFNet, Selector,
Optimizer=Adam, Scheduler=ReduceLROnPlateau, Criterion=L1Loss,
model_params=model_params, ds_params=ds_params, sample_params=sample_params,
opt_params=opt_params, sched_params=sched_params, crit_params=crit_params,
metrics_params=metrics_params,
adapt=False, load_model=False, load_embed=False, save_model=False,
batch_size=256, epochs=20)
model_params = {'D_in': 29*29,
'H': 4096,
'D_out': 1,
'model_name': 'funnel',
'embed_params': []}
ds_params = {'train_params': {#'n': 10000,
'features': ['coulomb'],
'embeds': [],
'targets': ['U0'],
'pad': 29,
'filter_on': ('n_atoms','>','18'),
'use_pickle': 'n_atoms_greater_than_18.p',
'flatten': True}}
crit_params = {'reduction': 'sum'}
sample_params = {'set_seed': 88,
'splits': (.7,.15)}
sched_params = {'factor': .5,
'patience': 5,
'cooldown': 2}
opt_params = {'lr': 0.01}
l = Learn([QM9], FFNet, Selector, Optimizer=Adam, Scheduler=ReduceLROnPlateau, Criterion=L1Loss,
model_params=model_params, ds_params=ds_params, sample_params=sample_params,
opt_params=opt_params, sched_params=sched_params, crit_params=crit_params,
adapt=False, load_model=False, load_embed=False, save_model=False,
batch_size=256, epochs=30)
model_params = {'D_in': 63*63+63*32,
'H': 8192,
'D_out': 1,
'model_name': 'funnel',
'embed_params': [('atomic_numbers',9,32,None,True)]}
ds_params = {'train_params': {'features': ['distance'],
'targets': ['wb97x_dz.energy'],
'embeds': ['atomic_numbers'],
'pad': 63, #length of the longest molecule in the dataset
'flatten': True,
'criterion': ['wb97x_dz.energy'],
'conformation': 'random',
'in_file': './data/ani1x/ani1x-release.h5'}}
metrics_params = {'report_interval': 20}
crit_params = {'reduction': 'sum'}
sample_params = {'set_seed': 88,
'splits': (.7,.15)}
sched_params = {'factor': .5,
'patience': 5,
'cooldown': 5}
opt_params = {'lr': 0.01}
l = Learn([ANI1x], FFNet, Selector, Optimizer=Adam, Scheduler=ReduceLROnPlateau, Criterion=L1Loss,
model_params=model_params, ds_params=ds_params, sample_params=sample_params,
opt_params=opt_params, sched_params=sched_params, crit_params=crit_params,
adapt=False, load_model=False, load_embed=False, save_model=False,
batch_size=128, epochs=50)
```
```
%matplotlib inline
from matplotlib import pyplot
import numpy
```
Part of the interaction between codes in AMUSE is based on exchanging data between the *community* codes or exchanging data between these codes and AMUSE. As you might have noticed in the previous tutorial topic, every code provides access to particle collections or grids. The data of these collections or grids *live* inside the code, while the data of collections created in the script *live* inside the python process.
<p style="background-color: lightyellow">
<em>Background:</em> All data storage of particle collections (or grids) is implemented by different storage classes. AMUSE supports storage classes that simply store the data in python lists and numpy arrays. AMUSE also supports storage classes that send messages to the codes to perform the actual storage and retrieval. At the script level the interface to these classes is all the same, so in normal use they behave the same. The performance of the different storage classes will vary, for a code storage the data may be sent over an internet connection causing slower reaction times. Smart usage of channels and caching data in memory sets will increase performance.
</p>
```
from amuse.lab import *
```
It is easy to make two collections with the same particles; we only have to copy the collection.
```
particles1 = Particles(4)
particles2 = particles1.copy()
print(particles1)
print(particles2)
```
The particles in the collection have the same keys and are considered the same particles in AMUSE, although they are not identical.
```
print(particles1[1] == particles2[1])
print(particles1[1] is particles2[1])
```
Setting the mass of the particles in one collection will not influence the particles in the second collection.
```
particles1.mass = [5, 6, 7, 8] | units.MSun
particles1.radius = [1, 2, 3, 4] | units.RSun
print(particles2)
```
You could however easily copy the data over with an attribute assignment
```
particles2.mass = particles1.mass
print(particles2)
```
However, this will fail (or be incorrect) if one of the sets changed before the copy action
```
particles2.remove_particle(particles2[2])
particles2.mass = particles1.mass
print(particles2)
```
In general, assuming that the number and order of particles in sets are maintained is unsafe. The particle set indices no longer refer to the same particles, as we removed the third particle from `particles2`. We just tried to copy the masses based on the position of the particle in the collection and not based on the identity of the particle. In complex scripts where particles are removed and added due to physical processes, this will cause incorrect results.
```
print(particles1[2] == particles2[2])
print(particles1[2].mass)
print(particles2[2].mass)
```
AMUSE provides channels to track the particle identities and optimize the transport of attribute values between collections. Channels are safe to use when adding or removing particles. Channels are uni-directional; you'll need two to be able to do bi-directional information exchange.
```
channel_from_1_to_2 = particles1.new_channel_to(particles2)
channel_from_1_to_2.copy_attribute("mass")
print(particles1)
print(particles2)
```
As you can see, the particles with the same key now also have the same mass. Channels are always defined between exactly 2 collections and will only copy data of the overlapping particles in both collections. In the above case, the data of 3 particles was copied.
Channels can copy an attribute from one set to another and give the copy a new name. This is useful, as some codes define particles with attributes having the same name, but a script may assign a different meaning to these names. A stellar evolution code will define the star radius as just that, the star radius, but a stellar dynamics code might interpret the star radius as the star interaction radius (which will be factors larger).
```
channel_from_1_to_2.copy_attribute("mass", "core_mass")
print(particles2)
```
Channels can be used to copy multiple attributes in one go, which can optimize data transport between codes.
```
channel_from_1_to_2.copy_attributes(["mass", "radius"])
print(particles2)
```