```
import sys # for automation and parallelisation
manual, scenario = (True, 'base') if 'ipykernel' in sys.argv[0] else (False, sys.argv[1])
if manual:
    %matplotlib inline
import numpy as np
import pandas as pd
from quetzal.model import stepmodel
```
# Modelling steps 1 and 2.
## Saves transport demand between zones
## Needs zones
```
input_path = '../input/transport_demand/'
output_path = '../output/'
model_path = '../model/'
sm = stepmodel.read_json(model_path + 'de_zones')
```
### Emission and attraction with quetzal
Steps: generation and distribution --> transport demand in volumes

Transport volumes can be generated with the function

    step_distribution(impedance_matrix=None, **od_volume_from_zones_kwargs)

- `impedance_matrix`: an unstacked OD friction dataframe used to compute the distribution.
- `od_volume_from_zones_kwargs`: if the friction matrix is not provided, it will be computed automatically using a gravity distribution with the following parameters:
  - `power` (int): the gravity exponent
  - `intrazonal` (bool): set the intrazonal distance to 0 if False, compute a characteristic distance otherwise.

Alternatively, create the volumes from input data.
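To make the gravity distribution concrete, here is a minimal, hedged sketch of the idea; the function name, the row-scaling, and the toy numbers are assumptions for illustration, not quetzal's actual implementation:

```python
import numpy as np

def gravity_volumes(emission, attraction, distance, power=2):
    """Distribute trips so that volume_ij is proportional to
    emission_i * attraction_j / distance_ij ** power."""
    friction = 1.0 / np.power(distance, power)       # impedance matrix
    raw = np.outer(emission, attraction) * friction  # unconstrained volumes
    # Scale each row so every origin emits exactly its total demand
    return raw * (emission / raw.sum(axis=1))[:, None]

emission = np.array([100.0, 50.0])    # trips leaving each zone
attraction = np.array([30.0, 70.0])   # attractiveness of each zone
distance = np.array([[1.0, 2.0],
                     [2.0, 1.0]])
volumes = gravity_volumes(emission, attraction, distance)
```

A higher `power` concentrates trips on nearby zones, which is the role of the gravity exponent mentioned above.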
### Load transport demand data from VP2030
The German federal government's transport study "[Bundesverkehrswegeplan 2030](https://www.bmvi.de/SharedDocs/DE/Artikel/G/BVWP/bundesverkehrswegeplan-2030-inhalte-herunterladen.html)" uses origin-destination matrices at NUTS3-level resolution and makes them accessible, under copyright restrictions, for the base year and the forecast year. These matrices cannot be published in their original form.
```
vp2010 = pd.read_excel(input_path + 'PVMatrix_BVWP15_A2010.xlsx')
vp2030 = pd.read_excel(input_path + 'PVMatrix_BVWP15_P2030.xlsx')
#print(vp2010.shape)
vp2010[vp2010.isna().any(axis=1)]
for df in [vp2010, vp2030]:
    df.rename(columns={'# Quelle': 'origin', 'Ziel': 'destination'}, inplace=True)

def get_vp2017(vp2010_i, vp2030_i):
    # Linear interpolation: 2017 lies 7 of the 20 years between 2010 and 2030
    return vp2010_i + (vp2030_i - vp2010_i) * (7/20)

# Calculate an OD table for the year 2017
vp2017 = get_vp2017(vp2010.set_index(['origin', 'destination']),
                    vp2030.set_index(['origin', 'destination']))
vp2017.dropna(how='all', inplace=True)
#print(vp2010.shape)
vp2017[vp2017.isna().any(axis=1)]
vp2017 = vp2017[list(vp2017.columns)].astype(int)
#vp2017.head()
```
### Create the volumes table
```
# Sum up trips by purpose
for suffix in ['Fz1', 'Fz2', 'Fz3', 'Fz4', 'Fz5', 'Fz6']:
    vp2017[suffix] = vp2017[[col for col in list(vp2017.columns) if col[-3:] == suffix]].sum(axis=1)
# Merge purpose 5 and 6 due to calibration data limitations
vp2017['Fz6'] = vp2017['Fz5'] + vp2017['Fz6']
# Replace LAU IDs with NUTS IDs in origin and destination
nuts_lau_dict = sm.zones.set_index('lau_id')['NUTS_ID'].to_dict()
vp2017.reset_index(level=['origin', 'destination'], inplace=True)
# Zones that appear in the VP (within Germany) but not in the model
sorted([i for i in set(list(vp2017['origin'])+list(vp2017['destination'])) -
set([int(k) for k in nuts_lau_dict.keys()]) if i<=16077])
# Most of the above numbers are airports in the VP, however
# NUTS3-level zones changed after the VP2030
# Thus the VP table needs to be updated manually
update_dict = {3156: 3159, 3152: 3159, # Göttingen
13001: 13075, 13002: 13071, 13005: 13073, 13006: 13074,
13051: 13072, 13052: 13071, 13053: 13072, 13054: 13076, 13055: 13071, 13056: 13071,
13057: 13073, 13058: 13074, 13059: 13075, 13060: 13076, 13061: 13073, 13062: 13075}
# What is the sum of all trips? For Validation
cols = [c for c in vp2017.columns if c not in ['origin', 'destination']]
orig_sum = vp2017[cols].sum().sum()
orig_sum
# Update LAU codes
vp2017['origin'] = vp2017['origin'].replace(update_dict)
vp2017['destination'] = vp2017['destination'].replace(update_dict)
sorted([i for i in set(list(vp2017['origin'])+list(vp2017['destination'])) -
set([int(k) for k in nuts_lau_dict.keys()]) if i<=16077])
# Replace LAU with NUTS
vp2017['origin'] = vp2017['origin'].astype(str).map(nuts_lau_dict)
vp2017['destination'] = vp2017['destination'].astype(str).map(nuts_lau_dict)
# Restrict to cells in the model
vp2017 = vp2017[~vp2017.isna().any(axis=1)]
vp2017.shape
# What is the sum of all trips after ditching outer-German trips?
vp_sum = vp2017[cols].sum().sum()
vp_sum / orig_sum
# Aggregate OD pairs
vp2017 = vp2017.groupby(['origin', 'destination']).sum().reset_index()
vp2017[cols].sum().sum() / orig_sum
```
### Add car ownership segments
```
sm.volumes = vp2017[['origin', 'destination', 'Fz1', 'Fz2', 'Fz3', 'Fz4', 'Fz6']
].copy().set_index(['origin', 'destination'], drop=True)
# Car availabilities from MiD2017 data
av = dict(zip(list(sm.volumes.columns),
[0.970375, 0.965208, 0.968122, 0.965517, 0.95646]))
# Split purpose cells into car ownership classes
for col in list(sm.volumes.columns):  # snapshot, since columns are added/dropped in the loop
    for car in [0, 1]:
        # car=1 gets the availability share av[col]; car=0 gets 1 - av[col]
        sm.volumes[(col, car)] = sm.volumes[col] * abs((1-car) - av[col])
    sm.volumes.drop(columns=col, inplace=True)
sm.volumes.reset_index(inplace=True)
```
## Save model
```
sm.volumes.shape
sm.volumes.columns
# Empty rows?
assert len(sm.volumes.loc[sm.volumes.sum(axis=1)==0])==0
# Saving volumes
sm.to_json(model_path + 'de_volumes', only_attributes=['volumes'], encoding='utf-8')
```
## Create validation table
Generate a normalised matrix for the year 2017 in order to validate model results against each other. It is needed for the calibration step.
```
# Merge purpose 5 and 6
for prefix in ['Bahn', 'MIV', 'Luft', 'OESPV', 'Rad', 'Fuß']:
    vp2017[prefix + '_Fz6'] = vp2017[prefix + '_Fz5'] + vp2017[prefix + '_Fz6']
vp2017 = vp2017[[col for col in list(vp2017.columns) if col[-1]!='5']]
# Merge bicycle and foot
for p in [1, 2, 3, 4, 6]:
    vp2017['non_motor_Fz' + str(p)] = vp2017['Rad_Fz' + str(p)] + vp2017['Fuß_Fz' + str(p)]
vp2017 = vp2017[[col for col in list(vp2017.columns) if not col[:3] in ['Rad', 'Fuß']]]
# Prepare columns
vp2017.set_index(['origin', 'destination'], drop=True, inplace=True)
vp2017 = vp2017[[col for col in vp2017.columns if col[:2]!='Fz']]
vp2017.columns
# Normalise
vp2017_norm = (vp2017-vp2017.min())/(vp2017.max()-vp2017.min()).max()
vp2017_norm.sample(5)
# Save normalised table
vp2017_norm.to_csv(input_path + 'vp2017_validation_normalised.csv')
vp2017_norm.columns = pd.MultiIndex.from_tuples(
[(col.split('_')[0], col.split('_')[-1]) for col in vp2017_norm.columns],
names=['mode', 'segment'])
if manual:
    vp2017_norm.T.sum(axis=1).unstack('segment').plot.pie(
        subplots=True, figsize=(16, 4), legend=False)
# Restrict to inter-cell traffic and cells of the model
vp2017_norm.reset_index(level=['origin', 'destination'], inplace=True)
vp2017_norm = vp2017_norm.loc[(vp2017_norm['origin']!=vp2017_norm['destination']) &
(vp2017_norm['origin'].notna()) &
(vp2017_norm['destination'].notna())]
vp2017_norm.set_index(['origin', 'destination'], drop=True, inplace=True)
if manual:
    vp2017_norm.T.sum(axis=1).unstack('segment').plot.pie(
        subplots=True, figsize=(16, 4), legend=False)
# Clear the RAM if notebook stays open
vp2010 = None
vp2030 = None
```
# Identifying country names from incomplete house addresses
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc">
<ul class="toc-item">
<li><span><a href="#Introduction" data-toc-modified-id="Introduction-1">Introduction</a></span></li>
<li><span><a href="#Prerequisites" data-toc-modified-id="Prerequisites-2">Prerequisites</a></span></li>
<li><span><a href="#Imports" data-toc-modified-id="Imports-3">Imports</a></span></li>
<li><span><a href="#Data-preparation" data-toc-modified-id="Data-preparation-4">Data preparation</a></span></li>
<li><span><a href="#TextClassifier-model" data-toc-modified-id="TextClassifier-model-5">TextClassifier model</a></span></li>
<ul class="toc-item">
<li><span><a href="#Load-model-architecture" data-toc-modified-id="Load-model-architecture-5.1">Load model architecture</a></span></li>
<li><span><a href="#Model-training" data-toc-modified-id="Model-training-5.2">Model training</a></span></li>
<li><span><a href="#Validate-results" data-toc-modified-id="Validate-results-5.3">Validate results</a></span></li>
<li><span><a href="#Model-metrics" data-toc-modified-id="Model-metrics-5.4">Model metrics</a></span></li>
<li><span><a href="#Get-misclassified-records" data-toc-modified-id="Get-misclassified-records-5.5">Get misclassified records</a></span></li>
<li><span><a href="#Saving-the-trained-model" data-toc-modified-id="Saving-the-trained-model-5.6">Saving the trained model</a></span></li>
</ul>
<li><span><a href="#Model-inference" data-toc-modified-id="Model-inference-6">Model inference</a></span></li>
<li><span><a href="#Conclusion" data-toc-modified-id="Conclusion-7">Conclusion</a></span></li>
<li><span><a href="#References" data-toc-modified-id="References-8">References</a></span></li>
</ul></div>
# Introduction
[Geocoding](https://en.wikipedia.org/wiki/Geocoding) is the process of taking input text, such as an **address** or the name of a place, and returning a **latitude/longitude** location for that place. In this notebook, we will be picking up a dataset consisting of incomplete house addresses from 10 countries. We will build a classifier using `TextClassifier` class of `arcgis.learn.text` module to predict the country for these incomplete house addresses.
The house addresses in the dataset consist of text in multiple languages like English, Japanese, French, Spanish, etc. The dataset is a small subset of the house addresses taken from [OpenAddresses data](http://results.openaddresses.io/).
**A note on the dataset**
- The data is collected around 2020-05-27 by [OpenAddresses](http://openaddresses.io).
- The data licenses can be found in `data/country-classifier/LICENSE.txt`.
# Prerequisites
- Data preparation and model training workflows using arcgis.learn have a dependency on [transformers](https://huggingface.co/transformers/v3.0.2/index.html). Refer to the section **"Install deep learning dependencies of arcgis.learn module"** [on this page](https://developers.arcgis.com/python/guide/install-and-set-up/#Install-deep-learning-dependencies) for detailed documentation on the installation of the dependencies.
- **Labeled data**: For `TextClassifier` to learn, it needs to see documents/texts that have been assigned a label. Labeled data for this sample notebook is located at `data/country-classifier/house-addresses.csv`
- To learn more about how `TextClassifier` works, please see the guide on [Text Classification with arcgis.learn](https://developers.arcgis.com/python/guide/text-classification).
# Imports
```
import os
import zipfile
import pandas as pd
from pathlib import Path
from arcgis.gis import GIS
from arcgis.learn import prepare_textdata
from arcgis.learn.text import TextClassifier
gis = GIS('home')
```
# Data preparation
Data preparation involves splitting the data into training and validation sets, creating the necessary data structures for loading data into the model, and so on. The `prepare_textdata()` function can directly read the training samples and automate the entire process.
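For intuition, a train/validation split like the one performed internally can be sketched in plain pandas; the column names and the 20% fraction are assumptions for illustration, not arcgis.learn internals:

```python
import pandas as pd

# Toy stand-in for the house-addresses table
df = pd.DataFrame({"Address": [f"{i} Main St" for i in range(100)],
                   "Country": ["US"] * 50 + ["GB"] * 50})

val = df.sample(frac=0.2, random_state=42)  # hold out 20% for validation
train = df.drop(val.index)                  # the rest is used for training
```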
```
training_data = gis.content.get('ab36969cfe814c89ba3b659cf734492a')
training_data
filepath = training_data.download(file_name=training_data.name)
with zipfile.ZipFile(filepath, 'r') as zip_ref:
    zip_ref.extractall(Path(filepath).parent)
DATA_ROOT = Path(os.path.join(os.path.splitext(filepath)[0]))
data = prepare_textdata(DATA_ROOT, "classification", train_file="house-addresses.csv",
text_columns="Address", label_columns="Country", batch_size=64)
```
The `show_batch()` method can be used to see the training samples, along with labels.
```
data.show_batch(10)
```
# TextClassifier model
The `TextClassifier` model in `arcgis.learn.text` is built on top of the [Hugging Face Transformers](https://huggingface.co/transformers/v3.0.2/index.html) library. The model training and inferencing workflows are similar to those of the computer vision models in `arcgis.learn`.
Run the command below to see what backbones are supported for the text classification task.
```
print(TextClassifier.supported_backbones)
```
Call the model's `available_backbone_models()` method with a backbone name to get the available models for that backbone. The call to **available_backbone_models** will list only a few of the available models for each backbone. Visit [this](https://huggingface.co/transformers/pretrained_models.html) link to get a complete list of models for each backbone.
```
print(TextClassifier.available_backbone_models("xlm-roberta"))
```
## Load model architecture
Invoke the `TextClassifier` class by passing the data and the backbone you have chosen. The dataset consists of house addresses in multiple languages like Japanese, English, French, Spanish, etc., hence we will use a [multi-lingual transformer backbone](https://huggingface.co/transformers/v3.0.2/multilingual.html) to train our model.
```
model = TextClassifier(data, backbone="xlm-roberta-base")
```
## Model training
The `learning rate`[[1]](#References) is a **tuning parameter** that determines the step size at each iteration while moving toward a minimum of a loss function; it represents the speed at which a machine learning model **"learns"**. `arcgis.learn` includes a learning rate finder, accessible through the model's `lr_find()` method, that can automatically select an **optimum learning rate** without requiring repeated experiments.
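The role of the learning rate can be seen in a toy gradient descent on f(w) = (w - 3)², a sketch unrelated to the arcgis.learn internals:

```python
def minimize(lr, steps=50, w=0.0):
    """Gradient descent on f(w) = (w - 3)**2; the gradient is 2 * (w - 3)."""
    for _ in range(steps):
        w = w - lr * 2 * (w - 3)
    return w

w_good = minimize(lr=0.1)    # converges close to the minimum at w = 3
w_tiny = minimize(lr=0.001)  # too small: barely moves in 50 steps
```

A rate that is too large would overshoot and diverge instead, which is why an automatic finder such as `lr_find()` is convenient.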
```
model.lr_find()
```
Training the model is an iterative process. We can keep training the model using its `fit()` method as long as the validation loss (or error rate) continues to go down with each training pass, also known as an epoch. This is indicative of the model learning the task.
```
model.fit(epochs=6, lr=0.001)
```
## Validate results
Once we have the trained model, we can view the results to see how it performs.
```
model.show_results(15)
```
### Test the model prediction on an input text
```
text = """1016, 8A, CL RICARDO LEON - SANTA ANA (CARTAGENA), 30319"""
print(model.predict(text))
```
## Model metrics
To get a sense of how well the model is trained, we will calculate some important metrics for our `text-classifier` model. First, to find how accurate[[2]](#References) the model is in correctly predicting the classes in the dataset, we will call the model's `accuracy()` method.
```
model.accuracy()
```
Other important metrics to look at are Precision, Recall & F1-measures [[3]](#References). To find `precision`, `recall` & `f1` scores per label/class we will call the model's `metrics_per_label()` method.
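The per-label scores have simple definitions; a pure-Python sketch (not the arcgis.learn implementation) computes them from true and predicted labels:

```python
def metrics_per_label(y_true, y_pred):
    """Return {label: (precision, recall, f1)} from two label sequences."""
    out = {}
    for lab in set(y_true) | set(y_pred):
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        out[lab] = (precision, recall, f1)
    return out

# Toy example: three addresses, one "GB" misclassified as "US"
scores = metrics_per_label(["US", "GB", "US"], ["US", "US", "US"])
```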
```
model.metrics_per_label()
```
## Get misclassified records
It's always a good idea to inspect the cases where your model is not performing well. This step will help us to:
- Identify if there is a problem in the dataset.
- Identify if there is a problem with texts/documents belonging to a specific label/class.
- Identify if there is a class imbalance in the dataset, due to which the model didn't see much labeled data for a particular class and hence was not able to learn that class properly.
To get the **misclassified records** we will call the model's `get_misclassified_records` method.
```
misclassified_records = model.get_misclassified_records()
misclassified_records.style.set_table_styles([dict(selector='th', props=[('text-align', 'left')])])\
.set_properties(**{'text-align': "left"}).hide_index()
```
## Saving the trained model
Once you are satisfied with the model, you can save it using the `save()` method. This creates an Esri Model Definition (EMD) file that can be used for inferencing on unseen data.
```
model.save("country-classifier")
```
# Model inference
The trained model can be used to classify new text documents using the `predict()` method. This method accepts a string or a list of strings and predicts the labels of these new documents/texts.
```
text_list = data._train_df.sample(15).Address.values
result = model.predict(text_list)
df = pd.DataFrame(result, columns=["Address", "CountryCode", "Confidence"])
df.style.set_table_styles([dict(selector='th', props=[('text-align', 'left')])])\
.set_properties(**{'text-align': "left"}).hide_index()
```
# Conclusion
In this notebook, we have built a text classifier using `TextClassifier` class of `arcgis.learn.text` module. The dataset consisted of house addresses of 10 countries written in languages like English, Japanese, French, Spanish, etc. To achieve this we used a [multi-lingual transformer backbone](https://huggingface.co/transformers/v3.0.2/multilingual.html) like `XLM-RoBERTa` to build a classifier to predict the country for an input house address.
# References
[1] [Learning Rate](https://en.wikipedia.org/wiki/Learning_rate)
[2] [Accuracy](https://en.wikipedia.org/wiki/Accuracy_and_precision)
[3] [Precision, recall and F1-measures](https://scikit-learn.org/stable/modules/model_evaluation.html#precision-recall-and-f-measures)
```
import os, sys
os.getcwd()
#!pip install azure-storage-blob --user
#!pip install storefact --user
import configparser
sys.path.append('/home/jovyan/.local/lib/python3.6/site-packages/')
print(sys.path)
os.path.abspath("AzureDownload/config.txt")
os.getcwd()
config = configparser.ConfigParser()
config.read("/home/jovyan/AzureDownload/config.txt")
config.sections()
```
### Credentials setup: read the WoS journal name mapping table from Azure
```
import time
from azure.storage.blob import BlockBlobService
CONTAINERNAME = "mag-2019-01-25"
BLOBNAME= "MAGwosJournalMatch/OpenSci3Journal.csv/part-00000-tid-8679026268804875386-7586e989-d017-4b12-9d5a-53fc6497ec02-1116-c000.csv"
LOCALFILENAME= "/home/jovyan/openScience/code-data/OpenSci3Journal.csv"
block_blob_service=BlockBlobService(account_name=config.get("configuration","account"),account_key=config.get("configuration","password"))
#download from blob
t1=time.time()
block_blob_service.get_blob_to_path(CONTAINERNAME,BLOBNAME,LOCALFILENAME)
t2=time.time()
print(("It takes %s seconds to download "+BLOBNAME) % (t2 - t1))
import pandas as pd
openJ = pd.read_csv('OpenSci3Journal.csv', escapechar='\\', encoding='utf-8')
openJ.count()
```
### To verify that the Spark output is consistent, we compare the pandas dataframes before and after the WoS journal mapping
```
open0 = pd.read_csv('OpenSci3.csv', escapechar='\\', encoding='utf-8')
open0.count()
```
### Compare matched MAG journal names and WoS journal names
```
openJ['Journal'] = openJ.Journal.str.lower()
openJ['WoSjournal'] = openJ.WoSjournal.str.lower()
matched = openJ[openJ['Journal'] == openJ['WoSjournal']]
matched.count()
```
### Matching with UCSD map of science journal names
```
journalMap = pd.read_csv('WoSmatch/journalName.csv')
journalMap['journal_name'] = journalMap.journal_name.str.lower()
JwosMap = journalMap[journalMap['source_type']=="Thomson"]
MAGmatched = pd.merge(openJ, JwosMap, left_on=['Journal'], right_on=['journal_name'], how='left')
MAGmatched.count()
WoSmatched = pd.merge(openJ, JwosMap, left_on=['WoSjournal'], right_on=['journal_name'], how='left')
WoSmatched.count()
```
### Combining matched journal names from WoS and MAG to the UCSD map of science
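`DataFrame.update` fills the caller with non-NaN values from the argument, aligned on index and columns, which is what lets the WoS-based matches complement the MAG-based ones. A small illustration (toy data, not the journal tables):

```python
import numpy as np
import pandas as pd

a = pd.DataFrame({"journ_id": [1.0, np.nan]})  # e.g. matches found via MAG names
b = pd.DataFrame({"journ_id": [np.nan, 2.0]})  # e.g. matches found via WoS names
a.update(b)  # NaN in b is ignored, so a keeps 1.0 and gains 2.0
```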
```
MAGmatched.update(WoSmatched)
MAGmatched.count()
```
### Mapping from matched journals to subdisciplines
```
JsubMap = pd.read_csv('WoSmatch/jounral-subdiscipline.csv')
JsubMap.journ_id = JsubMap.journ_id.astype('float64')
subMatched = pd.merge(MAGmatched, JsubMap, left_on=['journ_id'], right_on=['journ_id'], how='left').drop(columns='formal_name')
subMatched.count()
#subMatched.dtypes
subTable = pd.read_csv('WoSmatch/subdiscipline.csv')
subTable.subd_id = subTable.subd_id.astype('float64')
subNameMatched = pd.merge(subMatched, subTable, left_on=['subd_id'], right_on=['subd_id'], how='left').drop(columns=['size','x','y'])
subNameMatched.count()
```
### Since each journal has a distribution of corresponding disciplines, we will collect the discipline vectors into new columns
```
majTable = pd.read_csv('WoSmatch/discipline.csv')
majTable.disc_id = majTable.disc_id.astype('float64')
discMatched = pd.merge(subNameMatched, majTable, left_on=['disc_id'], right_on=['disc_id'], how='left').drop(columns=['color','x','y'])
discMatched.jfraction = discMatched.jfraction.astype('str')
discMatched.subd_name = discMatched.subd_name.astype('str')
discMatched.disc_name = discMatched.disc_name.astype('str')
import numpy as np  # np.nan is used below but numpy was not imported yet

temp = discMatched[['PaperId','jfraction','subd_name','disc_name']].copy()
temp['jfraction'] = discMatched.groupby(['PaperId'])['jfraction'].transform(lambda x: ';'.join(x)).replace('nan', np.nan)
temp['subd_name'] = discMatched.groupby(['PaperId'])['subd_name'].transform(lambda x: ';'.join(x)).replace('nan', np.nan)
temp['disc_name'] = discMatched.groupby(['PaperId'])['disc_name'].transform(lambda x: ';'.join(x)).replace('nan', np.nan)
temp2 = temp.drop_duplicates()
temp2.count()
OpenSci3Disc = pd.merge(MAGmatched, temp2, left_on=['PaperId'], right_on=['PaperId'], how='left').drop(columns=['source_type','journ_id','journal_name'])
OpenSci3Disc
OpenSci3Disc.to_csv('OpenSci3Discipline.csv',index=False, sep=',', encoding='utf-8')
```
In the [previous part](http://earthpy.org/pandas-basics.html) we looked at very basic ways of working with pandas. Here I am going to introduce a couple of more advanced tricks. We will use pandas' very powerful IO capabilities to create time series directly from a text file, try to create seasonal means with *resample*, and multi-year monthly means with *groupby*.
Import usual suspects and change some output formatting:
```
import pandas as pd
import numpy as np
%matplotlib inline
pd.set_option('max_rows',15) # this limit maximum numbers of rows
```
## Load data
We load data from two files, parse their dates, and create a DataFrame:
```
ham_tmin = pd.read_csv('./Ham_tmin.txt', parse_dates=True, index_col=0, names=['Time','tmin'])
ham_tmax = pd.read_csv('./Ham_tmax.txt', parse_dates=True, index_col=0, names=['Time','tmax'])
tm = pd.DataFrame({'TMAX':ham_tmax.tmax/10.,'TMIN':ham_tmin.tmin/10.})
tm
```
## Seasonal means with resample
Pandas was initially created for the analysis of financial information, so it thinks not in seasons but in quarters. We therefore have to resample our data to quarters, and we also need to shift away from standard quarters so that they correspond to seasons. This is done by using 'Q-NOV' as the time frequency, indicating that the year in our case ends in November:
```
tmd = tm.to_period(freq='D')
tmd.resample('Q-NOV').mean().head()
q_mean = tm.resample('Q-NOV').mean()  # newer pandas versions require an explicit aggregation
q_mean.head()
```
Winter temperatures
```
q_mean.index.quarter
q_mean[q_mean.index.quarter==1].plot(figsize=(8,5))
```
## Exercise
Plot the summer mean.
If you don't mind sacrificing the first two months (which, strictly speaking, can't represent the whole winter of 1890-1891), there is another way to do a similar thing by simply resampling to a 3M (3 months) interval starting from March (the third data point):
```
tm[59:63]
m3_mean = tm[59:].resample('3M', closed='left').mean()
m3_mean.head()
```
Results are different. Let's find out which one is wrong, or whether we did something silly:
```
tm[59:151]
```
Now in order to select all winter months we have to choose Februaries (last month of the season):
```
m3_mean[m3_mean.index.month==2].plot(figsize=(8,5))
```
Result is the same except for the first point.
## Exercise
Calculate means over 10-day intervals.
```
tm['TMAX'].resample('10D', closed='left').mean().plot()
```
## Multi-year monthly means with *groupby*
<img src="files/splitApplyCombine.png">
First step will be to add another column to our DataFrame with month numbers:
```
tm['mon'] = tm.index.month
tm
```
Now we can use [*groupby*](http://pandas.pydata.org/pandas-docs/stable/groupby.html) to group our values by months and calculate mean for each of the groups (month in our case):
```
monmean = tm.groupby('mon').aggregate(np.mean)
monmean.plot(kind='bar')
```
## Exercise
- Calculate and plot monthly mean temperatures for 1891-1950 and 1951-2010
- Calculate and plot the differences between these two variables
Sometimes it is useful to look at the [box plots](http://en.wikipedia.org/wiki/Box_plot) for every month:
```
ax = tm.boxplot(column=['TMAX'], by='mon')
ax = tm.boxplot(column=['TMIN'], by='mon')
```
# Semantic querying of earth observation data
Semantique (to be pronounced with a sophisticated French accent) is a structured framework for semantic querying of earth observation data.
The core of a semantic query is the **query recipe**. It contains instructions that together formulate a recipe for inference of new knowledge. These instructions can be grouped into multiple **results**, each representing a distinct piece of knowledge. A semantic query recipe is different from a regular data cube query statement because it allows you to refer directly to real-world concepts by their name, without having to be aware how these concepts are actually represented by the underlying data, and all the technical implications that come along with that. For example, you can ask how often *water* was observed at certain locations during a certain timespan, without the need to specify the rules that define how the collected data should be used to infer if an observation can actually be classified as being *water*.
These rules are instead specified in a separate component which we call the **ontology**. It maps *a priori* knowledge of the real world to the data values in the image domain. Hence, an ontology is a repository of rulesets. Each ruleset uniquely defines a **semantic concept** that exists in the real world, by formulating how this concept is represented by collected data (which may possibly be [semantically enriched](https://doi.org/10.3390/data4030102) to some extent). Usually, these rules describe a binary relationship between the data values and the semantic concepts (i.e. the rules can be evaluated to either "true" or "false"). For example:
> IF data value a > x AND data value b < y THEN water
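Such a binary ruleset can be sketched with plain arrays; the index names and thresholds below are invented for illustration and are not part of semantique:

```python
import numpy as np

# Two hypothetical data layers on a tiny 2x2 spatial grid
ndwi = np.array([[0.30, -0.10],
                 [0.60,  0.05]])   # water index ("data value a")
swir = np.array([[0.02,  0.40],
                 [0.01,  0.30]])   # shortwave infrared ("data value b")

# IF a > x AND b < y THEN water, evaluated per pixel
water = (ndwi > 0.2) & (swir < 0.1)
```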
The data and information layers are stored together in a **factbase**. A factbase is described by its layout, which is a repository of metadata objects. Each metadata object describes the content of a specific **resource** of data or information.
An ontology and a factbase should be provided when executing a semantic query recipe, together with the spatio-temporal extent in which the query should be evaluated. The query recipe itself is independent from these components. To some extent, at least. Of course, when you refer to a concept named "water" in your query recipe, it can only be executed alongside an ontology that defines how "water" can be represented by collected data, and a factbase that actually contains these data. Unfortunately, we can't do magic. However, the query recipe itself does not contain any information nor care about how "water" is defined, and all the technical details that come along with that. There is a clear separation between the *definitions of the concepts* (these are stored as rules in the ontology) and *how these definitions are applied to infer new knowledge* (this is specified as instructions in the query recipe).
That also means that query recipes remain fairly stable even when concepts are defined in a different way. For example, if we have a new technique to utilize a novel data source for water detection from space, the factbase and the ontology change. The factbase needs to contain these novel data sources, and the ontology needs to implement rules that use the new technique for water detection. However, the query *how often was water observed* remains the same, since in itself it does not contain any information on how water is defined. This is in line with the separation between the *world domain* and the *image domain*. Concepts in the world domain are fairly stable, while data and techniques in the image domain constantly change.
Hence, the explicitly separated structure makes the semantic EO data querying process as implemented in semantique different from regular EO data querying, where this separation is usually not clear and the different components are woven together into a single query statement. Thanks to this structure, semantique is useful for user groups that lack advanced technical knowledge of EO data but can benefit from its applications in their specific domain. Furthermore, it eases the interoperability of EO data analysis workflows, also for expert users.
This notebook introduces the semantique package and provides basic examples of how to use it in a common semantic querying workflow.
## Content
- [Components](#Components)
- [The query recipe](#The-query-recipe)
- [The factbase](#The-factbase)
- [The ontology](#The-ontology)
- [The spatio-temporal extent](#The-spatio-temporal-extent)
- [Additional configuration parameters](#Additional-configuration-parameters)
- [Processing](#Processing)
## Prepare
Import the semantique package:
```
import semantique as sq
```
Import other packages we will use in this demo:
```
import xarray as xr
import geopandas as gpd
import matplotlib.pyplot as plt
import numpy as np
import json
```
## Components
In semantique, a semantic query is processed by a query processor, with respect to a given ontology and factbase, and within the bounds of a given spatio-temporal extent. Below we will describe in more detail how semantique allows you to construct the required components for query processing.
### The query recipe
The first step in the semantic querying process is to construct the query recipe for inference of new knowledge. That is, you have to write *instructions* that tell the query processor what steps it should take to obtain your desired result. In semantique you can do this in a flexible manner, by combining basic building blocks with each other. Each building block represents a specific component of a result instruction, like a reference to a semantic concept or a certain processing task.
We start with an empty query recipe:
```
recipe = sq.QueryRecipe()
```
Such a [QueryRecipe](https://zgis.github.io/semantique/_generated/semantique.QueryRecipe.html) object has the same structure as a dictionary, with each element containing the instructions for a specific result. You can request as many results as you want.
Now we have to fill the empty query recipe by adding the instructions for all of our desired results one by one to our initialized recipe object. We do this by combining semantique's building blocks together in **processing chains**. A processing chain always has a *with-do structure*.
In the *with* part, you attach a block that contains a **reference** to an object that contains data or information. We call this the *input object* of the processing chain. The query processor will evaluate this reference into a multi-dimensional array containing a set of data values, and usually having at least a spatial and a temporal dimension. Each cell in this array is called a *pixel* and represents an observation on a specific location in space at a specific moment in time. We also call the array a **data cube**.
In most cases the reference in the *with* part of the processing chain will be a reference to a real-world semantic concept defined in an ontology. If the rules in the ontology describe *binary relationships* between the semantic concepts and the pixel values, the corresponding data cube will be boolean, with "true" values (i.e. 1) for those pixels that are identified as being an observation of the referenced concept, and "false" values (i.e. 0) for all other pixels in the spatio-temporal extent. In the [References notebook](references.ipynb) you can find an overview of all other types of references a processing chain may start with.
In the *do* part, you specify one or more **actions** that should be applied to the input object. Each action is a well-defined data cube operation that performs a *single* task. For example, applying a function to each pixel of a data cube, reducing a particular dimension of a data cube, filtering the pixels of a data cube based on some condition, et cetera. Each building block that represents such an action is labeled by an action word that should intuitively describe the operation it performs. Therefore we also call these type of building blocks **verbs**. In the [Verbs notebook](verbs.ipynb) you can find an overview of all implemented verbs and their functionalities.
> WITH input_object DO apply_first_action THEN apply_second_action THEN apply_third_action
So let's show a basic example of how to construct such a processing chain. You can refer to any semantic concept by using the [concept()](https://zgis.github.io/semantique/_generated/semantique.concept.html#semantique.concept) function. How to specify the reference depends on the structure of the ontology that the query will be processed against. Usually, an ontology does not only list rulesets of semantic concepts, but also formalizes a categorization of these concepts. That is, a reference to a specific semantic concept usually consists of the name of that concept, *and* the name of the category it belongs to. Optionally there can be multiple hierarchies of categories, for example to group concepts of different semantic levels (e.g. an entity *water body* is of a lower semantic level than an entity *lake*, since a lake is by definition always a water body, but a water body is not necessarily a lake). See the [Ontology section](#The-ontology) for details. The [concept()](https://zgis.github.io/semantique/_generated/semantique.concept.html#semantique.concept) function lets you specify as many levels as you need, starting with the lowest-level category, and ending with the name of the semantic concept itself.
The common lowest-level categorization groups the semantic concepts into very abstract types. For example, a semantic concept might be an *entity* (a phenomenon with a distinct and independent *existence*, e.g. a forest or a lake) or an *event* (a phenomenon that *takes place*, e.g. a fire or a flood). If the semantic concepts are stored as direct elements of these lowest-level categories without any further subdivision, we can refer to a semantic concept such as *water body* as follows.
> **NOTE** <br/> Currently we focus only on pixel-based queries. Hence, the query processor evaluates for each pixel if the observed phenomenon in that pixel is *part of* a given entity or not, considering only the data value of the pixel itself. The semantique framework is flexible enough to also support object-based approaches. In that case the rulesets of concepts should look further than only individual pixels. Creating such rulesets is still a challenge.
```
water = sq.concept("entity", "water")
print(json.dumps(water, indent = 2))
```
If you use ontologies that include sub-categories, you can simply use the same function to refer to them, in the form shown below. There is no limit on how many sub-categories you can use in a reference. Of course, this all depends on the categorization of the ontology that you will use.
```
lake = sq.concept("entity", "natural_entities", "water_bodies", "lake")
```
Note that each reference is nothing more than a textual reference. At the construction stage, no data processing is done at all. More specifically: the reference is an object of class [CubeProxy](https://zgis.github.io/semantique/_generated/semantique.CubeProxy.html), meaning that it will be evaluated into a data cube, but only when executing the query recipe.
```
type(water)
```
For convenience, commonly used lowest-level semantic concept categories (e.g. entities) are also implemented as separate construction functions, such that you can call them directly. Hence, the code below produces the same output as above.
```
water = sq.entity("water")
print(json.dumps(water, indent = 2))
```
The *do* part of the processing chain can be formulated by applying the actions as methods to the input object. Just as in the *with* part, this will not perform any action just yet. It only constructs the textual recipe for the result, which will be executed at the processing stage.
The code below shows a simple set of instructions that form the recipe for a result. The instructions consist of a single processing chain, starting with a reference to the concept "water", and subsequently applying a single action to it. During processing, this will be evaluated into a two dimensional data cube with for each location in space the number of times water was observed. Right now, it is nothing more than a textual recipe.
```
water_count = sq.entity("water").reduce("time", "count")
print(json.dumps(water_count, indent = 2))
```
Instead of saving result instructions as separate objects, we include them as elements in our recipe object. We can include as many result instructions in a single query as we want.
```
recipe["water_map"] = sq.entity("water").reduce("time", "count")
recipe["vegetation_map"] = sq.entity("vegetation").reduce("time", "count")
recipe["water_time_series"] = sq.entity("water").reduce("space", "percentage")
```
You can apply as many actions as you want simply by adding more action blocks to the chain.
```
recipe["avg_water_count"] = sq.entity("water").\
    reduce("time", "count").\
    reduce("space", "mean")
```
Some of the action blocks allow you to join information from other objects into the active evaluation object. For example, instead of only calculating the water count as shown above, we might be interested in the summed count of the concepts water and vegetation. Such an instruction can be modelled by nesting multiple processing chains.
```
recipe["summed_count"] = sq.entity("water").\
    reduce("time", "count").\
    evaluate("add", sq.entity("vegetation").reduce("time", "count"))
```
Again, it is important to notice that the query construction phase does not include *any* loading nor analysis of data or information. It simply creates a textual query recipe, which will be executed at a later stage. The query we constructed in all the examples above looks like [this](https://github.com/ZGIS/semantique/blob/main/demo/files/recipe.json).
We can export and share this query recipe as a JSON file.
```
with open("files/recipe.json", "w") as file:
    json.dump(recipe, file, indent = 2)
```
### The factbase
The factbase is the place where the raw EO data and possibly derived information layers are stored. As mentioned before, the factbase is supposed to have a *layout* file that describes its content. This file has a dictionary-like structure. Each of its elements is again a dictionary, and represents a highest-level category of resources. This nested, hierarchical structure continues depending on the number of sub-categories, until the point where you reach a metadata object belonging to a specific resource. This object summarizes the data values of that resource, and also contains information on where to find the resource inside the storage structure of the factbase. Unless you create your own factbase, you will usually not write a layout file from scratch. Instead, the factbase you are using should already come with a layout file.
Semantique utilizes the layout file to create an internal model of the factbase. It pairs it with a **retriever function**. This function is able to read a reference to a specific resource, lookup its metadata object in the layout file, and use these metadata to retrieve the corresponding data values as a data cube from the actual data storage location.
However, the exact structure of a layout file (i.e. what metadata keys it exactly contains), as well as the way the retriever function has to retrieve the actual data values, heavily depends on the format of the data storage. Data may be stored on a database server utilizing some specific database management system, simply as files on disk, or whatever else.
Therefore, semantique offers a flexible structure in which different factbase formats are modelled by different classes, with different retriever functions. All these classes inherit from an abstract base class named [Factbase](https://zgis.github.io/semantique/_generated/semantique.factbase.Factbase.html#semantique.factbase.Factbase), which serves as a general template for how a factbase should be modelled.
Currently semantique contains two built-in factbase formats. The first one is called [Opendatacube](https://zgis.github.io/semantique/_generated/semantique.factbase.Opendatacube.html) and is tailored to usage with the EO-specific [OpenDataCube](https://www.opendatacube.org/) database management system. This class has an OpenDataCube-specific retriever function that knows exactly how to retrieve data from this system. You would initialize an instance of this class by providing it the layout file, as well as an OpenDataCube connection object. This object allows the retriever function to connect with the database server and actually retrieve data from it. Probably all factbase formats that store the data on a server will need such a connection object.
```python
factbase = sq.factbase.Opendatacube(layout, connection = datacube.Datacube())
```
The second one is called [GeotiffArchive](https://zgis.github.io/semantique/_generated/semantique.factbase.GeotiffArchive.html) and has a much simpler format that assumes each resource is stored as a GeoTIFF file within a single ZIP archive. This class contains a retriever function that knows how to load GeoTIFF files as multi-dimensional arrays in Python, and how to subset (and possibly also resample and/or reproject) them to a given spatio-temporal extent. Instead of a database connection, we provide the initializer with the location of the ZIP file in which the resources are stored.
```python
factbase = sq.factbase.GeotiffArchive(layout, src = "foo.zip")
```
In the future more built-in factbase formats might be added, but as a user you can also write your own class for a specific factbase format that you use. See the [Advanced usage notebook](https://zgis.github.io/semantique/_notebooks/advanced.html#Creating-custom-factbase-classes) for details. It is important to note that the query processor does not care at all what the format of the factbase is and how resources are retrieved from the factbase. It only cares about what input the retriever function accepts, and in what format it returns the retrieved resource.
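To give an intuition of this layout-plus-retriever pattern, here is a minimal conceptual sketch in plain Python. This is *not* the actual semantique `Factbase` interface: the class name `ToyFactbase`, the `lookup` helper, the `retrieve` signature, and the `"path"` metadata key are all hypothetical, and a real retriever would also subset the resource to a given spatio-temporal extent.

```python
import numpy as np

class ToyFactbase:
    """Illustrative stand-in for a factbase format class (hypothetical API)."""

    def __init__(self, layout, data):
        self.layout = layout  # nested dict of metadata objects
        self.data = data      # here: resources kept in memory as arrays

    def lookup(self, *reference):
        # Walk the nested layout dict down to a metadata object.
        obj = self.layout
        for key in reference:
            obj = obj[key]
        return obj

    def retrieve(self, *reference):
        # Use the metadata to locate the resource and return it as an array.
        metadata = self.lookup(*reference)
        return self.data[metadata["path"]]

# A toy layout file and a toy in-memory data store.
layout = {"appearance": {"Color type": {"path": "colortype", "values": "categorical"}}}
data = {"colortype": np.array([[21, 3], [24, 1]])}

fb = ToyFactbase(layout, data)
print(fb.retrieve("appearance", "Color type"))
```

The query processor only needs the contract: a reference goes in, an array comes out; everything in between is up to the format class.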
In our examples we will use the simpler [GeotiffArchive](https://zgis.github.io/semantique/_generated/semantique.factbase.GeotiffArchive.html) factbase format. We have a set of [example resources](https://github.com/ZGIS/semantique/blob/main/demo/files/resources.zip) for a tiny [spatial extent](https://github.com/ZGIS/semantique/blob/main/demo/files/footprint.json) and only three different timestamps, as well as a [layout file](https://github.com/ZGIS/semantique/blob/main/demo/files/factbase.json) that contains all necessary metadata entries the retriever function of this format needs.
```
with open("files/factbase.json", "r") as file:
    layout = json.load(file)
factbase = sq.factbase.GeotiffArchive(layout, src = "files/resources.zip")
```
The retriever function is a method of this factbase instance, which will internally be called by the query processor whenever a specific resource is referenced.
```
hasattr(factbase, "retrieve")
```
### The ontology
The ontology plays an essential role in the semantic querying framework. It serves as the mapping between the image-domain and the real-world domain. That is, it contains rulesets that define how real-world concepts and their properties are represented by the data in the factbase. By doing that, it also formalizes how concepts are categorized and how the relations between multiple concepts and/or their properties are structured.
These rulesets are stored in a dictionary-like structure. Each of its elements is again a dictionary, and represents a highest-level category of concepts. This nested, hierarchical structure continues depending on the number of sub-categories, until the point where you reach a ruleset defining a specific concept.
In semantique, an ontology is always paired with a **translator function**. This function is able to read a reference to a specific concept, lookup its ruleset object in the ontology, and use these rules to translate the reference into a data cube. When the rules describe *binary relationships* between the semantic concepts and the data values, this data cube will be boolean, where pixels that are identified as being an observation of the concept get a "true" value (i.e. 1), and the other pixels get a "false" value (i.e. 0).
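As a toy illustration of such a binary translation, consider this numpy sketch (not the actual semantique translator; the categorical values and the "water" category indices are made up for the example):

```python
import numpy as np

# Toy "data cube": 2 timestamps x 3 x 3 spatial grid of categorical values.
cube = np.array([[[21, 5, 22],
                  [ 3, 24, 1],
                  [23, 2, 21]],
                 [[ 5, 21, 2],
                  [24, 1, 22],
                  [ 2, 23, 3]]])

# A binary rule: a pixel is "water" if its category is one of these indices.
water_categories = [21, 22, 23, 24]
is_water = np.isin(cube, water_categories)

# Boolean cube with the same dimensions: 1 = water, 0 = not water.
print(is_water.astype(int))
```

Every pixel keeps its position in space and time; only its value is replaced by the outcome of the rule.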
However, the way the rules are specified, and therefore also the way they should be evaluated by the translator function, are not fixed. Basically, you can do this in any way you want. For example, your rules could be a set of parameters for a given machine learning model, and your translator a function that knows how to run that model with those parameters. Your rules could also be paths or download links to some Python scripts, and your translator a function that knows how to execute these scripts. Hence, just as the factbase models described before, the ontology models in semantique can have many different formats.
Therefore, semantique offers a flexible structure in which different ontology formats are modelled by different classes, with different translator functions. All these classes inherit from an abstract base class named [Ontology](https://zgis.github.io/semantique/_generated/semantique.ontology.Ontology.html), which serves as a general template for how an ontology should be modelled. Currently there is only one built-in ontology format in semantique, called (unsurprisingly) [Semantique](https://zgis.github.io/semantique/_generated/semantique.ontology.Semantique.html). We will introduce this format below. As a user you can also write your own class for a specific ontology format that you use by inheriting from the abstract [Ontology](https://zgis.github.io/semantique/_generated/semantique.ontology.Ontology.html) class. See the [Advanced usage notebook](https://zgis.github.io/semantique/_notebooks/advanced.html#Creating-custom-ontology-classes) for details. It is important to note that the query processor does not care at all what the format of the ontology is and how it translates concept references. It only cares about what input the translator function accepts, and in what format it returns the translated concepts.
Back to the semantique specific ontology format. We can create an instance of it by providing a dictionary with rulesets that was shared with us. However, expert users can also create their own ontology from scratch. In that case, you'll start with an empty ontology, and iteratively fill it with rules afterwards:
```
ontology = sq.ontology.Semantique()
```
The translator function is a method of this ontology instance, which will internally be called by the query processor whenever a specific concept is referenced.
```
hasattr(ontology, "translate")
```
In this example, we will focus solely on defining entities, and use a one-layer categorization. That is, our only category is *entity*. The first step is to add this category as an element to the ontology. Its value can still be an empty dictionary. We will add the concept definitions afterwards.
> **NOTE** <br/> The examples we use below are heavily simplified and don't always make sense, but are meant mainly to get an idea of how the package works.
```
ontology["entity"] = {}
```
Let's first look deeper into the structure of concept definitions. Each concept is defined by one or more named **properties** it has. For example, an entity *lake* may be defined by its *color* (a blueish, water-like color) in combination with its *texture* (it has an approximately flat surface). That is, the ruleset of a semantic concept definition is a set of distinct property definitions.
Now, we need to construct rules that define a binary relationship between a property and the data values in the factbase. That is, the rules should define for each pixel in our data if it meets a specific property ("true"), or not ("false"). In the Semantique-format, we can do this by utilizing the same building blocks as we did for constructing our query recipe. The only difference is that a processing chain will now usually start with a reference to a factbase resource. During query processing, this reference will be sent to the retriever function of the factbase, which will return a data cube filled with the requested data values. Then, pre-defined actions will be applied to this data cube. Usually these actions will encompass the evaluation of a comparison operator, in which the value of each pixel is compared to some constant (set of) value(s), returning a "true" value (i.e. 1) when the comparison holds, and a "false" value (i.e. 0) otherwise.
For example: we utilize the "Color type" resource to define if a pixel has a water-like color. This resource is a layer of semantically enriched data and contains categorical values. The categories with indices 21, 22, 23 and 24 correspond to color combinations that *appear* to be water. Hence, we state that a pixel meets the color property of a lake when its value in this "Color type" resource corresponds with one of the above mentioned indices. Furthermore, we state that a pixel meets the texture property of a lake when its value in the "slope" resource equals 0.
```
ontology["entity"]["lake"] = {
    "color": sq.appearance("Color type").evaluate("in", [21, 22, 23, 24]),
    "texture": sq.topography("slope").evaluate("equal", 0)
}
```
To define the entity, its property cubes are combined using an [all()](https://zgis.github.io/semantique/_generated/semantique.processor.reducers.all_.html) merger. That means that a pixel is evaluated as being part of an entity if and only if it meets *all* properties of that entity.
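The effect of such an "all" merger can be mimicked with plain numpy: the boolean property cubes are combined with a logical AND (a toy sketch with made-up values, not the actual semantique reducer implementation):

```python
import numpy as np

# Two boolean property cubes over the same 2 x 2 spatial extent.
color_ok = np.array([[True, True], [False, True]])     # water-like color
texture_ok = np.array([[True, False], [False, True]])  # flat surface

# The "all" merger: a pixel belongs to the entity iff every property holds.
lake = np.logical_and(color_ok, texture_ok)
print(lake.astype(int))
```

A pixel that meets only one of the two properties is therefore excluded from the entity.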
Now we define a second entity *river*, which we say has the same color property as a lake, but instead has a non-zero slope.
> **NOTE** <br/> Different entities do not *need* to have the same properties defined.
```
ontology["entity"]["river"] = {
    "color": sq.appearance("Color type").evaluate("in", [21, 22, 23, 24]),
    "texture": sq.topography("slope").evaluate("not_equal", 0)
}
```
As you see, there is a relation between the entities *lake* and *river*. They share a property. However, we defined the same property twice. This is not needed, because in the Semantique-format, you can always refer to other entities in your ontology, as well as to properties in these entities. In this way, you can intuitively model relations between different semantic concepts. Hence, the same *river* definition can also be structured as follows:
```
ontology["entity"]["river"] = {
    "color": sq.entity("lake", property = "color"),
    "texture": sq.entity("lake", property = "texture").evaluate("invert")
}
```
Or, to take it a step further, as below. Basically we are saying here that a *lake* has the color of *water* and the texture of a *plain* (again, we oversimplify here!).
```
ontology["entity"]["water"] = {
    "color": sq.appearance("Color type").evaluate("in", [21, 22, 23, 24]),
}
ontology["entity"]["vegetation"] = {
    "color": sq.appearance("Color type").evaluate("in", [1, 2, 3, 4, 5, 6]),
}
ontology["entity"]["plain"] = {
    "color": sq.entity("vegetation", property = "color"),
    "texture": sq.topography("slope").evaluate("equal", 0)
}
ontology["entity"]["lake"] = {
    "color": sq.entity("water", property = "color"),
    "texture": sq.entity("plain", property = "texture")
}
ontology["entity"]["river"] = {
    "color": sq.entity("water", property = "color"),
    "texture": sq.entity("plain", property = "texture").evaluate("invert")
}
```
We can also model relationships in which one entity is the union of other entities.
```
ontology["entity"]["natural_area"] = {
    "members": sq.collection(sq.entity("water"), sq.entity("vegetation")).merge("or")
}
```
It is also possible to include temporal information. For example, we only consider an observation to be part of a lake when over time more than 80% of the observations at that location are identified as water, excluding those observations that are identified as a cloud.
```
ontology["entity"]["lake"] = {
    "color": sq.entity("water", property = "color"),
    "texture": sq.entity("plain", property = "texture"),
    "continuity": sq.entity("water", property = "color").\
        filter(sq.entity("cloud").evaluate("invert")).\
        reduce("time", "percentage").\
        evaluate("greater", 80)
}
```
The flexible structure with the building blocks of semantique makes many more constructions possible. Now that you have an idea of how to construct an ontology from scratch using the built-in Semantique-format, we move on and construct a complete ontology in one go. We use simpler rulesets than above, since our demo factbase only contains a very limited set of resources.
```
ontology = sq.ontology.Semantique()
ontology["entity"] = {}
ontology["entity"]["water"] = {"color": sq.appearance("Color type").evaluate("in", [21, 22, 23, 24])}
ontology["entity"]["vegetation"] = {"color": sq.appearance("Color type").evaluate("in", [1, 2, 3, 4, 5, 6])}
ontology["entity"]["builtup"] = {"color": sq.appearance("Color type").evaluate("in", [13, 14, 15, 16, 17])}
ontology["entity"]["cloud"] = {"color": sq.atmosphere("Color type").evaluate("equal", 25)}
ontology["entity"]["snow"] = {"color": sq.appearance("Color type").evaluate("in", [29, 30])}
```
Our constructed ontology looks like [this](https://github.com/ZGIS/semantique/blob/main/demo/files/ontology.json). We can export and share this ontology as a JSON file.
```
with open("files/ontology.json", "w") as file:
    json.dump(ontology, file, indent = 2)
```
That also means that as non-experts we don't have to worry about constructing our own ontology from scratch. We can simply load a shared ontology in the same way as we loaded the layout file of the factbase, and construct the ontology object accordingly.
```
with open("files/ontology.json", "r") as file:
    rules = json.load(file)
ontology = sq.ontology.Semantique(rules)
```
### The spatio-temporal extent
Semantic query recipes are general recipes for inference of new knowledge. In theory, they are not restricted to specific areas or specific timespans. However, the recipes are executed with respect to given spatio-temporal bounds. That is, we need to provide both a spatial and temporal extent when executing a semantic query recipe.
To model a spatial extent, semantique contains the [SpatialExtent](https://zgis.github.io/semantique/_generated/semantique.extent.SpatialExtent.html) class. An instance of this class can be initialized by providing it any object that can be read by the [GeoDataFrame](https://geopandas.org/docs/reference/api/geopandas.GeoDataFrame.html) initializer of the [geopandas](https://geopandas.org/en/stable/) package. Any additional keyword arguments will be forwarded to this initializer. In practice, this means you can read any GDAL-supported file format with [geopandas.read_file()](https://geopandas.org/en/stable/docs/reference/api/geopandas.read_file.html), and then use that object to initialize a spatial extent. In this demo we use a small, rectangular area around Zell am See in Salzburger Land, Austria.
```
geodf = gpd.read_file("files/footprint.geojson")
geodf.explore()
space = sq.SpatialExtent(geodf)
```
To model a temporal extent, semantique contains the [TemporalExtent](https://zgis.github.io/semantique/_generated/semantique.extent.TemporalExtent.html) class. An instance of this class can be initialized by providing it the first timestamp of the timespan, and the last timestamp of the timespan. The given interval is treated as being closed at both sides.
```
time = sq.TemporalExtent("2019-01-01", "2020-12-31")
```
Just as with the spatial extent, there is a lot of flexibility in how you can provide your timestamps. You can provide dates in formats such as "2020-12-31" or "2020/12/31", but also complete ISO8601 timestamps such as "2020-12-31T14:37:22". As long as the [Timestamp](https://pandas.pydata.org/docs/reference/api/pandas.Timestamp.html) initializer of the [pandas](https://pandas.pydata.org/) package can understand it, it is supported by semantique. Any additional keyword arguments will be forwarded to this initializer.
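A quick check of that parsing flexibility with pandas itself:

```python
import pandas as pd

# All of these parse to valid timestamps.
print(pd.Timestamp("2020-12-31"))           # date only
print(pd.Timestamp("2020/12/31"))           # alternative separator, same moment
print(pd.Timestamp("2020-12-31T14:37:22"))  # full ISO8601 timestamp
```

A date without a time component is interpreted as midnight of that day, which is why date-only bounds work for a closed interval.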
### Additional configuration parameters
The last thing we have left before executing our semantic query recipe is to define some additional configuration parameters. This includes the desired coordinate reference system (CRS) in which spatial coordinates should be represented, as well as the time zone in which temporal coordinates should be represented. You should also provide the desired spatial resolution of your output, as a list containing respectively the y and x resolution in CRS units (i.e. usually meters for projected CRS and degrees for geographic CRS) and including direction. Note that for most CRS, this means that the first value (i.e. the y-value) of the resolution will always be negative.
There are also other configuration parameters that can be included to tune the behaviour of the query processor. See the [Advanced usage notebook](advanced.ipynb) for details.
```
config = {"crs": 3035, "tz": "UTC", "spatial_resolution": [-10, 10]}
```
## Processing
Now that we have all components constructed, we are ready to execute our semantic query recipe. Hooray! This step is quite simple. You call the [execute()](https://zgis.github.io/semantique/_generated/semantique.QueryRecipe.execute.html#semantique.QueryRecipe.execute) method of our recipe object, and provide it the factbase object, the ontology object, the spatial and temporal extents, and the additional configuration parameters. Then, just be a bit patient... Internally, the query processor will solve all references, evaluate them into data cubes, and apply the defined actions to them. In the [Advanced usage notebook](https://zgis.github.io/semantique/_notebooks/advanced.html#The-query-processor-class) the implementation of query processing is described in some more detail.
```
response = recipe.execute(factbase, ontology, space, time, **config)
```
The response of the query is a dictionary with one element per result.
```
for key in response.keys():
    print(key)
```
Each result is stored as an instance of the [DataArray](http://xarray.pydata.org/en/stable/user-guide/data-structures.html#dataarray) class from the [xarray](https://docs.xarray.dev/en/stable/) package, which serves as the backbone for most of the analysis tasks the query processor performs.
```
for x in response.values():
    print(type(x))
```
The dimensions of the arrays depend on the actions that were called in the result instruction. Some results might only have spatial dimensions (i.e. a map).
```
response["water_map"]
```
Other results might only have the temporal dimension (i.e. a time series).
```
response["water_time_series"]
```
And other results might even be dimensionless (i.e. a single aggregated value).
```
response["avg_water_count"]
```
There may also be results that contain both the spatial and temporal dimension, as well as results that contain an additional, thematic dimension.
Since the result objects are [DataArray](http://xarray.pydata.org/en/stable/user-guide/data-structures.html#dataarray) objects, we can use xarray for any further processing, and also to visualize the results. Again, see the [xarray documentation](http://xarray.pydata.org/en/stable/index.html) for more details on what that package has to offer (which is a lot!). For now, we will just plot some of our obtained results to give an impression. In the [Gallery notebook](gallery.ipynb) you can find much more of such examples.
```
f, (ax1, ax2) = plt.subplots(1, 2, figsize = (15, 5))
water_count = response["water_map"]
values = list(range(int(np.nanmin(water_count)), int(np.nanmax(water_count)) + 1))
levels = [x - 0.5 for x in values + [max(values) + 1]]
colors = plt.cm.Blues
water_count.plot(ax = ax1, levels = levels, cmap = colors, cbar_kwargs = {"ticks": values, "label": "count"})
ax1.set_title("Water")
vegetation_count = response["vegetation_map"]
values = list(range(int(np.nanmin(vegetation_count)), int(np.nanmax(vegetation_count)) + 1))
levels = [x - 0.5 for x in values + [max(values) + 1]]
colors = plt.cm.Greens
vegetation_count.plot(ax = ax2, levels = levels, cmap = colors, cbar_kwargs = {"ticks": values, "label": "count"})
ax2.set_title("Vegetation")
plt.tight_layout()
plt.draw()
```
Do note how the water count map contains many pixels that are counted as water but are clearly not water in the real world. Instead, these pixels correspond to observations in the shadow of a mountain. The color of water and shadow on a satellite image is very similar. Since our ontology defines water based only on its *color* property, it cannot differentiate water from shadow. This shows how important it is for accurate results to use multiple properties in entity definitions!
# Customize a TabNet Model
## This tutorial gives examples of how to easily customize a TabNet Model
### 1 - Customizing your learning rate scheduler
Almost all classical pytorch schedulers are now easy to integrate with pytorch-tabnet
### 2 - Use your own loss function
It's really easy to use any pytorch loss function with TabNet; we'll walk you through that
### 3 - Customizing your evaluation metric and evaluation sets
Like XGBoost, you can easily monitor different metrics on different evaluation sets with pytorch-tabnet
```
from pytorch_tabnet.tab_model import TabNetClassifier
import torch
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import roc_auc_score
import pandas as pd
import numpy as np
np.random.seed(0)
import os
import wget
from pathlib import Path
from matplotlib import pyplot as plt
%matplotlib inline
```
### Download census-income dataset
```
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
dataset_name = 'census-income'
out = Path(os.getcwd()+'/data/'+dataset_name+'.csv')
out.parent.mkdir(parents=True, exist_ok=True)
if out.exists():
    print("File already exists.")
else:
    print("Downloading file...")
    wget.download(url, out.as_posix())
```
### Load data and split
```
train = pd.read_csv(out)
target = ' <=50K'
if "Set" not in train.columns:
    train["Set"] = np.random.choice(["train", "valid", "test"], p =[.8, .1, .1], size=(train.shape[0],))
train_indices = train[train.Set=="train"].index
valid_indices = train[train.Set=="valid"].index
test_indices = train[train.Set=="test"].index
```
### Simple preprocessing
Label encode categorical features and fill empty cells.
```
nunique = train.nunique()
types = train.dtypes
categorical_columns = []
categorical_dims = {}
for col in train.columns:
    if types[col] == 'object' or nunique[col] < 200:
        print(col, train[col].nunique())
        l_enc = LabelEncoder()
        train[col] = train[col].fillna("VV_likely")
        train[col] = l_enc.fit_transform(train[col].values)
        categorical_columns.append(col)
        categorical_dims[col] = len(l_enc.classes_)
    else:
        # Fill only this column with the train-set mean (not the whole frame)
        train[col] = train[col].fillna(train.loc[train_indices, col].mean())
```
### Define categorical features for categorical embeddings
```
unused_feat = ['Set']
features = [ col for col in train.columns if col not in unused_feat+[target]]
cat_idxs = [ i for i, f in enumerate(features) if f in categorical_columns]
cat_dims = [ categorical_dims[f] for i, f in enumerate(features) if f in categorical_columns]
```
# 1 - Customizing your learning rate scheduler
TabNetClassifier, TabNetRegressor and TabNetMultiTaskClassifier all take two arguments:
- scheduler_fn : Any torch.optim.lr_scheduler should work
- scheduler_params : A dictionary that contains the parameters of your scheduler (without the optimizer)
----
NB1 : Some schedulers like torch.optim.lr_scheduler.ReduceLROnPlateau depend on the evolution of a metric; pytorch-tabnet will use the early-stopping metric you asked for (the last eval_metric, see section 3) to perform the scheduler updates
EX1 :
```
scheduler_fn=torch.optim.lr_scheduler.ReduceLROnPlateau
scheduler_params={"mode":'max', # max because default eval metric for binary is AUC
                  "factor":0.1,
                  "patience":1}
```
-----
NB2 : Some schedulers require updates at batch level. They can be used very easily: the only thing to do is to set `is_batch_level` to True in your `scheduler_params`
EX2:
```
scheduler_fn=torch.optim.lr_scheduler.CyclicLR
scheduler_params={"is_batch_level":True,
                  "base_lr":1e-3,
                  "max_lr":1e-2,
                  "step_size_up":100
                  }
```
-----
NB3 : Note that you can also customize your optimizer function; any torch optimizer should work
```
# Network parameters
max_epochs = 20 if not os.getenv("CI", False) else 2
batch_size = 1024
clf = TabNetClassifier(cat_idxs=cat_idxs,
cat_dims=cat_dims,
cat_emb_dim=1,
optimizer_fn=torch.optim.Adam, # Any optimizer works here
optimizer_params=dict(lr=2e-2),
scheduler_fn=torch.optim.lr_scheduler.OneCycleLR,
scheduler_params={"is_batch_level":True,
"max_lr":5e-2,
"steps_per_epoch":int(train.shape[0] / batch_size)+1,
"epochs":max_epochs
},
mask_type='entmax', # "sparsemax",
)
```
### Training
```
X_train = train[features].values[train_indices]
y_train = train[target].values[train_indices]
X_valid = train[features].values[valid_indices]
y_valid = train[target].values[valid_indices]
X_test = train[features].values[test_indices]
y_test = train[target].values[test_indices]
```
# 2 - Use your own loss function
The default loss for classification is torch.nn.functional.cross_entropy
The default loss for regression is torch.nn.functional.mse_loss
Any differentiable loss function of the type lambda y_pred, y_true : loss(y_pred, y_true) should work if it uses torch operations (to allow gradient computation).
In particular, any pytorch loss function should work.
Once your loss is defined, simply pass it as the loss_fn argument when defining your model.
/!\ : One important thing to keep in mind is that when computing the loss for TabNetClassifier and TabNetMultiTaskClassifier, you'll need to apply torch.nn.Softmax() to y_pred yourself, as softmax is only applied automatically to the final model predictions.
NB : TabNet also has an internal loss (the sparsity loss) which is added to loss_fn; the weight of the sparsity loss can be adjusted with the `lambda_sparse` parameter
```
def my_loss_fn(y_pred, y_true):
"""
Dummy example similar to using default torch.nn.functional.cross_entropy
"""
softmax_pred = torch.nn.Softmax(dim=-1)(y_pred)
logloss = (1-y_true)*torch.log(softmax_pred[:,0])
logloss += y_true*torch.log(softmax_pred[:,1])
return -torch.mean(logloss)
```
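As a sanity check, the same binary log-loss can be reproduced outside torch (a numpy sketch, assuming hard 0/1 labels and a two-column logit matrix; `np_binary_logloss` is a hypothetical helper, not part of pytorch-tabnet):

```python
import numpy as np

def np_binary_logloss(logits, y_true):
    # softmax over the two logits, then negative log-likelihood of the true class
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)
    ll = (1 - y_true) * np.log(p[:, 0]) + y_true * np.log(p[:, 1])
    return -ll.mean()

# two equal logits give p = 0.5, so the loss is -log(0.5) ≈ 0.6931
print(np_binary_logloss(np.zeros((1, 2)), np.array([1])))
```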
# 3 - Customizing your evaluation metric and evaluations sets
When calling the `fit` method you can specify:
- eval_set : a list of tuples like (X_valid, y_valid)
Note that the last value of this list will be used for early stopping
- eval_name : a list to name each eval set
default will be val_0, val_1 ...
- eval_metric : a list of default metrics or custom metrics
Default : "auc", "accuracy", "logloss", "balanced_accuracy", "mse", "rmse"
NB : If no eval_set is given, no early stopping will occur (patience is then ignored) and the weights used will be the last epoch's weights
NB2 : If `patience<=0` this will disable early stopping
NB3 : Setting `patience` to `max_epochs` ensures that training won't be stopped early, but the best weights from the best epoch will be used (instead of the last epoch's weights when early stopping is disabled)
```
from pytorch_tabnet.metrics import Metric
class my_metric(Metric):
"""
2xAUC.
"""
def __init__(self):
self._name = "custom" # write an understandable name here
self._maximize = True
def __call__(self, y_true, y_score):
"""
Compute AUC of predictions.
Parameters
----------
y_true: np.ndarray
Target matrix or vector
y_score: np.ndarray
Score matrix or vector
Returns
-------
float
AUC of predictions vs targets.
"""
return 2*roc_auc_score(y_true, y_score[:, 1])
clf.fit(
X_train=X_train, y_train=y_train,
eval_set=[(X_train, y_train), (X_valid, y_valid)],
eval_name=['train', 'val'],
eval_metric=["auc", my_metric],
max_epochs=max_epochs , patience=0,
batch_size=batch_size,
virtual_batch_size=128,
num_workers=0,
weights=1,
drop_last=False,
loss_fn=my_loss_fn
)
# plot losses
plt.plot(clf.history['loss'])
# plot auc
plt.plot(clf.history['train_auc'])
plt.plot(clf.history['val_auc'])
# plot learning rates
plt.plot(clf.history['lr'])
```
## Predictions
```
preds = clf.predict_proba(X_test)
test_auc = roc_auc_score(y_score=preds[:,1], y_true=y_test)
preds_valid = clf.predict_proba(X_valid)
valid_auc = roc_auc_score(y_score=preds_valid[:,1], y_true=y_valid)
print(f"FINAL VALID SCORE FOR {dataset_name} : {clf.history['val_auc'][-1]}")
print(f"FINAL TEST SCORE FOR {dataset_name} : {test_auc}")
# check that the last epoch's weights are used
assert np.isclose(valid_auc, clf.history['val_auc'][-1], atol=1e-6)
```
# Save and load Model
```
# save tabnet model
saving_path_name = "./tabnet_model_test_1"
saved_filepath = clf.save_model(saving_path_name)
# define new model with basic parameters and load state dict weights
loaded_clf = TabNetClassifier()
loaded_clf.load_model(saved_filepath)
loaded_preds = loaded_clf.predict_proba(X_test)
loaded_test_auc = roc_auc_score(y_score=loaded_preds[:,1], y_true=y_test)
print(f"FINAL TEST SCORE FOR {dataset_name} : {loaded_test_auc}")
assert(test_auc == loaded_test_auc)
```
# Global explainability : feat importance summing to 1
```
clf.feature_importances_
```
# Local explainability and masks
```
explain_matrix, masks = clf.explain(X_test)
fig, axs = plt.subplots(1, 3, figsize=(20,20))
for i in range(3):
axs[i].imshow(masks[i][:50])
axs[i].set_title(f"mask {i}")
```
# XGB
```
from xgboost import XGBClassifier
clf_xgb = XGBClassifier(max_depth=8,
learning_rate=0.1,
n_estimators=1000,
verbosity=0,
silent=None,
objective='binary:logistic',
booster='gbtree',
n_jobs=-1,
nthread=None,
gamma=0,
min_child_weight=1,
max_delta_step=0,
subsample=0.7,
colsample_bytree=1,
colsample_bylevel=1,
colsample_bynode=1,
reg_alpha=0,
reg_lambda=1,
scale_pos_weight=1,
base_score=0.5,
random_state=0,
seed=None,)
clf_xgb.fit(X_train, y_train,
eval_set=[(X_valid, y_valid)],
early_stopping_rounds=40,
verbose=10)
preds = np.array(clf_xgb.predict_proba(X_valid))
valid_auc = roc_auc_score(y_score=preds[:,1], y_true=y_valid)
print(valid_auc)
preds = np.array(clf_xgb.predict_proba(X_test))
test_auc = roc_auc_score(y_score=preds[:,1], y_true=y_test)
print(test_auc)
```
# ANDES Demonstration of `DGPRCTExt` on IEEE 14-Bus System
Prepared by Jinning Wang. Last revised 12 September 2021.
## Background
Voltage signal is set manually to demonstrate `DGPRCTExt`.
In the modified IEEE 14-bus system, 10 `PVD1` are connected to `Bus4`, and 1 `DGPRCTExt` is added aiming at `PVD1_2`.
## Conclusion
`DGPRCTExt` can be used to implement protection on `DG` models, where the voltage signal can be manipulated manually. This feature allows co-simulation where you can feed an external voltage signal into ANDES via the `set` function.
```
import andes
from andes.utils.paths import get_case
andes.config_logger(stream_level=30)
ss = andes.load(get_case('ieee14/ieee14_dgprctext.xlsx'),
setup=False,
no_output=True)
ss.setup()
# use constant power model for PQ
ss.PQ.config.p2p = 1
ss.PQ.config.q2q = 1
ss.PQ.config.p2z = 0
ss.PQ.config.q2z = 0
# turn off under-voltage PQ-to-Z conversion
ss.PQ.pq2z = 0
ss.PFlow.run()
```
## Simulation
Let's run the simulation and manipulate the voltage signal manually.
1) run the TDS to 1s.
```
ss.TDS.config.tf = 1
ss.TDS.run()
```
2) store initial Bus4 voltage value.
```
bus4v0 = ss.Bus.v.v[3]
```
3) set the external voltage to 0.7 manually.
```
ss.DGPRCTExt.set(src='v', idx='DGPRCTExt_1', attr='v', value=0.7)
```
4) continue the TDS to 5s.
```
ss.TDS.config.tf = 5
ss.TDS.run()
```
5) reset the external voltage back to normal manually.
```
ss.DGPRCTExt.set(src='v', idx='DGPRCTExt_1', attr='v', value=bus4v0)
```
6) continue the TDS to 10s.
```
ss.TDS.config.tf = 10
ss.TDS.run()
```
## Results
### system frequency
```
ss.TDS.plt.plot(ss.GENROU.omega,
ycalc=lambda x:60*x,
title='Generator Speed $\omega$')
```
### Lock flag
The lock flag is raised after `TVl1` once the voltage drops below `Vl1`.
```
ss.TDS.plt.plot(ss.DGPRCTExt.ue,
title='DGPRCTExt\_1 lock flag (applied on PVD1\_2)')
```
### PVD1_2 read frequency and frequency signal source
The `PVD1_2` read frequency is locked, but the signal source (in `BusFreq 4`) remains unchanged.
```
ss.TDS.plt.plot(ss.PVD1.fHz,
a=(0,1),
title='PVD1 Read f')
ss.TDS.plt.plot(ss.DGPRCTExt.fHz,
title='BusFreq 4 Output f')
```
### PVD1_2 power command
`PVD1_2` power commands are locked to 0 **immediately**.
Once the protection is released, they return to normal **immediately**.
```
ss.TDS.plt.plot(ss.PVD1.Psum,
a=(0,1),
title='PVD1 $P_{tot}$ (active power command)')
ss.TDS.plt.plot(ss.PVD1.Qsum,
a=(0,1),
title='PVD1 $Q_{tot}$ (reactive power command)')
```
### PVD1_2 current command
Consequently, `PVD1_2` current commands are locked to 0 **immediately**.
Once the protection is released, they return to normal **immediately**.
```
ss.TDS.plt.plot(ss.PVD1.Ipul,
a=(0,1),
title='PVD1 $I_{p,ul}$ (current command before hard limit)')
ss.TDS.plt.plot(ss.PVD1.Iqul,
a=(0,1),
title='PVD1 $I_{q,ul}$ (current command before hard limit)')
```
### PVD1_2 output current
As a result, `PVD1_2` output current decreases to 0 **gradually**.
When the protection is released, it returns to normal **gradually**.
Here, the `PVD1` output current `Lag` time constants (`tip` and `tiq`) are modified to 0.5, which is only for observation.
Usually, power electronic devices can respond at the millisecond level.
```
ss.TDS.plt.plot(ss.PVD1.Ipout_y,
a=(0,1),
title='PVD1 $I_{p,out}$ (actual output current)')
ss.TDS.plt.plot(ss.PVD1.Iqout_y,
a=(0,1),
title='PVD1 $I_{q,out}$ (actual output current)')
```
## Cleanup
```
!andes misc -C
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn as sk
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import MinMaxScaler
# Dataset taken from https://www.kaggle.com/aashi20/top-50-spotify-songs
# Top 50 Spotify songs in 2019.
data = pd.read_csv("datasets/top50.csv")
print(data)
data = pd.read_csv("datasets/top50.csv")
data.head(15)
NewData = pd.DataFrame()
Dta = pd.get_dummies(data['Genre'])
NewData = pd.concat([NewData, Dta])
NewData.head()
dat = data['Beats.Per.Minute']
plt.hist(np.log(dat), bins=50)
plt.show()
dat = np.array(np.log(dat)).reshape(-1, 1)
dat = MinMaxScaler().fit_transform(dat).flatten()
NewData['Beats.Per.Minute'] = dat
NewData.head(15)
dat = data['Energy']
plt.hist(np.log(dat), bins=50)
plt.show()
dat = np.array(np.log(dat)).reshape(-1, 1)
dat = MinMaxScaler().fit_transform(dat).flatten()
NewData['Energy'] = dat
NewData.head(15)
dat = data['Danceability']
dat = np.clip(dat, 55, 100)
plt.hist(dat**0.5, bins=50)
plt.show()
dat = np.array(dat**0.5).reshape(-1, 1)
dat = MinMaxScaler().fit_transform(dat).flatten()
NewData['Danceability'] = dat
NewData.head(15)
dat = data['Loudness..dB..']
dat = np.clip(dat, -10, 0)
dat = np.array(dat).reshape(-1, 1)
dat = MinMaxScaler().fit_transform(dat).flatten()
NewData['Loudness..dB..'] = dat
NewData.head(15)
dat = data['Liveness']
dat = np.clip(dat, 0, 25)
plt.hist(np.log(dat), bins=50)
plt.show()
dat = np.array(np.log(dat)).reshape(-1, 1)
dat = MinMaxScaler().fit_transform(dat).flatten()
NewData['Liveness'] = dat
NewData.head(15)
dat = data['Valence.']
plt.hist(dat**0.5, bins=50)
plt.show()
dat = np.array(dat**0.5).reshape(-1, 1)
dat = MinMaxScaler().fit_transform(dat).flatten()
NewData['Valence.'] = dat
NewData.head(15)
dat = data['Length.']
dat = np.clip(dat, 140, 500)
plt.hist(dat**0.5, bins=50)
plt.show()
dat = np.array(dat**0.5).reshape(-1, 1)
dat = MinMaxScaler().fit_transform(dat).flatten()
NewData['Length.'] = dat
NewData.head(15)
dat = data['Acousticness..']
plt.hist(dat**0.5, bins=50)
plt.show()
dat = np.array(dat**0.5).reshape(-1, 1)
dat = MinMaxScaler().fit_transform(dat).flatten()
NewData['Acousticness..'] = dat
NewData.head(15)
dat = data['Speechiness.']
dat = np.clip(dat, 0, 40)
plt.hist(dat**0.5, bins=50)
plt.show()
dat = np.array(dat**0.5).reshape(-1, 1)
dat = MinMaxScaler().fit_transform(dat).flatten()
NewData['Speechiness.'] = dat
NewData.head(15)
dat = data['Popularity']
dat = np.clip(dat, 77, 100)
plt.hist(dat**0.5, bins=50)
plt.show()
dat = np.array(dat**0.5).reshape(-1, 1)
dat = MinMaxScaler().fit_transform(dat).flatten()
NewData['Popularity'] = dat
NewData.head(15)
```
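The repeated clip / transform / min-max rescale pattern in the cells above could be factored into one helper; a sketch using numpy only (`minmax_scale_column` is a hypothetical name, not part of the original notebook):

```python
import numpy as np

def minmax_scale_column(values, transform=None, clip=None):
    # optionally clip outliers, apply a transform (e.g. np.log or np.sqrt),
    # then rescale to the [0, 1] range
    arr = np.asarray(values, dtype=float)
    if clip is not None:
        arr = np.clip(arr, *clip)
    if transform is not None:
        arr = transform(arr)
    lo, hi = arr.min(), arr.max()
    return (arr - lo) / (hi - lo)

# e.g. NewData['Liveness'] = minmax_scale_column(data['Liveness'], np.log, clip=(0, 25))
```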
# CH. 7 - TOPIC MODELS
## Activities
#### Activity 1
```
# not necessary
# added to suppress warnings coming from pyLDAvis
import warnings
warnings.filterwarnings('ignore')
import langdetect # language detection
import matplotlib.pyplot # plotting
import nltk # natural language processing
import numpy # arrays and matrices
import pandas # dataframes
import pyLDAvis # plotting
import pyLDAvis.sklearn # plotting
import regex # regular expressions
import sklearn # machine learning
# define path
path = '~/packt-data/topic-model-health-tweets/latimeshealth.txt'
# load data
df = pandas.read_csv(path, sep="|", header=None)
df.columns = ["id", "datetime", "tweettext"]
# define quick look function for data frame
def dataframe_quick_look(df, nrows):
print("SHAPE:\n{shape}\n".format(shape=df.shape))
print("COLUMN NAMES:\n{names}\n".format(names=df.columns))
print("HEAD:\n{head}\n".format(head=df.head(nrows)))
dataframe_quick_look(df, nrows=2)
# view final data that will be carried forward
raw = df['tweettext'].tolist()
print("HEADLINES:\n{lines}\n".format(lines=raw[:5]))
print("LENGTH:\n{length}\n".format(length=len(raw)))
# define function for checking language of tweets
# filter to english only
def do_language_identifying(txt):
try:
the_language = langdetect.detect(txt)
except:
the_language = 'none'
return the_language
# define function to perform lemmatization
def do_lemmatizing(wrd):
out = nltk.corpus.wordnet.morphy(wrd)
return (wrd if out is None else out)
# define function to clean tweet data
def do_tweet_cleaning(txt):
# identify language of tweet
# return null if language not english
lg = do_language_identifying(txt)
if lg != 'en':
return None
# split the string on whitespace
out = txt.split(' ')
# identify screen names
# replace with SCREENNAME
out = ['SCREENNAME' if i.startswith('@') else i for i in out]
# identify urls
# replace with URL
out = [
'URL' if bool(regex.search('http[s]?://', i))
else i for i in out
]
# remove all punctuation
out = [regex.sub('[^\\w\\s]|\n', '', i) for i in out]
# make all non-keywords lowercase
keys = ['SCREENNAME', 'URL']
out = [i.lower() if i not in keys else i for i in out]
# remove keywords
out = [i for i in out if i not in keys]
# remove stopwords
list_stop_words = nltk.corpus.stopwords.words('english')
list_stop_words = [regex.sub('[^\\w\\s]', '', i) for i in list_stop_words]
out = [i for i in out if i not in list_stop_words]
# lemmatizing
out = [do_lemmatizing(i) for i in out]
# keep words 5 or more characters long
out = [i for i in out if len(i) >= 5]
return out
# apply cleaning function to every tweet
clean = list(map(do_tweet_cleaning, raw))
# remove none types
clean = list(filter(None.__ne__, clean))
print("HEADLINES:\n{lines}\n".format(lines=clean[:5]))
print("LENGTH:\n{length}\n".format(length=len(clean)))
# turn tokens back into strings
# concatenate using whitespaces
clean_sentences = [" ".join(i) for i in clean]
print(clean_sentences[0:10])
```
#### Activity 2
```
# define global variables
number_words = 10
number_docs = 10
number_features = 1000
# bag of words conversion
# count vectorizer (raw counts)
vectorizer1 = sklearn.feature_extraction.text.CountVectorizer(
analyzer="word",
max_df=0.95,
min_df=10,
max_features=number_features
)
clean_vec1 = vectorizer1.fit_transform(clean_sentences)
print(clean_vec1[0])
feature_names_vec1 = vectorizer1.get_feature_names()
# define function to calculate perplexity based on number of topics
def perplexity_by_ntopic(data, ntopics):
output_dict = {
"Number Of Topics": [],
"Perplexity Score": []
}
for t in ntopics:
lda = sklearn.decomposition.LatentDirichletAllocation(
n_components=t,
learning_method="online",
random_state=0
)
lda.fit(data)
output_dict["Number Of Topics"].append(t)
output_dict["Perplexity Score"].append(lda.perplexity(data))
output_df = pandas.DataFrame(output_dict)
index_min_perplexity = output_df["Perplexity Score"].idxmin()
output_num_topics = output_df.loc[
index_min_perplexity, # index
"Number Of Topics" # column
]
return (output_df, output_num_topics)
# execute function on vector of numbers of topics
# takes several minutes
df_perplexity, optimal_num_topics = perplexity_by_ntopic(
clean_vec1,
ntopics=[i for i in range(1, 21) if i % 2 == 0]
)
print(df_perplexity)
# define and fit lda model
lda = sklearn.decomposition.LatentDirichletAllocation(
n_components=optimal_num_topics,
learning_method="online",
random_state=0
)
lda.fit(clean_vec1)
# define function to format raw output into nice tables
def get_topics(mod, vec, names, docs, ndocs, nwords):
# word to topic matrix
W = mod.components_
W_norm = W / W.sum(axis=1)[:, numpy.newaxis]
# topic to document matrix
H = mod.transform(vec)
W_dict = {}
H_dict = {}
for tpc_idx, tpc_val in enumerate(W_norm):
topic = "Topic{}".format(tpc_idx)
# formatting w
W_indices = tpc_val.argsort()[::-1][:nwords]
W_names_values = [
(round(tpc_val[j], 4), names[j])
for j in W_indices
]
W_dict[topic] = W_names_values
# formatting h
H_indices = H[:, tpc_idx].argsort()[::-1][:ndocs]
H_names_values = [
(round(H[:, tpc_idx][j], 4), docs[j])
for j in H_indices
]
H_dict[topic] = H_names_values
W_df = pandas.DataFrame(
W_dict,
index=["Word" + str(i) for i in range(nwords)]
)
H_df = pandas.DataFrame(
H_dict,
index=["Doc" + str(i) for i in range(ndocs)]
)
return (W_df, H_df)
# get nice tables
W_df, H_df = get_topics(
mod=lda,
vec=clean_vec1,
names=feature_names_vec1,
docs=raw,
ndocs=number_docs,
nwords=number_words
)
# word-topic table
print(W_df)
# document-topic table
print(H_df)
# interactive plot
# pca biplot and histogram
lda_plot = pyLDAvis.sklearn.prepare(lda, clean_vec1, vectorizer1, R=10)
pyLDAvis.display(lda_plot)
```
#### Activity 3
```
# bag of words conversion
# tf-idf method
vectorizer2 = sklearn.feature_extraction.text.TfidfVectorizer(
analyzer="word",
max_df=0.5,
min_df=20,
max_features=number_features,
smooth_idf=False
)
clean_vec2 = vectorizer2.fit_transform(clean_sentences)
print(clean_vec2[0])
feature_names_vec2 = vectorizer2.get_feature_names()
# define and fit nmf model
nmf = sklearn.decomposition.NMF(
n_components=optimal_num_topics,
init="nndsvda",
solver="mu",
beta_loss="frobenius",
random_state=0,
alpha=0.1,
l1_ratio=0.5
)
nmf.fit(clean_vec2)
# get nicely formatted result tables
W_df, H_df = get_topics(
mod=nmf,
vec=clean_vec2,
names=feature_names_vec2,
docs=raw,
ndocs=number_docs,
nwords=number_words
)
# word-topic table
print(W_df)
# document-topic table
print(H_df)
```
# `pymdptoolbox` demo
```
import warnings
from mdptoolbox import mdp
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
```
## The problem
* You have a 20-sided die, and you may roll repeatedly, stopping whenever you like; the goal is to get the sum of your rolls as close as possible to 21 without going over (busting).
* Your score is the numerical value of the sum of your rolls; if you bust, you get zero.
* What is the optimal strategy?

## The solution
Let's look at what we have to deal with:
* State space is 23-dimensional (sum of rolls can be 0-21 inclusive, plus the terminal state)
* Action space is 2-dimensional (roll/stay)
* State transitions are stochastic; requires transition matrix $T(s^\prime;s,a)$
* $T$ is mildly sparse (some transitions like 9->5 or 0->21 are impossible)
* Rewards depend on both state and action taken from that state, but are not stochastic (only ever get positive reward when choosing "stay")
We're going to use the [*value iteration*](https://pymdptoolbox.readthedocs.io/en/latest/api/mdp.html#mdptoolbox.mdp.ValueIteration) algorithm. Looking at the documentation, we can see that it requires as input a transition matrix, a reward matrix, and a discount factor (we will use $\gamma = 1$).
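Under the hood, value iteration repeats the Bellman backup until the value function stops changing. A minimal numpy sketch of the idea (not pymdptoolbox's actual implementation; the `(A, S, S)` / `(S, A)` shapes match the matrices we build below):

```python
import numpy as np

def value_iteration(T, R, gamma=1.0, eps=1e-6, max_iter=1000):
    """T: (A, S, S) transition tensor, R: (S, A) reward matrix."""
    A, S, _ = T.shape
    V = np.zeros(S)
    for _ in range(max_iter):
        # Bellman backup: Q[s, a] = R[s, a] + gamma * sum_s' T[a, s, s'] * V[s']
        Q = R + gamma * np.einsum("asp,p->sa", T, V)
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < eps:
            V = V_new
            break
        V = V_new
    return V, Q.argmax(axis=1)  # optimal values and greedy policy
```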
Let's first specify the transition "matrix". It's going to be a 3-dimensional tensor of shape $(|\mathcal{A}|,|\mathcal{S}|,|\mathcal{S}|) = (2, 23, 23)$. Most entries are probably zero, so let's start with a zero matrix and fill in the blanks. I'm going to reserve the very last state (the 23rd entry) for the terminal state.
```
def make_transition_matrix(n_sides=20, max_score=21):
"""Constructs the transition matrix for the MDP
Arguments:
n_sides: number of sides on the die being rolled
max_score: the maximum score of the game before going bust
Returns:
np.ndarray: array of shape (A,S,S), where A=2, and S=max_score+2
representing the transition matrix for the MDP
"""
A = 2
S = max_score + 2
T = np.zeros(shape=(A, S, S))
p = 1/n_sides
# All the "roll" action transitions
# First, the transition from state s to any non terminal state s' has probability
# 1/n_sides unless s' <= s or s' > s + n_sides
for s in range(0, S-1):
for sprime in range(s+1, S-1):
if sprime <= s + n_sides:
T[0,s,sprime] = p
# The rows of T[0] must all sum to one, so all the remaining probability goes to
# the terminal state
for s in range(0, S-1):
T[0,s,S-1] = 1 - T[0,s].sum()
# It is impossible to transition out of the terminal state; it is "absorbing"
T[0,S-1,S-1] = 1
# All the "stay" action transitions
# This one is simple - all "stay" transitions dump you in the terminal state,
# regardless of starting state
T[1,:,S-1] = 1
T[T<0] = 0 # Clamp any tiny negative probabilities introduced by floating-point rounding
return T
# Take a peek at a smaller version
T = make_transition_matrix(n_sides=4, max_score=5)
print("roll transitions:")
print(T[0])
print("\nstay transitions:")
print(T[1])
```
Now let's build the reward matrix. This is going to be a tensor of shape $(|\mathcal{S}|,|\mathcal{A}|) = (23,2)$. This one is even simpler than the transition matrix because only "stay" actions generate nonzero rewards, which are equal to the index of the state itself.
```
def make_reward_matrix(max_score=21):
"""Create the reward matrix for the MDP.
Arguments:
max_score: the maximum score of the game before going bust
Returns:
np.ndarray: array of shape (S,A), where A=2, and S=max_score+2
representing the reward matrix for the MDP
"""
A = 2
S = max_score + 2
R = np.zeros(shape=(S, A))
# Only need to create rewards for the "stay" action
# Rewards are equal to the state index, except for the terminal state, which
# always returns zero
for s in range(0, S-1):
R[s,1] = s
return R
# Take a peek at a smaller version
R = make_reward_matrix(max_score=5)
print("roll rewards:")
print(R[:,0])
print("\nstay rewards:")
print(R[:,1])
```
## The algorithm
Alright, now that we have the transition and reward matrices, our MDP is completely defined, and we can use the `pymdptoolbox` to help us figure out the optimal policy/strategy.
```
n_sides = 20
max_score = 21
T = make_transition_matrix(n_sides, max_score)
R = make_reward_matrix(max_score)
model = mdp.ValueIteration(
transitions=T,
reward=R,
discount=1,
epsilon=0.001,
max_iter=1000,
)
model.setVerbose()
model.run()
print(f"Algorithm finished running in {model.time:.2e} seconds")
```
That ran pretty fast, didn't it? Unfortunately most realistic MDP problems have millions or billions of possible states (or more!), so this doesn't really scale very well. But it works for our small problem very well.
## The results
Now let's analyze the results. The `ValueIteration` object gives us easy access to the optimal value function and policy.
```
plt.plot(model.V, marker='o')
x = np.linspace(0, max_score, 10)
plt.plot(x, x, linestyle="--", color='black')
ticks = list(range(0, max_score+1, 5)) + [max_score+1]
labels = [str(x) for x in ticks[:-1]] + ["\u2205"]
plt.xticks(ticks, labels)
plt.xlim(-1, max_score+2)
plt.xlabel("State sum of rolls $s$")
plt.ylabel("State value $V$")
plt.title("MDP optimal value function $V^*(s)$")
plt.show()
plt.plot(model.policy, marker='o')
ticks = list(range(0, max_score+1, 5)) + [max_score+1]
labels = [str(x) for x in ticks[:-1]] + ["\u2205"]
plt.xticks(ticks, labels)
plt.xlim(-1, max_score+2)
ticks = [0, 1]
labels = ["roll", "stay"]
plt.yticks(ticks, labels)
plt.ylim(-0.25, 1.25)
plt.xlabel("State sum of rolls $s$")
plt.ylabel("Policy $\pi$")
plt.title("MDP optimal policy $\pi^*(s)$")
plt.show()
```
Looks like the optimal policy is to keep rolling until the sum reaches 10. This is why $V(s) = s$ for $s \geq 10$ (black dashed line): that's the score you end up with when following this policy. For $s < 10$, the value is actually a bit higher than $s$ because you get an opportunity to roll again for a higher score, and the sum is low enough that your chances of busting are relatively low. We can see the slope is positive for $s \le 21 - 20 = 1$ because it's impossible to bust below that point, but the slope becomes negative between $1 \le s \le 10$ because you're more likely to bust the higher you get.
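As a quick cross-check of that reading, the "roll until the sum reaches a threshold" policy can be simulated directly (a numpy sketch; `mean_score` is a hypothetical helper, and threshold 10 corresponds to the policy found above):

```python
import numpy as np

def mean_score(threshold=10, n_sides=20, max_score=21, n_games=100_000, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_games):
        s = 0
        while s < threshold:                 # keep rolling until the sum reaches the threshold
            s += rng.integers(1, n_sides + 1)
        total += s if s <= max_score else 0  # busting scores zero
    return total / n_games
```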
We can also calculate the state distribution $\rho_\pi(s_0 \rightarrow s,t)$, which tells us the probability to be in any one of the states $s$ after a time $t$ when starting from state $s_0$:
$$
\rho_\pi(s_0 \rightarrow s,t) = \sum_{s^\prime} T(s;s^\prime,\pi(s^\prime)) \rho_\pi(s_0 \rightarrow s^\prime, t-1) \\
\text{where }\rho_\pi(s_0 \rightarrow s, 0) = \delta_{s, s_0}
$$
```
def calculate_state_distribution(policy, T, t_max=10):
S = len(policy)
# Reduce transition matrix to T(s';s) since policy is fixed
T_ = np.zeros(shape=(S, S))
for s in range(S):
for sprime in range(S):
T_[s,sprime] = T[policy[s],s,sprime]
T = T_
# Initialize rho
rho = np.zeros(shape=(S, S, t_max+1))
for s in range(0, S):
rho[s,s,0] = 1
# Use the iterative update equation
for t in range(1, t_max+1):
rho[:,:,t] = np.einsum("ji,kj->ki", T, rho[:,:,t-1])
return rho
rho = calculate_state_distribution(model.policy, T, 5)
with warnings.catch_warnings():
warnings.simplefilter('ignore') # Ignore the divide by zero error from taking log(0)
plt.imshow(np.log10(rho[0].T), cmap='viridis')
cbar = plt.colorbar(shrink=0.35, aspect=9)
cbar.ax.set_title(r"$\log_{10}(\rho)$")
ticks = list(range(0, max_score+1, 5)) + [max_score+1]
labels = [str(x) for x in ticks[:-1]] + ["\u2205"]
plt.xticks(ticks, labels)
plt.xlabel("State sum of rolls $s$")
plt.ylabel("Number of rolls/turns $t$")
plt.title(r"Optimal state distribution $\rho_{\pi^*}(s_0\rightarrow s;t)$")
plt.subplots_adjust(right=2, top=2)
plt.show()
```
<p style = "font-size : 50px; color : #532e1c ; font-family : 'Comic Sans MS'; text-align : center; background-color : #bedcfa; border-radius: 5px 5px;"><strong>Titanic EDA and Prediction</strong></p>
<img style="float: center; border:5px solid #ffb037; width:100%" src = https://sn56.scholastic.com/content/dam/classroom-magazines/sn56/issues/2018-19/020419/the-titanic-sails-again/SN56020919_Titanic-Hero.jpg>
<a id = '0'></a>
<p style = "font-size : 35px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #f9b208; border-radius: 5px 5px;"><strong>Table of Contents</strong></p>
* [Data Description](#1.0)
* [EDA](#2.0)
* [Survived Column](#2.1)
* [Pclass Column](#2.2)
* [Name Column](#2.3)
* [Sex Column](#2.4)
* [Age Column](#2.5)
* [Fare Column](#2.6)
* [SibSp Column](#2.7)
* [Parch Column](#2.8)
* [Ticket Column](#2.9)
* [Embarked Column](#2.10)
* [Findings From EDA](#3.0)
* [Data Preprocessing](#4.0)
* [Models](#5.0)
* [Logistic Regression](#5.1)
* [Knn](#5.2)
* [Decision Tree Classifier](#5.3)
* [Random Forest Classifier](#5.4)
* [Ada Boost Classifier](#5.5)
* [Gradient Boosting Classifier](#5.6)
* [Stochastic Gradient Boosting (SGB)](#5.7)
* [XgBoost](#5.8)
* [Cat Boost Classifier](#5.9)
* [Extra Trees Classifier](#5.10)
* [LGBM Classifier](#5.11)
* [Voting Classifier](#5.12)
* [Models Comparison](#6.0)
<a id = '1.0'></a>
<p style = "font-size : 30px; color : #4e8d7c ; font-family : 'Comic Sans MS'; "><strong>Data Description :-</strong></p>
<ul>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Survival : 0 = No, 1 = Yes</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>pclass(Ticket Class) : 1 = 1st, 2 = 2nd, 3 = 3rd</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Sex(Gender) : Male, Female</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Age : Age in years</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>SibSp : Number of siblings/spouses aboard the Titanic</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Parch : Number of parents/children aboard the Titanic</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Ticket : Ticket Number</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Fare : Passenger fare</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Cabin : Cabin Number</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Embarked : Port of Embarkation, C = Cherbourg, Q = Queenstown, S = Southampton</strong></li>
</ul>
```
# necessary imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
plt.style.use('fivethirtyeight')
%matplotlib inline
train_df = pd.read_csv('../input/titanic/train.csv')
train_df.head()
train_df.describe()
train_df.var()
train_df.info()
# Checking for null values
train_df.isna().sum()
```
<a id = '2.0'></a>
<p style = "font-size : 35px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #f9b208; border-radius: 5px 5px;"><strong>Exploratory Data Analysis (EDA)</strong></p>
```
# visualizing null values
import missingno as msno
msno.bar(train_df)
plt.show()
# heatmap
plt.figure(figsize = (18, 8))
corr = train_df.corr()
mask = np.triu(np.ones_like(corr, dtype = bool))
sns.heatmap(corr, mask = mask, annot = True, fmt = '.2f', linewidths = 1, annot_kws = {'size' : 15})
plt.show()
```
<p style = "font-size : 20px; color : #34656d ; font-family : 'Comic Sans MS'; "><strong>A heatmap is not useful for categorical variables, so we will analyse each column to check how it contributes to the prediction.</strong></p>
<a id = '2.1'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Survived Column</strong></p>
```
plt.figure(figsize = (12, 7))
sns.countplot(y = 'Survived', data = train_df)
plt.show()
values = train_df['Survived'].value_counts()
labels = ['Not Survived', 'Survived']
fig, ax = plt.subplots(figsize = (5, 5), dpi = 100)
explode = (0, 0.06)
patches, texts, autotexts = ax.pie(values, labels = labels, autopct = '%1.2f%%', shadow = True,
startangle = 90, explode = explode)
plt.setp(texts, color = 'grey')
plt.setp(autotexts, size = 12, color = 'white')
autotexts[1].set_color('black')
plt.show()
```
<a id = '2.2'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Pclass Column</strong></p>
```
train_df.Pclass.value_counts()
train_df.groupby(['Pclass', 'Survived'])['Survived'].count()
plt.figure(figsize = (16, 8))
sns.countplot('Pclass', hue = 'Survived', data = train_df)
plt.show()
values = train_df['Pclass'].value_counts()
labels = ['Third Class', 'Second Class', 'First Class']
explode = (0, 0, 0.08)
fig, ax = plt.subplots(figsize = (5, 6), dpi = 100)
patches, texts, autotexts = ax.pie(values, labels = labels, autopct = '%1.2f%%', shadow = True,
startangle = 90, explode = explode)
plt.setp(texts, color = 'grey')
plt.setp(autotexts, size = 13, color = 'white')
autotexts[2].set_color('black')
plt.show()
sns.catplot('Pclass', 'Survived', kind = 'point', data = train_df, height = 6, aspect = 2)
plt.show()
```
<a id = '2.3'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Name Column</strong></p>
```
train_df.Name.value_counts()
len(train_df.Name.unique()), train_df.shape
```
<a id = '2.4'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Sex Column</strong></p>
```
train_df.Sex.value_counts()
train_df.groupby(['Sex', 'Survived'])['Survived'].count()
plt.figure(figsize = (16, 7))
sns.countplot('Sex', hue = 'Survived', data = train_df)
plt.show()
sns.catplot(x = 'Sex', y = 'Survived', data = train_df, kind = 'bar', col = 'Pclass')
plt.show()
sns.catplot(x = 'Sex', y = 'Survived', data = train_df, kind = 'point', height = 6, aspect =2)
plt.show()
plt.figure(figsize = (15, 6))
sns.catplot(x = 'Pclass', y = 'Survived', kind = 'point', data = train_df, hue = 'Sex', height = 6, aspect = 2)
plt.show()
```
<a id = '2.5'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Age Column</strong></p>
```
plt.figure(figsize = (15, 6))
plt.style.use('ggplot')
sns.distplot(train_df['Age'])
plt.show()
sns.catplot(x = 'Sex', y = 'Age', kind = 'box', data = train_df, height = 5, aspect = 2)
plt.show()
sns.catplot(x = 'Sex', y = 'Age', kind = 'box', data = train_df, col = 'Pclass')
plt.show()
```
<a id = '2.6'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Fare Column</strong></p>
```
plt.figure(figsize = (14, 6))
plt.hist(train_df.Fare, bins = 60, color = 'orange')
plt.xlabel('Fare')
plt.show()
```
<p style = "font-size : 20px; color : #34656d ; font-family : 'Comic Sans MS'; "><strong>We can see that there are a lot of zero values in the Fare column, so we will replace them with the mean of the column later.</strong></p>
```
sns.catplot(x = 'Sex', y = 'Fare', data = train_df, kind = 'box', col = 'Pclass')
plt.show()
```
<a id = '2.7'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>SibSp Column</strong></p>
```
train_df['SibSp'].value_counts()
plt.figure(figsize = (16, 5))
sns.countplot(x = 'SibSp', data = train_df, hue = 'Survived')
plt.show()
sns.catplot(x = 'SibSp', y = 'Survived', kind = 'bar', data = train_df, height = 5, aspect =2)
plt.show()
sns.catplot(x = 'SibSp', y = 'Survived', kind = 'bar', hue = 'Sex', data = train_df, height = 6, aspect = 2)
plt.show()
sns.catplot(x = 'SibSp', y = 'Survived', kind = 'bar', col = 'Sex', data = train_df)
plt.show()
sns.catplot(x = 'SibSp', y = 'Survived', col = 'Pclass', kind = 'bar', data = train_df)
plt.show()
sns.catplot(x = 'SibSp', y = 'Survived', kind = 'point', hue = 'Sex', data = train_df, height = 6, aspect = 2)
plt.show()
```
<a id = '2.8'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Parch Column</strong></p>
```
train_df.Parch.value_counts()
sns.catplot(x = 'Parch', y = 'Survived', data = train_df, hue = 'Sex', kind = 'bar', height = 6, aspect = 2)
plt.show()
```
<a id = '2.9'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Ticket Column</strong></p>
```
train_df.Ticket.value_counts()
len(train_df.Ticket.unique())
```
<a id = '2.10'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Embarked Column</strong></p>
```
train_df['Embarked'].value_counts()
plt.figure(figsize = (14, 6))
sns.countplot('Embarked', hue = 'Survived', data = train_df)
plt.show()
sns.catplot(x = 'Embarked', y = 'Survived', kind = 'bar', data = train_df, col = 'Sex')
plt.show()
```
<a id = '3.0'></a>
<p style = "font-size : 30px; color : #4e8d7c ; font-family : 'Comic Sans MS';"><strong>Findings From EDA :-</strong></p>
<ul>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Females Survived more than Males.</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Passengers Travelling in Higher Class Survived More than Passengers travelling in Lower Class.</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>The Name column has all unique values, so it is not suitable for prediction and we will drop it.</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>In First Class there were more females than males, which is why the fares of female passengers were higher.</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Survival Rate is higher for those who were travelling with siblings or spouses.</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Passengers travelling with parents or children have higher survival rate.</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Ticket column is not useful and does not have an impact on survival.</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>The Cabin column has a lot of null values, so it is better to drop it.</strong></li>
<li style = "color : #03506f; font-size : 18px; font-family : 'Comic Sans MS';"><strong>Passengers travelling from Cherbourg port survived more than passengers travelling from other two ports.</strong></li>
</ul>
<a id = '4.0'></a>
<p style = "font-size : 35px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #f9b208; border-radius: 5px 5px;"><strong>Data Pre-Processing</strong></p>
```
# dropping useless columns
train_df.drop(['PassengerId', 'Name', 'Ticket', 'Cabin'], axis = 1, inplace = True)
train_df.head()
train_df.isna().sum()
# replacing Zero values of "Fare" column with mean of column
train_df['Fare'] = train_df['Fare'].replace(0, train_df['Fare'].mean())
# filling null values of "Age" column with mean value of the column
train_df['Age'].fillna(train_df['Age'].mean(), inplace = True)
# filling null values of "Embarked" column with mode value of the column
train_df['Embarked'].fillna(train_df['Embarked'].mode()[0], inplace = True)
# checking for null values after filling null values
train_df.isna().sum()
train_df.head()
train_df['Sex'] = train_df['Sex'].apply(lambda val: 1 if val == 'male' else 0)
train_df['Embarked'] = train_df['Embarked'].map({'S' : 0, 'C': 1, 'Q': 2})
train_df.head()
train_df.describe()
train_df.var()
```
<p style = "font-size : 20px; color : #34656d ; font-family : 'Comic Sans MS'; "><strong>The variance of the "Fare" column is very high, so we log-transform the "Age" and "Fare" columns to reduce it.</strong></p>
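To see why a log transform reduces the variance, here is a standalone sketch on synthetic, right-skewed fare-like data (hypothetical numbers generated with NumPy, not the actual Titanic column):

```python
import numpy as np

rng = np.random.default_rng(0)
# Lognormal samples have a heavy right tail, much like ticket fares
fares = rng.lognormal(mean=3.0, sigma=1.0, size=1000)

raw_var = fares.var()
log_var = np.log(fares).var()
print(f"variance before log: {raw_var:.1f}, after log: {log_var:.2f}")
# The log compresses the tail, so the variance drops by orders of magnitude.
```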
```
train_df['Age'] = np.log(train_df['Age'])
train_df['Fare'] = np.log(train_df['Fare'])
train_df.head()
```
<p style = "font-size : 20px; color : #34656d ; font-family : 'Comic Sans MS'; "><strong>Now the training data looks much better; let's take a look at the test data.</strong></p>
```
test_df = pd.read_csv('../input/titanic/test.csv')
test_df.head()
```
<p style = "font-size : 20px; color : #34656d ; font-family : 'Comic Sans MS'; "><strong>Performing the same steps on the test data.</strong></p>
```
# dropping useless columns
test_df.drop(['PassengerId', 'Name', 'Ticket', 'Cabin'], axis = 1, inplace = True)
# replacing Zero values of "Fare" column with mean of column
test_df['Fare'] = test_df['Fare'].replace(0, test_df['Fare'].mean())
# filling null values of "Age" column with mean value of the column
test_df['Age'].fillna(test_df['Age'].mean(), inplace = True)
# filling null values of "Embarked" column with mode value of the column
test_df['Embarked'].fillna(test_df['Embarked'].mode()[0], inplace = True)
test_df.isna().sum()
# filling null values of "Fare" column with mean value of the column
test_df['Fare'].fillna(test_df['Fare'].mean(), inplace = True)
test_df['Sex'] = test_df['Sex'].apply(lambda val: 1 if val == 'male' else 0)
test_df['Embarked'] = test_df['Embarked'].map({'S' : 0, 'C': 1, 'Q': 2})
test_df.head()
test_df['Age'] = np.log(test_df['Age'])
test_df['Fare'] = np.log(test_df['Fare'])
test_df.var()
test_df.isna().any()
test_df.head()
```
<p style = "font-size : 20px; color : #34656d ; font-family : 'Comic Sans MS'; "><strong>Now that both the training and test data are cleaned and preprocessed, let's start with model building.</strong></p>
```
# creating X and y
X = train_df.drop('Survived', axis = 1)
y = train_df['Survived']
# splitting data into training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.30, random_state = 0)
```
<a id = '5.0'></a>
<p style = "font-size : 35px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #f9b208; border-radius: 5px 5px;"><strong> Models</strong></p>
<a id = '5.1'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Logistic Regression</strong></p>
```
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of logistic regression
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
lr_acc = accuracy_score(y_test, lr.predict(X_test))
print(f"Training Accuracy of Logistic Regression is {accuracy_score(y_train, lr.predict(X_train))}")
print(f"Test Accuracy of Logistic Regression is {lr_acc}")
print(f"Confusion Matrix :- \n {confusion_matrix(y_test, lr.predict(X_test))}")
print(f"Classification Report :- \n {classification_report(y_test, lr.predict(X_test))}")
# hyper parameter tuning of logistic regression
from sklearn.model_selection import GridSearchCV
grid_param = {
'penalty': ['l1', 'l2'],
'C' : [0.001, 0.01, 0.1, 0.005, 0.5, 1, 10]
}
grid_search_lr = GridSearchCV(lr, grid_param, cv = 5, n_jobs = -1, verbose = 1)
grid_search_lr.fit(X_train, y_train)
# best parameters and best score
print(grid_search_lr.best_params_)
print(grid_search_lr.best_score_)
# best estimator
lr = grid_search_lr.best_estimator_
# accuracy score, confusion matrix and classification report of logistic regression
lr_acc = accuracy_score(y_test, lr.predict(X_test))
print(f"Training Accuracy of Logistic Regression is {accuracy_score(y_train, lr.predict(X_train))}")
print(f"Test Accuracy of Logistic Regression is {lr_acc}")
print(f"Confusion Matrix :- \n {confusion_matrix(y_test, lr.predict(X_test))}")
print(f"Classification Report :- \n {classification_report(y_test, lr.predict(X_test))}")
```
<a id = '5.2'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>KNN</strong></p>
```
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of knn
knn_acc = accuracy_score(y_test, knn.predict(X_test))
print(f"Training Accuracy of KNN is {accuracy_score(y_train, knn.predict(X_train))}")
print(f"Test Accuracy of KNN is {knn_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, knn.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, knn.predict(X_test))}")
```
<a id = '5.3'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Decision Tree Classifier</strong></p>
```
from sklearn.tree import DecisionTreeClassifier
dtc = DecisionTreeClassifier()
dtc.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of decision tree
dtc_acc = accuracy_score(y_test, dtc.predict(X_test))
print(f"Training Accuracy of Decision Tree Classifier is {accuracy_score(y_train, dtc.predict(X_train))}")
print(f"Test Accuracy of Decision Tree Classifier is {dtc_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, dtc.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, dtc.predict(X_test))}")
# hyper parameter tuning of decision tree
grid_param = {
'criterion' : ['gini', 'entropy'],
'max_depth' : [3, 5, 7, 10],
'splitter' : ['best', 'random'],
'min_samples_leaf' : [1, 2, 3, 5, 7],
'min_samples_split' : [2, 3, 5, 7],  # min_samples_split must be >= 2
'max_features' : ['auto', 'sqrt', 'log2']
}
grid_search_dtc = GridSearchCV(dtc, grid_param, cv = 5, n_jobs = -1, verbose = 1)
grid_search_dtc.fit(X_train, y_train)
# best parameters and best score
print(grid_search_dtc.best_params_)
print(grid_search_dtc.best_score_)
# best estimator
dtc = grid_search_dtc.best_estimator_
# accuracy score, confusion matrix and classification report of decision tree
dtc_acc = accuracy_score(y_test, dtc.predict(X_test))
print(f"Training Accuracy of Decision Tree Classifier is {accuracy_score(y_train, dtc.predict(X_train))}")
print(f"Test Accuracy of Decision Tree Classifier is {dtc_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, dtc.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, dtc.predict(X_test))}")
```
<a id = '5.4'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Random Forest Classifier</strong></p>
```
from sklearn.ensemble import RandomForestClassifier
rd_clf = RandomForestClassifier()
rd_clf.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of random forest
rd_clf_acc = accuracy_score(y_test, rd_clf.predict(X_test))
print(f"Training Accuracy of Random Forest Classifier is {accuracy_score(y_train, rd_clf.predict(X_train))}")
print(f"Test Accuracy of Random Forest Classifier is {rd_clf_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, rd_clf.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, rd_clf.predict(X_test))}")
```
<a id = '5.5'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Ada Boost Classifier</strong></p>
```
from sklearn.ensemble import AdaBoostClassifier
ada = AdaBoostClassifier(base_estimator = dtc)
ada.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of ada boost
ada_acc = accuracy_score(y_test, ada.predict(X_test))
print(f"Training Accuracy of Ada Boost Classifier is {accuracy_score(y_train, ada.predict(X_train))}")
print(f"Test Accuracy of Ada Boost Classifier is {ada_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, ada.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, ada.predict(X_test))}")
# hyper parameter tuning ada boost
grid_param = {
'n_estimators' : [100, 120, 150, 180, 200],
'learning_rate' : [0.01, 0.1, 1, 10],
'algorithm' : ['SAMME', 'SAMME.R']
}
grid_search_ada = GridSearchCV(ada, grid_param, cv = 5, n_jobs = -1, verbose = 1)
grid_search_ada.fit(X_train, y_train)
# best parameter and best score
print(grid_search_ada.best_params_)
print(grid_search_ada.best_score_)
ada = grid_search_ada.best_estimator_
# accuracy score, confusion matrix and classification report of ada boost
ada_acc = accuracy_score(y_test, ada.predict(X_test))
print(f"Training Accuracy of Ada Boost Classifier is {accuracy_score(y_train, ada.predict(X_train))}")
print(f"Test Accuracy of Ada Boost Classifier is {ada_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, ada.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, ada.predict(X_test))}")
```
<a id = '5.6'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Gradient Boosting Classifier</strong></p>
```
from sklearn.ensemble import GradientBoostingClassifier
gb = GradientBoostingClassifier()
gb.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of gradient boosting classifier
gb_acc = accuracy_score(y_test, gb.predict(X_test))
print(f"Training Accuracy of Gradient Boosting Classifier is {accuracy_score(y_train, gb.predict(X_train))}")
print(f"Test Accuracy of Gradient Boosting Classifier is {gb_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, gb.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, gb.predict(X_test))}")
```
<a id = '5.7'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Stochastic Gradient Boosting (SGB)</strong></p>
```
sgb = GradientBoostingClassifier(subsample = 0.90, max_features = 0.70)
sgb.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of stochastic gradient boosting classifier
sgb_acc = accuracy_score(y_test, sgb.predict(X_test))
print(f"Training Accuracy of Stochastic Gradient Boosting is {accuracy_score(y_train, sgb.predict(X_train))}")
print(f"Test Accuracy of Stochastic Gradient Boosting is {sgb_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, sgb.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, sgb.predict(X_test))}")
```
<a id = '5.8'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>XgBoost</strong></p>
```
from xgboost import XGBClassifier
xgb = XGBClassifier(booster = 'gbtree', learning_rate = 0.1, max_depth = 5, n_estimators = 180)
xgb.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of xgboost
xgb_acc = accuracy_score(y_test, xgb.predict(X_test))
print(f"Training Accuracy of XgBoost is {accuracy_score(y_train, xgb.predict(X_train))}")
print(f"Test Accuracy of XgBoost is {xgb_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, xgb.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, xgb.predict(X_test))}")
```
<a id = '5.9'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Cat Boost Classifier</strong></p>
```
from catboost import CatBoostClassifier
cat = CatBoostClassifier(iterations=10)
cat.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of cat boost
cat_acc = accuracy_score(y_test, cat.predict(X_test))
print(f"Training Accuracy of Cat Boost Classifier is {accuracy_score(y_train, cat.predict(X_train))}")
print(f"Test Accuracy of Cat Boost Classifier is {cat_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, cat.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, cat.predict(X_test))}")
```
<a id = '5.10'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Extra Trees Classifier</strong></p>
```
from sklearn.ensemble import ExtraTreesClassifier
etc = ExtraTreesClassifier()
etc.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of extra trees classifier
etc_acc = accuracy_score(y_test, etc.predict(X_test))
print(f"Training Accuracy of Extra Trees Classifier is {accuracy_score(y_train, etc.predict(X_train))}")
print(f"Test Accuracy of Extra Trees Classifier is {etc_acc} \n")
print(f"Confusion Matrix :- \n{confusion_matrix(y_test, etc.predict(X_test))}\n")
print(f"Classification Report :- \n {classification_report(y_test, etc.predict(X_test))}")
```
<a id = '5.11'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>LGBM Classifier</strong></p>
```
from lightgbm import LGBMClassifier
lgbm = LGBMClassifier(learning_rate = 1)
lgbm.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of lgbm classifier
lgbm_acc = accuracy_score(y_test, lgbm.predict(X_test))
print(f"Training Accuracy of LGBM Classifier is {accuracy_score(y_train, lgbm.predict(X_train))}")
print(f"Test Accuracy of LGBM Classifier is {lgbm_acc} \n")
print(f"{confusion_matrix(y_test, lgbm.predict(X_test))}\n")
print(classification_report(y_test, lgbm.predict(X_test)))
```
<a id = '5.12'></a>
<p style = "font-size : 25px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #fbc6a4; border-radius: 5px 5px;"><strong>Voting Classifier</strong></p>
```
from sklearn.ensemble import VotingClassifier
classifiers = [('Gradient Boosting Classifier', gb), ('Stochastic Gradient Boosting', sgb), ('Cat Boost Classifier', cat),
('XGboost', xgb), ('Decision Tree', dtc), ('Extra Tree', etc), ('Light Gradient', lgbm),
('Random Forest', rd_clf), ('Ada Boost', ada), ('Logistic', lr)]
vc = VotingClassifier(estimators = classifiers)
vc.fit(X_train, y_train)
# accuracy score, confusion matrix and classification report of voting classifier
vc_acc = accuracy_score(y_test, vc.predict(X_test))
print(f"Training Accuracy of Voting Classifier is {accuracy_score(y_train, vc.predict(X_train))}")
print(f"Test Accuracy of Voting Classifier is {vc_acc} \n")
print(f"{confusion_matrix(y_test, vc.predict(X_test))}\n")
print(classification_report(y_test, vc.predict(X_test)))
```
<a id = '6.0'></a>
<p style = "font-size : 35px; color : #34656d ; font-family : 'Comic Sans MS'; text-align : center; background-color : #f9b208; border-radius: 5px 5px;"><strong> Models Comparison</strong></p>
```
models = pd.DataFrame({
'Model' : ['Logistic Regression', 'KNN', 'Decision Tree Classifier', 'Random Forest Classifier', 'Ada Boost Classifier',
'Gradient Boosting Classifier', 'Stochastic Gradient Boosting', 'XgBoost', 'Cat Boost', 'Extra Trees Classifier',
'LGBM Classifier', 'Voting Classifier'],
'Score' : [lr_acc, knn_acc, dtc_acc, rd_clf_acc, ada_acc, gb_acc, sgb_acc, xgb_acc, cat_acc, etc_acc, lgbm_acc, vc_acc]
})
models.sort_values(by = 'Score', ascending = False)
plt.figure(figsize = (15, 10))
sns.barplot(x = 'Score', y = 'Model', data = models)
plt.show()
final_prediction = sgb.predict(test_df)
prediction = pd.DataFrame(final_prediction)
submission = pd.read_csv('../input/titanic/gender_submission.csv')
submission['Survived'] = prediction
submission.to_csv('Submission.csv', index = False)
```
<p style = "font-size : 25px; color : #f55c47 ; font-family : 'Comic Sans MS'; "><strong>If you like my work, please do Upvote.</strong></p>
# Find Pairwise Interactions
This notebook demonstrates how to calculate pairwise intra- and inter-molecular interactions at specified levels of granularity within biological assemblies and asymmetric units.
```
from pyspark.sql import SparkSession
from mmtfPyspark.io import mmtfReader
from mmtfPyspark.utils import ColumnarStructure
from mmtfPyspark.interactions import InteractionExtractorPd
```
### Start a Spark Session
```
spark = SparkSession.builder.appName("Interactions").getOrCreate()
```
## Define Interaction Partners
Interactions are defined by specifying two subsets of atoms, named **query** and **target**. Once defined, interactions can be calculated between these two subsets.
### Use Pandas Dataframes to Create Subsets
The InteractionExtractorPd internally uses Pandas dataframe queries to create query and target atom sets. Any of the Pandas column names below can be used to create subsets.
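As a standalone illustration of this query syntax (made-up columns and group names, not a real structure), the same boolean-expression style can be tried directly on a small Pandas dataframe:

```python
import pandas as pd

df = pd.DataFrame({
    "polymer": [True, True, False, False],
    "group_name": ["ALA", "GLY", "HOH", "NAG"],
})
# Same expression style used for query/target selections: keep
# non-polymer atoms that are not water or heavy water
ligands = df.query("not polymer and (group_name not in ['HOH','DOD'])")
print(ligands.group_name.tolist())  # ['NAG']
```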
Example of a structure represented in a Pandas dataframe.
```
structures = mmtfReader.download_mmtf_files(["1OHR"]).cache()
# get first structure from Spark RDD (keys = PDB IDs, value = mmtf structures)
first_structure = structures.values().first()
# convert to a Pandas dataframe
df = ColumnarStructure(first_structure).to_pandas()
df.head(5)
```
### Create a subset of atoms using boolean expressions
The following query creates a subset of ligand (non-polymer) atoms that are not water (HOH) or heavy water (DOD).
```
query = "not polymer and (group_name not in ['HOH','DOD'])"
df_lig = df.query(query)
df_lig.head(5)
```
## Calculate Interactions
The following boolean expressions specify two subsets: ligands (query) and polymer groups (target). In this example, interactions within a distance cutoff of 4 Å are calculated.
```
query = "not polymer and (group_name not in ['HOH','DOD'])"
target = "polymer"
distance_cutoff = 4.0
# the result is a Spark dataframe
interactions = InteractionExtractorPd.get_interactions(structures, distance_cutoff,
query, target)
# get the first 5 rows of the Spark dataframe and display it as a Pandas dataframe
interactions.limit(5).toPandas()
```
## Calculate all interactions
If query and target are not specified, all interactions are calculated. By default, intermolecular interactions are calculated.
```
interactions = InteractionExtractorPd.get_interactions(structures, distance_cutoff)
interactions.limit(5).toPandas()
```
## Aggregate Interactions at Different Levels of Granularity
Pairwise interactions can be listed at different levels of granularity by setting the **level**:
* **level='coord'**: pairwise atom interactions, distances, and coordinates
* **level='atom'**: pairwise atom interactions and distances
* **level='group'**: pairwise atom interactions aggregated at the group (residue) level (default)
* **level='chain'**: pairwise atom interactions aggregated at the chain level
The next example lists the interactions at the **coord** level, the level of highest granularity. You need to scroll in the dataframe to see all columns.
```
interactions = InteractionExtractorPd.get_interactions(structures, distance_cutoff,
query, target, level='coord')
interactions.limit(5).toPandas()
```
## Calculate Inter- vs Intra-molecular Interactions
Inter- and intra-molecular interactions can be calculated by explicitly setting the **inter** and **intra** flags.
* **inter=True** (default)
* **intra=False** (default)
### Find intermolecular salt-bridges
This example uses the default settings, i.e., it finds intermolecular salt bridges.
```
query = "polymer and (group_name in ['ASP', 'GLU']) and (atom_name in ['OD1', 'OD2', 'OE1', 'OE2'])"
target = "polymer and (group_name in ['ARG', 'LYS', 'HIS']) and (atom_name in ['NH1', 'NH2', 'NZ', 'ND1', 'NE2'])"
distance_cutoff = 3.5
interactions = InteractionExtractorPd.get_interactions(structures, distance_cutoff,
query, target, level='atom')
interactions.limit(5).toPandas()
```
### Find intramolecular hydrogen bonds
In this example, the inter and intra flags have been set to find intramolecular hydrogen bonds.
```
query = "polymer and element in ['N','O']"
target = "polymer and element in ['N','O']"
distance_cutoff = 3.5
interactions = InteractionExtractorPd.get_interactions(structures, distance_cutoff,
query, target,
inter=False, intra=True,
level='atom')
interactions.limit(5).toPandas()
```
## Calculate Interaction in the Biological Assembly vs. Asymmetric Unit
```
structures = mmtfReader.download_mmtf_files(["1STP"]).cache()
```
By default, interactions in the first biological assembly are calculated. The **bio** parameter specifies the biological assembly number. Most PDB structures have only one biological assembly (bio=1); a few have more than one.
* **bio=1** use first biological assembly (default)
* **bio=2** use second biological assembly
* **bio=None** use the asymmetric unit
```
query = "not polymer and (group_name not in ['HOH','DOD'])"
target = "polymer"
distance_cutoff = 4.0
# The asymmetric unit is a monomer (1 ligand, 1 protein chain)
interactions = InteractionExtractorPd.get_interactions(structures, distance_cutoff,
query, target, bio=None)
print("Ligand interactions in asymmetric unit (monomer) :", interactions.count())
# The first biological assembly is a tetramer (4 ligands, 4 protein chains)
interactions = InteractionExtractorPd.get_interactions(structures, distance_cutoff,
query, target, bio=1)
print("Ligand interactions in 1st bio assembly (tetramer) :", interactions.count())
# There is no second biological assembly, in that case zero interactions are returned
interactions = InteractionExtractorPd.get_interactions(structures, distance_cutoff,
query, target, bio=2)
print("Ligand interactions in 2nd bio assembly (does not exist):", interactions.count())
```
The 1st biological assembly contains 68 − 4×16 = 4 additional interactions not found in four copies of the asymmetric unit.
## Stop Spark!
```
spark.stop()
```
[](https://colab.research.google.com/github/real-itu/modern-ai-course/blob/master/lecture-02/lab.ipynb)
# Lab 2 - Adversarial Search
[Connect 4](https://en.wikipedia.org/wiki/Connect_Four) is a classic board game in which 2 players alternate placing markers in columns, and the goal is to get 4 in a row, either horizontally, vertically or diagonally. See the short video below
```
from IPython.display import YouTubeVideo
YouTubeVideo("ylZBRUJi3UQ")
```
The game is implemented below. It will play a game where both players take random (legal) actions. The MAX player is represented with a X and the MIN player with an O. The MAX player starts. Execute the code.
```
import random
from copy import deepcopy
from typing import Sequence
NONE = '.'
MAX = 'X'
MIN = 'O'
COLS = 7
ROWS = 6
N_WIN = 4
class ArrayState:
def __init__(self, board, heights, n_moves):
self.board = board
self.heights = heights
self.n_moves = n_moves
@staticmethod
def init():
board = [[NONE] * ROWS for _ in range(COLS)]
return ArrayState(board, [0] * COLS, 0)
def result(state: ArrayState, action: int) -> ArrayState:
"""Insert in the given column."""
assert 0 <= action < COLS, "action must be a column number"
if state.heights[action] >= ROWS:
raise Exception('Column is full')
player = MAX if state.n_moves % 2 == 0 else MIN
board = deepcopy(state.board)
board[action][ROWS - state.heights[action] - 1] = player
heights = deepcopy(state.heights)
heights[action] += 1
return ArrayState(board, heights, state.n_moves + 1)
def actions(state: ArrayState) -> Sequence[int]:
return [i for i in range(COLS) if state.heights[i] < ROWS]
def utility(state: ArrayState) -> float:
"""Get the winner on the current board."""
board = state.board
def diagonalsPos():
"""Get positive diagonals, going from bottom-left to top-right."""
for di in ([(j, i - j) for j in range(COLS)] for i in range(COLS + ROWS - 1)):
yield [board[i][j] for i, j in di if i >= 0 and j >= 0 and i < COLS and j < ROWS]
def diagonalsNeg():
"""Get negative diagonals, going from top-left to bottom-right."""
for di in ([(j, i - COLS + j + 1) for j in range(COLS)] for i in range(COLS + ROWS - 1)):
yield [board[i][j] for i, j in di if i >= 0 and j >= 0 and i < COLS and j < ROWS]
lines = board + \
list(zip(*board)) + \
list(diagonalsNeg()) + \
list(diagonalsPos())
max_win = MAX * N_WIN
min_win = MIN * N_WIN
for line in lines:
str_line = "".join(line)
if max_win in str_line:
return 1
elif min_win in str_line:
return -1
return 0
def terminal_test(state: ArrayState) -> bool:
    return state.n_moves >= COLS * ROWS or utility(state) != 0


def printBoard(state: ArrayState):
    """Print the board."""
    board = state.board
    print(' '.join(map(str, range(COLS))))
    for y in range(ROWS):
        print(' '.join(str(board[x][y]) for x in range(COLS)))
    print()


if __name__ == '__main__':
    s = ArrayState.init()
    while not terminal_test(s):
        a = random.choice(actions(s))
        s = result(s, a)
        printBoard(s)
    print(utility(s))
```
The last number (0, -1, or 1) is the utility or score of the game: 0 means it was a draw, 1 means the MAX player won, and -1 means the MIN player won.
### Exercise 1
Modify the code so that you can play manually as the MIN player against the random AI.
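One way to tackle this is to keep the game loop above but, on MIN's turns, replace `random.choice(actions(s))` with a prompt. The prompt loop itself can be sketched standalone (the game functions from above are not repeated here; the `read` parameter is an invention of this sketch, injectable purely so the loop can be exercised without a console):

```python
def choose_column(legal, read=input):
    """Prompt until the reply parses as a legal column number.

    `read` defaults to input(); it is injectable only for testing.
    """
    while True:
        try:
            col = int(read(f'Your move {sorted(legal)}: '))
        except ValueError:
            continue  # not a number at all; ask again
        if col in legal:
            return col

# Simulated console input: a non-number, an illegal column, then a legal one.
replies = iter(['x', '9', '3'])
print(choose_column({0, 1, 2, 3, 4, 5, 6}, read=lambda _: next(replies)))  # → 3
```

Wired into the loop above, MIN moves whenever `state.n_moves` is odd (see `result`), so the call would be roughly `a = choose_column(actions(s)) if s.n_moves % 2 else random.choice(actions(s))`.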
### Exercise 2
Implement standard minimax with a fixed-depth search. Modify the utility function to handle non-terminal positions using heuristics. Find a value for the depth such that a move doesn't take longer than approx. 1 s to evaluate. See if you can beat your connect4 AI.
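As a starting point, the shape of a depth-limited search can be sketched on a toy game tree that stands in for `actions`/`result` (the tree, its values, and the averaging "heuristic" are all invented for illustration; a real implementation would evaluate an `ArrayState` at the cut-off):

```python
# A toy game tree: a float is a leaf value, a list is an internal node.
TOY_TREE = [[3.0, 5.0], [2.0, [9.0, -1.0]]]

def flatten(node):
    if isinstance(node, float):
        return [node]
    return [v for child in node for v in flatten(child)]

def minimax(node, depth, maximizing):
    """Depth-limited minimax: recurse until a leaf or the depth budget runs out."""
    if isinstance(node, float):
        return node                      # terminal value
    if depth == 0:
        # Depth cut-off: fall back to a heuristic. Averaging the subtree is
        # only a stand-in for a real board evaluation.
        leaves = flatten(node)
        return sum(leaves) / len(leaves)
    values = [minimax(child, depth - 1, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

print(minimax(TOY_TREE, depth=3, maximizing=True))  # → 3.0
```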
### Exercise 3
Add alpha/beta pruning to your minimax. Change your depth so that moves still take approx. 1 second to evaluate. How much deeper can you search? See if you can beat your connect4 AI.
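The pruning itself is independent of connect4 and can be sketched on the same kind of toy tree (again, the tree and its values are invented; the `0.0` at the depth cut-off is a placeholder where a real implementation would evaluate the position heuristically):

```python
import math

# Floats are leaf values, lists are internal nodes.
TREE = [[3.0, 5.0], [2.0, [9.0, -1.0]]]

def alphabeta(node, depth, alpha, beta, maximizing):
    """Minimax with alpha/beta pruning: abandon a branch as soon as it can
    no longer influence the decision (alpha >= beta)."""
    if isinstance(node, float):
        return node
    if depth == 0:
        return 0.0  # placeholder heuristic at the depth cut-off
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cut-off: MIN would never allow this branch
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break      # alpha cut-off: MAX already has something better
    return value

print(alphabeta(TREE, 4, -math.inf, math.inf, True))  # → 3.0, same as plain minimax
```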
### Exercise 4
Add move ordering. The middle columns are often "better" since there are more winning positions that contain them. Evaluate the moves in this order: [3,2,4,1,5,0,6]. How much deeper can you search now? See if you can beat your connect4 AI.
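The suggested order is just "distance from the centre column"; assuming the standard 7-column board, it can be generated rather than hard-coded:

```python
COLS = 7  # assumed board width, as in standard connect4

# Sort columns by distance from the centre; Python's stable sort keeps the
# left neighbour before the right one on ties.
move_order = sorted(range(COLS), key=lambda c: abs(c - COLS // 2))
print(move_order)  # → [3, 2, 4, 1, 5, 0, 6]
```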
### Exercise 5 - Optional
Improve your AI somehow. Consider
* Better heuristics
* Faster board representations (look up bitboards)
* Adding a transposition table (see class below)
* Better move ordering
```
class TranspositionTable:

    def __init__(self, size=1_000_000):
        self.size = size
        self.vals = [None] * size

    def board_str(self, state: ArrayState):
        return ''.join([''.join(c) for c in state.board])

    def put(self, state: ArrayState, utility: float):
        bstr = self.board_str(state)
        idx = hash(bstr) % self.size
        self.vals[idx] = (bstr, utility)

    def get(self, state: ArrayState):
        bstr = self.board_str(state)
        idx = hash(bstr) % self.size
        stored = self.vals[idx]
        if stored is None:
            return None
        if stored[0] == bstr:
            return stored[1]
        else:
            return None
```
# Building deep retrieval models
**Learning Objectives**
1. Converting raw input examples into feature embeddings.
2. Splitting the data into a training set and a testing set.
3. Configuring the deeper model with losses and metrics.
## Introduction
In [the featurization tutorial](https://www.tensorflow.org/recommenders/examples/featurization) we incorporated multiple features into our models, but the models consist of only an embedding layer. We can add more dense layers to our models to increase their expressive power.
In general, deeper models are capable of learning more complex patterns than shallower models. For example, our [user model](https://www.tensorflow.org/recommenders/examples/featurization#user_model) incorporates user ids and timestamps to model user preferences at a point in time. A shallow model (say, a single embedding layer) may only be able to learn the simplest relationships between those features and movies: a given movie is most popular around the time of its release, and a given user generally prefers horror movies to comedies. To capture more complex relationships, such as user preferences evolving over time, we may need a deeper model with multiple stacked dense layers.
Of course, complex models also have their disadvantages. The first is computational cost, as larger models require both more memory and more computation to fit and serve. The second is the requirement for more data: in general, more training data is needed to take advantage of deeper models. With more parameters, deep models might overfit or even simply memorize the training examples instead of learning a function that can generalize. Finally, training deeper models may be harder, and more care needs to be taken in choosing settings like regularization and learning rate.
Finding a good architecture for a real-world recommender system is a complex art, requiring good intuition and careful [hyperparameter tuning](https://en.wikipedia.org/wiki/Hyperparameter_optimization). For example, factors such as the depth and width of the model, activation function, learning rate, and optimizer can radically change the performance of the model. Modelling choices are further complicated by the fact that good offline evaluation metrics may not correspond to good online performance, and that the choice of what to optimize for is often more critical than the choice of model itself.
Each learning objective will correspond to a _#TODO_ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/deep_recommenders.ipynb)
## Preliminaries
We first import the necessary packages.
```
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
```
**NOTE: Please ignore any incompatibility warnings and errors and re-run the above cell before proceeding.**
```
!pip install tensorflow==2.5.0
```
**NOTE: Please ignore any incompatibility warnings and errors.**
**NOTE: Restart your kernel to use updated packages.**
```
import os
import tempfile
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
plt.style.use('seaborn-whitegrid')
```
This notebook uses TF2.x.
Please check your tensorflow version using the cell below.
```
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
```
In this tutorial we will use the models from [the featurization tutorial](https://www.tensorflow.org/recommenders/examples/featurization) to generate embeddings. Hence we will only be using the user id, timestamp, and movie title features.
```
ratings = tfds.load("movielens/100k-ratings", split="train")
movies = tfds.load("movielens/100k-movies", split="train")
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"],
"timestamp": x["timestamp"],
})
movies = movies.map(lambda x: x["movie_title"])
```
We also do some housekeeping to prepare feature vocabularies.
```
timestamps = np.concatenate(list(ratings.map(lambda x: x["timestamp"]).batch(100)))
max_timestamp = timestamps.max()
min_timestamp = timestamps.min()
timestamp_buckets = np.linspace(
min_timestamp, max_timestamp, num=1000,
)
unique_movie_titles = np.unique(np.concatenate(list(movies.batch(1000))))
unique_user_ids = np.unique(np.concatenate(list(ratings.batch(1_000).map(
lambda x: x["user_id"]))))
```
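The `timestamp_buckets` array is a set of evenly spaced boundaries that the `Discretization` layer will use to map raw timestamps to bucket ids. The effect can be sketched with plain numpy (`np.digitize` here stands in for the Keras layer and may differ at exact boundary values):

```python
import numpy as np

min_t, max_t = 0.0, 100.0                   # toy timestamp range
buckets = np.linspace(min_t, max_t, num=5)  # boundaries: [0, 25, 50, 75, 100]

ts = np.array([-3.0, 10.0, 50.0, 99.0, 200.0])
print(np.digitize(ts, buckets))  # → [0 1 3 4 5]: one bucket id per timestamp
```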
## Model definition
### Query model
We start with the user model defined in [the featurization tutorial](https://www.tensorflow.org/recommenders/examples/featurization) as the first layer of our model, tasked with converting raw input examples into feature embeddings.
```
class UserModel(tf.keras.Model):

    def __init__(self):
        super().__init__()

        self.user_embedding = tf.keras.Sequential([
            tf.keras.layers.experimental.preprocessing.StringLookup(
                vocabulary=unique_user_ids, mask_token=None),
            tf.keras.layers.Embedding(len(unique_user_ids) + 1, 32),
        ])
        self.timestamp_embedding = tf.keras.Sequential([
            tf.keras.layers.experimental.preprocessing.Discretization(timestamp_buckets.tolist()),
            tf.keras.layers.Embedding(len(timestamp_buckets) + 1, 32),
        ])
        self.normalized_timestamp = tf.keras.layers.experimental.preprocessing.Normalization()
        self.normalized_timestamp.adapt(timestamps)

    def call(self, inputs):
        # Take the input dictionary, pass it through each input layer,
        # and concatenate the result.
        return tf.concat([
            self.user_embedding(inputs["user_id"]),
            self.timestamp_embedding(inputs["timestamp"]),
            self.normalized_timestamp(inputs["timestamp"]),
        ], axis=1)
```
Defining deeper models will require us to stack more layers on top of this first input. A progressively narrower stack of layers, separated by an activation function, is a common pattern:
```
+----------------------+
| 128 x 64 |
+----------------------+
| relu
+--------------------------+
| 256 x 128 |
+--------------------------+
| relu
+------------------------------+
| ... x 256 |
+------------------------------+
```
Since the expressive power of deep linear models is no greater than that of shallow linear models, we use ReLU activations for all but the last hidden layer. The final hidden layer does not use any activation function: using an activation function would limit the output space of the final embeddings and might negatively impact the performance of the model. For instance, if ReLUs are used in the projection layer, all components in the output embedding would be non-negative.
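The claim that stacking linear layers buys no extra expressive power is easy to verify numerically with toy matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # "first layer", no activation
W2 = rng.normal(size=(2, 4))   # "second layer", no activation
x = rng.normal(size=8)

two_layer = W2 @ (W1 @ x)      # deep linear model
collapsed = (W2 @ W1) @ x      # a single equivalent linear layer
assert np.allclose(two_layer, collapsed)
```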
We're going to try something similar here. To make experimentation with different depths easy, let's define a model whose depth (and width) is defined by a set of constructor parameters.
```
class QueryModel(tf.keras.Model):
    """Model for encoding user queries."""

    def __init__(self, layer_sizes):
        """Model for encoding user queries.

        Args:
            layer_sizes:
                A list of integers where the i-th entry represents the number
                of units the i-th layer contains.
        """
        super().__init__()

        # We first use the user model for generating embeddings.
        # TODO 1a -- your code goes here

        # Then construct the layers.
        # TODO 1b -- your code goes here

        # Use the ReLU activation for all but the last layer.
        for layer_size in layer_sizes[:-1]:
            self.dense_layers.add(tf.keras.layers.Dense(layer_size, activation="relu"))

        # No activation for the last layer.
        for layer_size in layer_sizes[-1:]:
            self.dense_layers.add(tf.keras.layers.Dense(layer_size))

    def call(self, inputs):
        feature_embedding = self.embedding_model(inputs)
        return self.dense_layers(feature_embedding)
```
The `layer_sizes` parameter gives us the depth and width of the model. We can vary it to experiment with shallower or deeper models.
### Candidate model
We can adopt the same approach for the movie model. Again, we start with the `MovieModel` from the [featurization](https://www.tensorflow.org/recommenders/examples/featurization) tutorial:
```
class MovieModel(tf.keras.Model):

    def __init__(self):
        super().__init__()

        max_tokens = 10_000

        self.title_embedding = tf.keras.Sequential([
            tf.keras.layers.experimental.preprocessing.StringLookup(
                vocabulary=unique_movie_titles, mask_token=None),
            tf.keras.layers.Embedding(len(unique_movie_titles) + 1, 32)
        ])
        self.title_vectorizer = tf.keras.layers.experimental.preprocessing.TextVectorization(
            max_tokens=max_tokens)
        self.title_text_embedding = tf.keras.Sequential([
            self.title_vectorizer,
            tf.keras.layers.Embedding(max_tokens, 32, mask_zero=True),
            tf.keras.layers.GlobalAveragePooling1D(),
        ])
        self.title_vectorizer.adapt(movies)

    def call(self, titles):
        return tf.concat([
            self.title_embedding(titles),
            self.title_text_embedding(titles),
        ], axis=1)
```
And expand it with hidden layers:
```
class CandidateModel(tf.keras.Model):
    """Model for encoding movies."""

    def __init__(self, layer_sizes):
        """Model for encoding movies.

        Args:
            layer_sizes:
                A list of integers where the i-th entry represents the number
                of units the i-th layer contains.
        """
        super().__init__()

        self.embedding_model = MovieModel()

        # Then construct the layers.
        self.dense_layers = tf.keras.Sequential()

        # Use the ReLU activation for all but the last layer.
        for layer_size in layer_sizes[:-1]:
            self.dense_layers.add(tf.keras.layers.Dense(layer_size, activation="relu"))

        # No activation for the last layer.
        for layer_size in layer_sizes[-1:]:
            self.dense_layers.add(tf.keras.layers.Dense(layer_size))

    def call(self, inputs):
        feature_embedding = self.embedding_model(inputs)
        return self.dense_layers(feature_embedding)
```
### Combined model
With both `QueryModel` and `CandidateModel` defined, we can put together a combined model and implement our loss and metrics logic. To make things simple, we'll enforce that the model structure is the same across the query and candidate models.
```
class MovielensModel(tfrs.models.Model):

    def __init__(self, layer_sizes):
        super().__init__()
        self.query_model = QueryModel(layer_sizes)
        self.candidate_model = CandidateModel(layer_sizes)
        self.task = tfrs.tasks.Retrieval(
            metrics=tfrs.metrics.FactorizedTopK(
                candidates=movies.batch(128).map(self.candidate_model),
            ),
        )

    def compute_loss(self, features, training=False):
        # We only pass the user id and timestamp features into the query model. This
        # is to ensure that the training inputs would have the same keys as the
        # query inputs. Otherwise the discrepancy in input structure would cause an
        # error when loading the query model after saving it.
        query_embeddings = self.query_model({
            "user_id": features["user_id"],
            "timestamp": features["timestamp"],
        })
        movie_embeddings = self.candidate_model(features["movie_title"])

        return self.task(
            query_embeddings, movie_embeddings, compute_metrics=not training)
```
## Training the model
### Prepare the data
We first split the data into a training set and a testing set.
```
tf.random.set_seed(42)
shuffled = ratings.shuffle(100_000, seed=42, reshuffle_each_iteration=False)
# Split the data into a training set and a testing set
# TODO 2a -- your code goes here
```
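In the companion TFRS tutorials this split is an 80k/20k `take`/`skip` on the shuffled dataset, and the later training cells expect the results to be batched and cached as `cached_train`/`cached_test`. The disjoint take/skip semantics can be checked with a plain-Python stand-in, no TensorFlow required:

```python
from itertools import islice

def take(seq, n):
    """Mimic tf.data's ds.take(n): the first n examples."""
    return list(islice(iter(seq), n))

def skip(seq, n):
    """Mimic tf.data's ds.skip(n): everything after the first n examples."""
    it = iter(seq)
    for _ in range(n):
        next(it, None)
    return list(it)

shuffled = list(range(100))           # stand-in for the 100k shuffled ratings
train = take(shuffled, 80)            # shuffled.take(80_000) in tf.data
test = take(skip(shuffled, 80), 20)   # shuffled.skip(80_000).take(20_000)

assert len(train) == 80 and len(test) == 20
assert set(train).isdisjoint(test)    # train and test never overlap
```

A sketch of the tf.data version, under those assumptions, would be `train = shuffled.take(80_000)` and `test = shuffled.skip(80_000).take(20_000)`, each then batched and `.cache()`d.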
### Shallow model
We're ready to try out our first, shallow, model!
**NOTE: The cell below will take approximately 15-20 minutes to run to completion.**
```
num_epochs = 300
model = MovielensModel([32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
one_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
```
This gives us a top-100 accuracy of around 0.27. We can use this as a reference point for evaluating deeper models.
### Deeper model
What about a deeper model with two layers?
**NOTE: The cell below will take approximately 15-20 minutes to run to completion.**
```
model = MovielensModel([64, 32])
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
two_layer_history = model.fit(
cached_train,
validation_data=cached_test,
validation_freq=5,
epochs=num_epochs,
verbose=0)
accuracy = two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"][-1]
print(f"Top-100 accuracy: {accuracy:.2f}.")
```
The accuracy here is 0.29, quite a bit better than the shallow model.
We can plot the validation accuracy curves to illustrate this:
```
num_validation_runs = len(one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"])
epochs = [(x + 1)* 5 for x in range(num_validation_runs)]
plt.plot(epochs, one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="1 layer")
plt.plot(epochs, two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="2 layers")
plt.title("Accuracy vs epoch")
plt.xlabel("epoch")
plt.ylabel("Top-100 accuracy");
plt.legend()
```
Even early on in the training, the larger model has a clear and stable lead over the shallow model, suggesting that adding depth helps the model capture more nuanced relationships in the data.
However, even deeper models are not necessarily better. The following model extends the depth to three layers:
**NOTE: The cell below will take approximately 15-20 minutes to run to completion.**
```
# Model extends the depth to three layers
# TODO 3a -- your code goes here
```
In fact, we don't see improvement over the shallow model:
```
plt.plot(epochs, one_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="1 layer")
plt.plot(epochs, two_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="2 layers")
plt.plot(epochs, three_layer_history.history["val_factorized_top_k/top_100_categorical_accuracy"], label="3 layers")
plt.title("Accuracy vs epoch")
plt.xlabel("epoch")
plt.ylabel("Top-100 accuracy");
plt.legend()
```
This is a good illustration of the fact that deeper and larger models, while capable of superior performance, often require very careful tuning. For example, throughout this tutorial we used a single, fixed learning rate. Alternative choices may give very different results and are worth exploring.
With appropriate tuning and sufficient data, the effort put into building larger and deeper models is in many cases well worth it: larger models can lead to substantial improvements in prediction accuracy.
## Next Steps
In this tutorial we expanded our retrieval model with dense layers and activation functions. To see how to create a model that can perform not only retrieval tasks but also rating tasks, take a look at [the multitask tutorial](https://www.tensorflow.org/recommenders/examples/multitask).
```
import numpy as np
import matplotlib.pyplot as plt
import sys
sys.path.append('../')
import wtools
#%matplotlib notebook
# Make the random numbers predictable for testing
np.random.seed(0)
```
# Making Gridded/Mesh Data
## A simple case
First, create a dictionary of your model data. For this example, we create a uniformly discretized 3D volume of data. The first array is some random data `nx` by `ny` by `nz` (10 by 10 by 10 for the snippet below) and the second array includes some spatial data ranging from 0 to 1000, which we restructure in a contiguous fashion (x then y then z) so we can use it for reference when checking how our data is displayed.
```
models = {
'rand': np.random.randn(10,10,10),
'spatial': np.linspace(0, 1000, 1000).reshape((10,10,10)),
}
```
Once you have your model dictionary created, create a `Grid` object and feed it your models like below. Note that we print this object to ensure it was constructed properly and, if not, fill out the parts that are missing. On the backend, this print/output of the object calls `grid.validate()`, which ensures the grid is ready for use!
```
grid = wtools.Grid(models=models)
grid
```
Now let's use this new `Grid` object. Please reference `Grid`'s code docs on https://wtools.readthedocs.io/en/latest/ to understand what attributes and methods are present.
```
grid.keys
grid.x0
grid.hx
_ = grid.save('output/simple.json')
grid.plot_3d_slicer('spatial', yslice=3.5)
```
## Spatially Referenced Grids
Now, what if you know the spatial reference of your grid? Then go ahead and pass the origin and cell spacings to the `Grid` object upon initialization. For this example, we will recreate some volumetric data and build a spatial reference frame.
```
nx, ny, nz = 12, 20, 15
models = {
'rand': np.random.randn(nx,ny,nz),
'spatial': np.linspace(0, nx*ny*nz, nx*ny*nz).reshape((nx,ny,nz)),
}
```
Now let's build the cell spacings along each axis for our gridded data. It is very important to note that the cell sizes do NOT have to be uniform.
```
origin = (100.0, 350.0, -1000.0)
xs = np.array([100, 50] + [10]*(nx-4) + [50, 100])
ys = np.array([100, 50] + [10]*(ny-4) + [50, 100])
zs = np.array([10]*(nz-6) + [25, 50, 75, 100, 150, 200])
grid = wtools.Grid(models=models, x0=origin, h=[xs, ys, zs])
grid
```
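As a sanity check, each spacing array should have one entry per cell along its axis, and the origin plus the summed spacings gives the far corner of the grid (values copied from the cells above):

```python
import numpy as np

nx, ny, nz = 12, 20, 15
origin = np.array([100.0, 350.0, -1000.0])
xs = np.array([100, 50] + [10] * (nx - 4) + [50, 100])
ys = np.array([100, 50] + [10] * (ny - 4) + [50, 100])
zs = np.array([10] * (nz - 6) + [25, 50, 75, 100, 150, 200])

assert (len(xs), len(ys), len(zs)) == (nx, ny, nz)  # one spacing per cell
far_corner = origin + np.array([xs.sum(), ys.sum(), zs.sum()])
print(far_corner)  # → [ 480.  810. -310.]
```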
Now let's display this meshed data with a plotting resolution that represents the model discretization.
```
grid.plot_3d_slicer('spatial')
```
## Now Check that File I/O works both ways
```
_ = grid.save('output/advanced.json')
load = wtools.Grid.load_mesh('output/advanced.json')
load
load.equal(grid), grid.equal(load)
```
# PVGeo
Note that we have also overridden the `toVTK` method so that serialized `Grid` objects can be loaded directly into ParaView using the `wplugins.py` file delivered in this repo.
```
#type(load.toVTK())
```
# 9. Files
## 9.1 What is a file?
A file is a container of information. In a file, the information is stored as a set of consecutive bytes. Inside the file, the information is organized according to a specific format (text, binary, executable, etc.).
Files are represented as series of ones (1) and zeros (0) so that they can be processed by the system (computer).
A file is organized into three parts:
1. Header - holds the metadata of the file's content (name, size, type, etc.).
2. Data - the file's content.
3. End of file - EOF (End-Of-File).
## 9.2 Basic operations on a file
Example 9.2.1: Get the current path of the file being edited.
```
import pathlib
resultado = pathlib.Path().resolve()
resultado
```
Example 9.2.2: Get the name of the current file.
```
%%javascript
IPython.notebook.kernel.execute(`notebookname = '${window.document.getElementById("notebook_name").innerHTML}'`)
notebookname
nombre_archivo = notebookname + '.ipynb'
nombre_archivo
```
Example 9.2.3: Check whether a file exists.
```
dir(resultado)
resultado.absolute
resultado.absolute()
resultado = str(resultado)
resultado
nombre_archivo
import os
ruta_absoluta = os.path.join(resultado, nombre_archivo)
ruta_absoluta
os.path.exists(ruta_absoluta)
ruta_absoluta_no_existente = os.path.join(resultado, 'taller01_archivos.ipynb')
ruta_absoluta_no_existente
os.path.exists(ruta_absoluta_no_existente)
```
**Example 9.2.4**:
Read the contents of a file.
```
ruta_absoluta
def leer_contenido_archivo(ruta_archivo):
    """
    Reads the contents of the file at the given path.

    :param ruta_archivo:string: Path of the file to read.
    :return NoneType.
    """
    if os.path.exists(ruta_archivo):
        if os.path.isfile(ruta_archivo):
            with open(ruta_archivo, 'rt', encoding='utf-8') as f:
                for l in f.readlines():
                    print(l)
        else:
            print('ERROR: The given path does not correspond to a file.')
    else:
        print('ERROR: The file does not exist.')
help(leer_contenido_archivo)
leer_contenido_archivo(ruta_absoluta_no_existente)
leer_contenido_archivo(resultado)
leer_contenido_archivo(ruta_absoluta)
```
**Example 9.2.5**
Access the contents of an existing plain-text file.
```
ruta_archivo_paises = 'T001-09-paises.txt'
ruta_archivo_paises
os.path.exists(ruta_archivo_paises)
os.path.isdir(ruta_archivo_paises)
os.path.isfile(ruta_archivo_paises)
with open(ruta_archivo_paises, 'rt', encoding='utf-8') as f:
    for l in f.readlines():
        print(l, end='')
```
## 9.3 Writing files
**Example 9.3.1**
Ask the user to enter ten numbers and store them in a list. After capturing those values, we will save that content to a plain-text file.
```
numeros = []
for i in range(10):
    while True:
        try:
            numero = float(input('Enter a number: '))
            break
        except:
            print()
            print('MESSAGE: You must enter a value that corresponds to a number.')
            print()
    print()
    numeros.append(numero)
print()
ruta_archivo_numeros = 'T001-09-numeros.txt'
with open(ruta_archivo_numeros, 'wt', encoding='utf-8') as f:
    for n in numeros:
        f.write(f'{n}\n')
with open(ruta_archivo_numeros, 'rt', encoding='utf-8') as f:
    for l in f.readlines():
        print(l, end='')
with open(ruta_archivo_numeros, 'rt', encoding='utf-8') as f:
    linea = f.readline()
    print(linea, end='')
with open(ruta_archivo_numeros, 'rt', encoding='utf-8') as f:
    # Each successive readline() call continues from the current file
    # position; once the lines run out, it returns an empty string.
    linea = f.readline()
    print(linea, end='')
    for _ in range(14):
        print()
        linea = f.readline()
        print(linea, end='')

len(linea)
```
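The stepwise reads above rely on `readline()` returning an empty string once the end of the file is reached (which is why `len(linea)` ends up being `0`). That behaviour can be checked with an in-memory file, no disk needed:

```python
import io

f = io.StringIO('1.0\n2.0\n')   # in-memory stand-in for the numbers file
assert f.readline() == '1.0\n'
assert f.readline() == '2.0\n'
assert f.readline() == ''       # EOF: an empty string, not an exception
assert f.readline() == ''       # and it stays empty on every further call
```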
**Example 9.3.2**
Sum the contents of the file that holds the numbers (`T001-09-numeros.txt`).
```
def leer_contenido_archivo(ruta_archivo):
    """
    Reads the contents of a file.

    :param ruta_archivo:str: Absolute or relative path of the file to read.
    :return list: Contents of the file.
    """
    contenido = []
    with open(ruta_archivo, 'rt', encoding='utf-8') as f:
        for l in f.readlines():
            contenido.append(l.strip())
    return contenido
help(leer_contenido_archivo)
ruta_archivo_numeros
resultado = leer_contenido_archivo(ruta_archivo_numeros)
resultado
help(sum)
# suma_numeros = sum(resultado) # TypeError
type(resultado[0])
type(resultado[-1])
suma_numeros = sum(float(e) for e in resultado)
suma_numeros
type(suma_numeros)
```
**Example 9.3.3**
Ask the user to type names of countries from anywhere in the world.
The program ends when the user has typed the word `FIN`.
After that task, we will create a file to store all the countries the user typed.
```
paises = []
pais = ''
while pais != 'FIN':
    while True:
        pais = input('Type the name of a country (FIN to finish): ')
        pais = pais.strip()
        if len(pais):
            break
        else:
            print()
            print('MESSAGE: You must type a string that does not contain only spaces.')
            print()
    if pais != 'FIN':
        paises.append(pais)
        print()
paises
len(paises)
archivo_paises = 'T001-09-paises.txt'
with open(archivo_paises, 'wt', encoding='utf-8') as f:
    for p in paises:
        f.write(f'{p}\n')
with open(archivo_paises, 'rt', encoding='utf-8', newline='') as f:
    for l in f.readlines():
        # print(l.replace('\n', ''))
        print(l, end='')
help(open)
# The default mode when opening or writing a file is text (t):
with open(archivo_paises, 'r', encoding='utf-8', newline='') as f:
    for l in f.readlines():
        print(l, end='')
otros_paises = ['Guatemala', 'España', 'India', 'Grecia', 'El Congo', 'Sur África', 'Panamá', 'Uruguay', 'Canadá']
otros_paises
len(otros_paises)
archivo_paises
with open(archivo_paises, 'at', encoding='utf-8') as f:
    for p in otros_paises:
        f.write(f'{p}\n')
```
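The `'at'` mode used above appends without truncating, unlike `'wt'`. A self-contained check with a temporary file (the file name is invented for this demo):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'paises_demo.txt')
with open(path, 'wt', encoding='utf-8') as f:   # 'w' creates/truncates
    f.write('Colombia\n')
with open(path, 'at', encoding='utf-8') as f:   # 'a' appends to the end
    f.write('Guatemala\n')
with open(path, 'rt', encoding='utf-8') as f:
    assert f.read() == 'Colombia\nGuatemala\n'
```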
**Example 9.3.4**
Pick a directory of the system (a folder), and save the list of its files and folders (directories) to a plain-text file.
```
help(os.listdir)
os.listdir()
ruta_directorio = r'C:\Windows'
ruta_archivos_directorio = 'T001-09-archivos.txt'
if os.path.exists(ruta_directorio):
    with open(ruta_archivos_directorio, 'wt', encoding='utf-8') as f:
        for a in os.listdir(ruta_directorio):
            f.write(f'{a}\n')
```
**Example 9.3.5**
Read the first n lines of a text file. A function must be created.
```
from itertools import islice
def leer_n_lineas_archivo(ruta_archivo, n):
    """
    Reads an arbitrary number of lines from a plain-text file.

    ruta_archivo: Path of the file to read.
    n: number of lines to read.
    """
    with open(ruta_archivo, 'rt', encoding='utf-8') as f:
        for l in islice(f, n):
            print(l)
help(leer_n_lineas_archivo)
ruta_archivos_directorio
leer_n_lineas_archivo(ruta_archivos_directorio, 5)
leer_n_lineas_archivo(archivo_paises, 3)
leer_n_lineas_archivo(archivo_paises, 10)
leer_n_lineas_archivo(ruta_archivos_directorio, 20)
```
**Example 9.3.6**
Read the last n lines of a text file. A function must be defined.
```
import os
def leer_n_ultimas_lineas(ruta_archivo, n):
    """
    Reads an arbitrary number of lines from a plain-text file: the last n lines.

    ruta_archivo: Path of the file to read.
    n: number of lines to read.
    """
    tamagnio_bufer = 8192
    tamagnio_archivo = os.stat(ruta_archivo).st_size
    contador = 0
    datos = []
    with open(ruta_archivo, 'rt', encoding='utf-8') as f:
        if tamagnio_bufer > tamagnio_archivo:
            tamagnio_bufer = tamagnio_archivo - 1
        while True:
            contador += 1
            f.seek(tamagnio_archivo - tamagnio_bufer * contador)
            datos.extend(f.readlines())
            if len(datos) >= n or f.tell() == 0:
                break
    return datos[-n:]
help(leer_n_ultimas_lineas)
resultado = leer_n_ultimas_lineas(ruta_archivos_directorio, 5)
resultado = [r.strip() for r in resultado]
resultado
resultado = leer_n_ultimas_lineas(archivo_paises, 5)
resultado = [r.strip() for r in resultado]
resultado
```
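For files that fit comfortably in memory there is a much shorter alternative to the buffer-and-seek approach: `collections.deque` with `maxlen=n` keeps only the last `n` lines seen while streaming the file once:

```python
import io
from collections import deque

def last_n_lines(f, n):
    """Stream the file once, keeping only the last n lines seen."""
    return list(deque(f, maxlen=n))

demo = io.StringIO('a\nb\nc\nd\n')   # in-memory stand-in for a text file
print(last_n_lines(demo, 2))  # → ['c\n', 'd\n']
```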
**Example 9.3.7**
Read a file of words and determine which word is the longest (largest number of characters).
```
def palabra_mas_extensa(ruta_archivo):
    """
    Gets the longest word(s) in a text file.

    ruta_archivo: Path of the file to read.
    return: list with the longest word(s). If the file does not exist, returns None.
    """
    if os.path.exists(ruta_archivo):
        if os.path.isfile(ruta_archivo):
            with open(ruta_archivo, 'rt', encoding='utf-8') as f:
                palabras = f.read().split('\n')
            mayor_longitud = len(max(palabras, key=len))
            return [p for p in palabras if len(p) == mayor_longitud]
        else:
            return None
    else:
        return None
help(palabra_mas_extensa)
archivo_paises
resultado = palabra_mas_extensa(archivo_paises)
resultado
```
**Example 9.3.8**
Write a function to get the size (in bytes) of a plain-text file.
```
import os
def obtener_tamagnio_archivo(ruta_archivo):
    """
    Gets the number of bytes a file occupies.

    ruta_archivo: Path of the file.
    return: Number of bytes the file occupies.
    """
    if os.path.exists(ruta_archivo):
        if os.path.isfile(ruta_archivo):
            metadata_archivo = os.stat(ruta_archivo)
            return metadata_archivo.st_size
        else:
            return None
    else:
        return None
help(obtener_tamagnio_archivo)
obtener_tamagnio_archivo(archivo_paises)
obtener_tamagnio_archivo(ruta_archivos_directorio)
```
## 9.4 Writing and reading binary files with the `pickle` module
The `pickle` module allows us to write data in a binary representation.
**Example 9.4.1**
Create a dictionary with country names (keys) and their respective capitals (values).
Then create a binary file using the `pickle` module.
At the end, that file must be read to restore the contents of the `paises` dictionary.
```
paises = {
    'Colombia': 'Bogotá',
    'Perú': 'Lima',
    'Alemania': 'Berlín',
    'Argentina': 'Buenos Aires',
    'Estados Unidos': 'Washington',
    'Rusia': 'Moscú',
    'Ecuador': 'Quito'
}
type(paises)
len(paises)
paises
def es_ruta_valida(ruta):
    """
    Checks whether a given path is valid (can be opened for writing).

    ruta: Path to validate.
    return: True if the path is valid, False otherwise.
    """
    try:
        archivo = open(ruta, 'w')
        archivo.close()
        return True
    except IOError:
        return False
import os
import pickle
def guardar_datos_archivo_binario(ruta_archivo, contenido):
    """
    Saves the data of a Python object to a file.

    ruta_archivo: Path of the file where the data will be saved.
    contenido: Python object with the information to write.
    return: True once the content has been written to disk.
    raises: When the path does not correspond to a file.
    """
    if es_ruta_valida(ruta_archivo):
        with open(ruta_archivo, 'wb') as f:
            pickle.dump(contenido, f)
        return True
    else:
        raise Exception(f'The path ({ruta_archivo}) does not correspond to a file.')
help(guardar_datos_archivo_binario)
archivo_objeto_paises = 'T001-09-objeto-paises.pkl'
guardar_datos_archivo_binario(archivo_objeto_paises, paises)
import pickle
def leer_contenido_archivo_binario(ruta_archivo):
    """
    Reads the contents of a binary file.

    ruta_archivo: Path of the binary file to read.
    return: Python object recovered from the binary file.
    """
    if os.path.exists(ruta_archivo):
        if os.path.isfile(ruta_archivo):
            with open(ruta_archivo, 'rb') as f:
                return pickle.load(f)
        else:
            return None
    else:
        return None
help(leer_contenido_archivo_binario)
archivo_objeto_paises
resultado = leer_contenido_archivo_binario(archivo_objeto_paises)
type(resultado)
len(resultado)
resultado
```
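The same round-trip guarantee can be checked without touching the filesystem, using `pickle.dumps`/`pickle.loads`:

```python
import pickle

paises = {'Colombia': 'Bogotá', 'Perú': 'Lima'}
restored = pickle.loads(pickle.dumps(paises))
assert restored == paises       # equal content...
assert restored is not paises   # ...but a brand-new object
```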
## 9.5 Reading and Writing CSV Files
In a CSV (*Comma Separated Values*) file the content (the records or rows) is structured by specifying a separator character between the different fields that make up a record (row).
id;marca;cpu;ram;ssd<br>
1001;MSi;Intel;32;500<br>
1002;Apple;Intel;16;720<br>
1003;Clone;Intel;128;10000
### 9.5.1 Reading a CSV file
A CSV file can be opened directly with the `open()` function and its contents explored with a `for` loop by calling the `readlines()` function.
```
archivo_computadores = 'T001-09-computadores.csv'
with open(archivo_computadores, 'rt', encoding='utf-8') as f:
    for l in f.readlines():
        print(l, end='')
import csv
with open(archivo_computadores, 'rt', encoding='utf-8') as f:
    archivo_csv = csv.reader(f, delimiter=';')
    for r in archivo_csv:
        print(r)
```
Reading a CSV file with the `DictReader` class:
```
import csv
with open(archivo_computadores, 'rt', encoding='utf-8') as f:
    registros = csv.DictReader(f, delimiter=';')
    for r in registros:
        print(r['id'], r['marca'])
import csv
with open(archivo_computadores, 'rt', encoding='utf-8') as f:
    registros = csv.DictReader(f, delimiter=';')
    total_ssd = 0
    for r in registros:
        total_ssd += int(r['ssd'])
print('Total storage across the three computers:', total_ssd, 'GB.')
```
### 9.5.2 Using the `quotechar` argument of the `csv.reader()` function
The `quotechar` argument specifies the character that encloses text containing the delimiter character.
documento,nombre_completo,direccion<br>
123456789,Daniela Ortiz,Carrera 10 #75-43, Casa 38<br>
654987321,Julio Ordoñez,Vereda El Mortiño
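As an illustrative sketch (using Python's standard `csv` module on an in-memory string rather than the chapter's files), quoting keeps a field containing the delimiter intact:

```python
import csv
import io

# In-memory CSV where the third field contains the delimiter (',')
# and is therefore wrapped in double quotes.
raw = (
    'documento,nombre_completo,direccion\n'
    '123456789,Daniela Ortiz,"Carrera 10 #75-43, Casa 38"\n'
    '654987321,Julio Ordoñez,Vereda El Mortiño\n'
)

rows = list(csv.reader(io.StringIO(raw), delimiter=',', quotechar='"'))
# The quoted address stays a single field despite containing a comma.
print(rows[1])  # ['123456789', 'Daniela Ortiz', 'Carrera 10 #75-43, Casa 38']
```

Without the quotes, the address would be split into two fields at the inner comma.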
```
encabezado = ['documento','nombre_completo','direccion']
datos =[
['123456789','Daniela Ortiz','Carrera 10 #75-43, Casa 38'],
['654987321','Julio Ordoñez','Vereda El Mortiño']
]
```
Writing the content of several lists to a CSV file:
```
import csv
personas = 'T001-09-personas.csv'
with open(personas, 'wt', encoding='utf-8', newline='') as f:
escritura_csv = csv.writer(f, delimiter=',')
escritura_csv.writerow(encabezado)
for d in datos:
escritura_csv.writerow(d)
```
Opening the newly created CSV file:
```
with open(personas, 'rt', encoding='utf-8') as f:
registros = csv.DictReader(f, quotechar='"')
for r in registros:
print(r)
```
## 9.6 Reading CSV Files with the Pandas Library
```
import pandas as pd
pd.__version__
help(pd.read_csv)
df = pd.read_csv(personas)
df
df.info()
df = pd.read_csv(archivo_computadores)
df
df = pd.read_csv(archivo_computadores, sep=None)
df
df = pd.read_csv(archivo_computadores, sep=';')
df
```
Reading a CSV file from a URL:
```
df = pd.read_csv('https://raw.githubusercontent.com/favstats/demdebates2020/master/data/debates.csv')
df.head()
df.tail()
df.info()
df.head(20)
df.tail(30)
df.describe()
```
## 9.7 Writing CSV Files with the Pandas Library
```
help(df.to_csv)
type(df)
df.to_csv('T001-09-debate.csv', index=False)
```
## Purity Randomized Benchmarking
- Last Updated: July 25, 2019
- Requires: qiskit-terra 0.9, qiskit-ignis 0.2, qiskit-aer 0.3
## Introduction
**Purity Randomized Benchmarking** is a variant of the Randomized Benchmarking (RB) method, which quantifies how *coherent* the errors are. The protocol executes RB sequences consisting of Clifford gates, calculates the *purity* $Tr(\rho^2)$, and fits the purity results to an exponentially decaying curve.
This notebook gives an example for how to use the ``ignis.verification.randomized_benchmarking`` module in order to perform purity RB.
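To make the purity quantity concrete, here is a minimal NumPy sketch (hypothetical single-qubit states, independent of the ignis API) showing that purity distinguishes pure from mixed states:

```python
import numpy as np

# Purity Tr(rho^2) is 1 for a pure state and < 1 for a mixed state.
rho_pure = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
rho_mixed = np.eye(2, dtype=complex) / 2               # maximally mixed

def purity(rho):
    """Return Tr(rho^2) as a real number."""
    return np.real(np.trace(rho @ rho))

print(purity(rho_pure))   # 1.0
print(purity(rho_mixed))  # 0.5
```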
```
#Import general libraries (needed for functions)
import numpy as np
import matplotlib.pyplot as plt
from IPython import display
#Import the RB functions
import qiskit.ignis.verification.randomized_benchmarking as rb
#Import the measurement mitigation functions
import qiskit.ignis.mitigation.measurement as mc
#Import Qiskit classes
import qiskit
from qiskit.providers.aer import noise
from qiskit.providers.aer.noise.errors.standard_errors import depolarizing_error, coherent_unitary_error
from qiskit.quantum_info import state_fidelity
```
## Select the Parameters of the Purity RB Run
First, we need to choose the regular RB parameters:
- **nseeds**: The number of seeds. For each seed you will get a separate list of output circuits.
- **length_vector**: The length vector of Clifford lengths. Must be in ascending order. RB sequences of increasing length grow on top of the previous sequences.
- **rb_pattern**: A list of the form [[i],[j],[k],...] or [[i,j],[k,l],...], etc. which will make simultaneous RB sequences. All the patterns should have the same dimension, namely only 1-qubit sequences Qk or only 2-qubit sequences Qi,Qj, etc. The number of qubits is the sum of the entries.
- **length_multiplier = None**: No length_multiplier for purity RB.
- **seed_offset**: What to start the seeds at (e.g. if we want to add more seeds later).
- **align_cliffs**: If true adds a barrier across all qubits in rb_pattern after each set of cliffords.
As well as another parameter for purity RB:
- **is_purity = True**
In this example we run 2Q purity RB (on qubits Q0,Q1).
```
# Example of 2-qubits Purity RB
#Number of qubits
nQ = 2
#Number of seeds (random sequences)
nseeds = 3
#Number of Cliffords in the sequence (start, stop, steps)
nCliffs = np.arange(1,200,20)
#2Q RB on Q0,Q1
rb_pattern = [[0,1]]
```
## Generate Purity RB sequences
We generate purity RB sequences. We start with a small example (so it doesn't take too long to run).
In order to generate the purity RB sequences **rb_purity_circs**, which is a list of lists of lists of quantum circuits, we run the function rb.randomized_benchmarking_seq.
This function returns:
- **rb_purity_circs**: A list of lists of lists of circuits for the purity rb sequences (separate list for each of the $3^n$ options and for each seed).
- **xdata**: The Clifford lengths (with multiplier if applicable).
- **rb_opts_dict**: Option dictionary back out with default options appended.
As well as:
- **npurity**: the number of purity RB circuits (per seed), which equals $3^n$, where $n$ is the dimension; e.g. npurity=3 for 1-qubit RB, npurity=9 for 2-qubit RB.
In order to generate each of the $3^n$ circuits, we need to do (per each of the $n$ qubits) either:
- nothing (Pauli-$Z$), or
- $\pi/2$-rotation around $x$ (Pauli-$X$), or
- $\pi/2$-rotation around $y$ (Pauli-$Y$),
and then measure the result.
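The $3^n$ measurement settings can be enumerated with a short sketch (hypothetical basis labels, not the circuit names ignis generates):

```python
from itertools import product

# One of {Z: do nothing, X/Y: pre-measurement rotation} per qubit,
# giving 3**n settings in total, matching npurity = 3**n.
n = 2
settings = list(product('ZXY', repeat=n))
print(len(settings))  # 9 settings for 2 qubits
```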
```
rb_opts = {}
rb_opts['length_vector'] = nCliffs
rb_opts['nseeds'] = nseeds
rb_opts['rb_pattern'] = rb_pattern
rb_opts['is_purity'] = True
rb_purity_circs, xdata, npurity = rb.randomized_benchmarking_seq(**rb_opts)
print (npurity)
```
To illustrate, we print the circuit names for purity RB (for length=0 and seed=0)
```
for j in range(len(rb_purity_circs[0])):
print (rb_purity_circs[0][j][0].name)
```
As an example, we print the circuit corresponding to the first RB sequences, for the first and last parameter.
```
for i in {0, npurity-1}:
print ("circ no. ", i)
print (rb_purity_circs[0][i][0])
```
## Define a non-coherent noise model
We define a non-coherent noise model for the simulator. To simulate decay, we add depolarizing error probabilities to the CNOT gate.
```
noise_model = noise.NoiseModel()
p2Q = 0.01
noise_model.add_all_qubit_quantum_error(depolarizing_error(p2Q, 2), 'cx')
```
We can execute the purity RB sequences either using Qiskit Aer Simulator (with some noise model) or using IBMQ provider, and obtain a list of results result_list.
```
#Execute purity RB circuits
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
shots = 200
purity_result_list = []
import time
for rb_seed in range(len(rb_purity_circs)):
for d in range(npurity):
print('Executing seed %d purity %d length %d'%(rb_seed, d, len(nCliffs)))
new_circ = rb_purity_circs[rb_seed][d]
job = qiskit.execute(new_circ, backend=backend, noise_model=noise_model, shots=shots, basis_gates=['u1','u2','u3','cx'])
purity_result_list.append(job.result())
print("Finished Simulating Purity RB Circuits")
```
## Fit the results
Calculate the *purity* $Tr(\rho^2)$ as the sum $\sum_k \langle P_k \rangle ^2/2^n$, and fit the purity result into an exponentially decaying function to obtain $\alpha$.
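As a sanity check of this identity, here is a single-qubit NumPy sketch with a hypothetical state (the sum runs over all Paulis, including the identity):

```python
import numpy as np

# Verify Tr(rho^2) == sum_k <P_k>^2 / 2^n for n = 1.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

rho = 0.5 * (I + 0.6 * X + 0.3 * Z)  # a partially mixed state
lhs = np.real(np.trace(rho @ rho))
rhs = sum(np.real(np.trace(rho @ P)) ** 2 for P in (I, X, Y, Z)) / 2
print(round(lhs, 6), round(rhs, 6))  # 0.725 0.725 — the two agree
```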
```
rbfit_purity = rb.PurityRBFitter(purity_result_list, npurity, xdata, rb_opts['rb_pattern'])
```
Print the fit results (separately for each pattern)
```
print ("fit:", rbfit_purity.fit)
```
## Plot the results and the fit
```
plt.figure(figsize=(8, 6))
ax = plt.subplot(1, 1, 1)
# Plot the essence by calling plot_rb_data
rbfit_purity.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit Purity RB'%(nQ), fontsize=18)
plt.show()
```
## Standard RB results
For comparison, we also print the standard RB fit results
```
standard_result_list = []
count = 0
for rb_seed in range(len(rb_purity_circs)):
for d in range(npurity):
if d==0:
standard_result_list.append(purity_result_list[count])
count += 1
rbfit_standard = rb.RBFitter(standard_result_list, xdata, rb_opts['rb_pattern'])
print (rbfit_standard.fit)
plt.figure(figsize=(8, 6))
ax = plt.subplot(1, 1, 1)
# Plot the essence by calling plot_rb_data
rbfit_standard.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit Standard RB'%(nQ), fontsize=18)
plt.show()
```
## Measurement noise model and measurement error mitigation
Since part of the noise might be due to measurement errors and not only due to coherent errors, we repeat the example with measurement noise and demonstrate a mitigation of measurement errors before calculating the purity rb fitter.
```
#Add measurement noise
for qi in range(nQ):
read_err = noise.errors.readout_error.ReadoutError([[0.75, 0.25],[0.1,0.9]])
noise_model.add_readout_error(read_err,[qi])
#Generate the calibration circuits
meas_calibs, state_labels = mc.complete_meas_cal(qubit_list=[0,1])
backend = qiskit.Aer.get_backend('qasm_simulator')
shots = 200
#Execute the calibration circuits
job_cal = qiskit.execute(meas_calibs, backend=backend, shots=shots, noise_model=noise_model)
meas_result = job_cal.result()
#Execute the purity RB circuits
meas_purity_result_list = []
for rb_seed in range(len(rb_purity_circs)):
for d in range(npurity):
print('Executing seed %d purity %d length %d'%(rb_seed, d, len(nCliffs)))
new_circ = rb_purity_circs[rb_seed][d]
job_pur = qiskit.execute(new_circ, backend=backend, shots=shots, noise_model=noise_model, basis_gates=['u1','u2','u3','cx'])
meas_purity_result_list.append(job_pur.result())
#Fitters
meas_fitter = mc.CompleteMeasFitter(meas_result, state_labels)
rbfit_purity = rb.PurityRBFitter(meas_purity_result_list, npurity, xdata, rb_opts['rb_pattern'])
#no correction
rho_pur = rbfit_purity.fit
print('Fit (no correction) =', rho_pur)
#correct data
correct_purity_result_list = []
for meas_result in meas_purity_result_list:
correct_purity_result_list.append(meas_fitter.filter.apply(meas_result))
#with correction
rbfit_cor = rb.PurityRBFitter(correct_purity_result_list, npurity, xdata, rb_opts['rb_pattern'])
rho_pur = rbfit_cor.fit
print('Fit (w/ correction) =', rho_pur)
plt.figure(figsize=(8, 6))
ax = plt.subplot(1, 1, 1)
# Plot the essence by calling plot_rb_data
rbfit_purity.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit Purity RB'%(nQ), fontsize=18)
plt.show()
plt.figure(figsize=(8, 6))
ax = plt.subplot(1, 1, 1)
# Plot the essence by calling plot_rb_data
rbfit_cor.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit Mitigated Purity RB'%(nQ), fontsize=18)
plt.show()
```
## Define a coherent noise model
We define a coherent noise model for the simulator. In this example we expect the purity RB to measure no errors, but standard RB will still measure a non-zero error.
```
err_unitary = np.zeros([2, 2], dtype=complex)
angle_err = 0.1
for i in range(2):
err_unitary[i, i] = np.cos(angle_err)
err_unitary[i, (i+1) % 2] = np.sin(angle_err)
err_unitary[0, 1] *= -1.0
error = coherent_unitary_error(err_unitary)
noise_model = noise.NoiseModel()
noise_model.add_all_qubit_quantum_error(error, 'u3')
#Execute purity RB circuits
backend = qiskit.Aer.get_backend('qasm_simulator')
basis_gates = ['u1','u2','u3','cx'] # use U,CX for now
shots = 200
coherent_purity_result_list = []
import time
for rb_seed in range(len(rb_purity_circs)):
for d in range(npurity):
print('Executing seed %d purity %d length %d'%(rb_seed, d, len(nCliffs)))
new_circ = rb_purity_circs[rb_seed][d]
job = qiskit.execute(new_circ, backend=backend, shots=shots, noise_model=noise_model, basis_gates=['u1','u2','u3','cx'])
coherent_purity_result_list.append(job.result())
print("Finished Simulating Purity RB Circuits")
rbfit_purity = rb.PurityRBFitter(coherent_purity_result_list, npurity, xdata, rb_opts['rb_pattern'])
```
Print the fit results (separately for each pattern)
```
print ("fit:", rbfit_purity.fit)
```
## Plot the results and the fit
```
plt.figure(figsize=(8, 6))
ax = plt.subplot(1, 1, 1)
# Plot the essence by calling plot_rb_data
rbfit_purity.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit Purity RB'%(nQ), fontsize=18)
plt.show()
```
## Standard RB results
For comparison, we also print the standard RB fit results
```
standard_result_list = []
count = 0
for rb_seed in range(len(rb_purity_circs)):
for d in range(npurity):
if d==0:
standard_result_list.append(coherent_purity_result_list[count])
count += 1
rbfit_standard = rb.RBFitter(standard_result_list, xdata, rb_opts['rb_pattern'])
print (rbfit_standard.fit)
plt.figure(figsize=(8, 6))
ax = plt.subplot(1, 1, 1)
# Plot the essence by calling plot_rb_data
rbfit_standard.plot_rb_data(0, ax=ax, add_label=True, show_plt=False)
# Add title and label
ax.set_title('%d Qubit Standard RB'%(nQ), fontsize=18)
plt.show()
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
```
import cv2
import numpy as np
import dlib
from tkinter import *
import time
from PIL import Image, ImageTk
cap = cv2.VideoCapture(0)
ret,frame=cap.read()
detector = dlib.get_frontal_face_detector()
count =0
marks =0
root = Tk()
root.geometry("975x585")
root.title("Exam Cheating Identifier v1.1")
root.iconbitmap('fav.ico')
#i = StrVar()
#i = 0
#j = StrVar()
#j = 0
tbt= StringVar()
#functions
def tick():
time_string = time.strftime("%H:%M:%S")
clock.config(text=time_string)
fd()
clock.after(200, tick)
def fd():
global count
global marks
ret,frame=cap.read()
gray=cv2.cvtColor(frame,cv2.COLOR_BGR2GRAY)
faces=detector(gray)
x = 0
y = 0
for face in faces:
x,y=face.left(),face.top()
w,h=face.right(),face.bottom()
cv2.rectangle(frame,(x,y),(w,h),(0,225,0),3)
if (x == 0) and (y == 0):
print((x,y),"No Face")
count = count - 1
print(count)
# Deduct one mark every 5 consecutive no-face frames and refresh the display
if count in (-5, -10, -15, -20, -25):
marks = marks - 1
txt2.delete(0.0,'end')
txt2.insert(0.0,marks)
else:
print((x,y),"Face")
txt.delete(0.0,'end')
txt.insert(0.0,count)
im1 = Image.fromarray(frame)
photo_root = ImageTk.PhotoImage(im1)
img_root.config(image = photo_root)
img_root.image = photo_root
txt.delete(0.0,'end')
txt.insert(0.0,count)
f2 = Frame(root, bg = "silver", borderwidth = 6 , relief = GROOVE)
f2.pack(side = TOP, fill="x")
f3 = Frame(root, bg = "silver", borderwidth = 6 , relief = GROOVE)
f3.pack(side = BOTTOM, fill="x")
f1 = Frame(root, bg = "silver", borderwidth = 6 , relief = GROOVE)
f1.pack(side = RIGHT, fill="y")
#Labels
l1 = Label(f1, text = " "*2, bg = "silver" )
l1.pack()
l1a = Label(f1, text = " STUDENTS RECORD ",
bg = "silver" , fg = "black" , font = ("Berlin Sans FB Demi",20,"bold") )
l1a.pack()
l1b = Label(f1, text = " Marks Deduction ",
bg = "silver" , fg = "black" , font = ("Arial",10,"bold") )
l1b.pack()
l1c = Label(f1, text = " ",
bg = "silver" , fg = "black" , font = ("Arial",10,"bold") )
l1c.pack()
l2 = Label(f2, text = " EXAM CHEATING IDENTIFIER ",
bg = "silver" , fg = "black" , font = ("Berlin Sans FB Demi",30,"bold") )
l2.pack()
l3 = Label(f3, text = "Members: M.Hamza, Fouzan, Waqas, Haris, Zeeshan ", bg = "silver" )
l3.pack(side=LEFT)
l3a = Label(f3, text = "Instructor: Sir Roohan ❤ ", bg = "silver" )
l3a.pack(side=RIGHT)
clock=Label(f3, font=("times", 10, "bold"), fg="green", bg="silver")
clock.pack(anchor=S,side=BOTTOM )
# Student images icon
photo = PhotoImage(file="2.png")
img1 = Label(f1, image=photo, bg="silver")
img1.pack(pady=2,padx=15)
seat1 = Label(f1, text="Deducted Points",bg="silver",font=("Arial",10,"italic")).pack(pady=0,padx=18)
img_root = Label(root, text = "Live Streaming")
img_root.pack()
### Text
txt = Text(f1, height = 1, width = 3,bg = "silver",fg = "red", font=("Arial",30,"bold"))
txt.pack()
l1c = Label(f1, text = " Marks Deducted ",
bg = "silver" , fg = "black" , font = ("Arial",10,"bold") )
l1c.pack()
txt2 = Text(f1, height = 1, width = 3,bg = "silver",fg = "red", font=("Arial",30,"bold"))
txt2.pack(pady = 5)
#Buttons
B2 = Button(f1,text="Start", bg="gray", fg="white", height=2 , width=15,font=("Arial",10,"bold"), command=tick)
B2.pack(side=LEFT,pady=15, padx=15, anchor="se")
B3 = Button(f1,text="Quit", bg="gray", fg="white", height=2 , width=15,font=("Arial",10,"bold"),command=root.destroy)
B3.pack(side=RIGHT,pady=15,padx=15,anchor="sw")
root.mainloop()
cap.release()
```
# KNeighborsClassifier with MaxAbsScaler
This code template is for a classification task using a simple KNeighborsClassifier, based on the K-Nearest Neighbors algorithm, with the MaxAbsScaler scaling technique.
### Required Packages
```
!pip install imblearn
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from imblearn.over_sampling import RandomOverSampler
from sklearn.preprocessing import LabelEncoder,MaxAbsScaler
from sklearn.metrics import classification_report,plot_confusion_matrix
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
file_path= ""
```
List of features required for model training.
```
features = []
```
Target feature for prediction.
```
target = ''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the head function to display the initial rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
It is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X=df[features]
Y=df[target]
```
### Data Preprocessing
Since most machine learning models in the sklearn library don't handle string categorical data or null values, we have to explicitly remove or replace them. The snippet below defines functions that fill any null values (with the column mean or mode) and encode string classes as integers.
```
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
def EncodeY(df):
if len(df.unique())<=2:
return df
else:
un_EncodedT=np.sort(pd.unique(df), kind='mergesort')
df=LabelEncoder().fit_transform(df)
EncodedT=[xi for xi in range(len(un_EncodedT))]
print("Encoded Target: {} to {}".format(un_EncodedT,EncodedT))
return df
```
Calling preprocessing functions on the feature and target set.
```
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=EncodeY(NullClearner(Y))
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
#### Distribution Of Target Variable
```
plt.figure(figsize = (10,6))
se.countplot(Y)
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
```
#### Handling Target Imbalance
The challenge of working with imbalanced datasets is that most machine learning techniques will ignore the minority class and in turn perform poorly on it, although performance on the minority class is typically what matters most.
One approach to addressing imbalanced datasets is to oversample the minority class. The simplest approach involves duplicating examples in the minority class. We will perform oversampling using the imblearn library.
```
x_train,y_train = RandomOverSampler(random_state=123).fit_resample(x_train, y_train)
```
### Model
KNN is one of the simplest machine learning algorithms, based on the supervised learning technique. The algorithm stores all the available data and classifies a new data point based on its similarity to that data, putting the new case into the category most similar to the available categories. At the training phase the KNN algorithm simply stores the dataset; when it receives new data, it classifies it into the most similar category.
#### Model Tuning Parameters
> - **n_neighbors** -> Number of neighbors to use by default for kneighbors queries.
> - **weights** -> weight function used in prediction. {**uniform,distance**}
> - **algorithm**-> Algorithm used to compute the nearest neighbors. {**‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’**}
> - **p** -> Power parameter for the Minkowski metric. When p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. For arbitrary p, minkowski_distance (l_p) is used.
> - **leaf_size** -> Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree. The optimal value depends on the nature of the problem.
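A hedged illustration on toy data, setting the parameters above explicitly (the values shown are sklearn's defaults except `n_neighbors`; tune them for real data):

```python
from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(
    n_neighbors=3,      # poll the 3 nearest training points
    weights='uniform',  # 'distance' would weight closer neighbours more
    algorithm='auto',   # let sklearn choose ball_tree / kd_tree / brute
    p=2,                # p=2 -> Euclidean metric, p=1 -> Manhattan
    leaf_size=30,       # affects tree build/query speed, not the result
)
knn.fit([[0], [1], [2], [10], [11], [12]], [0, 0, 0, 1, 1, 1])
print(knn.predict([[1.5], [10.5]]))  # -> [0 1]
```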
## Data Rescaling
MaxAbsScaler scales each feature by its maximum absolute value.
This estimator scales and translates each feature individually such that the maximal absolute value of each feature in the training set will be 1.0. It does not shift/center the data, and thus does not destroy any sparsity.
[For More Reference](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MaxAbsScaler.html)
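A tiny demonstration of this behavior on a toy matrix (assuming scikit-learn is available):

```python
import numpy as np
from sklearn.preprocessing import MaxAbsScaler

# Each column is divided by its maximum absolute value, so every scaled
# column lies in [-1, 1]; zeros stay zeros, preserving sparsity.
X = np.array([[ 1., -2.],
              [ 2.,  4.],
              [ 4., -1.]])
X_scaled = MaxAbsScaler().fit_transform(X)
print(X_scaled)
# Column maxima of |X| are 4 and 4, so the first row becomes [0.25, -0.5].
```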
```
model=make_pipeline(MaxAbsScaler(),KNeighborsClassifier(n_jobs=-1))
model.fit(x_train,y_train)
```
#### Model Accuracy
score() method return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
```
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
```
#### Confusion Matrix
A confusion matrix is utilized to understand the performance of the classification model or algorithm in machine learning for a given test set where results are known.
```
plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues)
```
#### Classification Report
A classification report is used to measure the quality of predictions from a classification algorithm: how many predictions were correct and how many were not.
* where:
- Precision:- Accuracy of positive predictions.
- Recall:- Fraction of positives that were correctly identified.
  - f1-score:- Harmonic mean of precision and recall.
- support:- Support is the number of actual occurrences of the class in the specified dataset.
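A short worked example on hypothetical labels, checking these definitions against scikit-learn's implementations:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# precision = TP/(TP+FP), recall = TP/(TP+FN), f1 = harmonic mean of the two.
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]   # TP=2, FP=1, FN=1, TN=2
p = precision_score(y_true, y_pred)   # 2 / (2 + 1)
r = recall_score(y_true, y_pred)      # 2 / (2 + 1)
f = f1_score(y_true, y_pred)          # 2*p*r / (p + r)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.667 0.667 0.667
```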
```
print(classification_report(y_test,model.predict(x_test)))
```
#### Creator: Vikas Mishra, Github: [Profile](https://github.com/Vikaas08)
# Load data and libraries
```
from google.colab import drive
drive.mount('/content/drive')
!pip install shap
!pip install pyitlib
import os
os.path.abspath(os.getcwd())
os.chdir('/content/drive/My Drive/Protein project')
os.path.abspath(os.getcwd())
from __future__ import division ###for float operation
from collections import Counter
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.metrics import recall_score ##tp / (tp + fn)
from sklearn.metrics import precision_score #tp / (tp + fp)
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.model_selection import KFold, StratifiedKFold
from pyitlib import discrete_random_variable as drv
import time
import timeit
import networkx as nx
import matplotlib.pyplot as plt
def readData(filename):
fr = open(filename)
returnData = []
headerLine=fr.readline()###move cursor
for line in fr.readlines():
lineStrip = line.strip().replace('"','')
lineList = lineStrip.split('\t')
returnData.append(lineList)###['3','2',...]
return returnData
"""first case P450 = [['1','1',....],[],[].....,[]] second case P450 = array([['1','1',....],[],[].....,[]]), third case P450 = """
P450 = readData('P450.txt') ### [[],[],[],....[]]
P450 = np.array(P450) ### either [['1','1',....],[],[].....,[]] or array([['1','1',....],[],[].....,[]]) works, but note that keys are '1', '0'
#P450 = P450.astype(int) ### for shap array [[1,1,....],[],[].....,[]], keys are 1, 0
M=np.matrix([[245, 9, 0, 3, 0, 2, 65, 8],
[9, 218, 17, 17, 49, 10, 50, 17],
[0, 17, 175, 16, 25, 13, 0, 46],
[3, 17, 16, 194, 19, 0, 0, 3],
[0, 49, 25, 19, 199, 10, 0, 3],
[2, 10, 13, 0, 10, 249, 50, 74],
[65, 50, 0, 0, 0, 50, 262, 11],
[8, 17, 46, 3, 3, 74, 11, 175]])
X = P450[:,0:8]
y = P450[:,-1]
def readData2(filename):
fr = open(filename)
returnData = []
headerLine=fr.readline()###move cursor
for line in fr.readlines():
linestr = line.strip().replace(', ','')
lineList = list(linestr)
returnData.append(lineList)###['3','2',...]
return returnData
lactamase = readData2('lactamase.txt')
lactamase = np.array(lactamase)
#lactamase = lactamase.astype(int)
M2 = np.matrix([[101, 5, 0, 2, 0, 14, 4, 37],
[5 ,15, 14 ,1 ,7 ,7, 0 ,19],
[0, 14, 266, 15, 14, 2, 26, 4],
[2, 1, 15, 28, 2 ,15, 4, 0],
[0, 7, 14, 2, 32, 9 ,0, 8],
[14, 7, 2 ,15, 9, 29, 7, 9],
[4, 0, 26, 4 ,0 ,7 ,72, 21],
[37, 19, 4, 0, 8, 9, 21, 211]])
X2 = lactamase[:,0:8]
y2 = lactamase[:,-1]
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin
from sklearn.utils.validation import check_X_y, check_array, check_is_fitted ### Checks if the estimator is fitted by verifying the presence of fitted attributes (ending with a trailing underscore)
#from sklearn.utils.multiclass import unique_labels, not necessary, can be replaced by array(list(set()))
```
# Bayesian network
```
"""
Bayesian network implementation
API inspired by SciKit-learn.
"""
class Bayes_net(BaseEstimator, ClassifierMixin):
def fit(self,X,y,M = None):
raise NotImplementedError
def predict_proba(self, X): ### key prediction methods, all other prediction methods will use it first.
raise NotImplementedError
def predict_binary(self,X):
"""
Perform classification on an array of test vectors X, predicting P(C1|X); works only for binary classification.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Returns
-------
C : ndarray of shape (n_samples,)
Predicted P(C1|X)
"""
Prob_C = self.predict_proba(X) ### Prob_C is n*|C| np.array
return(Prob_C[:,0])
def predict(self, X):
"""
Perform classification on an array of test vectors X.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Returns
-------
C : ndarray of shape (n_samples,)
Predicted target values for X
"""
Prob_C = self.predict_proba(X) ## Prob_C is |C|*n np.array ,C is self.C
return( np.array([self.classes_[ele] for ele in np.argmax(Prob_C, axis=1)] ) )
def Conditional_log_likelihood_general(self,y_true,y_pred_prob,C):
"""Calculate the conditional log likelihood.
:param y_true: The true class labels, e.g. ['1','1',...,'0','0']
:param y_pred_prob: np.array with the probability of each class for each instance. The ith column is the predicted probability for class C[i].
:param C: Class labels, e.g. array(['1','0']). C has to use the same labels as y_true.
:return: CLL. A scalar.
"""
C = list(C) ## only list can use .index
cll = []
for i in range(len(y_true)):
cll.append( y_pred_prob[i,C.index(y_true[i])] ) ## \hat p(c_true|c_true)
cll = [np.log2(ele) for ele in cll]
cll = np.array(cll)
return(sum(cll))
def plot_tree_structure(self,mapping = None,figsize = (5,5)):
check_is_fitted(self)
parent = self.parent_
egdes = [(k,v) for v,k in parent.items() if k is not None]
G = nx.MultiDiGraph()
G.add_edges_from(egdes)
#mapping=dict(zip(range(8),['b0','b1','b2','b3','b4','b5','b6','b7']))
plt.figure(figsize=figsize)
nx.draw_networkx(G,nx.shell_layout(G))
```
## Naive Bayes
```
class NB(Bayes_net):
name = "NB"
def __init__(self, alpha = 1):
self.alpha = alpha
def fit(self,X, y, M = None):
""" Implementation of a fitting function.
Parameters
----------
X : {array-like, sparse matrix}, shape (n_samples, n_features)
The training input samples.
y : array-like, shape (n_samples,) or (n_samples, n_outputs)
The target values (class labels in classification, real numbers in
regression).
Returns
-------
self : object
Returns self.
"""
# countDict_, classes_, p_ , P_class_prior_, Dict_C_, K_ ,training_time_, is_fitted_ are fitted "coef_"
# coef_ has to been refreshed each fitting.
X, y = check_X_y(X, y)
t = time.process_time()
#start timing
countDict = Counter(y) ## {c1:n1,c2:n2,c3:n3} sorted by counts
C = list(countDict.keys()) ### [class1 , class2, class3] in appearing order
n,p = X.shape ## num of features 8 ### values same order as .keys()
P_class_prior = [(ele+self.alpha)/ ( n + self.alpha*len(C) ) for ele in countDict.values()] ### prior for each class [p1,p2,p3]
P_class_prior = dict(zip(C, P_class_prior)) ## {c1:p1,c2:p2,c3:p3} ## should in correct order, .keys .values.
Dict_C = {} ### {c1:[counter1, ....counter8], c2:[counter1, ....counter8], c3: [counter1, ....counter8]}
K = {} ## [x1 unique , x2 unique .... x8unique]
for c in C:
ListCounter_c = []
for i in range(p):
row_inx_c = [row for row in range(n) if y[row] == c]
x_i_c = X[row_inx_c,i]
ListCounter_c.append(Counter(x_i_c))
if c == C[0]:
x_i = X[:,i]
K[i] = len(Counter(x_i))
Dict_C[c] = ListCounter_c
CP_time = np.array(time.process_time() - t)
self.is_fitted_ = True
self.Dict_C_,self.p_,self.P_class_prior_,self.K_,self.classes_,self.countDict_,self.training_time_ = Dict_C,p,P_class_prior,K,np.array(C),countDict,CP_time
return self
def predict_proba(self,X):
"""
Return probability estimates for the test vector X.
Parameters
----------
X : array-like of shape (n_samples, n_features)
Returns
-------
C : array-like of shape (n_samples, n_classes)
Returns the probability of the samples for each class in
the model. The columns correspond to the classes in sorted
order, as they appear in the attribute :term:`classes_`.
"""
check_is_fitted(self)
X = check_array(X)
Prob_C = []
for ins in X:
P_class = self.P_class_prior_.copy() ### {c1:p1, c2:p2} #### !!!! dict1 = dict2 , change both simultaneously!!!
for c in self.classes_:
ListCounter_c = self.Dict_C_[c]
for i in range(self.p_):
P_class[c] = P_class[c] * (ListCounter_c[i][ins[i]]+self.alpha) / (self.countDict_[c] + self.alpha*self.K_[i])
## normalize P_class
P_class = {key: P_class[key]/sum(list(P_class.values())) for key in P_class.keys()}
Prob_C.append(list(P_class.values())) ### check the class order is correct
Prob_C = np.array(Prob_C) ### for shap !!!!
return Prob_C
nb = NB()
nb.fit(X,y)
nb.predict_proba(X)
#nb.get_params()
#nb.classes_
print(nb.name)
print(nb.predict_proba(X))
nb.score(X,y)
```
## TAN
```
class TAN(Bayes_net):
name = "TAN"
def __init__(self, alpha = 1,starting_node = 0):
self.starting_node = starting_node
self.alpha = alpha
def To_CAT(self, X_i):
"""For using CMI purpose, convert X_i e.g ['a','b','a']/['0','1','0'] to [0,1,0].
:param X_i: one feature column.
:return: list(type int)
"""
X_i_list = list(set(X_i));X_i_dict = dict(zip(X_i_list, range(len(X_i_list)) ))
return([X_i_dict[ele] for ele in X_i])
def get_mutual_inf(self,X,Y):
"""get conditional mutual inf of all pairs of features, part of training
:return: np.array matrix.
"""
t = time.process_time()
n,p = X.shape
M = np.zeros((p,p))
Y = self.To_CAT(Y)
for i in range(p):
X_i = X[:,i]
X_i = self.To_CAT(X_i)
for j in range(p):
X_j = X[:,j]
X_j = self.To_CAT(X_j)
M[i,j] = drv.information_mutual_conditional(X_i,X_j,Y)
mutual_inf_time = time.process_time() - t
return M, mutual_inf_time
def Findparent(self,X,Y):
M,mutual_inf_time = self.get_mutual_inf(X,Y)
t = time.process_time()
np.fill_diagonal(M,0)
p = int(M.shape[0])
        V = range(p)  ## set of all nodes
        st = self.starting_node
        Vnew = [st]  ## nodes whose parent is already known; initialised with the starting node (TAN picks one arbitrarily)
        parent = {st: None}  ## dict encoding the tree: child -> parent
        while set(Vnew) != set(V):  ## while some nodes still have no parent
            index_i = []  ## for each node in Vnew: the best-connected node not yet in Vnew
            max_inf = []  ## the corresponding mutual-information weight
            for i in range(len(Vnew)):  ## could be parallelised
                vnew = Vnew[i]
                ListToSorted = list(M[:, vnew])
                index = sorted(range(len(ListToSorted)), key=lambda k: ListToSorted[k], reverse=True)
                index_i.append([ele for ele in index if ele not in Vnew][0])
                max_inf.append(M[index_i[-1], vnew])
            index1 = sorted(range(len(max_inf)), key=lambda k: max_inf[k], reverse=True)[0]  ## position of the heaviest edge within Vnew/index_i/max_inf
            Vnew.append(index_i[index1])  ## attach that node to the tree
            parent[index_i[index1]] = Vnew[index1]  ## the newly attached node is the child, so no node ever gets two parents
prim_time = time.process_time() - t
return parent,mutual_inf_time,prim_time
    def fit(self, X, y, M=None):  ## fit using the training data only
X, y = check_X_y(X, y)
parent,mutual_inf_time,prim_time = self.Findparent(X,y)
t = time.process_time()
countDict = Counter(y)
C = list(countDict.keys()) ### [class1 , class2, class3] in appearing order
n,p = X.shape
P_class = [(ele+self.alpha)/( n + self.alpha*len(C) ) for ele in list(countDict.values())] ### prior for each class [p1,p2,p3], ### .values same order as .keys()
P_class = dict(zip(C, P_class)) ## {c1:p1,c2:p2,c3:p3} ## should in correct order, .keys .values.
Dict_C = {} ### {c1:[counter1, ....counter8], c2:[counter1, ....counter8], c3: [counter1, ....counter8]}
K = {}
root_i = self.starting_node ## 0 ,1 ,2 shows the position, thus int
x_i = X[:,root_i]
K[root_i] = len(Counter(x_i))
        for c in C:  ## c is the original class label, e.g. '1' rather than 1
ListCounter_c = {}
row_inx_c = [row for row in range(n) if y[row] == c]
x_i_c = X[row_inx_c,root_i]
            ListCounter_c[root_i] = Counter(x_i_c)  ## ListCounter_c keys are feature positions (int); Counter keys are the original feature values, not necessarily int
for i in [e for e in range(0,p) if e != root_i]:
if c == C[0]:
x_i = X[:,i]
K[i] =len(Counter(x_i))
x_parent = X[:,parent[i]]
x_parent_counter = Counter(x_parent)
x_parent_counter_length = len(x_parent_counter)
x_parent_value = list(x_parent_counter.keys())
dict_i_c = {}
for j in range(x_parent_counter_length):
row_inx_c_parent_j = [row for row in range(n) if y[row] == c and x_parent[row] == x_parent_value[j]]
x_i_c_p_j = X[row_inx_c_parent_j, i]
dict_i_c[x_parent_value[j]] = Counter(x_i_c_p_j) ### x_parent_value[j] can make sure it is right key.
ListCounter_c[i] = dict_i_c
Dict_C[c] = ListCounter_c
CP_time = time.process_time() - t
self.is_fitted_ = True
self.Dict_C_,self.p_,self.P_class_prior_,self.K_,self.classes_,self.countDict_, self.parent_ = Dict_C,p,P_class,K,np.array(C),countDict,parent
self.training_time_ = np.array([mutual_inf_time,prim_time,CP_time])
return self
def predict_proba(self,X):
check_is_fitted(self)
X = check_array(X)
Prob_C = []
root_i = self.starting_node
for ins in X:
P_class = self.P_class_prior_.copy()
for c in self.classes_:
ListCounter_c = self.Dict_C_[c]
P_class[c] = P_class[c] * (ListCounter_c[root_i][ins[root_i]]+self.alpha) / (self.countDict_[c]+self.alpha*self.K_[root_i])
for i in [e for e in range(0,self.p_) if e != root_i]:
                        pValue = ins[self.parent_[i]]  ## value of feature i's parent in this instance
                        try:  ## pValue was seen in training for class c
                            Deno = sum(ListCounter_c[i][pValue].values())  ## number of training samples with class c and parent == pValue
                            P_class[c] = P_class[c] * (ListCounter_c[i][pValue][ins[i]] + self.alpha) / (Deno + self.alpha*self.K_[i])  ## smoothed P(x_i | parent, c)
                        except KeyError:  ## pValue never seen in training for class c
                            Deno = 0
                            P_class[c] = P_class[c] * (0 + self.alpha) / (Deno + self.alpha*self.K_[i])
P_class = {key: P_class[key]/sum(list(P_class.values())) for key in P_class.keys()} ### normalize p_class
Prob_C.append(list(P_class.values())) ### check the class order is correct
        Prob_C = np.array(Prob_C)  ## return an np.array, as expected by SHAP
return Prob_C
tan = TAN()
tan.get_params()
tan.fit(X,y)
#tan.fit(X,y)
print(tan.predict_proba(X))
tan.score(X,y)
```
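`Findparent` above is a Prim-style construction of a maximum-weight spanning tree over the conditional-mutual-information matrix, directed away from the starting node so every feature gets exactly one parent. The same idea can be sketched with simpler bookkeeping on a tiny hypothetical weight matrix (plain lists, no numpy):

```python
def find_parents(W, start=0):
    """Prim-style tree growth: repeatedly attach the outside node connected
    to the current tree by the heaviest edge; the newly attached node
    becomes the child, so each node ends up with exactly one parent."""
    p = len(W)
    in_tree = [start]
    parent = {start: None}
    while len(in_tree) < p:
        best = None  # (weight, child, parent) of the heaviest crossing edge
        for v in in_tree:
            for u in range(p):
                if u not in in_tree and (best is None or W[u][v] > best[0]):
                    best = (W[u][v], u, v)
        _, child, par = best
        in_tree.append(child)
        parent[child] = par
    return parent

# Hypothetical symmetric weight matrix for 3 features
toy_M = [[0, 5, 1],
         [5, 0, 2],
         [1, 2, 0]]
print(find_parents(toy_M))  # {0: None, 1: 0, 2: 1}
```

Here the heaviest edge from node 0 reaches node 1 (weight 5), and node 2 then attaches to node 1 (weight 2 beats weight 1), giving the chain 0 → 1 → 2.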
## STAN
```
class STAN(Bayes_net):
name = "STAN"
def __init__(self,alpha = 1,starting_node = 0):
self.starting_node = starting_node
self.alpha = alpha
def Findparent(self,M):
M = M.copy()
np.fill_diagonal(M,0)
p = int(M.shape[0])
        V = range(p)  ## set of all nodes
        st = self.starting_node
        Vnew = [st]  ## nodes whose parent is already known; initialised with the starting node (chosen arbitrarily)
        parent = {st: None}  ## dict encoding the tree: child -> parent
        while set(Vnew) != set(V):  ## while some nodes still have no parent
            index_i = []  ## for each node in Vnew: the best-connected node not yet in Vnew
            max_inf = []  ## the corresponding mutual-information weight
            for i in range(len(Vnew)):  ## could be parallelised
                vnew = Vnew[i]
                ListToSorted = list(M[:, vnew])
                index = sorted(range(len(ListToSorted)), key=lambda k: ListToSorted[k], reverse=True)
                index_i.append([ele for ele in index if ele not in Vnew][0])
                max_inf.append(M[index_i[-1], vnew])
            index1 = sorted(range(len(max_inf)), key=lambda k: max_inf[k], reverse=True)[0]  ## position of the heaviest edge within Vnew/index_i/max_inf
            Vnew.append(index_i[index1])  ## attach that node to the tree
            parent[index_i[index1]] = Vnew[index1]  ## the newly attached node is the child, so no node ever gets two parents
return parent
    def fit(self, X, y, M):  ## fit using the training data only
X, y = check_X_y(X, y)
parent = self.Findparent(M)
t = time.process_time()
countDict = Counter(y)
C = list(countDict.keys()) ### [class1 , class2, class3] in appearing order
n,p = X.shape
P_class = [(ele+self.alpha)/( n + self.alpha*len(C) ) for ele in list(countDict.values())] ### prior for each class [p1,p2,p3], ### .values same order as .keys()
P_class = dict(zip(C, P_class))
Dict_C = {} ### {c1:[counter1, ....counter8], c2:[counter1, ....counter8], c3: [counter1, ....counter8]}
K = {}
root_i = self.starting_node ## 0 ,1 ,2 shows the position, thus int
x_i = X[:,root_i]
K[root_i] = len(Counter(x_i))
        for c in C:  ## c is the original class label, e.g. '1' rather than 1
ListCounter_c = {}
row_inx_c = [row for row in range(n) if y[row] == c]
x_i_c = X[row_inx_c,root_i]
            ListCounter_c[root_i] = Counter(x_i_c)  ## ListCounter_c keys are feature positions (int); Counter keys are the original feature values, not necessarily int
for i in [e for e in range(0,p) if e != root_i]:
if c == C[0]:
x_i = X[:,i]
K[i] =len(Counter(x_i))
x_parent = X[:,parent[i]] ## will duplicate C times.
x_parent_counter = Counter(x_parent)
x_parent_counter_length = len(x_parent_counter)
x_parent_value = list(x_parent_counter.keys())
dict_i_c = {}
for j in range(x_parent_counter_length):
row_inx_c_parent_j = [row for row in range(n) if y[row] == c and x_parent[row] == x_parent_value[j]]
x_i_c_p_j = X[row_inx_c_parent_j, i]
dict_i_c[x_parent_value[j]] = Counter(x_i_c_p_j) ### x_parent_value[j] can make sure it is right key.
ListCounter_c[i] = dict_i_c
Dict_C[c] = ListCounter_c
CP_time = np.array(time.process_time() - t)
self.is_fitted_ = True
self.Dict_C_,self.p_,self.P_class_prior_,self.K_,self.classes_,self.countDict_,self.parent_ = Dict_C,p,P_class,K,np.array(C),countDict,parent
self.training_time_ = CP_time
return self
def predict_proba(self,X):
check_is_fitted(self)
X = check_array(X)
Prob_C = []
root_i = self.starting_node
for ins in X:
P_class = self.P_class_prior_.copy()
for c in self.classes_:
ListCounter_c = self.Dict_C_[c]
P_class[c] = P_class[c] * (ListCounter_c[root_i][ins[root_i]]+self.alpha) / (self.countDict_[c]+self.alpha*self.K_[root_i])
for i in [e for e in range(0,self.p_) if e != root_i]:
                        pValue = ins[self.parent_[i]]  ## value of feature i's parent in this instance
                        try:  ## pValue was seen in training for class c
                            Deno = sum(ListCounter_c[i][pValue].values())  ## number of training samples with class c and parent == pValue
                            P_class[c] = P_class[c] * (ListCounter_c[i][pValue][ins[i]] + self.alpha) / (Deno + self.alpha*self.K_[i])  ## smoothed P(x_i | parent, c)
                        except KeyError:  ## pValue never seen in training for class c
                            Deno = 0
                            P_class[c] = P_class[c] * (0 + self.alpha) / (Deno + self.alpha*self.K_[i])
P_class = {key: P_class[key]/sum(list(P_class.values())) for key in P_class.keys()} ### normalize p_class
Prob_C.append(list(P_class.values())) ### check the class order is correct
        Prob_C = np.array(Prob_C)  ## return an np.array, as expected by SHAP
return Prob_C
stan = STAN()
stan.get_params()
stan.fit(X,y,M)
print(stan.predict_proba(X))
print(stan.name)
stan.score(X,y)
from sklearn.utils.estimator_checks import check_estimator
#check_estimator(NB)
```
## TAN_bagging
```
class TAN_bagging(Bayes_net):
name = "TAN_bagging"
def __init__(self, alpha = 1):
self.alpha = alpha
def fit(self,X,y,M = None):
"""initialize model = [] . and training time."""
X,y = check_X_y(X,y)
n,p = X.shape ### number of features
"""fit base models"""
training_time = 0
models = []
for i in range(p):
model = TAN(self.alpha, starting_node= i)
model.fit(X,y)
models.append(model)
training_time += model.training_time_
self.models_ , self.p_= models,p
        self.training_time_ = training_time/p  ## fitting can be parallelised, so report the average base-model training time
self.is_fitted_ = True
self.classes_ = model.classes_
return self
def predict_proba(self,X):
check_is_fitted(self)
X = check_array(X)
Prob_C = 0
for model in self.models_:
Prob_C += model.predict_proba(X) ### get np array here
Prob_C = Prob_C/self.p_
return(Prob_C)
tan_bag = TAN_bagging()
print(tan_bag.name)
tan_bag.fit(X,y)
tan_bag.predict_proba(X)
```
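The bagging classes above combine their base models by averaging the predicted probability matrices (soft voting). That averaging step can be isolated into a standalone sketch, using hypothetical probability rows and plain lists instead of numpy:

```python
def average_probs(prob_rows_per_model):
    """Element-wise mean of per-model probability matrices
    (rows = samples, columns = classes)."""
    n_models = len(prob_rows_per_model)
    n_rows = len(prob_rows_per_model[0])
    n_classes = len(prob_rows_per_model[0][0])
    return [[sum(m[r][c] for m in prob_rows_per_model) / n_models
             for c in range(n_classes)]
            for r in range(n_rows)]

# Two hypothetical base models, one sample, two classes
model_a = [[0.8, 0.2]]
model_b = [[0.6, 0.4]]
print(average_probs([model_a, model_b]))  # ≈ [[0.7, 0.3]]
```

Because each base model outputs normalised rows, the averaged rows still sum to 1, so no renormalisation is needed after combining.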
## STAN bagging
```
class STAN_bagging(Bayes_net):
name = "STAN_bagging"
def __init__(self,alpha = 1):
self.alpha = alpha
def fit(self,X,y,M):
X,y = check_X_y(X,y)
n,p = X.shape
training_time = 0
models = []
for i in range(p):
model = STAN(self.alpha, starting_node= i)
model.fit(X,y,M)
models.append(model)
training_time += model.training_time_
self.models_, self.p_ = models,p
        self.training_time_ = training_time/p  ## fitting can be parallelised, so report the average base-model training time
self.is_fitted_ = True
self.classes_ = model.classes_
return self
def predict_proba(self,X):
check_is_fitted(self)
X = check_array(X)
Prob_C = 0
for model in self.models_:
Prob_C += model.predict_proba(X) ### get np array here
Prob_C = Prob_C/self.p_
return(Prob_C)
stan_bag = STAN_bagging()
stan_bag.fit(X,y,M)
stan_bag.predict_proba(X)
```
## Ensemble TAN (STAN_TAN_bagging)
```
class STAN_TAN_bagging(Bayes_net):
name = "STAN_TAN_bagging"
def __init__(self,alpha = 1):
self.alpha = alpha
def fit(self,X,y,M):
X,y = check_X_y(X,y)
n,p = X.shape
training_time = 0
models = []
## train p TAN base models
for i in range(p):
model = TAN(self.alpha, starting_node= i)
model.fit(X,y)
models.append(model)
training_time += model.training_time_
#append STAN
        model = STAN(self.alpha, starting_node=0)  ## the starting node matters little here; the learnt structure is robust
model.fit(X,y,M)
models.append(model)
self.models_, self.p_ = models, p
        self.training_time_ = training_time/p  ## assuming parallel fitting, report the average over the p TAN models; the single extra STAN is cheaper and ignored
self.is_fitted_ = True
self.classes_ = model.classes_
return self
def predict_proba(self,X):
check_is_fitted(self)
X = check_array(X)
Prob_C = 0
for model in self.models_:
Prob_C += model.predict_proba(X) ### get np array here
Prob_C = Prob_C/(self.p_+ 1)
return(Prob_C)
stan_tan_bag = STAN_TAN_bagging()
stan_tan_bag.fit(X,y,M)
stan_tan_bag.predict_proba(X)
```
# Cross validation
```
import warnings
warnings.filterwarnings("ignore")
def get_cv(cls,X,Y,M,n_splits=10,cv_type = "KFold",verbose = True):
    """Cross-validation returning per-fold accuracy, CLL, training time, precision and recall.
"""
if cv_type == "StratifiedKFold":
cv = StratifiedKFold(n_splits= n_splits, shuffle=True, random_state=42)##The folds are made by preserving the percentage of samples for each class.
else:
cv = KFold(n_splits=n_splits, shuffle=True, random_state=42)
model = cls()
X,Y = check_X_y(X,Y)
binarizer = MultiLabelBinarizer() ## for using recall and precision score
binarizer.fit(Y)
Accuracy = []
Precision = []
Recall = []
CLL = []
training_time = []
for folder, (train_index, val_index) in enumerate(cv.split(X, Y)):#### X,Y are array, data is list
X_train,X_val = X[train_index],X[val_index]
y_train,y_val = Y[train_index],Y[val_index]
model.fit(X_train,y_train,M) ### whether data is list or array does not matter, only thing matters is label has to be same.
training_time.append(model.training_time_)
y_pred_prob= model.predict_proba(X_val)
y_pred_class = model.predict(X_val)
accuracy = accuracy_score(y_val, y_pred_class)
precision = precision_score(binarizer.transform(y_val),
binarizer.transform(y_pred_class),
average='macro')
recall = recall_score(binarizer.transform(y_val),
binarizer.transform(y_pred_class),
average='macro')
cll = model.Conditional_log_likelihood_general(y_val,y_pred_prob,model.classes_)
if verbose:
print("accuracy in %s fold is %s" % (folder+1,accuracy))
print("CLL in %s fold is %s" % (folder+1,cll))
print("precision in %s fold is %s" % (folder+1,precision))
print("recall in %s fold is %s" % (folder+1,recall))
print("training time in %s fold is %s" % (folder+1,training_time[-1]))
print(10*'__')
CLL.append(cll)
Accuracy.append(accuracy)
Recall.append(recall)
Precision.append(precision)
return Accuracy, CLL, training_time,Precision,Recall
Accuracy, CLL, training_time,Precision,Recall= get_cv(NB,X2,y2,M2)
print(np.mean(Accuracy))
print(np.mean(CLL))
print(np.mean(Precision))
print(np.mean(Recall))
print(np.mean(np.array(training_time)))
Accuracy, CLL, training_time,Precision,Recall= get_cv(TAN,X2,y2,M2)
print(np.mean(Accuracy))
print(np.mean(CLL))
print(np.mean(Precision))
print(np.mean(Recall))
print(np.mean(np.array(training_time)))
Accuracy, CLL, training_time,Precision,Recall= get_cv(STAN,X2,y2,M2)
print(np.mean(Accuracy))
print(np.mean(CLL))
print(np.mean(Precision))
print(np.mean(Recall))
print(np.mean(np.array(training_time)))
Accuracy, CLL, training_time,Precision,Recall= get_cv(TAN_bagging,X2,y2,M2)
print(np.mean(Accuracy))
print(np.mean(CLL))
print(np.mean(Precision))
print(np.mean(Recall))
print(np.mean(np.array(training_time)))
Accuracy, CLL, training_time,Precision,Recall= get_cv(STAN_bagging,X2,y2,M2)
print(np.mean(Accuracy))
print(np.mean(CLL))
print(np.mean(Precision))
print(np.mean(Recall))
print(np.mean(np.array(training_time)))
Accuracy, CLL, training_time,Precision,Recall= get_cv(STAN_TAN_bagging,X2,y2,M2)
print(np.mean(Accuracy))
print(np.mean(CLL))
print(np.mean(Precision))
print(np.mean(Recall))
print(np.mean(np.array(training_time)))
```
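The loop above delegates the fold bookkeeping to scikit-learn's `KFold`. The index arithmetic it performs can be sketched by hand in pure Python (hypothetical 6-sample data, contiguous unshuffled folds for clarity):

```python
def kfold_indices(n, n_splits):
    """Yield (train_idx, val_idx) pairs; every sample appears in exactly
    one validation fold, and folds differ in size by at most one."""
    fold_sizes = [n // n_splits + (1 if i < n % n_splits else 0)
                  for i in range(n_splits)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = [i for i in range(n) if i not in val]
        yield train, val
        start += size

for train, val in kfold_indices(6, 3):
    print(train, val)
# [2, 3, 4, 5] [0, 1]
# [0, 1, 4, 5] [2, 3]
# [0, 1, 2, 3] [4, 5]
```

`StratifiedKFold` additionally balances the class proportions inside each fold, which is why the code above prefers it when the label distribution is skewed.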
# plot the Bayesian network
```
tan0 = TAN(starting_node=0)
tan0.fit(X,y)
tan0.plot_tree_structure()
tan1 = TAN(starting_node=1)
tan1.fit(X,y)
tan1.plot_tree_structure()
tan4 = TAN(starting_node=4)
tan4.fit(X,y)
tan4.plot_tree_structure()
tan7 = TAN(starting_node=7)
tan7.fit(X,y)
tan7.plot_tree_structure()
stan0 = STAN(starting_node = 0)
stan0.fit(X,y,M)
stan0.plot_tree_structure()
stan1 = STAN(starting_node = 1)
stan1.fit(X,y,M)
stan1.plot_tree_structure()
stan4 = STAN(starting_node = 4)
stan4.fit(X,y,M)
stan4.plot_tree_structure()
```
# Connect 4 on a SenseHat
---
## Introduction
### Game Rules
Connect 4 (Puissance 4 in French) is a game played on a grid of 6 rows and 7 columns. Taking turns, the players drop a coloured token into the top row; it then falls to the lowest available spot in that column. Each player tries to get four tokens of their colour aligned horizontally, vertically, or diagonally.
If all the cells are filled with no winner, the game is declared a draw.
### Setup on the SenseHat
Since the SenseHat display is made of 8\*8 pixels, it was decided to use this surface as follows:
- A **playing field** of 6*7 blue pixels
- A selection area, with a **cursor** in the colour of the player whose turn it is

## Installation
### 1. Importing SenseHat & other modules
The first step in programming this game is importing the sense_hat module in order to communicate with the SenseHat.
```
from sense_hat import SenseHat
#from sense_emu import SenseHat
from time import sleep, time
from gamelib import *
sense = SenseHat()
```
```from sense_hat import SenseHat``` enables interaction with the SenseHat module. <br/>
```#from sense_emu import SenseHat``` allows using the SenseHat emulator if the line is uncommented <br/>
```from time import sleep, time``` provides the sleep(time) function, used to slow the program down when needed <br/>
```from gamelib import *``` imports the colours from ```gamelib``` <br/>
<br/>
```sense = SenseHat()``` gives access to the functions tied to the SenseHat.
### 2. Defining and initialising the global variables
These variables are crucial to the game working properly.
```
repeat = 1 # Repeats the program if launched as standalone
playerScore = [0, 0] # Score of the players
turns = 0 # Amount of turns passed
gameOver = 0 # Is the game over?
stopGame = 0 # =1 makes main() stop the game
# Creates two lists of 4 pixels to make winning streaks detection easier
fourYellow = [[248, 252, 0]] * 4
fourRed = [[248, 0, 0]] * 4
# Puts BLUE, RED and YELLOW from gamelib into a list
colors = (BLUE, RED, YELLOW)
```
### 3. The ```main()``` function
The ```main()``` function is the game's core function: it starts the game, keeps it going, and stops it when needed.
```
def main():
"""
Main function, initialises the game, starts it, and stops it when needed.
"""
global gameOver
global playerScore
global stopGame
global turns
turns = 0 # Resets the turns passed
# Stops the game if a player has 2 points or if stop_game() set
# stopGame to 1 and the game is supposed to stop now
if (
repeat == 0 and
(playerScore[0] == 2 or playerScore[1] == 2 or stopGame == 1)
):
stopGame = 0 # Resets stopGame
gameOver = 0 # Resets gameOver
return
# If the game should continue, resets gameOver and playerScore to 0
else:
gameOver = 0 # Resets gameOver
if playerScore[0] == 2 or playerScore[1] == 2 or stopGame == 1:
stopGame = 0 # Resets stopGame
playerScore = [0, 0] # Resets the playerScore
show() # Resets the display for a new game
turn() # Starts a new turn
```
The code fragment <br/>
```
if (
repeat == 0 and
(playerScore[0] == 2 or playerScore[1] == 2 or stopGame == 1)
):
```
is indented this way to follow the PEP8 standard while keeping each line under 79 characters.
The ```main()``` function calls the ```show()``` and ```turn()``` functions, described below in sections 4 and 5.
### 4. The ```show()``` function
The ```show()``` function resets the display, then draws the blue 6\*7 playing field on it.
```
def show():
"""
Sets up the playing field : 6*7 blue pixels
"""
sense.clear() # Resets the pixels
# Creates the 6*7 blue playing field
for y in range(6):
for x in range(7):
sense.set_pixel(x, 7-y, colors[0])
```
### 5. The ```turn()``` function
The ```turn()``` function manages the turns, calls ```select_column(p)``` so that player `p` can choose where to place their token, and declares a draw once all the cells are full (42 turns elapsed).
```
def turn():
"""
Decides whose turn it is, then calls select_column(p) to allow the player p
to make their selection
"""
global turns
if gameOver == 0: # Checks that the game isn't over
if turns % 2 == 0 and turns != 42: # If the turn is even it's p1's
turns += 1 # Increments turns
select_column(1) # Asks p1 to select a column for their token
elif turns % 2 == 1 and turns != 42: # If the turn is odd, it's p2's
turns += 1 # Increments turns
select_column(2) # Asks p2 to select a column for their token
elif turns == 42: # If 42 turns have passed..
player_scored(0) # ..then it's a draw
```
### 6. The ```player_scored(p)``` function
The ```player_scored(p)``` function is called when a player ```p``` scores a point, or when the game is a draw (p is then 0). <br/>
When a player scores their first point, their score is shown in their colour on the display before the game restarts. <br/>
When a player scores their second point, their score is shown in their colour, then the whole screen turns that colour, before the game and the scores are reset. If the game was launched as a module, it returns to the game selection; otherwise the game starts over.
```
def player_scored(p):
"""
Manages the scoring system.
p in player_scored(p) is the player who just scored.
p == 0 -> draw
p == 1 -> p1 scored
p == 2 -> p2 scored
If one of the players won the round, show their score in their color and
prepare the field for the next round. If one of the players has two points,
they win the game, the screen turns to their color and the game is reset.
If it's a draw, no points are given and the field gets prepared for the
next round.
"""
global gameOver
gameOver = 1 # The game has ended
global playerScore
if p != 0: # Checks if it's a draw
playerScore[p - 1] += 1 # Increments the winner's score
sense.show_letter(str(playerScore[p - 1]), colors[p]) # Shows score
# Ends the game if the player already had a point
if playerScore[0] == 2 or playerScore[1] == 2 or stopGame == 1:
sleep(1.5) # Pauses long enough to see the score
sense.clear(colors[p]) # Turns the screen into the winner's color
sleep(1.5) # Pauses long enough to see the winner's screen
sense.clear() # Clears the display
main() # Calls the main game function
```
### 7. The ```select_column(p)``` function
The ```select_column(p)``` function lets player ```p``` choose which column to drop their token into by moving the joystick left or right. The selection starts in the centre for convenience. <br/>
<br/>
```x = (x + 1) % 7``` ensures that `x` stays inside the 7-pixel-wide playing field.<br/>
Once the choice is made and the player has pushed the joystick down, the function ```put_down(x, p)``` is called, with ```x``` as the chosen column. That function checks that the spot is free and, if it is not, calls ```select_column(p)``` again so the player does not waste their turn.
```
def select_column(p):
"""
Asks the player to select a column with the joystick, then calls for the
function to drop the token if it is clear.
p is the player whose turn it is.
If the joystick is moved upwards, the game is ended.
The function calls put_down(x,p) in order to drop the token down.
If it turns out the column is full,
put_down(x,p) will call select_column(p) back.
show_selection(x,p) is used to show the current selection.
Returns the selected column with x.
"""
x = 3 # Starts the selection in the middle of the playing field
selection = True # Is the player selecting?
while selection:
for event in sense.stick.get_events(): # Listens for joystick events
if event.action == 'pressed': # When the joystick is moved..
if event.direction == 'right': # ..to the right..
x = (x + 1) % 7 # ..then move the cursor to the right
elif event.direction == 'left': # ..to the left..
x = (x - 1) % 7 # ..then move the cursor to the left
elif event.direction == 'down': # Pressing down confirms
selection = False # Ends selection
put_down(x, p) # Calls the function that drops the token
elif event.direction == 'up': # Pressing up..
global stopGame
stopGame = 1 # ..will make main() end the game..
player_scored(0) # ..and causes a draw
show_selection(x, p) # Calls the function that shows the selection
return x # Returns which column was selected
```
If the player pushes up, `stopGame` is set to `1`, which makes the game stop at the next invocation of `main()`, which happens after `player_scored(0)` is called. <br/>
<br/>
The function returns `x`, i.e. the coordinate of the chosen column, and calls ```show_selection(x, p)``` so that the player's cursor is displayed correctly during selection.
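The wraparound arithmetic `(x ± 1) % 7` keeps the cursor inside the 7 columns in both directions, since Python's `%` always returns a non-negative result for a positive modulus. A quick standalone check (the `move` helper is illustrative, not part of the game's code):

```python
def move(x, direction):
    """Move the cursor left (-1) or right (+1), wrapping within 7 columns."""
    return (x + direction) % 7

print(move(6, +1))  # 0: moving right from the last column wraps to the first
print(move(0, -1))  # 6: moving left from the first column wraps to the last
```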
### 8. The ```show_selection(x, p)``` function
The ```show_selection(x, p)``` function displays player `p`'s cursor in the appropriate colour, and restores the pixels to their original colour once the cursor has moved on.
```
def show_selection(x, p):
"""
Shows the cursor for the column selection.
x is the currently selected column
p is the player playing
Ensures that the replacement to black stops when the game is over in order
to prevent conflict with the score display.
"""
for i in range(7):
if i == x and gameOver == 0: # Checks that i is in the playing field
# Colors the selection with the player p's color
sense.set_pixel(i, 0, colors[p])
elif gameOver == 0:
# Resets the pixels once the cursor has moved
sense.set_pixel(i, 0, (0, 0, 0))
```
When the game is not running (```gameOver != 0```), the function does nothing, so that it cannot interfere with, for example, the score display.
### 9. The ```put_down(x, p)``` function
The ```put_down(x, p)``` function checks that the column `x` chosen by the player is free, finds the lowest available spot, calls ```animate_down(x, y, p)``` to animate the fall, then displays the player's token there.<br/>
If the column is not free, ```put_down(x, p)``` calls ```select_column(p)``` again so the player does not waste their turn.<br/>
Once the token is placed, the function calls ```check_connectfour(x, y)``` to check whether the new token creates a streak of four. If there is no connection, it becomes the other player's turn via ```turn()```.
```
def put_down(x, p):
"""
Puts the token down in the selected column.
x is the selected column
p is the player playing
If the selected column is full, select_column(p) is called back to ensure
the player doesn't waste their turn.
The token is animated down with animate_down(x,y,p) before being set.
If the token is not a winning one, calls for the next turn with turn().
"""
# Checks that the column is free (BLUE)
if sense.get_pixel(x, 2) == [0, 0, 248]:
for y in range(7): # Finds the lowest available spot
if sense.get_pixel(x, 7-y) == [0, 0, 248]: # If it's free then..
animate_down(x, y, p) # ..calls for the animation down and..
sense.set_pixel(x, 7 - y, colors[p]) # ..puts the token there
# Checks if it's a winning move
if check_connectfour(x, 7 - y) is False:
turn() # If not, starts the next turn
return
return
else:
select_column(p) # If there is no free spot, restarts selection
return
```
The ```sense.get_pixel(x, y)``` function does not return the exact value that was assigned to the pixel; the value goes through another conversion first, which is why the comparison uses a blue value (```[0,0,248]```) that is not ```BLUE```.
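The read-back value 248 rather than 255 comes from the LED matrix storing colours at reduced bit depth (RGB565 on the Sense HAT): red and blue keep only their top 5 bits, green its top 6, so the low bits are zeroed on the round trip. This sketch (an approximation of that behaviour, not the SenseHat API itself) shows why `(0, 0, 255)` reads back as `[0, 0, 248]`:

```python
def quantize_rgb565(r, g, b):
    """Approximate the Sense HAT framebuffer round-trip:
    5 bits for red and blue, 6 bits for green; low bits dropped."""
    return (r & 0xF8, g & 0xFC, b & 0xF8)

print(quantize_rgb565(0, 0, 255))  # (0, 0, 248)
```

Note that the notebook's `fourYellow` value `[248, 252, 0]` is exactly the quantized form of full yellow `(255, 255, 0)`, so those comparison lists survive the round trip unchanged.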
### 10. The ```animate_down(x, y, p)``` function
The ```animate_down(x, y, p)``` function makes a pixel of player `p`'s colour appear then disappear in each cell of column `x` down to row `y`, before restoring the pixels to their original colour (black `[0,0,0]` or `BLUE`).
```
def animate_down(x, y, p):
"""
Creates an animation that makes a pixel move down the selected column to
the lowest available spot.
x is the selected column
y is the lowest available spot
p is the player playing
Ensures that the first two rows stay black, and that the others turn BLUE
again after the animation.
"""
# For each available spot from the top of the column
for z in range(7 - y):
sense.set_pixel(x, z, colors[p]) # Set the pixel to the player's color
sleep(0.03) # Wait long enough for it to be noticeable
if z != 1 and z != 0: # If it's not the first two rows
sense.set_pixel(x, z, colors[0]) # Set the pixel back to BLUE
else: # Otherwise
sense.set_pixel(x, 1, [0, 0, 0]) # Set it to black
```
### 11. The ```check_connectfour(x, y)``` function
The ```check_connectfour(x, y)``` function runs a series of tests to check whether the token placed at `x, y` creates a streak of 4 pixels horizontally, vertically, or diagonally.
```
def check_connectfour(x, y):
"""
Checks if there is four same-colored token next to each other.
x is the last played token's column
y is the last played token's row
Returns False if there is no winning move this turn. Return True and thus
makes the game end if it was a winning move.
"""
# First asks if there is a win horizontally and vertically
if check_horizontal(x, y) is False and check_vertical(x, y) is False:
# Then diagonally from the bottom left to the upper right
if check_diagonal_downleft_upright(x, y) is False:
# And then diagonally the other way
if check_diagonal_downright_upleft(x, y) is False:
# If not, then continue playing by returning False
return(False)
```
The function first calls 1) ```check_horizontal(x, y)``` and 2) ```check_vertical(x, y)```, then checks the two diagonals with 3) ```check_diagonal_downleft_upright(x, y)``` and 4) ```check_diagonal_downright_upleft(x, y)```. <br/>
<br/>
If the pixel completes no streak, all the conditions are `False`, which the function returns, and it becomes the other player's turn.

#### 11.1 ```check_horizontal(x, y)```
The ```check_horizontal(x, y)``` function builds a list `horizontal` of all the pixels in the row `y` where the token was placed, then compares it to `fourYellow` and `fourRed` in groups of four pixels, four times in a row, so as to cover the whole row.<br/>
If one of the conditions is met, the player `p` who placed the token receives a point via the `player_scored(p)` function, and the function returns `True`. Otherwise, the function returns `False`.
```
def check_horizontal(x, y):
    """
    Checks if there are four same-colored tokens in the same row.
    x is the last played token's column
    y is the last played token's row
    Returns False if there aren't four same-colored tokens in the row.
    Returns True if there are, and calls player_scored(p) for the appropriate
    player based on color (RED == p1, YELLOW == p2)
    """
    # Makes a list out of the row
    horizontal = sense.get_pixels()[8 * y:8 * y + 7]
    for z in range(4):  # Checks the row by four groups of four tokens
        if horizontal[z:z + 4] == fourYellow:  # Are there four yellow tokens?
            player_scored(2)  # If yes, p2 scored
            return True  # Returns that there was a winning move
        if horizontal[z:z + 4] == fourRed:  # Are there four red tokens?
            player_scored(1)  # If yes, p1 scored
            return True  # Returns that there was a winning move
    return False  # Returns that there was no winning move
```
#### 11.2 ```check_vertical(x, y)```
The ```check_vertical(x, y)``` function builds a list `vertical` of all the pixels in the column `x` where the token was placed, then compares it against `fourYellow` and `fourRed` in groups of four pixels, three times in a row so as to cover the whole column.<br/>
If one of these comparisons matches, the player `p` who placed the token is awarded a point through the `player_scored(p)` function, and the function returns `True`. Otherwise, it returns `False`.
```
def check_vertical(x, y):
    """
    Checks if there are four same-colored tokens in the same column.
    x is the last played token's column
    y is the last played token's row
    Returns False if there aren't four same-colored tokens in the column.
    Returns True if there are, and calls player_scored(p) for the appropriate
    player based on color (RED == p1, YELLOW == p2)
    """
    # Makes a list out of the column
    vertical = [sense.get_pixel(x, 2), sense.get_pixel(x, 3),
                sense.get_pixel(x, 4), sense.get_pixel(x, 5),
                sense.get_pixel(x, 6), sense.get_pixel(x, 7)]
    for z in range(3):  # Checks the column by three groups of four tokens
        if vertical[z:z + 4] == fourYellow:  # Are there four yellow tokens?
            player_scored(2)  # If yes, p2 scored
            return True  # Returns that there was a winning move
        if vertical[z:z + 4] == fourRed:  # Are there four red tokens?
            player_scored(1)  # If yes, p1 scored
            return True  # Returns that there was a winning move
    return False  # Returns that there was no winning move
```
#### 11.3 ```check_diagonal_downleft_upright(x, y)```
The ```check_diagonal_downleft_upright(x, y)``` function builds a list `diagonal` of all the pixels on the diagonal running from the bottom left to the upper right through the point `x, y` where the token was placed, using the ```create_diagonal_downleft_upright(diagonal, x, y)``` function. It then compares the list against `fourYellow` and `fourRed` in groups of four pixels, four times in a row so as to cover the whole diagonal.<br/>
If one of these comparisons matches, the player `p` who placed the token is awarded a point through the `player_scored(p)` function, and the function returns `True`. Otherwise, it returns `False`.
```
def check_diagonal_downleft_upright(x, y):
    """
    Checks if there are four same-colored tokens in the bottom-left to
    upper-right diagonal.
    x is the last played token's column
    y is the last played token's row
    Calls create_diagonal_downleft_upright to create a list from the diagonal.
    Returns False if there aren't four same-colored tokens in the diagonal.
    Returns True if there are, and calls player_scored(p) for the appropriate
    player based on color (RED == p1, YELLOW == p2)
    """
    diagonal = []  # Resets the list
    # Calls a function to create a list from the pixels in a bottom-left to
    # upper-right diagonal
    create_diagonal_downleft_upright(diagonal, x, y)
    for z in range(4):  # Checks the diagonal by four groups of four tokens
        if diagonal[z:z + 4] == fourYellow:  # Are there four yellow tokens?
            player_scored(2)  # If yes, p2 scored
            return True  # Returns that there was a winning move
        if diagonal[z:z + 4] == fourRed:  # Are there four red tokens?
            player_scored(1)  # If yes, p1 scored
            return True  # Returns that there was a winning move
    return False  # Returns that there was no winning move
```
##### 11.3.1 ```create_diagonal_downleft_upright(diagonal, x, y)```
Using `try` and `except`, the ```create_diagonal_downleft_upright(diagonal, x, y)``` function attempts to build a list of 7 pixels running diagonally through the point `x, y`, from the bottom left to the upper right.<br/>
Using `try` and `except` keeps the program from crashing when the function attempts to append an out-of-bounds pixel to the list. <br/><br/>
The function returns the list `diagonal`, as long as it was able to build it.
```
def create_diagonal_downleft_upright(diagonal, x, y):
    """
    Creates a list of seven pixels in a bottom-left to upper-right diagonal
    centered around the last placed token.
    diagonal is the list
    x is the last played token's column
    y is the last played token's row
    As the function might try to take into account pixels that are out of
    bounds, a try/except ValueError is used to prevent out-of-bounds errors.
    The list might be shorter than seven pixels, but the function works anyway.
    Returns the list of diagonal pixels.
    """
    for z in range(7):  # To have a 7 pixel list
        # Tries to get values that might be out of bounds: three pixels down
        # left and three pixels up right in a diagonal from the token
        try:
            diagonal.append(sense.get_pixel(x - z + 3, y + z - 3))
        except ValueError:  # Catches out-of-bounds errors
            pass
    return diagonal  # Returns the list of pixels
```
#### 11.4 ```check_diagonal_downright_upleft(x, y)```
The ```check_diagonal_downright_upleft(x, y)``` function builds a list `diagonal` of all the pixels on the diagonal running from the bottom right to the upper left through the point `x, y` where the token was placed, using the ```create_diagonal_downright_upleft(diagonal, x, y)``` function. It then compares the list against `fourYellow` and `fourRed` in groups of four pixels, four times in a row so as to cover the whole diagonal.<br/>
If one of these comparisons matches, the player `p` who placed the token is awarded a point through the `player_scored(p)` function, and the function returns `True`. Otherwise, it returns `False`.
```
def check_diagonal_downright_upleft(x, y):
    """
    Checks if there are four same-colored tokens in the bottom-right to
    upper-left diagonal.
    x is the last played token's column
    y is the last played token's row
    Calls create_diagonal_downright_upleft to create a list from the diagonal.
    Returns False if there aren't four same-colored tokens in the diagonal.
    Returns True if there are, and calls player_scored(p) for the appropriate
    player based on color (RED == p1, YELLOW == p2)
    """
    diagonal = []  # Resets the list
    # Calls a function to create a list from the pixels in a bottom-right to
    # upper-left diagonal
    create_diagonal_downright_upleft(diagonal, x, y)
    for z in range(4):  # Checks the diagonal by four groups of four tokens
        if diagonal[z:z + 4] == fourYellow:  # Are there four yellow tokens?
            player_scored(2)  # If yes, p2 scored
            return True  # Returns that there was a winning move
        if diagonal[z:z + 4] == fourRed:  # Are there four red tokens?
            player_scored(1)  # If yes, p1 scored
            return True  # Returns that there was a winning move
    return False  # Returns that there was no winning move
```
##### 11.4.1 ```create_diagonal_downright_upleft(diagonal, x, y)```
Using `try` and `except`, the ```create_diagonal_downright_upleft(diagonal, x, y)``` function attempts to build a list of 7 pixels running diagonally through the point `x, y`, from the bottom right to the upper left.<br/>
Using `try` and `except` keeps the program from crashing when the function attempts to append an out-of-bounds pixel to the list.<br/>
<br/>
The function returns the list `diagonal`, as long as it was able to build it.
```
def create_diagonal_downright_upleft(diagonal, x, y):
    """
    Creates a list of seven pixels in a bottom-right to upper-left diagonal
    centered around the last placed token.
    diagonal is the list
    x is the last played token's column
    y is the last played token's row
    As the function might try to take into account pixels that are out of
    bounds, a try/except ValueError is used to prevent out-of-bounds errors.
    The list might be shorter than seven pixels, but the function works anyway.
    Returns the list of diagonal pixels.
    """
    for z in range(7):  # To have a 7 pixel list
        # Tries to get values that might be out of bounds: three pixels down
        # right and three pixels up left in a diagonal from the token
        try:
            diagonal.append(sense.get_pixel(x - z + 3, y - z + 3))
        except ValueError:  # Catches out-of-bounds errors
            pass
    return diagonal  # Returns the list of pixels
```
### 12. Module or standalone?
This piece of code makes the game repeat when it is run standalone (`repeat = 1`) but not when it is imported as a module (`repeat = 0`), so that the player can return to the game selection menu.
```
# Execute the main() function when the file is executed,
# but do not execute it when the file is imported as a module.
print('module name =', __name__)
if __name__ == '__main__':
    main()
    repeat = 1  # If the game is played standalone, make it repeat
else:
    repeat = 0  # If the game is played as a module, make it quit when over
```
# Text classification with Reuters-21578 datasets
### See: https://kdd.ics.uci.edu/databases/reuters21578/README.txt for more information
```
%pylab inline
import re
import xml.sax.saxutils as saxutils
from BeautifulSoup import BeautifulSoup
from gensim.models.word2vec import Word2Vec
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, LSTM
from multiprocessing import cpu_count
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer, sent_tokenize
from nltk.stem import WordNetLemmatizer
from pandas import DataFrame
from sklearn.cross_validation import train_test_split
```
## General constants (modify them according to your environment)
```
# Set Numpy random seed
random.seed(1000)
# Newsline folder and format
data_folder = 'd:\\ml_data\\reuters\\'
sgml_number_of_files = 22
sgml_file_name_template = 'reut2-NNN.sgm'
# Category files
category_files = {
    'to_': ('Topics', 'all-topics-strings.lc.txt'),
    'pl_': ('Places', 'all-places-strings.lc.txt'),
    'pe_': ('People', 'all-people-strings.lc.txt'),
    'or_': ('Organizations', 'all-orgs-strings.lc.txt'),
    'ex_': ('Exchanges', 'all-exchanges-strings.lc.txt')
}
# Word2Vec number of features
num_features = 500
# Limit each newsline to a fixed number of words
document_max_num_words = 100
# Selected categories
selected_categories = ['pl_usa']
```
## Prepare documents and categories
```
# Create category dataframe
# Read all categories
category_data = []

for category_prefix in category_files.keys():
    with open(data_folder + category_files[category_prefix][1], 'r') as file:
        for category in file.readlines():
            category_data.append([category_prefix + category.strip().lower(),
                                  category_files[category_prefix][0],
                                  0])

# Create category dataframe
news_categories = DataFrame(data=category_data, columns=['Name', 'Type', 'Newslines'])

def update_frequencies(categories):
    for category in categories:
        idx = news_categories[news_categories.Name == category].index[0]
        f = news_categories.get_value(idx, 'Newslines')
        news_categories.set_value(idx, 'Newslines', f + 1)

def to_category_vector(categories, target_categories):
    vector = zeros(len(target_categories)).astype(float32)
    for i in range(len(target_categories)):
        if target_categories[i] in categories:
            vector[i] = 1.0
    return vector

# Parse SGML files
document_X = {}
document_Y = {}

def strip_tags(text):
    return re.sub('<[^<]+?>', '', text).strip()

def unescape(text):
    return saxutils.unescape(text)

# Iterate all files
for i in range(sgml_number_of_files):
    if i < 10:
        seq = '00' + str(i)
    else:
        seq = '0' + str(i)
    file_name = sgml_file_name_template.replace('NNN', seq)
    print('Reading file: %s' % file_name)
    with open(data_folder + file_name, 'r') as file:
        content = BeautifulSoup(file.read().lower())
        for newsline in content('reuters'):
            document_categories = []
            # News-line Id
            document_id = newsline['newid']
            # News-line text
            document_body = strip_tags(str(newsline('text')[0].body)).replace('reuter\n', '')
            document_body = unescape(document_body)
            # News-line categories
            topics = newsline.topics.contents
            places = newsline.places.contents
            people = newsline.people.contents
            orgs = newsline.orgs.contents
            exchanges = newsline.exchanges.contents
            for topic in topics:
                document_categories.append('to_' + strip_tags(str(topic)))
            for place in places:
                document_categories.append('pl_' + strip_tags(str(place)))
            for person in people:
                document_categories.append('pe_' + strip_tags(str(person)))
            for org in orgs:
                document_categories.append('or_' + strip_tags(str(org)))
            for exchange in exchanges:
                document_categories.append('ex_' + strip_tags(str(exchange)))
            # Create new document
            update_frequencies(document_categories)
            document_X[document_id] = document_body
            document_Y[document_id] = to_category_vector(document_categories, selected_categories)
```
## Top 20 categories (by number of newslines)
```
news_categories.sort_values(by='Newslines', ascending=False, inplace=True)
news_categories.head(20)
```
## Tokenize newsline documents
```
# Load stop-words
stop_words = set(stopwords.words('english'))
# Initialize tokenizer
# It's also possible to try with a stemmer or to mix a stemmer and a lemmatizer
tokenizer = RegexpTokenizer('[\'a-zA-Z]+')
# Initialize lemmatizer
lemmatizer = WordNetLemmatizer()
# Tokenized document collection
newsline_documents = []
def tokenize(document):
    words = []
    for sentence in sent_tokenize(document):
        tokens = [lemmatizer.lemmatize(t.lower()) for t in tokenizer.tokenize(sentence)
                  if t.lower() not in stop_words]
        words += tokens
    return words

# Tokenize
for key in document_X.keys():
    newsline_documents.append(tokenize(document_X[key]))
number_of_documents = len(document_X)
```
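The pipeline above depends on the NLTK corpora being downloaded; its core regex-and-stop-word step can be sketched standalone (the tiny stop list below is hypothetical, not NLTK's, and lemmatization is omitted):

```python
import re

stop_words = {'the', 'a', 'of'}  # hypothetical tiny stop list

def simple_tokenize(text):
    # Same token pattern as the RegexpTokenizer above, minus lemmatization
    tokens = re.findall(r"[\'a-zA-Z]+", text)
    return [t.lower() for t in tokens if t.lower() not in stop_words]

print(simple_tokenize("The price of oil rose"))  # -> ['price', 'oil', 'rose']
```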
## Word2Vec Model
### See: https://radimrehurek.com/gensim/models/word2vec.html and https://code.google.com/p/word2vec/ for more information
```
# Load an existing Word2Vec model
w2v_model = Word2Vec.load(data_folder + 'reuters.word2vec')
# Create new Gensim Word2Vec model
w2v_model = Word2Vec(newsline_documents, size=num_features, min_count=1, window=10, workers=cpu_count())
w2v_model.init_sims(replace=True)
w2v_model.save(data_folder + 'reuters.word2vec')
```
## Vectorize each document
```
num_categories = len(selected_categories)
X = zeros(shape=(number_of_documents, document_max_num_words, num_features)).astype(float32)
Y = zeros(shape=(number_of_documents, num_categories)).astype(float32)
empty_word = zeros(num_features).astype(float32)
for idx, document in enumerate(newsline_documents):
    for jdx, word in enumerate(document):
        if jdx == document_max_num_words:
            break
        elif word in w2v_model:
            X[idx, jdx, :] = w2v_model[word]
        else:
            X[idx, jdx, :] = empty_word

for idx, key in enumerate(document_Y.keys()):
    Y[idx, :] = document_Y[key]
```
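The truncate-or-pad behaviour of the loop above can be checked on toy dimensions (all names and sizes below are made up for illustration):

```python
import numpy as np

max_words, n_features = 4, 3
docs = [['a', 'b'], ['a', 'b', 'c', 'd', 'e', 'f']]  # one short doc, one long doc
vec = {w: np.ones(n_features) for w in 'abcdef'}     # dummy word vectors

X = np.zeros((len(docs), max_words, n_features), dtype=np.float32)
for idx, document in enumerate(docs):
    for jdx, word in enumerate(document):
        if jdx == max_words:
            break                                    # long docs are truncated
        X[idx, jdx, :] = vec.get(word, np.zeros(n_features))

print(X.shape)        # -> (2, 4, 3)
print(X[0, 2].sum())  # short doc is zero-padded past its length -> 0.0
```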
## Split training and test sets
```
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3)
```
## Create Keras model
```
model = Sequential()
model.add(LSTM(int(document_max_num_words*1.5), input_shape=(document_max_num_words, num_features)))
model.add(Dropout(0.3))
model.add(Dense(num_categories))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
```
## Train and evaluate model
```
# Train model
model.fit(X_train, Y_train, batch_size=128, nb_epoch=5, validation_data=(X_test, Y_test))
# Evaluate model
score, acc = model.evaluate(X_test, Y_test, batch_size=128)
print('Score: %1.4f' % score)
print('Accuracy: %1.4f' % acc)
```
<a href="https://colab.research.google.com/github/technologyhamed/Neuralnetwork/blob/Single/ArticleSummarization/ArticleSummarization.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
**Part 1: Finding the TF-IDF score of each word**
First the file is read, and all of its strings are stored in a Pandas DataFrame.
```
#Import libraries
%matplotlib inline
import pandas as pd
import numpy as np
import os
import glob
import requests as requests
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn.datasets import load_files
import nltk
nltk.download('stopwords')
# Just making the plots look better
mpl.style.use('ggplot')
mpl.rcParams['figure.figsize'] = (8,6)
mpl.rcParams['font.size'] = 12
url='https://raw.githubusercontent.com/technologyhamed/Neuralnetwork/Single/Datasets/Article_Summarization_project/Article.txt'
filename='../content/sample_data/Article.txt'
df = pd.read_csv(url)
df.to_csv(filename)
str_article = list()
article_files = glob.glob(filename)
d = list()
for article in article_files:
    with open(article, encoding='utf-8') as f:
        filename = os.path.basename(article.split('.')[0])
        lines = (line.rstrip() for line in f)  # All lines including the blank ones
        lines = list(line for line in lines if line)  # Non-blank lines
        #str_article.rstrip()
        d.append(pd.DataFrame({'article': "اخبار", 'paragraph': lines}))
doc = pd.concat(d)
doc
#doc['article'].value_counts().plot.bar();
```
Importing the NLTK stop-word corpus to remove stop words from the vectors.
```
from nltk.corpus import stopwords
```
Split the lines into sentences/words.
```
doc['sentences'] = doc.paragraph.str.rstrip('.').str.split('[\.]\s+')
doc['words'] = doc.paragraph.str.strip().str.split('[\W_]+')
#This line is used to remove the English stop words
stop = stopwords.words('english')
doc['words'] = doc['words'].apply(lambda x: [item for item in x if item not in stop])
#doc.head()
doc
```
Split the paragraph into sentences.
```
rows = list()
for row in doc[['paragraph', 'sentences']].iterrows():
    r = row[1]
    for sentence in r.sentences:
        rows.append((r.paragraph, sentence))
sentences = pd.DataFrame(rows, columns=['paragraph', 'sentences'])
#sentences = sentences[sentences.sentences.str.len() > 0]
sentences.head()
```
Split the paragraph into words.
```
rows = list()
for row in doc[['paragraph', 'words']].iterrows():
    r = row[1]
    for word in r.words:
        rows.append((r.paragraph, word))
words = pd.DataFrame(rows, columns=['paragraph', 'words'])
#remove empty spaces and change words to lower case
words = words[words.words.str.len() > 0]
words['words'] = words.words.str.lower()
#words.head()
#words
```
Calculate word counts in the article.
```
rows = list()
for row in doc[['article', 'words']].iterrows():
    r = row[1]
    for word in r.words:
        rows.append((r.article, word))
wordcount = pd.DataFrame(rows, columns=['article', 'words'])
wordcount['words'] = wordcount.words.str.lower()
wordcount.words = wordcount.words.str.replace('\d+', '')
wordcount.words = wordcount.words.str.replace(r'^the', '')
wordcount = wordcount[wordcount.words.str.len() > 2]
counts = wordcount.groupby('article')\
    .words.value_counts()\
    .to_frame()\
    .rename(columns={'words': 'n_w'})
#counts.head()
counts
#wordcount
#wordcount.words.tolist()
#counts.columns
```
Plot number frequency graph.
```
def pretty_plot_top_n(series, top_n=20, index_level=0):
    r = series\
        .groupby(level=index_level)\
        .nlargest(top_n)\
        .reset_index(level=index_level, drop=True)
    r.plot.bar()
    return r.to_frame()

pretty_plot_top_n(counts['n_w'])

word_sum = counts.groupby(level=0)\
    .sum()\
    .rename(columns={'n_w': 'n_d'})
word_sum
tf = counts.join(word_sum)
tf['tf'] = tf.n_w/tf.n_d
tf.head()
#tf
```
Plot top 20 words based on TF
```
pretty_plot_top_n(tf['tf'])
c_d = wordcount.article.nunique()
c_d
idf = wordcount.groupby('words')\
    .article\
    .nunique()\
    .to_frame()\
    .rename(columns={'article': 'i_d'})\
    .sort_values('i_d')
idf.head()
idf['idf'] = np.log(c_d/idf.i_d.values)
idf.head()
#idf
```
IDF values are all zeros because in this example, only 1 article is considered & all unique words appeared in the same article. IDF values are 0 if it appears in all the documents.
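This follows directly from the definition: with a single article, every word appears in all c_d = 1 documents, so a quick check gives:

```python
import numpy as np

c_d = 1                  # number of articles in the corpus
i_d = 1                  # number of articles containing a given word
idf = np.log(c_d / i_d)  # log(1) = 0: the word carries no discriminative power
print(idf)  # -> 0.0
```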
```
tf_idf = tf.join(idf)
tf_idf.head()
#tf_idf
tf_idf['tf_idf'] = tf_idf.tf * tf_idf.idf
tf_idf.head()
#tf_idf
```
-------------------------------------------------
**Part 2: Using a Hopfield Network to find the most important words**
In this part, the TF scores are treated as the frequency vector, i.e. the input to the Hopfield Network.
The frequency matrix is constructed to be treated as the Hopfield Network weights.
```
freq_matrix = pd.DataFrame(np.outer(tf_idf["tf"], tf_idf["tf"]), tf_idf["tf"].index, tf_idf["tf"].index)
#freq_matrix.head()
freq_matrix
```
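On a toy frequency vector (hypothetical words and values), the outer-product weight construction above behaves like a Hebbian rule: each weight is the product of two word frequencies, and the matrix is symmetric:

```python
import numpy as np
import pandas as pd

# Hypothetical frequency vector for three words
tf = pd.Series([0.5, 0.3, 0.2], index=['alpha', 'beta', 'gamma'])

# W[i, j] = tf[i] * tf[j], same construction as the freq_matrix above
W = pd.DataFrame(np.outer(tf, tf), index=tf.index, columns=tf.index)
print(W.round(3))
```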
Finding the maximum of the frequency vector and matrix
```
vector_max = tf_idf['tf'].max()
print(vector_max)
matrix_max = freq_matrix.max().max()
print(matrix_max)
```
Normalizing the frequency vector
```
tf_idf['norm_freq'] = tf_idf.tf / vector_max
temp_df = tf_idf[['tf', 'norm_freq']]
#temp_df
temp_df.head(20)
#tf_idf.head()
#tf_idf
```
Normalizing the frequency matrix
```
freq_matrix_norm = freq_matrix.div(matrix_max)
freq_matrix_norm
np.fill_diagonal(freq_matrix_norm.values, 0)
freq_matrix_norm
```
```
# Define the sigmoid function.
# Currently just a placeholder, because the tanh activation function is used instead.
def sigmoid(x):
    beta = 1
    return 1 / (1 + np.exp(-x * beta))
tf_idf["hopfield_value"] = np.tanh(freq_matrix_norm @ tf_idf["norm_freq"])
# Apply the Hopfield update repeatedly to let the values settle
temp = tf_idf["hopfield_value"]
for _ in range(14):
    temp = np.tanh(freq_matrix_norm @ temp)
#temp
#temp
#temp.head()
temp.head(20)
```
# **The Hopfield Algorithm**
```
#safe limit
itr = 0
zero_itr = 0
max_itr = 5 #maximum iteration where Delta Energy is 0
char_list = []
delta_energy = 0
threshold = 0
energy = 0
init_energy = 0
tf_idf["hopfield_value"] = np.tanh(freq_matrix_norm @ tf_idf["norm_freq"])
while delta_energy < 0.0001:
    itr = itr + 1

    # Calculation of output vector from Hopfield Network
    # y = activation_function(sum(W * x))
    tf_idf["hopfield_value"] = np.tanh(freq_matrix_norm @ tf_idf["hopfield_value"])

    # Calculation of Hopfield Energy Function and its Delta
    # E = [-1/2 * sum(Wij * xi * xj)] + [sum(threshold * xi)]
    energy = (-0.5 * tf_idf["hopfield_value"] @ freq_matrix_norm @ tf_idf["hopfield_value"]) \
        + (np.sum(threshold * tf_idf["hopfield_value"]))

    # Append to list for characterization
    char_list.append(energy)

    # Find Delta for Energy
    delta_energy = energy - init_energy
    #print('Energy = {}'.format(energy))
    #print('Init_Energy = {}'.format(init_energy))
    #print('Delta_Energy = {}'.format(delta_energy))

    init_energy = energy  # The current energy becomes the previous energy in the next iteration

    # Break the loop if Delta Energy stays at zero for max_itr iterations
    if delta_energy == 0:
        zero_itr = zero_itr + 1
        if zero_itr == max_itr:
            print("Hopfield Loop exited at Iteration {}".format(itr))
            break
big_grid = np.arange(0,itr)
plt.plot(big_grid,char_list, color ='blue')
plt.suptitle('Hopfield Energy Value After Each Iteration')
# Customize the major grid
plt.grid(which='major', linestyle='-', linewidth='0.5', color='red')
# Customize the minor grid
plt.grid(which='minor', linestyle=':', linewidth='0.5', color='black')
plt.minorticks_on()
plt.rcParams['figure.figsize'] = [13, 6]
plt.show()
#tf_idf.head()
#tf_idf
#final_hopfield_output = tf_idf["hopfield_value"]
final_output_vector = tf_idf["hopfield_value"]
final_output_vector.head(10)
#final_output_vector.head()
#final_output_vector
#tf_idf
```
Once again, it is shown that the words <font color=green>***kipchoge***</font> and <font color=green>***marathon***</font> are the most important words. This is very likely accurate, because the article was about the performance of Eliud Kipchoge running a marathon.
-------------------------------------------------
**Part 3: Article Summarization**
```
txt_smr_sentences = pd.DataFrame({'sentences': sentences.sentences})
txt_smr_sentences['words'] = txt_smr_sentences.sentences.str.strip().str.split('[\W_]+')
rows = list()
for row in txt_smr_sentences[['sentences', 'words']].iterrows():
    r = row[1]
    for word in r.words:
        rows.append((r.sentences, word))
txt_smr_sentences = pd.DataFrame(rows, columns=['sentences', 'words'])
#remove empty spaces and change words to lower case
txt_smr_sentences['words'].replace('', np.nan, inplace=True)
txt_smr_sentences.dropna(subset=['words'], inplace=True)
txt_smr_sentences.reset_index(drop=True, inplace=True)
txt_smr_sentences['words'] = txt_smr_sentences.words.str.lower()
##Initialize 3 new columns
# w_ind = New word index
# s_strt = Starting index of a sentence
# s_stp = Stopping index of a sentence
# w_scr = Hopfield Value for words
txt_smr_sentences['w_ind'] = txt_smr_sentences.index + 1
txt_smr_sentences['s_strt'] = 0
txt_smr_sentences['s_stp'] = 0
txt_smr_sentences['w_scr'] = 0
#Iterate through the rows to check if the current sentence is equal to
#previous sentence. If not equal, determine the "start" & "stop"
start = 0
stop = 0
prvs_string = ""
for i in txt_smr_sentences.index:
    #print(i)
    if i == 0:
        start = 1
        txt_smr_sentences.iloc[i, 3] = 1
        prvs_string = txt_smr_sentences.iloc[i, 0]
    else:
        if txt_smr_sentences.iloc[i, 0] != prvs_string:
            stop = txt_smr_sentences.iloc[i-1, 2]
            txt_smr_sentences.iloc[i-(stop-start)-1:i, 4] = stop
            start = txt_smr_sentences.iloc[i, 2]
            txt_smr_sentences.iloc[i, 3] = start
            prvs_string = txt_smr_sentences.iloc[i, 0]
        else:
            txt_smr_sentences.iloc[i, 3] = start
    if i == len(txt_smr_sentences.index) - 1:
        last_ind = txt_smr_sentences.w_ind.max()
        txt_smr_sentences.iloc[i-(last_ind-start):i+1, 4] = last_ind
#New Column for length of sentence
txt_smr_sentences['length'] = txt_smr_sentences['s_stp'] - txt_smr_sentences['s_strt'] + 1
#Rearrange the Columns
txt_smr_sentences = txt_smr_sentences[['sentences', 's_strt', 's_stp', 'length', 'words', 'w_ind', 'w_scr']]
txt_smr_sentences.head(100)
#txt_smr_sentences
```
Check if word has Hopfield Score value, and update *txt_smr_sentences*
```
for index, value in final_output_vector.items():
    for i in txt_smr_sentences.index:
        if index[1] == txt_smr_sentences.iloc[i, 4]:
            txt_smr_sentences.iloc[i, 6] = value
#New Column for placeholder of sentences score
txt_smr_sentences['s_scr'] = txt_smr_sentences.w_scr
txt_smr_sentences.head(100)
# three_sigma = 3 * math.sqrt((tf_idf.loc[:,"hopfield_value"].var()))
# three_sigma
# tf_idf["hopfield_value"]
aggregation_functions = {'s_strt': 'first',
                         's_stp': 'first',
                         'length': 'first',
                         's_scr': 'sum'}
tss_new = txt_smr_sentences.groupby(txt_smr_sentences['sentences']).aggregate(aggregation_functions)\
    .sort_values(by='s_scr', ascending=False).reset_index()
tss_new
import math
max_word = math.floor(0.1 * tss_new['s_stp'].max())
print("Max word amount for summary: {}\n".format(max_word))
summary = tss_new.loc[tss_new['s_strt'] == 1, 'sentences'].iloc[0] + ". " ##Consider the Title of the Article
length_printed = 0
for i in tss_new.index:
    if length_printed <= max_word:
        summary += tss_new.iloc[i, 0] + ". "
        length_printed += tss_new.iloc[i, 3]  ## Consider the sentence where max_word appears in the middle
    else:
        break

class style:
    BOLD = '\033[1m'
    END = '\033[0m'
print('\n','--------------------------------------------------------')
s = pd.Series([style.BOLD+summary+style.END])
print(s.str.split(' '))
print('\n')
#!jupyter nbconvert --to html ./ArticleSummarization.ipynb
```
```
import numpy as np
from scipy.stats import norm
from stochoptim.scengen.scenario_tree import ScenarioTree
from stochoptim.scengen.scenario_process import ScenarioProcess
from stochoptim.scengen.variability_process import VariabilityProcess
from stochoptim.scengen.figure_of_demerit import FigureOfDemerit
```
We illustrate on a Geometric Brownian Motion (GBM) the two ways (forward vs. backward) to build a scenario tree with **optimized scenarios**.
# Define a `ScenarioProcess` instance for the GBM
```
S_0 = 2 # initial value (at stage 0)
delta_t = 1 # time lag between 2 stages
mu = 0 # drift
sigma = 1 # volatility
```
The `gbm_recurrence` function below implements the dynamic relation of a GBM:
* $S_{t} = S_{t-1} \exp[(\mu - \sigma^2/2) \Delta t + \sigma \epsilon_t\sqrt{\Delta t}], \quad t=1,2,\dots$
where $\epsilon_t$ is a standard normal random variable $N(0,1)$.
The discretization of $\epsilon_t$ is done by quasi-Monte Carlo (QMC) and is implemented by the `epsilon_sample_qmc` method.
```
def gbm_recurrence(stage, epsilon, scenario_path):
    if stage == 0:
        return {'S': np.array([S_0])}
    else:
        return {'S': scenario_path[stage-1]['S']
                     * np.exp((mu - sigma**2 / 2) * delta_t + sigma * np.sqrt(delta_t) * epsilon)}

def epsilon_sample_qmc(n_samples, stage, u=0.5):
    return norm.ppf(np.linspace(0, 1 - 1/n_samples, n_samples) + u / n_samples).reshape(-1, 1)
scenario_process = ScenarioProcess(gbm_recurrence, epsilon_sample_qmc)
```
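As a standalone sanity check (re-declaring the constants so the sketch runs on its own), one step of the recurrence applied to a QMC sample produces one child value of S per stratum, and the QMC points are symmetric around zero:

```python
import numpy as np
from scipy.stats import norm

S_0, delta_t, mu, sigma = 2, 1, 0, 1

def gbm_step(prev, epsilon):
    # S_t = S_{t-1} * exp((mu - sigma^2/2) * dt + sigma * eps * sqrt(dt))
    return prev * np.exp((mu - sigma**2 / 2) * delta_t
                         + sigma * np.sqrt(delta_t) * epsilon)

# QMC discretization of eps ~ N(0, 1): midpoints of 4 equal-probability strata
eps = norm.ppf(np.linspace(0, 1 - 1/4, 4) + 0.5 / 4)
children = gbm_step(S_0, eps)
print(children.shape)  # -> (4,)
```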
# Define a `VariabilityProcess` instance
A `VariabilityProcess` provides the *variability* of a stochastic problem along the stages and the scenarios. What we call 'variability' is a positive number that indicates how variable the future is given the present scenario.
Mathematically, a `VariabilityProcess` must implement one of the following two methods:
* the `lookback_fct` method which corresponds to the function $\mathcal{V}_{t}(S_{1}, ..., S_{t})$ that provides the variability at stage $t+1$ given the whole past scenario,
* the `looknow_fct` method which corresponds to the function $\mathcal{\tilde{V}}_{t}(\epsilon_t)$ that provides the variability at stage $t+1$ given the present random perturbation $\epsilon_t$.
If the `lookback_fct` method is provided, the scenarios can be optimized using the keyword argument `optimized='forward'`.
If the `looknow_fct` method is provided, the scenarios can be optimized using the keyword argument `optimized='backward'`.
```
def lookback_fct(stage, scenario_path):
    return scenario_path[stage]['S'][0]

def looknow_fct(stage, epsilon):
    return np.exp(epsilon[0])
my_variability = VariabilityProcess(lookback_fct, looknow_fct)
```
# Define a `FigureOfDemerit` instance
```
def demerit_fct(stage, epsilons, weights):
    return 1 / len(epsilons)
my_demerit = FigureOfDemerit(demerit_fct, my_variability)
```
# Optimized Assignment of Scenarios to Nodes
### `optimized='forward'`
```
scen_tree = ScenarioTree.from_recurrence(last_stage=3, init=3, recurrence={1: (2,), 2: (1,2), 3: (1,2,3)})
scen_tree.fill(scenario_process,
               optimized='forward',
               variability_process=my_variability,
               demerit=my_demerit)
scen_tree.plot('S')
scen_tree.plot_scenarios('S')
```
### `optimized='backward'`
```
scen_tree = ScenarioTree.from_recurrence(last_stage=3, init=3, recurrence={1: (2,), 2: (1,2), 3: (1,2,3)})
scen_tree.fill(scenario_process,
               optimized='backward',
               variability_process=my_variability,
               demerit=my_demerit)
scen_tree.plot('S')
scen_tree.plot_scenarios('S')
```
### 3. Tackle the Titanic dataset
```
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals

# Common imports
import numpy as np
import os

# to make this notebook's output stable across runs
np.random.seed(42)

# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12

# Where to save the figures
# Set the figure save path; the helper function below is defined once and called later
PROJECT_ROOT_DIR = "F:\\ML\\Machine learning\\Hands-on machine learning with scikit-learn and tensorflow"
CHAPTER_ID = "Classification_MNIST_03"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)

def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
    path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
    print("Saving figure", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format=fig_extension, dpi=resolution)

# Ignore useless warnings (see SciPy issue #5998)
import warnings
warnings.filterwarnings(action="ignore", message="^internal gelsd")
```
**The goal** is to predict whether a passenger survived, based on attributes such as age, sex, passenger class, port of embarkation, and so on.
* First, log in to Kaggle, go to the Titanic challenge, and download train.csv and test.csv. Save them to the datasets/titanic directory.
* Next, let's load the data:
```
import os
TITANIC_PATH = os.path.join("datasets", "titanic")
import pandas as pd
def load_titanic_data(filename, titanic_path=TITANIC_PATH):
csv_path = os.path.join(titanic_path, filename)
return pd.read_csv(csv_path)
train_data = load_titanic_data("train.csv")
test_data = load_titanic_data("test.csv")
```
The data is already split into a training set and a test set. However, the test data does not contain the labels:
* your goal is to train the best model you can using the training data,
* then make your predictions on the test data and upload them to Kaggle to see your final score.
Let's take a peek at the top few rows of the training set:
```
train_data.head()
```
* **Survived**: that's the target, 0 means the passenger did not survive, while 1 means he/she survived.
* **Pclass**: passenger class.
* **Name, Sex, Age**: self-explanatory.
* **SibSp**: how many siblings & spouses of the passenger were aboard the Titanic.
* **Parch**: how many children & parents of the passenger were aboard the Titanic.
* **Ticket**: ticket id.
* **Fare**: price paid (in pounds).
* **Cabin**: the passenger's cabin number.
* **Embarked**: where the passenger embarked the Titanic.
```
train_data.info()
```
Okay, the **Age, Cabin** and **Embarked** attributes are sometimes null (less than 891 non-null), especially the **Cabin** (77% are null). We will **ignore the Cabin for now and focus on the rest**. The **Age** attribute has about 19% null values, so we will need to decide what to do with them.
* Replacing null values with the median age seems reasonable.
The **Name** and **Ticket** attributes may have some value, but they will be a bit tricky to convert into useful numbers that a model can consume. So for now, we will **ignore them**.
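A toy, standard-library sketch of the median-imputation idea (the ages here are made up, purely for illustration):

```python
from statistics import median

ages = [22.0, None, 26.0, 35.0, None, 28.0]   # None marks a missing Age
fill_value = median(a for a in ages if a is not None)
filled = [fill_value if a is None else a for a in ages]
```

In the pipeline below, sklearn's imputer does the same thing column-wise.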
Let's take a look at the **numerical attributes**:
```
train_data.describe()
# only in a Jupyter notebook
# Another quick way to get a feel for the data is to plot histograms
%matplotlib inline
import matplotlib.pyplot as plt
train_data.hist(bins=50, figsize=(20,15))
plt.show()
```
* Only 38% **survived**. :( That's close enough to 40%, so accuracy will be a reasonable metric to evaluate our model.
* The mean **Fare** was £32.20, which does not seem so expensive (but it was probably a lot of money back then).
* The mean **Age** was less than 30 years old.
Let's check that the target is indeed 0 or 1:
```
train_data["Survived"].value_counts()
```
Now let's take a quick look at all the categorical attributes:
```
train_data["Pclass"].value_counts()
train_data["Sex"].value_counts()
train_data["Embarked"].value_counts()
```
The **Embarked** attribute tells us where the passenger embarked: C = Cherbourg, Q = Queenstown, S = Southampton.
Now let's build our preprocessing pipelines. We will reuse the DataFrameSelector we built in the previous chapter to select specific attributes from the DataFrame:
```
from sklearn.base import BaseEstimator, TransformerMixin
# A class to select numerical or categorical columns
# since Scikit-Learn doesn't handle DataFrames yet
class DataFrameSelector(BaseEstimator, TransformerMixin):
def __init__(self, attribute_names):
self.attribute_names = attribute_names
def fit(self, X, y=None):
return self
def transform(self, X):
return X[self.attribute_names]
```
Let's build the pipeline for the numerical attributes:
```
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import Imputer
imputer = Imputer(strategy="median")
num_pipeline = Pipeline([
("select_numeric", DataFrameSelector(["Age", "SibSp", "Parch", "Fare"])),
("imputer", Imputer(strategy="median")),
])
num_pipeline.fit_transform(train_data)
```
We will also need an imputer for the string categorical columns (the regular Imputer does not work on those):
```
# Inspired from stackoverflow.com/questions/25239958
class MostFrequentImputer(BaseEstimator, TransformerMixin):
def fit(self, X, y=None):
self.most_frequent_ = pd.Series([X[c].value_counts().index[0] for c in X],
index=X.columns)
return self
def transform(self, X, y=None):
return X.fillna(self.most_frequent_)
```
We can convert each categorical value to a **one-hot vector** using the **OneHotEncoder**.
Right now this class can only handle integer categorical inputs, but in Scikit-Learn 0.20 it will also handle string categorical inputs (see PR #10521). So for now we import it from future_encoders.py, but once Scikit-Learn 0.20 is released you can import it from sklearn.preprocessing instead:
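As a hand-rolled illustration of what a one-hot vector is (the category list here is hypothetical, e.g. the three Embarked values; OneHotEncoder does the real work):

```python
categories = ["C", "Q", "S"]   # hypothetical category list

def one_hot(value, categories=categories):
    # one slot per category: 1 where the value matches, 0 elsewhere
    return [1 if value == c else 0 for c in categories]

encoded = one_hot("Q")
```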
```
from sklearn.preprocessing import OneHotEncoder
```
Now let's build the pipeline for the categorical attributes:
```
cat_pipeline = Pipeline([
("select_cat", DataFrameSelector(["Pclass", "Sex", "Embarked"])),
("imputer", MostFrequentImputer()),
("cat_encoder", OneHotEncoder(sparse=False)),
])
cat_pipeline.fit_transform(train_data)
```
Finally, let's join the numerical and categorical pipelines:
```
from sklearn.pipeline import FeatureUnion
preprocess_pipeline = FeatureUnion(transformer_list=[
("num_pipeline", num_pipeline),
("cat_pipeline", cat_pipeline),
])
```
Cool! Now we have a nice preprocessing pipeline that takes the raw data and outputs numerical input features that we can feed to any Machine Learning model we want.
```
X_train = preprocess_pipeline.fit_transform(train_data)
X_train
```
Let's not forget to get the labels:
```
y_train = train_data["Survived"]
```
We are now ready to train a classifier. Let's start with an SVC:
```
from sklearn.svm import SVC
svm_clf = SVC()
svm_clf.fit(X_train, y_train)
```
Great, our model is trained, let's use it to make predictions on the test set:
```
X_test = preprocess_pipeline.transform(test_data)
y_pred = svm_clf.predict(X_test)
```
And now we could just:
* build a CSV file with these predictions (respecting the format expected by Kaggle),
* then upload it and hope for the best.
But wait! We can do better than hope. Why don't we use cross-validation to get an idea of how good our model is?
```
from sklearn.model_selection import cross_val_score
svm_scores = cross_val_score(svm_clf, X_train, y_train, cv=10)
svm_scores.mean()
```
Okay, over 73% accuracy, clearly better than random chance, but it's not a great score. Looking at the leaderboard for the Titanic competition on Kaggle, you can see that you need to reach above 80% accuracy to be within the top 10% of Kagglers. Some reached 100%, but since you can easily find the list of victims of the Titanic, it seems likely that there was little Machine Learning involved in their performance! ;-) So let's try to build a model that reaches 80% accuracy.
Let's try a **RandomForestClassifier**:
```
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(random_state=42)
forest_scores = cross_val_score(forest_clf, X_train, y_train, cv=10)
forest_scores.mean()
```
Much better this time!
* Instead of just looking at the mean accuracy across the 10 cross-validation folds, let's plot all 10 scores for each model,
* along with a box plot highlighting the lower and upper quartiles, and "whiskers" showing the extent of the scores (thanks to Nevin Yilmaz for suggesting this visualization).
Note that the **boxplot() function** detects outliers (called "fliers") and does not include them within the whiskers. Specifically:
* if the lower quartile is $Q_1$ and the upper quartile is $Q_3$,
* then the interquartile range $IQR = Q_3 - Q_1$ (that's the box's height),
* and any score lower than $Q_1 - 1.5 \times IQR$ is a **flier**, and so is any score greater than $Q_3 + 1.5 \times IQR$.
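The flier rule can be checked with a small standalone example (these scores are made up for illustration):

```python
from statistics import quantiles

scores = [0.70, 0.72, 0.73, 0.74, 0.75, 0.75, 0.76, 0.77, 0.78, 0.95]
q1, _, q3 = quantiles(scores, n=4, method="inclusive")
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
fliers = [s for s in scores if s < lower or s > upper]
```

Here the 0.95 score falls above $Q_3 + 1.5 \times IQR$, so the boxplot would draw it as a flier rather than extend the whisker to it.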
```
plt.figure(figsize=(8, 4))
plt.plot([1]*10, svm_scores, ".")
plt.plot([2]*10, forest_scores, ".")
plt.boxplot([svm_scores, forest_scores],
labels=("SVM","Random Forest"))
plt.ylabel("Accuracy", fontsize=14)
plt.show()
```
To improve this result further, you could: compare many more models and tune hyperparameters using cross-validation and grid search, and do more feature engineering, for example:
* replace SibSp and Parch with their sum,
* try to identify parts of names that correlate well with the Survived attribute (e.g. if the name contains "Countess", then survival seems more likely),
* try to convert numerical attributes to categorical attributes: for example,
* different age groups had very different survival rates (see below), so it may help to create an age bucket category and use it instead of the age,
* similarly, it may be useful to have a special category for people traveling alone, since only 30% of them survived (see below).
```
train_data["AgeBucket"] = train_data["Age"] // 15 * 15
train_data[["AgeBucket", "Survived"]].groupby(['AgeBucket']).mean()
train_data["RelativesOnboard"] = train_data["SibSp"] + train_data["Parch"]
train_data[["RelativesOnboard", "Survived"]].groupby(['RelativesOnboard']).mean()
```
### 4. Spam classifier
Download examples of spam and ham from Apache SpamAssassin's public datasets, then:
* Unzip the datasets and familiarize yourself with the data format.
* Split the datasets into a training set and a test set.
* Write a data preparation pipeline to convert each email into a feature vector. Your preparation pipeline should transform an email into a (sparse) vector indicating the presence or absence of each possible word. For example, if all emails only ever contain four words,
"Hello," "how," "are," "you,"
then the email "Hello you Hello Hello you" would be converted into a vector [1, 0, 0, 1]
(meaning ["Hello" is present, "how" is absent, "are" is absent, "you" is present]),
or [3, 0, 0, 2] if you prefer to count the number of occurrences of each word.
* You may want to add hyperparameters to your preparation pipeline to control whether or not to strip off email headers, convert each email to lowercase, remove punctuation, replace all URLs with "URL", replace all numbers with "NUMBER", and even perform *stemming* (i.e., trim off word endings; there are Python libraries available to do this).
* Then try out several classifiers and see if you can build a great spam classifier, with both high recall and high precision.
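The vectorization described above can be sketched in a few lines:

```python
from collections import Counter

vocab = ["Hello", "how", "are", "you"]
email = "Hello you Hello Hello you"

counts = Counter(email.split())
presence_vector = [1 if word in counts else 0 for word in vocab]  # presence/absence
count_vector = [counts[word] for word in vocab]                   # occurrence counts
```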
First, let's fetch the data:
```
import os
import tarfile
from six.moves import urllib
DOWNLOAD_ROOT = "http://spamassassin.apache.org/old/publiccorpus/"
HAM_URL = DOWNLOAD_ROOT + "20030228_easy_ham.tar.bz2"
SPAM_URL = DOWNLOAD_ROOT + "20030228_spam.tar.bz2"
SPAM_PATH = os.path.join("datasets", "spam")
def fetch_spam_data(spam_url=SPAM_URL, spam_path=SPAM_PATH):
if not os.path.isdir(spam_path):
os.makedirs(spam_path)
for filename, url in (("ham.tar.bz2", HAM_URL), ("spam.tar.bz2", SPAM_URL)):
path = os.path.join(spam_path, filename)
if not os.path.isfile(path):
urllib.request.urlretrieve(url, path)
tar_bz2_file = tarfile.open(path)
tar_bz2_file.extractall(path=SPAM_PATH)
tar_bz2_file.close()
fetch_spam_data()
```
Next, let's load all the emails:
```
HAM_DIR = os.path.join(SPAM_PATH, "easy_ham")
SPAM_DIR = os.path.join(SPAM_PATH, "spam")
ham_filenames = [name for name in sorted(os.listdir(HAM_DIR)) if len(name) > 20]
spam_filenames = [name for name in sorted(os.listdir(SPAM_DIR)) if len(name) > 20]
len(ham_filenames)
len(spam_filenames)
```
We can use Python's email module to parse these emails (this handles headers, encoding, and so on):
```
import email
import email.policy
def load_email(is_spam, filename, spam_path=SPAM_PATH):
directory = "spam" if is_spam else "easy_ham"
with open(os.path.join(spam_path, directory, filename), "rb") as f:
return email.parser.BytesParser(policy=email.policy.default).parse(f)
ham_emails = [load_email(is_spam=False, filename=name) for name in ham_filenames]
spam_emails = [load_email(is_spam=True, filename=name) for name in spam_filenames]
```
Let's look at one example of ham and one example of spam, to get a feel of what the data looks like:
```
print(ham_emails[1].get_content().strip())
```
```
print(spam_emails[6].get_content().strip())
```
Some emails are actually multipart, with images and attachments (which can have their own attachments). Let's look at the various types of structures we have:
```
def get_email_structure(email):
if isinstance(email, str):
return email
payload = email.get_payload()
if isinstance(payload, list):
return "multipart({})".format(", ".join([
get_email_structure(sub_email)
for sub_email in payload
]))
else:
return email.get_content_type()
from collections import Counter
def structures_counter(emails):
structures = Counter()
for email in emails:
structure = get_email_structure(email)
structures[structure] += 1
return structures
structures_counter(ham_emails).most_common()
structures_counter(spam_emails).most_common()
```
It seems that the ham emails are more often plain text, while spam has quite a lot of HTML. Moreover, quite a few ham emails are signed using PGP, while no spam is. In short, it seems that the email structure is useful information to have.
Now let's take a look at the email headers:
```
for header, value in spam_emails[0].items():
print(header,":",value)
```
There's probably a lot of useful information in there, such as the sender's email address (12a1mailbot1@web.de looks fishy), but we will just focus on the Subject header:
```
spam_emails[0]["Subject"]
```
Okay, before we learn too much about the data, let's not forget to split it into a training set and a test set:
```
import numpy as np
from sklearn.model_selection import train_test_split
X = np.array(ham_emails + spam_emails)
y = np.array([0] * len(ham_emails) + [1] * len(spam_emails))
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```
Okay, let's start writing the preprocessing functions. First, we will need a function to convert HTML to plain text. Arguably the best way to do this would be to use the great BeautifulSoup library, but I would like to avoid adding another dependency to this project, so let's hack a quick & dirty solution using regular expressions (at the risk of un̨ho͞ly radiańcé destro҉ying all enli̍̈́̂̈́ghtenment). The following function first drops the `<head>` section, then converts all `<a>` tags to the word HYPERLINK, then it gets rid of all HTML tags, leaving only the plain text. For readability, it also replaces multiple newlines with single newlines, and finally it unescapes html entities (such as `&gt;` or `&nbsp;`):
```
import re
from html import unescape
def html_to_plain_text(html):
text = re.sub('<head.*?>.*?</head>', '', html, flags=re.M | re.S | re.I)
text = re.sub('<a\s.*?>', ' HYPERLINK ', text, flags=re.M | re.S | re.I)
text = re.sub('<.*?>', '', text, flags=re.M | re.S)
text = re.sub(r'(\s*\n)+', '\n', text, flags=re.M | re.S)
return unescape(text)
```
Let's see if it works. This is HTML spam:
```
html_spam_emails = [email for email in X_train[y_train==1]
if get_email_structure(email) == "text/html"]
sample_html_spam = html_spam_emails[7]
print(sample_html_spam.get_content().strip()[:1000], "...")
```
And this is the resulting plain text:
```
print(html_to_plain_text(sample_html_spam.get_content())[:1000], "...")
```
Great! Now let's write a function that takes an email as input and returns its content as plain text, whatever its format is:
```
def email_to_text(email):
html = None
for part in email.walk():
ctype = part.get_content_type()
if not ctype in ("text/plain", "text/html"):
continue
try:
content = part.get_content()
except: # in case of encoding issues
content = str(part.get_payload())
if ctype == "text/plain":
return content
else:
html = content
if html:
return html_to_plain_text(html)
print(email_to_text(sample_html_spam)[:100], "...")
```
Let's throw in some stemming! For this to work, you need to install the Natural Language Toolkit (NLTK). It's as simple as running the following command (don't forget to activate your virtualenv first; if you don't have one, you will likely need administrator rights, or use the --user option):
$ pip3 install nltk
```
try:
import nltk
stemmer = nltk.PorterStemmer()
for word in ("Computations", "Computation", "Computing", "Computed", "Compute", "Compulsive"):
print(word, "=>", stemmer.stem(word))
except ImportError:
print("Error: stemming requires the NLTK module.")
stemmer = None
```
We will also need a way to replace URLs with the word "URL". For this, we could use hard core regular expressions but we will just use the urlextract library. You can install it with the following command (don't forget to activate your virtualenv first; if you don't have one, you will likely need administrator rights, or use the --user option):
$ pip3 install urlextract
```
try:
import urlextract # may require an Internet connection to download root domain names
url_extractor = urlextract.URLExtract()
print(url_extractor.find_urls("Will it detect github.com and https://youtu.be/7Pq-S557XQU?t=3m32s"))
except ImportError:
print("Error: replacing URLs requires the urlextract module.")
url_extractor = None
```
We are ready to put all this together into a transformer that we will use to convert emails to word counters. Note that we split sentences into words using Python's split() method, which uses whitespaces for word boundaries. This works for many written languages, but not all. For example, Chinese and Japanese scripts generally don't use spaces between words, and Vietnamese often uses spaces even between syllables. It's okay in this exercise, because the dataset is (mostly) in English.
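A quick sanity check of split()'s whitespace behavior (the Chinese sentence is only an illustration):

```python
# English text: whitespace splitting recovers the words
english = "Will it detect spam or ham".split()

# Chinese text has no spaces between words, so split() yields a single token
chinese = "机器学习很有趣".split()
```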
```
from sklearn.base import BaseEstimator, TransformerMixin
class EmailToWordCounterTransformer(BaseEstimator, TransformerMixin):
def __init__(self, strip_headers=True, lower_case=True, remove_punctuation=True,
replace_urls=True, replace_numbers=True, stemming=True):
self.strip_headers = strip_headers
self.lower_case = lower_case
self.remove_punctuation = remove_punctuation
self.replace_urls = replace_urls
self.replace_numbers = replace_numbers
self.stemming = stemming
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
X_transformed = []
for email in X:
text = email_to_text(email) or ""
if self.lower_case:
text = text.lower()
if self.replace_urls and url_extractor is not None:
urls = list(set(url_extractor.find_urls(text)))
urls.sort(key=lambda url: len(url), reverse=True)
for url in urls:
text = text.replace(url, " URL ")
if self.replace_numbers:
text = re.sub(r'\d+(?:\.\d*(?:[eE]\d+))?', 'NUMBER', text)
if self.remove_punctuation:
text = re.sub(r'\W+', ' ', text, flags=re.M)
word_counts = Counter(text.split())
if self.stemming and stemmer is not None:
stemmed_word_counts = Counter()
for word, count in word_counts.items():
stemmed_word = stemmer.stem(word)
stemmed_word_counts[stemmed_word] += count
word_counts = stemmed_word_counts
X_transformed.append(word_counts)
return np.array(X_transformed)
```
Let's try this transformer on a few emails:
```
X_few = X_train[:3]
X_few_wordcounts = EmailToWordCounterTransformer().fit_transform(X_few)
X_few_wordcounts
```
This looks about right!
Now we have the word counts, and we need to convert them to vectors. For this, we will build another transformer whose fit() method will build the vocabulary (an ordered list of the most common words) and whose transform() method will use the vocabulary to convert word counts to vectors. The output is a sparse matrix.
```
from scipy.sparse import csr_matrix
class WordCounterToVectorTransformer(BaseEstimator, TransformerMixin):
def __init__(self, vocabulary_size=1000):
self.vocabulary_size = vocabulary_size
def fit(self, X, y=None):
total_count = Counter()
for word_count in X:
for word, count in word_count.items():
total_count[word] += min(count, 10)
most_common = total_count.most_common()[:self.vocabulary_size]
self.most_common_ = most_common
self.vocabulary_ = {word: index + 1 for index, (word, count) in enumerate(most_common)}
return self
def transform(self, X, y=None):
rows = []
cols = []
data = []
for row, word_count in enumerate(X):
for word, count in word_count.items():
rows.append(row)
cols.append(self.vocabulary_.get(word, 0))
data.append(count)
return csr_matrix((data, (rows, cols)), shape=(len(X), self.vocabulary_size + 1))
vocab_transformer = WordCounterToVectorTransformer(vocabulary_size=10)
X_few_vectors = vocab_transformer.fit_transform(X_few_wordcounts)
X_few_vectors
X_few_vectors.toarray()
```
What does this matrix mean? Well, the 64 in the third row, first column, means that the third email contains 64 words that are not part of the vocabulary. The 1 next to it means that the first word in the vocabulary is present once in this email. The 2 next to it means that the second word is present twice, and so on. You can look at the vocabulary to know which words we are talking about. The first word is "of", the second word is "and", etc.
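The row layout can be reproduced with a tiny pure-Python sketch (the vocabulary and counts here are invented for illustration; the real transformer above builds them from data):

```python
from collections import Counter

# Hypothetical 2-word vocabulary; column 0 is reserved for out-of-vocabulary words
vocabulary = {"of": 1, "and": 2}
word_counts = Counter({"of": 1, "and": 2, "xyzzy": 3, "plugh": 1})

row = [0] * (len(vocabulary) + 1)
for word, count in word_counts.items():
    row[vocabulary.get(word, 0)] += count   # unknown words all land in column 0
```

Here `row` ends up as `[4, 1, 2]`: 4 out-of-vocabulary words, "of" once, "and" twice.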
```
vocab_transformer.vocabulary_
```
We are now ready to train our first spam classifier! Let's transform the whole dataset:
```
from sklearn.pipeline import Pipeline
preprocess_pipeline = Pipeline([
("email_to_wordcount", EmailToWordCounterTransformer()),
("wordcount_to_vector", WordCounterToVectorTransformer()),
])
X_train_transformed = preprocess_pipeline.fit_transform(X_train)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
log_clf = LogisticRegression(random_state=42)
score = cross_val_score(log_clf, X_train_transformed, y_train, cv=3, verbose=3)
score.mean()
```
Over 98.7%, not bad for a first try! :) However, remember that we are using the "easy" dataset. You can try with the harder datasets, the results won't be so amazing. You would have to try multiple models, select the best ones and fine-tune them using cross-validation, and so on.
But you get the picture, so let's stop now, and just print out the precision/recall we get on the test set:
```
from sklearn.metrics import precision_score, recall_score
X_test_transformed = preprocess_pipeline.transform(X_test)
log_clf = LogisticRegression(random_state=42)
log_clf.fit(X_train_transformed, y_train)
y_pred = log_clf.predict(X_test_transformed)
print("Precision: {:.2f}%".format(100 * precision_score(y_test, y_pred)))
print("Recall: {:.2f}%".format(100 * recall_score(y_test, y_pred)))
```
```
import os, sys, time, importlib
import geopandas as gpd
import pandas as pd
import networkx as nx
sys.path.append('/home/wb514197/Repos/GOSTnets')
import GOSTnets as gn
import GOSTnets.calculate_od_raw as calcOD
from GOSTnets.load_osm import *
import rasterio as rio
from osgeo import gdal
import numpy as np
from shapely.geometry import Point
sys.path.append('/home/wb514197/Repos/INFRA_SAP')
from infrasap import aggregator
%load_ext autoreload
%autoreload 2
import glob
import os
import numpy as np
import pandas as pd
import geopandas as gpd
import rasterio as rio
from rasterio import features
from rasterstats import zonal_stats
from rasterio.warp import reproject, Resampling
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from infrasap import rasterMisc
country = 'zimbabwe'
iso3 = 'ZWE'
epsg = 32736
base_in = "/home/public/Data/PROJECTS/INFRA_SAP"
in_folder = os.path.join(base_in, iso3)
# define data paths
focal_admin2 = os.path.join(in_folder, "admin.shp")
focal_osm = os.path.join(in_folder, f"{country}-latest.osm.pbf")
pop_name = "WP_2020_1km"
wp_1km = os.path.join(in_folder, f"{pop_name}.tif")
urban_extents = os.path.join(in_folder, "urban_extents.shp")
airports = os.path.join(in_folder, "airports.shp")
ports = os.path.join(in_folder, "ports.shp")
borders = os.path.join(in_folder, "borders.shp")
base_out = "/home/wb514197/data/INFRA_SAP" # GOT permission denied using public
out_folder = os.path.join(base_out, iso3)
if not os.path.exists(out_folder):
os.makedirs(out_folder)
targets_rio = rio.open("/home/wb514197/data/ENERGY/targets.tif") # from GRIDFINDER Paper
# targets = targets_rio.read(1)
wp_100m = rio.open(os.path.join(out_folder, "zwe_ppp_2020_UNadj.tif"))
wp_arr = wp_100m.read(1, masked=True)
pop_fb = rio.open(os.path.join(out_folder, "population_zwe_2019-07-01.tif"))
fb_arr = pop_fb.read(1, masked=True)
wp_arr.shape
fb_arr.shape
rasterMisc.standardizeInputRasters(targets_rio, wp_100m, os.path.join(out_folder, "energy", "targets_ZWE_wp.tif"), data_type='C')
rasterMisc.standardizeInputRasters(targets_rio, pop_fb, os.path.join(out_folder, "energy", "targets_ZWE_fb.tif"), data_type='C')
targets_fb = rio.open(os.path.join(out_folder, "energy", "targets_ZWE_fb.tif"))
targets_fb_arr = targets_fb.read(1)
targets_wp = rio.open(os.path.join(out_folder, "energy", "targets_ZWE_wp.tif"))
targets_wp_arr = targets_wp.read(1)
targets_wp.shape == wp_100m.shape
targets_fb.shape == pop_fb.shape
intersect_wp = targets_wp_arr*wp_arr
intersect_fb = targets_fb_arr*fb_arr
admin = gpd.read_file(focal_admin2)
zs_sum_wp = pd.DataFrame(zonal_stats(admin, wp_arr, affine=wp_100m.transform, stats='sum', nodata=wp_100m.nodata)).rename(columns={'sum':'pop_wp'})
zs_electrified_wp = pd.DataFrame(zonal_stats(admin, intersect_wp, affine=wp_100m.transform, stats='sum', nodata=wp_100m.nodata)).rename(columns={'sum':'pop_electrified_wp'})
zs_sum_fb = pd.DataFrame(zonal_stats(admin, fb_arr, affine=pop_fb.transform, stats='sum', nodata=pop_fb.nodata)).rename(columns={'sum':'pop_fb'})
zs_electrified_fb = pd.DataFrame(zonal_stats(admin, intersect_fb, affine=pop_fb.transform, stats='sum', nodata=pop_fb.nodata)).rename(columns={'sum':'pop_electrified_fb'})
res = pd.concat([admin, zs_sum_wp, zs_electrified_wp, zs_sum_fb, zs_electrified_fb], axis=1)
res['pct_access_wp'] = res['pop_electrified_wp']/res['pop_wp']
res['pct_access_fb'] = res['pop_electrified_fb']/res['pop_fb']
res.columns
res.to_file(os.path.join(out_folder, "energy", "ElectricityAccess.shp"), driver='ESRI Shapefile')
res_table = res.drop(['geometry','Shape_Leng','Shape_Area'], axis=1)
res_table.to_csv(os.path.join(out_folder, "energy", "ElectricityAccess2.csv"), index=False)
transmission = gpd.read_file(os.path.join(in_folder, 'transmission_lines.shp'))
transmission = transmission.to_crs(epsg)
transmission['buffer'] = transmission.buffer(10000)
transmission_buff = transmission.set_geometry('buffer')
transmission_buff = transmission_buff.unary_union
transmission_buff
transmission_buff_gdf = gpd.GeoDataFrame(geometry=[transmission_buff],
crs=epsg)
transmission_buff_gdf
admin_proj = admin.to_crs(epsg)
admin_buff = gpd.overlay(admin_proj, transmission_buff_gdf, how='intersection')
admin_buff.plot('OBJECTID')
admin_buff_wgs = admin_buff.to_crs('EPSG:4326')
zs_10k_trans_wp = pd.DataFrame(zonal_stats(admin_buff_wgs, wp_arr, affine=wp_100m.transform, stats='sum', nodata=wp_100m.nodata)).rename(columns={'sum':'pop_transmission_wp'})
zs_10k_trans_fb = pd.DataFrame(zonal_stats(admin_buff_wgs, fb_arr, affine=pop_fb.transform, stats='sum', nodata=pop_fb.nodata)).rename(columns={'sum':'pop_transmission_fb'})
res = pd.concat([admin, zs_sum_wp, zs_electrified_wp, zs_sum_fb, zs_electrified_fb, zs_10k_trans_wp, zs_10k_trans_fb], axis=1)
res['pct_access_wp'] = res['pop_electrified_wp']/res['pop_wp']
res['pct_access_fb'] = res['pop_electrified_fb']/res['pop_fb']
res['pct_transmission_10k_wp'] = res['pop_transmission_wp']/res['pop_wp']
res['pct_transmission_10k_fb'] = res['pop_transmission_fb']/res['pop_fb']
res.to_file(os.path.join(out_folder, "energy", "ElectricityAccess.shp"), driver='ESRI Shapefile')
res_table = res.drop(['geometry','Shape_Leng','Shape_Area'], axis=1)
res_table.to_csv(os.path.join(out_folder, "energy", "ElectricityAccess3.csv"), index=False)
```
[Binary Tree Tilt](https://leetcode.com/problems/binary-tree-tilt/). The tilt of a node is the absolute difference between the sum of the node values in its left subtree and the sum in its right subtree; the tilt of the whole tree is the sum of the tilts of all its nodes. Compute the tilt of a tree.
Approach: since computing the tilt involves cumulative sums over nodes, design the recursive function to return the cumulative sum of the subtree it visits.
```
def findTilt(root: TreeNode) -> int:
res = 0
def rec(root):  # recursive helper that returns the subtree's cumulative sum
if not root:
return 0
nonlocal res
left_sum = rec(root.left)
right_sum = rec(root.right)
res += abs(left_sum-right_sum)
return left_sum+right_sum+root.val
rec(root)
return res
```
A JD.com 2019 internship written-test problem:
A stadium has suddenly caught fire and must be evacuated immediately, but the aisles are so narrow that only one person can pass at a time. We know the layout of all the seats: the seating chart is a tree, every seat holds exactly one person, and the safety exit is at the root of the tree, i.e. at node 1. Every second, the person at any other node can advance one node toward the root, but apart from the safety exit, no node can hold two or more people at the same time. We need a strategy that evacuates the crowd as fast as possible: under the optimal strategy, how quickly can the stadium be fully evacuated?
Sample data:
6
2 1
3 2
4 3
5 2
6 1
Approach: every node below the second level can only advance one node per second within its branch, so the evacuation time is determined by the branch sizes. Find the branch hanging off the root with the largest node count and return that count.
```
n = int(input())
branches = list()
for _ in range(n-1):
a, b = map(int, input().split())
if b == 1:  # a new branch hanging off the root
branches.append(set([a]))
for branch in branches:
if b in branch:
branch.add(a)
print(max(map(len, branches)))
```
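Applying the same strategy to the sample data with hardcoded edges (like the solution above, this assumes each edge's parent appears in the input before its children):

```python
# Edges (child, parent) for the sample data above
edges = [(2, 1), (3, 2), (4, 3), (5, 2), (6, 1)]

branches = []
for a, b in edges:
    if b == 1:                      # a child of the root starts a new branch
        branches.append({a})
    else:
        for branch in branches:
            if b in branch:
                branch.add(a)

answer = max(len(branch) for branch in branches)
```

The branch through node 2 contains {2, 3, 4, 5}, so the answer for the sample is 4.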
[Leaf-Similar Trees](https://leetcode.com/problems/leaf-similar-trees/). Scanning all the leaves of a binary tree from left to right yields its leaf value sequence. Given two binary trees, determine whether their leaf value sequences are identical.
Approach: it is easy to see that the leaf value sequence can be obtained with an in-order traversal.
```
def leafSimilar(root1: TreeNode, root2: TreeNode) -> bool:
def get_leaf_seq(root):
res=list()
if not root:
return res
s=list()
while root or s:
while root:
s.append(root)
root=root.left
vis_node=s.pop()
if not vis_node.left and not vis_node.right:
res.append(vis_node.val)
if vis_node.right:
root=vis_node.right
return res
seq_1,seq_2=get_leaf_seq(root1),get_leaf_seq(root2)
if len(seq_1)!=len(seq_2):
return False
for val_1,val_2 in zip(seq_1,seq_2):
if val_1!=val_2:
return False
return True
```
[Increasing Order Search Tree](https://leetcode.com/problems/increasing-order-search-tree/). Given a BST, rearrange it into a tree that only has right children.
Approach: in a right-only BST, the root is the minimum node and the values keep increasing down the right spine. The increasing sequence of a BST is produced by an in-order traversal, so just build a new tree from that sequence.
```
def increasingBST(root: TreeNode) -> TreeNode:
res = None
s = list()
while s or root:
while root:
s.append(root)
root = root.left
vis_node = s.pop()
if res is None:  # handle the first node specially
res = TreeNode(vis_node.val)
ptr = res
else:
ptr.right = TreeNode(vis_node.val)
ptr = ptr.right
if vis_node.right:
root = vis_node.right
return res
```
# All
## Set Up
```
print("Installing dependencies...")
%tensorflow_version 2.x
!pip install -q t5
import functools
import os
import time
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
import tensorflow.compat.v1 as tf
import tensorflow_datasets as tfds
import t5
```
## Set Up TPU Runtime
```
ON_CLOUD = True
if ON_CLOUD:
print("Setting up GCS access...")
import tensorflow_gcs_config
from google.colab import auth
# Set credentials for GCS reading/writing from Colab and TPU.
TPU_TOPOLOGY = "v3-8"
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()  # TPU detection
TPU_ADDRESS = tpu.get_master()
print('Running on TPU:', TPU_ADDRESS)
except ValueError:
raise BaseException('ERROR: Not connected to a TPU runtime; please see the previous cell in this notebook for instructions!')
auth.authenticate_user()
tf.config.experimental_connect_to_host(TPU_ADDRESS)
tensorflow_gcs_config.configure_gcs_from_colab_auth()
tf.disable_v2_behavior()
# Improve logging.
from contextlib import contextmanager
import logging as py_logging
if ON_CLOUD:
tf.get_logger().propagate = False
py_logging.root.setLevel('INFO')
@contextmanager
def tf_verbosity_level(level):
og_level = tf.logging.get_verbosity()
tf.logging.set_verbosity(level)
yield
tf.logging.set_verbosity(og_level)
```
## 4b
```
def dumping_dataset(split, shuffle_files = False):
del shuffle_files
if split == 'train':
ds = tf.data.TextLineDataset(
[
'gs://scifive/finetune/bioasq4b/bioasq_4b_train_1.tsv',
]
)
else:
ds = tf.data.TextLineDataset(
[
'gs://scifive/finetune/bioasq4b/bioasq_4b_test.tsv',
]
)
# Split each "<t1>\t<t2>" example into an (input, target) tuple.
ds = ds.map(
functools.partial(tf.io.decode_csv, record_defaults=["", ""],
field_delim="\t", use_quote_delim=False),
num_parallel_calls=tf.data.experimental.AUTOTUNE)
# Map each tuple to a {"input": ... "target": ...} dict.
ds = ds.map(lambda *ex: dict(zip(["input", "target"], ex)))
return ds
print("A few raw validation examples...")
for ex in tfds.as_numpy(dumping_dataset("train").take(5)):
print(ex)
def ner_preprocessor(ds):
def normalize_text(text):
text = tf.strings.lower(text)
text = tf.strings.regex_replace(text,"'(.*)'", r"\1")
return text
def to_inputs_and_targets(ex):
"""Map {"inputs": ..., "targets": ...}->{"inputs": ner..., "targets": ...}."""
return {
"inputs":
tf.strings.join(
["bioasq4b: ", normalize_text(ex["input"])]),
"targets": normalize_text(ex["target"])
}
return ds.map(to_inputs_and_targets,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
t5.data.TaskRegistry.remove('bioasq4b')
t5.data.TaskRegistry.add(
"bioasq4b",
# Supply a function which returns a tf.data.Dataset.
dataset_fn=dumping_dataset,
splits=["train", "validation"],
# Supply a function which preprocesses text from the tf.data.Dataset.
text_preprocessor=[ner_preprocessor],
# Lowercase targets before computing metrics.
postprocess_fn=t5.data.postprocessors.lower_text,
# We'll use accuracy as our evaluation metric.
metric_fns=[t5.evaluation.metrics.accuracy,
t5.evaluation.metrics.sequence_accuracy,
],
# output_features=t5.data.Feature(vocabulary=t5.data.SentencePieceVocabulary(vocab))
)
nq_task = t5.data.TaskRegistry.get("bioasq4b")
ds = nq_task.get_dataset(split="train", sequence_length={"inputs": 128, "targets": 128})
print("A few preprocessed validation examples...")
for ex in tfds.as_numpy(ds.take(5)):
print(ex)
```
## Dataset Mixture
```
t5.data.MixtureRegistry.remove("bioasqb")
t5.data.MixtureRegistry.add(
"bioasqb",
["bioasq4b"],
default_rate=1.0
)
```
## Define Model
```
# Using pretrained_models from wiki + books
MODEL_SIZE = "base"
# BASE_PRETRAINED_DIR = "gs://t5-data/pretrained_models"
BASE_PRETRAINED_DIR = "gs://t5_training/models/bio/pmc_v1"
PRETRAINED_DIR = os.path.join(BASE_PRETRAINED_DIR, MODEL_SIZE)
# MODEL_DIR = "gs://t5_training/models/bio/bioasq4b_pmc_v2"
MODEL_DIR = "gs://t5_training/models/bio/bioasq4b_pmc_v2"
MODEL_DIR = os.path.join(MODEL_DIR, MODEL_SIZE)
# Set parallelism and batch size to fit on v2-8 TPU (if possible).
# Limit number of checkpoints to fit within 5GB (if possible).
model_parallelism, train_batch_size, keep_checkpoint_max = {
"small": (1, 256, 16),
"base": (2, 128*2, 8),
"large": (8, 64, 4),
"3B": (8, 16, 1),
"11B": (8, 16, 1)}[MODEL_SIZE]
tf.io.gfile.makedirs(MODEL_DIR)
# The models from our paper are based on the Mesh Tensorflow Transformer.
model = t5.models.MtfModel(
model_dir=MODEL_DIR,
tpu=TPU_ADDRESS,
tpu_topology=TPU_TOPOLOGY,
model_parallelism=model_parallelism,
batch_size=train_batch_size,
sequence_length={"inputs": 512, "targets": 52},
learning_rate_schedule=0.001,
save_checkpoints_steps=1000,
keep_checkpoint_max=keep_checkpoint_max if ON_CLOUD else None,
iterations_per_loop=100,
)
```
## Finetune
```
FINETUNE_STEPS = 45000
model.finetune(
mixture_or_task_name="bioasqb",
pretrained_model_dir=PRETRAINED_DIR,
finetune_steps=FINETUNE_STEPS
)
```
## Predict
```
year = 4
output_dir = 'predict_output_pmc_lower'
import tensorflow.compat.v1 as tf
# for year in range(4,7):
for batch in range (1,6):
task = "%dB%d"%(year, batch)
dir = "bioasq%db"%(year)
input_file = task + '_factoid_predict_input_lower.txt'
output_file = task + '_predict_output.txt'
predict_inputs_path = os.path.join('gs://t5_training/t5-data/bio_data', dir, 'eval_data', input_file)
print(predict_inputs_path)
predict_outputs_path = os.path.join('gs://t5_training/t5-data/bio_data', dir, output_dir, MODEL_SIZE, output_file)
with tf_verbosity_level('ERROR'):
# prediction_files = sorted(tf.io.gfile.glob(predict_outputs_path + "*"))
model.batch_size = 8 # Min size for small model on v2-8 with parallelism 1.
model.predict(
input_file=predict_inputs_path,
output_file=predict_outputs_path,
# inputs=predict_inputs_path,
# targets=predict_outputs_path + '-' + str(prediction_files[-1].split("-")[-1]),
# Select the most probable output token at each step.
temperature=0,
)
print("Predicted task : " + task)
prediction_files = sorted(tf.io.gfile.glob(predict_outputs_path + "*"))
# print('score', score)
print("\nPredictions using checkpoint %s:\n" % prediction_files[-1].split("-")[-1])
# t5_training/t5-data/bio_data/bioasq4b/eval_data/4B1_factoid_predict_input.txt
```
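Setting `temperature=0` in `model.predict` makes decoding deterministic: at each step the most probable token is taken. A minimal pure-Python sketch of temperature-scaled sampling (not the T5/Mesh-TF implementation; the function and names are illustrative):

```python
import math
import random

def sample_token(logits, temperature, rng=random):
    """Pick a token id from `logits`. temperature=0 degenerates to argmax
    (greedy decoding); higher temperatures flatten the distribution."""
    if temperature == 0:
        return max(range(len(logits)), key=logits.__getitem__)
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1
```

With `temperature=0` the same input always yields the same output, which is what the prediction loop above relies on for reproducible evaluation.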
| github_jupyter |
```
import tensorflow as tf
import math
print('TensorFlow version: ' + tf.__version__)
from tensorflow.examples.tutorials.mnist import input_data as mnist_data
mnist = mnist_data.read_data_sets("../MNIST_data", one_hot=True, reshape=True, validation_size=0)
x_train = mnist.train.images # we will not be using these to feed in data
y_train = mnist.train.labels # instead we will be using next_batch function to train in batches
x_test = mnist.test.images
y_test = mnist.test.labels
print ('We have '+str(x_train.shape[0])+' training examples in dataset')
TUTORIAL_NAME = 'Tutorial3a'
MODEL_NAME = 'convnetTFonAndroid'
SAVED_MODEL_PATH = '../' + TUTORIAL_NAME+'_Saved_model/'
BATCH_SIZE = 100
KEEP_PROB = 0.75
LEARNING_RATE_MAX = 1e-1 #1e-2
LEARNING_RATE_MIN = 1e-1 #1e-5
TRAIN_STEPS = 600 #600 * 8
def getExponentiallyDecayingLR(TRAIN_STEPS):
# note: reads the current training step `i` from the enclosing scope
up = -math.log(LEARNING_RATE_MAX, 10)
lp = -math.log(LEARNING_RATE_MIN, 10)
return LEARNING_RATE_MIN + (LEARNING_RATE_MAX - LEARNING_RATE_MIN) * math.pow(10, (up - lp) * (i / (TRAIN_STEPS + 1e-9)))
for i in range(TRAIN_STEPS+1):
LEARNING_RATE = getExponentiallyDecayingLR(TRAIN_STEPS)
if i%(TRAIN_STEPS/10)==0:
print('Learning Rate: '+str(LEARNING_RATE))
keepProb = tf.placeholder(tf.float32)
lRate = tf.placeholder(tf.float32)
W1 = tf.Variable(tf.truncated_normal([6, 6, 1, 6], stddev=0.1))
B1 = tf.Variable(tf.constant(0.1, tf.float32, [6]))
W2 = tf.Variable(tf.truncated_normal([5, 5, 6, 12], stddev=0.1))
B2 = tf.Variable(tf.constant(0.1, tf.float32, [12]))
W3 = tf.Variable(tf.truncated_normal([4, 4, 12, 24], stddev=0.1))
B3 = tf.Variable(tf.constant(0.1, tf.float32, [24]))
W4 = tf.Variable(tf.truncated_normal([7 * 7 * 24, 200], stddev=0.1))
B4 = tf.Variable(tf.constant(0.1, tf.float32, [200]))
W5 = tf.Variable(tf.truncated_normal([200, 10], stddev=0.1))
B5 = tf.Variable(tf.constant(0.1, tf.float32, [10]))
# The model
X = tf.placeholder(tf.float32, [None, 28*28], name='modelInput')
X_image = tf.reshape(X, [-1, 28, 28, 1])
Y_ = tf.placeholder(tf.float32, [None, 10])
Y1 = tf.nn.relu(tf.nn.conv2d(X_image, W1, strides=[1, 1, 1, 1], padding='SAME') + B1)
Y2 = tf.nn.relu(tf.nn.conv2d(Y1, W2, strides=[1, 2, 2, 1], padding='SAME') + B2)
Y3 = tf.nn.relu(tf.nn.conv2d(Y2, W3, strides=[1, 2, 2, 1], padding='SAME') + B3)
YY = tf.reshape(Y3, shape=[-1, 7 * 7 * 24])
Y4 = tf.nn.relu(tf.matmul(YY, W4) + B4)
YY4 = tf.nn.dropout(Y4, keepProb)
Ylogits = tf.matmul(YY4, W5) + B5
Y = tf.nn.softmax(Ylogits)
#########
# separate inference path without dropout; kept distinct from the training
# logits so cross_entropy below still uses the dropout branch
Ylogits_inf = tf.matmul(Y4, W5) + B5
Y_inf = tf.nn.softmax(Ylogits_inf, name='modelOutput')
#########
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=Ylogits, labels=Y_))*100
correct_prediction = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
train_step = tf.train.AdamOptimizer(lRate).minimize(cross_entropy)
tf.set_random_seed(0)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
saver = tf.train.Saver()
for i in range(TRAIN_STEPS+1):
LEARNING_RATE = getExponentiallyDecayingLR(TRAIN_STEPS)
batch_X, batch_Y = mnist.train.next_batch(BATCH_SIZE)
sess.run(train_step, feed_dict={X: batch_X, Y_: batch_Y, lRate:LEARNING_RATE, keepProb:KEEP_PROB})
if i%100 == 0:
print('Latest learning rate is: '+str(LEARNING_RATE))
if i%100 == 0:
print('Training Step:' + str(i)
+ ' Accuracy = ' + str(sess.run(accuracy, feed_dict={X: x_test, Y_: y_test, keepProb:1.0}))
+ ' Loss = ' + str(sess.run(cross_entropy, {X: x_test, Y_: y_test}))
)
# uncomment this when learning rate is decreasing to save checkpoints
# if i%600 == 0:
# out = saver.save(sess, SAVED_MODEL_PATH + MODEL_NAME + '.ckpt', global_step=i)
tf.train.write_graph(sess.graph_def, SAVED_MODEL_PATH , MODEL_NAME + '.pbtxt')
tf.train.write_graph(sess.graph_def, SAVED_MODEL_PATH , MODEL_NAME + '.pb',as_text=False)
from tensorflow.python.tools import freeze_graph
# Freeze the graph
input_graph = SAVED_MODEL_PATH+MODEL_NAME+'.pb'
input_saver = ""
input_binary = True
input_checkpoint = SAVED_MODEL_PATH+MODEL_NAME+'.ckpt-'+str(TRAIN_STEPS) # change this value TRAIN_STEPS here as per your latest checkpoint saved
output_node_names = 'modelOutput'
restore_op_name = 'save/restore_all'
filename_tensor_name = 'save/Const:0'
output_graph = SAVED_MODEL_PATH+'frozen_'+MODEL_NAME+'.pb'
clear_devices = True
initializer_nodes = ""
variable_names_blacklist = ""
freeze_graph.freeze_graph(
input_graph,
input_saver,
input_binary,
input_checkpoint,
output_node_names,
restore_op_name,
filename_tensor_name,
output_graph,
clear_devices,
initializer_nodes,
variable_names_blacklist
)
```
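The learning-rate schedule above interpolates the exponent linearly in log10 space between `LEARNING_RATE_MAX` and `LEARNING_RATE_MIN`. A standalone sketch of the same formula (names are illustrative); note that because of the additive form, the value at the final step lands near `2 * lr_min` rather than exactly `lr_min` when `lr_max >> lr_min`:

```python
import math

def exponential_lr(step, train_steps, lr_min=1e-5, lr_max=1e-2):
    """Decay from lr_max at step 0 toward ~lr_min at step train_steps,
    interpolating the exponent linearly in log10 space."""
    up = -math.log(lr_max, 10)
    lp = -math.log(lr_min, 10)
    return lr_min + (lr_max - lr_min) * math.pow(10, (up - lp) * (step / (train_steps + 1e-9)))
```

At step 0 the exponent term is 10^0 = 1, so the schedule starts exactly at `lr_max` and decays monotonically from there.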
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = ''
import malaya_speech.train.model.alconformer as conformer
import malaya_speech.train.model.transducer as transducer
import malaya_speech
import tensorflow as tf
import numpy as np
subwords = malaya_speech.subword.load('transducer.subword')
featurizer = malaya_speech.tf_featurization.STTFeaturizer(
normalize_per_feature = True
)
X = tf.compat.v1.placeholder(tf.float32, [None, None], name = 'X_placeholder')
X_len = tf.compat.v1.placeholder(tf.int32, [None], name = 'X_len_placeholder')
batch_size = tf.shape(X)[0]
features = tf.TensorArray(dtype = tf.float32, size = batch_size, dynamic_size = True, infer_shape = False)
features_len = tf.TensorArray(dtype = tf.int32, size = batch_size)
init_state = (0, features, features_len)
def condition(i, features, features_len):
return i < batch_size
def body(i, features, features_len):
f = featurizer(X[i, :X_len[i]])
f_len = tf.shape(f)[0]
return i + 1, features.write(i, f), features_len.write(i, f_len)
_, features, features_len = tf.while_loop(condition, body, init_state)
features_len = features_len.stack()
padded_features = tf.TensorArray(dtype = tf.float32, size = batch_size)
padded_lens = tf.TensorArray(dtype = tf.int32, size = batch_size)
maxlen = tf.reduce_max(features_len)
init_state = (0, padded_features, padded_lens)
def condition(i, padded_features, padded_lens):
return i < batch_size
def body(i, padded_features, padded_lens):
f = features.read(i)
len_f = tf.shape(f)[0]
f = tf.pad(f, [[0, maxlen - tf.shape(f)[0]], [0,0]])
return i + 1, padded_features.write(i, f), padded_lens.write(i, len_f)
_, padded_features, padded_lens = tf.while_loop(condition, body, init_state)
padded_features = padded_features.stack()
padded_lens = padded_lens.stack()
padded_lens.set_shape((None,))
padded_features.set_shape((None, None, 80))
padded_features = tf.expand_dims(padded_features, -1)
padded_features, padded_lens
padded_features = tf.identity(padded_features, name = 'padded_features')
padded_lens = tf.identity(padded_lens, name = 'padded_lens')
config = malaya_speech.config.conformer_small_encoder_config
config['dropout'] = 0.0
conformer_model = conformer.Model(**config)
decoder_config = malaya_speech.config.conformer_small_decoder_config
decoder_config['embed_dropout'] = 0.0
transducer_model = transducer.rnn.Model(
conformer_model, vocabulary_size = subwords.vocab_size, **decoder_config
)
p = tf.compat.v1.placeholder(tf.int32, [None, None])
z = tf.zeros((tf.shape(p)[0], 1),dtype=tf.int32)
c = tf.concat([z, p], axis = 1)
p_len = tf.compat.v1.placeholder(tf.int32, [None])
c
training = True
logits = transducer_model([padded_features, c, p_len], training = training)
logits
sess = tf.Session()
sess.run(tf.global_variables_initializer())
var_list = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
saver = tf.train.Saver(var_list = var_list)
saver.restore(sess, 'asr-small-alconformer-transducer/model.ckpt-500000')
decoded = transducer_model.greedy_decoder(padded_features, padded_lens, training = training)
decoded = tf.identity(decoded, name = 'greedy_decoder')
decoded
encoded = transducer_model.encoder(padded_features, training = training)
encoded = tf.identity(encoded, name = 'encoded')
encoded_placeholder = tf.placeholder(tf.float32, [config['dmodel']], name = 'encoded_placeholder')
predicted_placeholder = tf.placeholder(tf.int32, None, name = 'predicted_placeholder')
t = transducer_model.predict_net.get_initial_state().shape
states_placeholder = tf.placeholder(tf.float32, [int(i) for i in t], name = 'states_placeholder')
ytu, new_states = transducer_model.decoder_inference(
encoded=encoded_placeholder,
predicted=predicted_placeholder,
states=states_placeholder,
training = training
)
ytu = tf.identity(ytu, name = 'ytu')
new_states = tf.identity(new_states, name = 'new_states')
ytu, new_states
initial_states = transducer_model.predict_net.get_initial_state()
initial_states = tf.identity(initial_states, name = 'initial_states')
# sess = tf.Session()
# sess.run(tf.global_variables_initializer())
# var_list = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
# saver = tf.train.Saver(var_list = var_list)
# saver.restore(sess, 'asr-small-conformer-transducer/model.ckpt-325000')
files = [
'speech/record/savewav_2020-11-26_22-36-06_294832.wav',
'speech/record/savewav_2020-11-26_22-40-56_929661.wav',
'speech/record/675.wav',
'speech/record/664.wav',
'speech/example-speaker/husein-zolkepli.wav',
'speech/example-speaker/mas-aisyah.wav',
'speech/example-speaker/khalil-nooh.wav',
'speech/example-speaker/shafiqah-idayu.wav',
'speech/khutbah/wadi-annuar.wav',
]
ys = [malaya_speech.load(f)[0] for f in files]
padded, lens = malaya_speech.padding.sequence_1d(ys, return_len = True)
import collections
import numpy as np
import tensorflow as tf
BeamHypothesis = collections.namedtuple(
'BeamHypothesis', ('score', 'prediction', 'states')
)
def transducer(
enc,
total,
initial_states,
encoded_placeholder,
predicted_placeholder,
states_placeholder,
ytu,
new_states,
sess,
beam_width = 10,
norm_score = True,
):
kept_hyps = [
BeamHypothesis(score = 0.0, prediction = [0], states = initial_states)
]
B = kept_hyps
for i in range(total):
A = B
B = []
while True:
y_hat = max(A, key = lambda x: x.score)
A.remove(y_hat)
ytu_, new_states_ = sess.run(
[ytu, new_states],
feed_dict = {
encoded_placeholder: enc[i],
predicted_placeholder: y_hat.prediction[-1],
states_placeholder: y_hat.states,
},
)
for k in range(ytu_.shape[0]):
beam_hyp = BeamHypothesis(
score = (y_hat.score + float(ytu_[k])),
prediction = y_hat.prediction,
states = y_hat.states,
)
if k == 0:
B.append(beam_hyp)
else:
beam_hyp = BeamHypothesis(
score = beam_hyp.score,
prediction = (beam_hyp.prediction + [int(k)]),
states = new_states_,
)
A.append(beam_hyp)
if len(B) > beam_width:
break
if norm_score:
kept_hyps = sorted(
B, key = lambda x: x.score / len(x.prediction), reverse = True
)[:beam_width]
else:
kept_hyps = sorted(B, key = lambda x: x.score, reverse = True)[
:beam_width
]
return kept_hyps[0].prediction
%%time
r = sess.run(decoded, feed_dict = {X: padded, X_len: lens})
for row in r:
print(malaya_speech.subword.decode(subwords, row[row > 0]))
%%time
encoded_, padded_lens_ = sess.run([encoded, padded_lens], feed_dict = {X: padded, X_len: lens})
padded_lens_ = padded_lens_ // conformer_model.conv_subsampling.time_reduction_factor
s = sess.run(initial_states)
for i in range(len(encoded_)):
r = transducer(
enc = encoded_[i],
total = padded_lens_[i],
initial_states = s,
encoded_placeholder = encoded_placeholder,
predicted_placeholder = predicted_placeholder,
states_placeholder = states_placeholder,
ytu = ytu,
new_states = new_states,
sess = sess,
beam_width = 1,
)
print(malaya_speech.subword.decode(subwords, r))
encoded = transducer_model.encoder_inference(padded_features[0])
g = transducer_model._perform_greedy(encoded, tf.shape(encoded)[0],
tf.constant(0, dtype = tf.int32),
transducer_model.predict_net.get_initial_state())
g
indices = g.prediction
minus_one = -1 * tf.ones_like(indices, dtype=tf.int32)
blank_like = 0 * tf.ones_like(indices, dtype=tf.int32)
indices = tf.where(tf.equal(indices, minus_one), blank_like, indices)
num_samples = tf.cast(tf.shape(X[0])[0], dtype=tf.float32)
total_time_reduction_factor = featurizer.frame_step
stime = tf.range(0, num_samples, delta=total_time_reduction_factor, dtype=tf.float32)
stime /= tf.cast(featurizer.sample_rate, dtype=tf.float32)
etime = tf.range(total_time_reduction_factor, num_samples, delta=total_time_reduction_factor, dtype=tf.float32)
etime /= tf.cast(featurizer.sample_rate, dtype=tf.float32)
non_blank = tf.where(tf.not_equal(indices, 0))
non_blank_transcript = tf.gather_nd(indices, non_blank)
non_blank_stime = tf.gather_nd(tf.repeat(tf.expand_dims(stime, axis=-1), tf.shape(indices)[-1], axis=-1), non_blank)[:,0]
non_blank_transcript = tf.identity(non_blank_transcript, name = 'non_blank_transcript')
non_blank_stime = tf.identity(non_blank_stime, name = 'non_blank_stime')
%%time
r = sess.run([non_blank_transcript, non_blank_stime], feed_dict = {X: padded[:1], X_len: lens[:1]})
list(zip([subwords._id_to_subword(row - 1) for row in r[0]], r[1]))
saver = tf.train.Saver()
saver.save(sess, 'output-small-alconformer/model.ckpt')
strings = ','.join(
[
n.name
for n in tf.get_default_graph().as_graph_def().node
if ('Variable' in n.op
or 'gather' in n.op.lower()
or 'placeholder' in n.name
or 'encoded' in n.name
or 'decoder' in n.name
or 'ytu' in n.name
or 'new_states' in n.name
or 'padded_' in n.name
or 'initial_states' in n.name
or 'non_blank' in n.name)
and 'adam' not in n.name
and 'global_step' not in n.name
and 'Assign' not in n.name
and 'ReadVariableOp' not in n.name
and 'Gather' not in n.name
]
)
strings.split(',')
def freeze_graph(model_dir, output_node_names):
if not tf.gfile.Exists(model_dir):
raise AssertionError(
"Export directory doesn't exists. Please specify an export "
'directory: %s' % model_dir
)
checkpoint = tf.train.get_checkpoint_state(model_dir)
input_checkpoint = checkpoint.model_checkpoint_path
absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
output_graph = absolute_model_dir + '/frozen_model.pb'
clear_devices = True
with tf.Session(graph = tf.Graph()) as sess:
saver = tf.train.import_meta_graph(
input_checkpoint + '.meta', clear_devices = clear_devices
)
saver.restore(sess, input_checkpoint)
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
tf.get_default_graph().as_graph_def(),
output_node_names.split(','),
)
with tf.gfile.GFile(output_graph, 'wb') as f:
f.write(output_graph_def.SerializeToString())
print('%d ops in the final graph.' % len(output_graph_def.node))
freeze_graph('output-small-alconformer', strings)
def load_graph(frozen_graph_filename):
with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def)
return graph
g = load_graph('output-small-alconformer/frozen_model.pb')
input_nodes = [
'X_placeholder',
'X_len_placeholder',
'encoded_placeholder',
'predicted_placeholder',
'states_placeholder',
]
output_nodes = [
'greedy_decoder',
'encoded',
'ytu',
'new_states',
'padded_features',
'padded_lens',
'initial_states',
'non_blank_transcript',
'non_blank_stime'
]
inputs = {n: g.get_tensor_by_name(f'import/{n}:0') for n in input_nodes}
outputs = {n: g.get_tensor_by_name(f'import/{n}:0') for n in output_nodes}
test_sess = tf.Session(graph = g)
r = test_sess.run(outputs['greedy_decoder'], feed_dict = {inputs['X_placeholder']: padded,
inputs['X_len_placeholder']: lens})
for row in r:
print(malaya_speech.subword.decode(subwords, row[row > 0]))
encoded_, padded_lens_, s = test_sess.run([outputs['encoded'], outputs['padded_lens'], outputs['initial_states']],
feed_dict = {inputs['X_placeholder']: padded,
inputs['X_len_placeholder']: lens})
padded_lens_ = padded_lens_ // conformer_model.conv_subsampling.time_reduction_factor
i = 0
r = transducer(
enc = encoded_[i],
total = padded_lens_[i],
initial_states = s,
encoded_placeholder = inputs['encoded_placeholder'],
predicted_placeholder = inputs['predicted_placeholder'],
states_placeholder = inputs['states_placeholder'],
ytu = outputs['ytu'],
new_states = outputs['new_states'],
sess = test_sess,
beam_width = 1,
)
malaya_speech.subword.decode(subwords, r)
from tensorflow.tools.graph_transforms import TransformGraph
transforms = ['add_default_attributes',
'remove_nodes(op=Identity, op=CheckNumerics, op=Dropout)',
'fold_batch_norms',
'fold_old_batch_norms',
'quantize_weights(fallback_min=-10, fallback_max=10)',
'strip_unused_nodes',
'sort_by_execution_order']
pb = 'output-small-alconformer/frozen_model.pb'
input_graph_def = tf.GraphDef()
with tf.gfile.FastGFile(pb, 'rb') as f:
input_graph_def.ParseFromString(f.read())
transformed_graph_def = TransformGraph(input_graph_def,
input_nodes,
output_nodes, transforms)
with tf.gfile.GFile(f'{pb}.quantized', 'wb') as f:
f.write(transformed_graph_def.SerializeToString())
g = load_graph('output-small-alconformer/frozen_model.pb.quantized')
inputs = {n: g.get_tensor_by_name(f'import/{n}:0') for n in input_nodes}
outputs = {n: g.get_tensor_by_name(f'import/{n}:0') for n in output_nodes}
test_sess = tf.Session(graph = g)
r = test_sess.run(outputs['greedy_decoder'], feed_dict = {inputs['X_placeholder']: padded,
inputs['X_len_placeholder']: lens})
for row in r:
print(malaya_speech.subword.decode(subwords, row[row > 0]))
encoded_, padded_lens_, s = test_sess.run([outputs['encoded'], outputs['padded_lens'], outputs['initial_states']],
feed_dict = {inputs['X_placeholder']: padded,
inputs['X_len_placeholder']: lens})
padded_lens_ = padded_lens_ // conformer_model.conv_subsampling.time_reduction_factor
i = 0
r = transducer(
enc = encoded_[i],
total = padded_lens_[i],
initial_states = s,
encoded_placeholder = inputs['encoded_placeholder'],
predicted_placeholder = inputs['predicted_placeholder'],
states_placeholder = inputs['states_placeholder'],
ytu = outputs['ytu'],
new_states = outputs['new_states'],
sess = test_sess,
beam_width = 1,
)
malaya_speech.subword.decode(subwords, r)
```
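The greedy decoder used above (via `transducer_model.greedy_decoder`) follows the standard RNN-T control flow: at each encoder frame, emit the best-scoring non-blank token (feeding it back as the previous label) until blank wins, then advance to the next frame. A toy, framework-free sketch of that loop, where `score_fn` is a stand-in for the joint network:

```python
def greedy_transducer_decode(frames, score_fn, blank=0, max_symbols_per_frame=10):
    """Toy RNN-T greedy decode: `score_fn(frame, last_token)` returns one
    score per vocabulary entry (a stand-in for the joint network)."""
    prediction = []
    last = blank
    for frame in frames:
        # emit tokens until blank wins (bounded to avoid infinite loops)
        for _ in range(max_symbols_per_frame):
            scores = score_fn(frame, last)
            k = max(range(len(scores)), key=scores.__getitem__)
            if k == blank:
                break
            prediction.append(k)
            last = k
    return prediction
```

The beam-search `transducer` function above generalizes this by keeping `beam_width` hypotheses alive instead of just the single best one.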
# Providing your notebook
## It's all JSON
```
!head -20 tour.ipynb
```
# Deliver as HTML
## nbconvert
$ jupyter nbconvert tour.ipynb
[NbConvertApp] Converting notebook tour.ipynb to html
[NbConvertApp] Writing 219930 bytes to tour.html
[tour.html](tour.html)
## hosted static
### Raw HTML online
### [nbviewer](https://github.com/jupyter/nbviewer)
### Github
## Download link
# Live notebooks
## Hosted live notebooks
[Wakari](https://wakari.io/)
## Notebook server
$ jupyter notebook
[I 09:28:21.390 NotebookApp] Serving notebooks from local directory: /Users/catherinedevlin/werk/tech-talks/jupyter-notebook
[I 09:28:21.390 NotebookApp] 0 active kernels
[I 09:28:21.390 NotebookApp] The IPython Notebook is running at: http://localhost:8888/
[I 09:28:21.390 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
## [tmpnb](https://github.com/jupyter/tmpnb)
Service that launches Docker containers
[try.jupyter.org](https://try.jupyter.org)
## [Thebe](https://oreillymedia.github.io/thebe/)
## S5 slideshow
$ jupyter nbconvert --to slides tour.ipynb
[NbConvertApp] Converting notebook tour.ipynb to slides
[NbConvertApp] Writing 223572 bytes to tour.slides.html
[tour.slides.html](tour.slides.html)
<table> <tr><th></th><th>Description</th><th>Power</th>
<th>Ease for user</th><th>Ease for you</th></tr>
<tr><td><a href="https://ipython.org/ipython-doc/3/notebook/nbconvert.html">jupyter nbconvert</a></td><td>Generate static HTML from notebook</td>
<td class="rating power">+</td>
<td class="rating user-ease">+++</td>
<td class="rating dev-ease">++</td>
</tr>
<tr><td><a href="http://nbviewer.ipython.org">nbviewer</a></td><td>Renders from a JSON URL</td>
<td class="rating power">+</td>
<td class="rating user-ease">+++</td>
<td class="rating dev-ease">+++</td>
</tr>
<tr><td>GitHub</td>
<td>Automatic rendering at GitHub URL</td>
<td class="rating power">+</td>
<td class="rating user-ease">+++</td>
<td class="rating dev-ease">++++</td>
</tr>
<tr><td>Download link</td><td>From any rendered notebook</td>
<td class="rating power">++++</td>
<td class="rating user-ease">+</td>
<td class="rating dev-ease">+++</td>
</tr>
<tr><td>Commercially hosted</td><td><a href="https://wakari.io/">Wakari</a>, etc.</td>
<td class="rating power">+++</td>
<td class="rating user-ease">+++</td>
<td class="rating dev-ease">+++</td>
</tr>
<tr><td><a href="http://jupyter-notebook.readthedocs.org/en/latest/public_server.html">Notebook server</a></td><td></td>
<td class="rating power">+++</td>
<td class="rating user-ease">+++</td>
<td class="rating dev-ease">++</td>
</tr>
<tr><td><a href="https://github.com/jupyter/tmpnb">tmpnb</a>
<a href="https://try.jupyter.org/">(try.jupyter.org)</a></td>
<td>Dockerized notebook server per connection</td>
<td class="rating power">+++</td>
<td class="rating user-ease">+++</td>
<td class="rating dev-ease">++</td>
</tr>
<tr><td><a href="https://oreillymedia.github.io/thebe/">Thebe</a></td><td>Connects HTML to notebook server</td>
<td class="rating power">+++</td>
<td class="rating user-ease">+++</td>
<td class="rating dev-ease">++</td>
</tr>
<tr><td><a href="https://ipython.org/ipython-doc/3/notebook/nbconvert.html">jupyter nbconvert --to slides</a></td><td>S5</td>
<td class="rating power">+</td>
<td class="rating user-ease">++++</td>
<td class="rating dev-ease">+++</td>
</tr>
<tr><td><a href="https://github.com/damianavila/RISE">RISE</a></td><td>S5 with executable cells</td>
<td class="rating power">+++</td>
<td class="rating user-ease">++++</td>
<td class="rating dev-ease">++</td>
</tr>
</table>
```
from pathlib import Path
from matplotlib import rcParams
rcParams['font.family'] = 'sans-serif'
rcParams['font.sans-serif'] = ['Arial']
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pyprojroot
import seaborn as sns
import searchnets
def cm_to_inches(cm):
return cm / 2.54
mpl.style.use(['seaborn-darkgrid', 'seaborn-paper'])
```
paths
```
SOURCE_DATA_ROOT = pyprojroot.here('results/searchstims/source_data/3stims_white_background')
FIGURES_ROOT = pyprojroot.here('docs/paper/figures/experiment-1/searchstims-3stims-white-background')
```
constants
```
LEARNING_RATE = 1e-3
NET_NAMES = [
'alexnet',
]
METHODS = [
'initialize',
'transfer'
]
MODES = [
'classify',
]
```
## load source data
Get just the transfer learning results, then group by network, stimulus, and set size, and compute the mean accuracy for each set size.
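The grouping described above can be sketched with a plain pandas `groupby` (column names follow the dataframe used later in this notebook; the helper name is illustrative):

```python
import pandas as pd

def mean_acc_by_set_size(df):
    """Keep only transfer-learning results, then average accuracy per
    (network, stimulus, set size) across training replicates."""
    transfer = df[df['method'] == 'transfer']
    return (transfer
            .groupby(['net_name', 'stimulus', 'set_size'])['accuracy']
            .mean()
            .reset_index())
```

Each row of the result is one (network, stimulus, set size) cell, with accuracy averaged over the trained replicates in that cell.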
```
df_all = pd.read_csv(
SOURCE_DATA_ROOT.joinpath('all.csv')
)
stim_acc_diff_df = pd.read_csv(
SOURCE_DATA_ROOT.joinpath('stim_acc_diff.csv')
)
net_acc_diff_df = pd.read_csv(
SOURCE_DATA_ROOT.joinpath('net_acc_diff.csv')
)
df_acc_diff_by_stim = pd.read_csv(
SOURCE_DATA_ROOT.joinpath('acc_diff_by_stim.csv'),
index_col='net_name'
)
```
columns will be stimuli, in increasing order of accuracy drop across models
```
# not sure why my sorting isn't working right in the script that generates source data,
# but it's clear by eye the effect size is in the right order
# so I'm just typing them manually
FIG_COLUMNS = ['RVvGV', 'RVvRHGV', '2_v_5']
```
rows will be nets, in decreasing order of accuracy drops across stimuli
```
FIG_ROWS = net_acc_diff_df['net_name'].values.tolist()
print(FIG_ROWS)
```
## plot figure
```
pal = sns.color_palette("rocket", n_colors=6)
len(pal)
cmaps = {}
for net in ('alexnet', 'CORnet_Z', 'VGG16', 'CORnet_S'):
cmaps[net] = {
'transfer': {
'unit_both': pal[3],
'mn_both': pal[2],
},
'initialize': {
'unit_both': pal[5],
'mn_both': pal[4],
}
}
UNIT_COLORS = {
'present': 'violet',
'absent': 'lightgreen',
'both': 'darkgrey'
}
# default colors used for plotting mean across sampling units in each condition
MN_COLORS = {
'present': 'magenta',
'absent': 'lawngreen',
'both': 'black'
}
def metric_v_set_size_df(df, net_name, method, stimulus, metric, conditions,
unit_colors=UNIT_COLORS, mn_colors=MN_COLORS,
ax=None, title=None, save_as=None, figsize=(10, 5),
set_xlabel=False, set_ylabel=False, set_ylim=True,
ylim=(0, 1.1), yticks=None, plot_mean=True, add_legend=False, dpi=600):
"""plot accuracy as a function of visual search task set size
for models trained on a single task or dataset
Accepts a Pandas dataframe and column names that determine what to plot.
Dataframe is produced by the searchstims.utils.general.results_csv function.
Parameters
----------
df : pandas.Dataframe
dataframe of results, loaded from the results.gz file saved after measuring
accuracy of trained networks on the test set of visual search stimuli
net_name : str
name of neural net architecture. Must be a value in the 'net_name' column
of df.
method : str
method used for training. Must be a value in the 'method' column of df.
stimulus : str
type of visual search stimulus, e.g. 'RVvGV', '2_v_5'. Must be a value in
the 'stimulus' column of df.
metric : str
metric to plot. One of {'acc', 'd_prime'}.
conditions : list, str
conditions to plot. One of {'present', 'absent', 'both'}. Corresponds to
'target_condition' column in df.
Other Parameters
----------------
unit_colors : dict
mapping of conditions to colors used for plotting 'sampling units', i.e. each trained
network. Default is UNIT_COLORS defined in this module.
mn_colors : dict
mapping of conditions to colors used for plotting mean across 'sampling units'
(i.e., each trained network). Default is MN_COLORS defined in this module.
ax : matplotlib.Axis
axis on which to plot figure. Default is None, in which case a new figure with
a single axis is created for the plot.
title : str
string to use as title of figure. Default is None.
save_as : str
path to directory where figure should be saved. Default is None, in which
case figure is not saved.
figsize : tuple
(width, height) in inches. Default is (10, 5). Only used if ax is None and a new
figure is created.
set_xlabel : bool
if True, set the value of xlabel to "set size". Default is False.
set_ylabel : bool
if True, set the value of ylabel to metric. Default is False.
set_ylim : bool
if True, set the y-axis limits to the value of ylim.
ylim : tuple
with two elements, limits for y-axis. Default is (0, 1.1).
plot_mean : bool
if True, find mean accuracy and plot as a separate solid line. Default is True.
add_legend : bool
if True, add legend to axis. Default is False.
Returns
-------
None
"""
if ax is None:
fig, ax = plt.subplots(dpi=dpi, figsize=figsize)
df = df[(df['net_name'] == net_name)
& (df['method'] == method)
& (df['stimulus'] == stimulus)]
if not all(
[df['target_condition'].isin([targ_cond]).any() for targ_cond in conditions]
):
raise ValueError(f'not all target conditions specified were found in dataframe.'
f'Target conditions specified were: {conditions}')
handles = []
labels = []
set_sizes = None # because we verify set sizes is the same across conditions
net_nums = df['net_number'].unique()
# get metric across set sizes for each training replicate
# we end up with a list of vectors we can pass to ax.plot,
# so that the 'line' for each training replicate gets plotted
for targ_cond in conditions:
metric_vals = []
for net_num in net_nums:
metric_vals.append(
df[(df['net_number'] == net_num)
& (df['target_condition'] == targ_cond)][metric].values
)
curr_set_size = df[(df['net_number'] == net_num)
& (df['target_condition'] == targ_cond)]['set_size'].values
if set_sizes is None:
set_sizes = curr_set_size
else:
if not np.array_equal(set_sizes, curr_set_size):
raise ValueError(
f'set size for net number {net_num}, '
f'target condition {targ_cond}, did not match others'
)
for row_num, arr_metric in enumerate(metric_vals):
x = np.arange(1, len(set_sizes) + 1) * 2
# just label first row, so only one entry shows up in legend
if row_num == 0:
label = f'training replicate, {method}'
else:
label = None
ax.plot(x, arr_metric, color=unit_colors[targ_cond], linewidth=1,
linestyle='--', alpha=0.95, label=label)
ax.set_xticks(x)
ax.set_xticklabels(set_sizes)
ax.set_xlim([0, x.max() + 2])
if plot_mean:
mn_metric = np.asarray(metric_vals).mean(axis=0)
if targ_cond == 'both':
mn_metric_label = f'mean, {method}'
else:
mn_metric_label = f'mean {metric}, {targ_cond}, {method}'
labels.append(mn_metric_label)
mn_metric_line, = ax.plot(x, mn_metric,
color=mn_colors[targ_cond], linewidth=1.5,
alpha=0.65,
label=mn_metric_label)
ax.set_xticks(x)
ax.set_xticklabels(set_sizes)
ax.set_xlim([0, x.max() + 2])
handles.append(mn_metric_line)
if title:
ax.set_title(title)
if set_xlabel:
ax.set_xlabel('set size')
if set_ylabel:
ax.set_ylabel(metric)
if yticks is not None:
ax.set_yticks(yticks)
if set_ylim:
ax.set_ylim(ylim)
if add_legend:
ax.legend(handles=handles,
labels=labels,
loc='lower left')
if save_as:
plt.savefig(save_as)
FIGSIZE = tuple(cm_to_inches(size) for size in (7, 2.5))
DPI = 300
n_rows = len(FIG_ROWS)
n_cols = len(FIG_COLUMNS)
fig, ax = plt.subplots(n_rows, n_cols, sharey=True, figsize=FIGSIZE, dpi=DPI)
ax = ax.reshape(n_rows, n_cols)
fig.subplots_adjust(hspace=0.5)
LABELSIZE = 6
XTICKPAD = 2
YTICKPAD = 1
for ax_ in ax.ravel():
ax_.xaxis.set_tick_params(pad=XTICKPAD, labelsize=LABELSIZE)
ax_.yaxis.set_tick_params(pad=YTICKPAD, labelsize=LABELSIZE)
STIM_FONTSIZE = 4
add_legend = False
for row, net_name in enumerate(FIG_ROWS):
df_this_net = df_all[df_all['net_name'] == net_name]
for method in ['transfer', 'initialize']:
for col, stim_name in enumerate(FIG_COLUMNS):
unit_colors = {'both': cmaps[net_name][method]['unit_both']}
mn_colors = {'both': cmaps[net_name][method]['mn_both']}
ax[row, col].set_axisbelow(True) # so grid is behind
metric_v_set_size_df(df=df_this_net,
net_name=net_name,
method=method,
stimulus=stim_name,
metric='accuracy',
conditions=['both'],
unit_colors=unit_colors,
mn_colors=mn_colors,
set_ylim=True,
ax=ax[row, col],
ylim=(0.4, 1.1),
yticks=(0.5, 0.6, 0.7, 0.8, 0.9, 1.0),
add_legend=add_legend)
if row == 0:
title = stim_name.replace('_',' ')
ax[row, col].set_title(title,
fontsize=STIM_FONTSIZE,
pad=5) # pad so we can put image over title without it showing
if col == 0:
ax[row, col].set_ylabel('accuracy')
net_name_for_fig = net_name.replace('_', ' ')
ax[row, col].text(0, 0.15, net_name_for_fig, fontweight='bold', fontsize=8)
# add a big axis, hide frame
big_ax = fig.add_subplot(111, frameon=False)
# hide tick and tick label of the big axis
big_ax.tick_params(labelcolor='none', top=False, bottom=False, left=False, right=False)
big_ax.grid(False)
handles, labels = ax[0, 0].get_legend_handles_labels()
LEGEND_FONTSIZE = 4
BBOX_TO_ANCHOR = (0.0125, 0.2, 0.8, .075)
big_ax.legend(handles, labels,
bbox_to_anchor=BBOX_TO_ANCHOR,
ncol=2, mode="expand", frameon=True,
borderaxespad=0., fontsize=LEGEND_FONTSIZE);
big_ax.set_xlabel("set size", labelpad=0.1);
for ext in ('svg', 'png'):
fig_path = FIGURES_ROOT.joinpath(
f'acc-v-set-size/acc-v-set-size.{ext}'
)
plt.savefig(fig_path)
```
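The "big invisible axis" used above to give the whole subplot grid one shared x-axis label and one legend is a reusable matplotlib trick; a minimal sketch (the function name is illustrative):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import matplotlib.pyplot as plt

def grid_with_shared_xlabel(n_rows, n_cols, xlabel):
    """Create a subplot grid plus a frameless, tick-less axis spanning the
    whole figure, used only to carry a shared x-axis label (or legend)."""
    fig, ax = plt.subplots(n_rows, n_cols, sharey=True)
    big_ax = fig.add_subplot(111, frameon=False)
    # hide the big axis's ticks and tick labels so only the label shows
    big_ax.tick_params(labelcolor='none', top=False, bottom=False,
                       left=False, right=False)
    big_ax.grid(False)
    big_ax.set_xlabel(xlabel)
    return fig, ax, big_ax
```

Because the big axis spans the full figure, its xlabel sits centered under all columns, which is hard to achieve with per-subplot labels.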
<!---------------------- Introduction Section ------------------->
<h1> PTRAIL: A <b><i>P</i></b>arallel
<b><i>TR</i></b>ajectory
d<b><i>A</i></b>ta
preprocess<b><i>I</i></b>ng
<b><i>L</i></b>ibrary
</h1>
<h2> Introduction </h2>
<p align='justify'>
PTRAIL is a state-of-the-art Mobility Data Preprocessing Library that deals mainly with filtering data, generating features, and interpolating Trajectory Data.
<b><i> The main features of PTRAIL are: </i></b>
<ol align='justify'>
<li> PTRAIL primarily uses parallel computation based on
python pandas and numpy, which makes it very fast compared
to other available libraries.
</li>
<li> PTRAIL harnesses the full power of the machine that
it is running on by using all the cores available in the
computer.
</li>
<li> PTRAIL uses a customized DataFrame built on top of python
pandas for representation and storage of Trajectory Data.
</li>
<li> PTRAIL also provides several Temporal and kinematic features
which are calculated mostly using parallel computation for very
fast and accurate calculations.
</li>
<li> Moreover, PTRAIL also provides several filtering and
outlier detection methods for cleaning and noise reduction of
the Trajectory Data.
</li>
<li> Apart from the features mentioned above, <i><b> four </b></i>
different kinds of Trajectory Interpolation techniques are
offered by PTRAIL which is a first in the community.
</li>
</ol>
</p>
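PTRAIL's actual parallel internals are not shown in this notebook; purely as an illustration of the idea of fanning per-trajectory work out over all CPU cores, a standard-library sketch might look like this (the `point_count` helper and the input layout are hypothetical, not PTRAIL API):

```python
# Illustrative sketch only, NOT PTRAIL's actual internals: fanning a
# per-trajectory computation out over all CPU cores with the standard
# library. The point_count helper and the input layout are hypothetical.
import os
from concurrent.futures import ProcessPoolExecutor

def point_count(item):
    # Hypothetical per-trajectory work: count the recorded points.
    traj_id, points = item
    return traj_id, len(points)

def run_parallel(trajectories):
    with ProcessPoolExecutor(max_workers=os.cpu_count()) as pool:
        return dict(pool.map(point_count, trajectories.items()))

if __name__ == '__main__':
    demo = {'gull-1': [(39.98, 116.31), (39.99, 116.32)], 'gull-2': [(40.0, 116.4)]}
    print(run_parallel(demo))
```

PTRAIL itself exposes this behaviour through its own feature functions; the sketch above only conveys the general pattern.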
<!----------------- Dataset Link Section --------------------->
<hr style="height:6px;background-color:black">
<p align='justify'>
In the introduction of the library, the seagulls dataset is used
which can be downloaded from the link below: <br>
<span> ↪ </span>
<a href="https://github.com/YakshHaranwala/PTRAIL/blob/main/examples/data/gulls.csv" target='_blank'> Seagulls Dataset </a>
</p>
<!----------------- NbViewer Link ---------------------------->
<hr style="height:6px;background-color:black">
<p align='justify'>
Note: Viewing this notebook in GitHub will not render JavaScript
elements. Hence, for a better experience, click the link below
to open the Jupyter notebook in NB viewer.
<span> ↪ </span>
<a href="https://nbviewer.jupyter.org/github/YakshHaranwala/PTRAIL/blob/main/examples/0.%20Intro%20to%20PTRAIL.ipynb" target='_blank'> Click Here </a>
</p>
<!------------------------- Documentation Link ----------------->
<hr style="height:6px;background-color:black">
<p align='justify'>
The link to PTRAIL's documentation is: <br>
<span> ↪ </span>
<a href='https://PTRAIL.readthedocs.io/en/latest/' target='_blank'> <i> PTRAIL Documentation </i> </a>
<hr style="height:6px;background-color:black">
</p>
<h2> Importing Trajectory Data into a PTRAILDataFrame Dataframe </h2>
<p align='justify'>
PTRAIL Library stores Mobility Data (Trajectories) in a specialised
pandas Dataframe structure called PTRAILDataFrame. As a result, the following
constraints are enforced for data to be stored in a PTRAILDataFrame.
<ol align='justify'>
<li>
Firstly, for a mobility dataset to work with the PTRAIL Library, it needs
to have the following mandatory columns present:
<ul type='square'>
<li> DateTime </li>
<li> Trajectory ID </li>
<li> Latitude </li>
<li> Longitude </li>
</ul>
</li>
<li>
Secondly, PTRAILDataFrame enforces a very specific constraint on the index of its
dataframes: a multi-index consisting of the
<b><i> Trajectory ID, DateTime </i></b> columns, because the operations of the
library depend on these two columns. As a result, it is recommended
not to change the index and to keep the multi-index of <b><i> Trajectory ID, DateTime </i></b>
at all times.
</li>
<li>
Note that since PTRAILDataFrame Dataframe is built on top of
python pandas, it does not have any restrictions on the number
of columns that the dataset has. The only requirement is that
the dataset should contain at least the four above-mentioned columns.
</li>
</ol>
</p>
<hr style="height:6px;background-color:black">
```
"""
METHOD - I:
1. Enter the trajectory data into a list.
2. Then, convert the list into a PTRAILDataFrame
Dataframe to be used with PTRAIL Library.
"""
import pandas as pd
from ptrail.core.TrajectoryDF import PTRAILDataFrame
list_data = [
[39.984094, 116.319236, '2008-10-23 05:53:05', 1],
[39.984198, 116.319322, '2008-10-23 05:53:06', 1],
[39.984224, 116.319402, '2008-10-23 05:53:11', 1],
[39.984224, 116.319404, '2008-10-23 05:53:11', 1],
[39.984224, 116.568956, '2008-10-23 05:53:11', 1],
[39.984224, 116.568956, '2008-10-23 05:53:11', 1]
]
list_df = PTRAILDataFrame(data_set=list_data,
latitude='lat',
longitude='lon',
datetime='datetime',
traj_id='id')
print(f"The dimensions of the dataframe:{list_df.shape}")
print(f"Type of the dataframe: {type(list_df)}")
"""
METHOD - II:
1. Enter the trajectory data into a dictionary.
2. Then, convert the dictionary into a PTRAILDataFrame
Dataframe to be used with PTRAIL Library.
"""
dict_data = {
'lat': [39.984198, 39.984224, 39.984094, 40.98, 41.256],
'lon': [116.319402, 116.319322, 116.319402, 116.3589, 117],
'datetime': ['2008-10-23 05:53:11', '2008-10-23 05:53:06', '2008-10-23 05:53:30', '2008-10-23 05:54:06', '2008-10-23 05:59:06'],
'id' : [1, 1, 1, 3, 3],
}
dict_df = PTRAILDataFrame(data_set=dict_data,
latitude='lat',
longitude='lon',
datetime='datetime',
traj_id='id')
print(f"The dimensions of the dataframe:{dict_df.shape}")
print(f"Type of the dataframe: {type(dict_df)}")
# Now, printing the head of the dataframe with data
# imported from a list.
list_df.head()
# Now, printing the head of the dataframe with data
# imported from a dictionary.
dict_df.head()
"""
METHOD - III:
1. First, import the seagulls dataset from the csv file
using pandas into a pandas dataframe.
2. Then, convert the pandas dataframe into a PTRAILDataFrame
DataFrame to be used with PTRAIL library.
"""
df = pd.read_csv('https://raw.githubusercontent.com/YakshHaranwala/PTRAIL/main/examples/data/gulls.csv')
seagulls_df = PTRAILDataFrame(data_set=df,
latitude='location-lat',
longitude='location-long',
datetime='timestamp',
traj_id='tag-local-identifier',
rest_of_columns=[])
print(f"The dimensions of the dataframe:{seagulls_df.shape}")
print(f"Type of the dataframe: {type(seagulls_df)}")
# Now, print the head of the seagulls_df dataframe.
seagulls_df.head()
```
<h1>Trajectory Feature Extraction </h1>
<p align='justify'>
As mentioned above, PTRAIL offers a multitude of features
which are calculated based on both Datetime, and the coordinates
of the points given in the data. Both the feature module are named
as follows:
</p>
<ul align='justify'>
<li> temporal_features (based on DateTime) </li>
<li> kinematic_features (based on geographical coordinates) </li>
</ul>
<hr style="background-color:black; height:7px">
<h2> PTRAIL Temporal Features </h2>
<p align='justify'>
The following steps are performed to demonstrate the usage of
Temporal features present in PTRAIL:
<ul type='square' align='justify'>
<li>Various features such as Date, Day of Week, and Time of Day are
calculated using the temporal_features.py module functions and
the results are appended to the original dataframe.
</li>
<li> Not all the functions present in the module are demonstrated
here; only a few are shown, keeping
the length of the jupyter notebook in mind. Further functions can
be explored in the documentation of the library. The documentation
link is provided in the introduction section of this notebook.
</li>
</ul>
</p>
```
%%time
"""
To demonstrate the temporal features, we will:
1. First, import the temporal_features.py module from the
features package.
2. Generate Date, Day_Of_Week, Time_Of_day features on
the seagulls dataset.
3. Print the execution time of the code.
4. Finally, check the head of the dataframe to
see the results of feature generation.
"""
from ptrail.features.temporal_features import TemporalFeatures as temporal
temporal_features_df = temporal.create_date_column(seagulls_df)
temporal_features_df = temporal.create_day_of_week_column(temporal_features_df)
temporal_features_df = temporal.create_time_of_day_column(temporal_features_df)
temporal_features_df.head()
```
<h2> PTRAIL kinematic Features </h2>
<p align='justify'>
The following steps are performed to demonstrate the usage of
kinematic features present in PTRAIL:
</p>
<ul type='square' align='justify'>
<li>Various features like Distance, Jerk and Rate of Bearing Rate are
calculated using the kinematic_features.py module functions and
the results are appended to the original dataframe.
</li>
<li> While calculating jerk, the acceleration, speed and
distance columns are all added to the dataframe. Similarly, when calculating
the rate of bearing rate, the Bearing and Bearing Rate columns are
added to the dataframe.
</li>
<li> Not all the functions present in the module are demonstrated
here; only a few are shown, keeping
the length of the jupyter notebook in mind. Further functions can
be explored in the documentation of the library. The documentation
link is provided in the introduction section of this notebook.
</li>
</ul>
```
%%time
"""
To demonstrate the kinematic features, we will:
1. First, import the kinematic_features.py module from the
features package.
2. Calculate Distance, Jerk and Rate of bearing rate features on
the seagulls dataset.
3. Print the execution time of the code.
4. Finally, check the head of the dataframe to
see the results of feature generation.
"""
from ptrail.features.kinematic_features import KinematicFeatures as kinematic
kinematic_features_df = kinematic.create_distance_column(seagulls_df)
kinematic_features_df = kinematic.create_jerk_column(kinematic_features_df)
kinematic_features_df = kinematic.create_rate_of_br_column(kinematic_features_df)
kinematic_features_df.head()
```
<h1> Trajectory Data Preprocessing </h1>
<p align = 'justify'>
In the form of preprocessing, PTRAIL provides:
</p>
<ul align = 'justify'>
<li> Outlier Detection </li>
<li> Data filtering based on various constraints </li>
<li> Trajectory Interpolation </li>
</ul>
<p> For interpolation the user needs to provide the type of interpolation. </p>
<hr style="background-color:black; height:7px">
<h2> Outlier Detection and Data Filtering </h2>
<p> PTRAIL provides several outlier detection methods based on: </p>
<ul align='justify'>
<li>Distance</li>
<li> Speed </li>
<li> Trajectory length </li>
<li> Hampel filter algorithm </li>
</ul>
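PTRAIL's own Hampel implementation is not reproduced here; as a rough illustration of the idea behind the Hampel filter (flag a point when it deviates from the rolling median by more than k scaled median absolute deviations), a minimal pandas sketch might look like:

```python
# Minimal sketch of the Hampel outlier test (not PTRAIL's implementation):
# a point is flagged when it deviates from the rolling median by more
# than k times the scaled median absolute deviation (MAD) of its window.
import pandas as pd

def hampel_mask(series, window=5, k=3.0):
    med = series.rolling(window, center=True, min_periods=1).median()
    dev = (series - med).abs()
    mad = dev.rolling(window, center=True, min_periods=1).median()
    return dev > k * 1.4826 * mad  # 1.4826 scales MAD to a std-dev estimate

lat = pd.Series([39.98, 39.99, 39.98, 45.00, 39.99, 39.98, 39.97])
print(hampel_mask(lat).tolist())  # only the 45.00 spike is flagged
```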
<p> It also provides several filtering methods based on various constraints such as: </p>
<ul type = 'square' align='justify'>
<li> Date </li>
<li> Speed </li>
<li> Distance, etc. </li>
</ul>
<p align = 'justify'> Not all the functions present in the module are demonstrated
here; only a few are shown, keeping
the length of the jupyter notebook in mind. Further functions can
be explored in the documentation of the library. The documentation
link is provided in the introduction section of this notebook.
</p>
```
%%time
"""
To demonstrate outlier detection and data filtering, we will:
1. First, import the filters.py module from the
preprocessing package.
2. Detect outliers and Filter out data based on
bounding box, date and distance.
3. Print the execution time of the code.
"""
from ptrail.preprocessing.filters import Filters as filt
# Makes use of hampel filter from preprocessing package for outlier removal
outlier_df = filt.hampel_outlier_detection(seagulls_df, column_name='lat')
print(f"Length of original: {len(seagulls_df)}")
print(f"Length after outlier removal: {len(outlier_df)}")
print(f"Number of points removed: {len(seagulls_df) - len(outlier_df)}")
%%time
# Filters and gives out data contained within the given bounding box
filter_df_bbox = filt.filter_by_bounding_box(seagulls_df, (61, 24, 65, 25))
print(f"Length of original: {len(seagulls_df)}")
print(f"Length of data in the bounding box: {len(filter_df_bbox)}")
print(f"Number of points removed: {len(seagulls_df) - len(filter_df_bbox)}")
%%time
# Gives out the points contained within the given date range
filter_df_date = filt.filter_by_date(temporal_features_df, start_date='2009-05-28', end_date='2009-06-30')
print(f"Length of original: {len(temporal_features_df)}")
print(f"Length of data within specified date: {len(filter_df_date)}")
print(f"Number of points removed: {len(temporal_features_df) - len(filter_df_date)}")
%%time
# Filtered dataset with a given maximum distance
filter_df_distance = filt.filter_by_max_consecutive_distance(kinematic_features_df, max_distance=4500)
print(f"Length of original: {len(kinematic_features_df)}")
print(f"Length of Max distance Filtered DF: {len(filter_df_distance)}")
print(f"Number of points removed: {len(kinematic_features_df) - len(filter_df_distance)}")
```
<h2> Trajectory Interpolation </h2>
<p align='justify'>
As mentioned above, for the first time in the community, PTRAIL
offers <b><i> four different Interpolation Techniques </i></b> built
into it.
Trajectory Interpolation is widely used when the trajectory data
on hand is not clean and has several jumps in it, which make the
trajectory abrupt. Using interpolation techniques, those jumps
between the trajectory points can be filled in order to make the
trajectory smoother.
PTRAIL offers the following four interpolation techniques:
</p>
<ol align='justify'>
<i>
<li> Linear Interpolation </li>
<li> Cubic Interpolation </li>
<li> Random-Walk Interpolation </li>
<li> Kinematic Interpolation </li>
</i>
</ol>
The interpolation examples in this notebook are demonstrated
on a single trajectory selected from the seagulls dataset.
However, it is to be noted that the interpolation
works equally well on datasets containing several trajectories.
<h3> Note </h3>
<p align='justify'>
Furthermore, it is worthwhile to note that PTRAIL only
interpolates the 4 fundamental columns, i.e., <i><b> Latitude, Longitude,
DateTime and Trajectory ID </b></i>. Hence, the dataframes returned
by the interpolation methods contain only these 4
columns. This decision was made because it is generally not
possible to interpolate other kinds of
data that may be present in the dataset.
</p>
```
"""
First, the following operations are performed:
1. Import the necessary interpolation modules
from the preprocessing package.
2. Select a single trajectory ID from the seagulls
dataset and then plot it using folium.
3. The number of points having time jump greater than
4 hours between 2 points is also calculated and shown.
"""
import ptrail.utilities.constants as const
import folium
from ptrail.preprocessing.interpolation import Interpolation as ip
small_seagulls = seagulls_df.reset_index().loc[seagulls_df.reset_index()[const.TRAJECTORY_ID] == '91732'][[const.TRAJECTORY_ID, const.DateTime, const.LAT, const.LONG]]
time_del = small_seagulls.reset_index()[const.DateTime].diff().dt.total_seconds()
print((time_del > 3600*4).value_counts())
# Here, we plot the smaller trajectory on a folium map.
sw = small_seagulls[['lat', 'lon']].min().values.tolist()
ne = small_seagulls[['lat', 'lon']].max().values.tolist()
coords = list(zip(small_seagulls[const.LAT], small_seagulls[const.LONG]))  # list of (lat, lon) pairs for folium
m1 = folium.Map(location=[small_seagulls[const.LAT].iloc[0], small_seagulls[const.LONG].iloc[0]], zoom_start=1000)
folium.PolyLine(coords,
color='blue',
weight=2,
opacity=0.7).add_to(m1)
m1.fit_bounds([sw, ne])
m1
%%time
"""
Now, to demonstrate interpolation, the following steps
are taken:
1. Interpolate the selected trajectory using all of
the above mentioned techniques and print their
execution times along with them.
2. Then, plot all the trajectories side by side along
with the original trajectory on a scatter plot to
see how the time jumps are filled and the points
are inserted into the trajectory.
"""
# First, linear interpolation is performed.
linear_ip_gulls = ip.interpolate_position(dataframe=small_seagulls,
time_jump=3600*4,
ip_type='linear')
print(f"Original DF Length: {len(small_seagulls)}")
print(f"Interpolated DF Length: {len(linear_ip_gulls)}")
%%time
# Now, cubic interpolation is performed.
cubic_ip_gulls = ip.interpolate_position(dataframe=small_seagulls,
time_jump=3600*4,
ip_type='cubic')
print(f"Original DF Length: {len(small_seagulls)}")
print(f"Interpolated DF Length: {len(cubic_ip_gulls)}")
%%time
# Now, random-walk interpolation is performed.
rw_ip_gulls = ip.interpolate_position(dataframe=small_seagulls,
time_jump=3600*4,
ip_type='random-walk')
print(f"Original DF Length: {len(small_seagulls)}")
print(f"Interpolated DF Length: {len(rw_ip_gulls)}")
%%time
# Now, kinematic interpolation is performed.
kin_ip_gulls = ip.interpolate_position(dataframe=small_seagulls,
time_jump=3600*4,
ip_type='kinematic')
print(f"Original DF Length: {len(small_seagulls)}")
print(f"Interpolated DF Length: {len(kin_ip_gulls)}")
"""
Now, plotting all the scatter plots side by side
in order to compare the interpolation techniques.
"""
import matplotlib.pyplot as plt
plt.scatter(small_seagulls[const.LAT],
small_seagulls[const.LONG],
s=15, color='purple')
plt.title('Original Trajectory', color='black', size=15)
plt.show()
fig, ax = plt.subplots(nrows=2, ncols=2, figsize=(25, 20))
ax[0, 0].scatter(linear_ip_gulls[const.LAT],
linear_ip_gulls[const.LONG],
s=50, color='red')
ax[0, 0].set_title('Linear Interpolation', color='black', size=40)
ax[0, 1].scatter(cubic_ip_gulls[const.LAT],
cubic_ip_gulls[const.LONG],
s=50, color='orange')
ax[0, 1].set_title('Cubic Interpolation', color='black', size=40)
ax[1, 0].scatter(rw_ip_gulls[const.LAT],
rw_ip_gulls[const.LONG],
s=50, color='blue')
ax[1, 0].set_title('Random-Walk Interpolation', color='black', size=40)
ax[1, 1].scatter(kin_ip_gulls[const.LAT],
kin_ip_gulls[const.LONG],
s=50, color='brown')
ax[1, 1].set_title('Kinematic Interpolation', color='black', size=40)
for axis in ax.flat:
    axis.set_xlabel('Latitude', color='grey', size=25)
    axis.set_ylabel('Longitude', color='grey', size=25)
```
# Modeling and Simulation in Python
Chapter 10
Copyright 2017 Allen Downey
License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
```
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
from pandas import read_html
```
### Under the hood
To get a `DataFrame` and a `Series`, I'll read the world population data and select a column.
`DataFrame` and `Series` contain a variable called `shape` that indicates the number of rows and columns.
```
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
table2 = tables[2]
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
table2.shape
census = table2.census / 1e9
census.shape
un = table2.un / 1e9
un.shape
```
A `DataFrame` contains `index`, which labels the rows. It is an `Int64Index`, which is similar to a NumPy array.
```
table2.index
```
And `columns`, which labels the columns.
```
table2.columns
```
And `values`, which is an array of values.
```
table2.values
```
A `Series` does not have `columns`, but it does have `name`.
```
census.name
```
It contains `values`, which is an array.
```
census.values
```
And it contains `index`:
```
census.index
```
If you ever wonder what kind of object a variable refers to, you can use the `type` function. The result indicates what type the object is, and the module where that type is defined.
`DataFrame`, `Int64Index`, `Index`, and `Series` are defined by Pandas.
`ndarray` is defined by NumPy.
```
type(table2)
type(table2.index)
type(table2.columns)
type(table2.values)
type(census)
type(census.index)
type(census.values)
```
## Optional exercise
The following exercise provides a chance to practice what you have learned so far, and maybe develop a different growth model. If you feel comfortable with what we have done so far, you might want to give it a try.
**Optional Exercise:** On the Wikipedia page about world population estimates, the first table contains estimates for prehistoric populations. The following cells process this table and plot some of the results.
Select `tables[1]`, which is the second table on the page.
```
table1 = tables[1]
table1.head()
```
Not all agencies and researchers provided estimates for the same dates. Again `NaN` is the special value that indicates missing data.
```
table1.tail()
```
Some of the estimates are in a form we can't read as numbers. We could clean them up by hand, but for simplicity I'll replace any value that has an `M` in it with `NaN`.
```
table1.replace('M', np.nan, regex=True, inplace=True)
```
Again, we'll replace the long column names with more convenient abbreviations.
```
table1.columns = ['prb', 'un', 'maddison', 'hyde', 'tanton',
'biraben', 'mj', 'thomlinson', 'durand', 'clark']
```
This function plots selected estimates.
```
def plot_prehistory(table):
"""Plots population estimates.
table: DataFrame
"""
plot(table.prb, 'ro', label='PRB')
plot(table.un, 'co', label='UN')
plot(table.hyde, 'yo', label='HYDE')
plot(table.tanton, 'go', label='Tanton')
plot(table.biraben, 'bo', label='Biraben')
plot(table.mj, 'mo', label='McEvedy & Jones')
```
Here are the results. Notice that we are working in millions now, not billions.
```
plot_prehistory(table1)
decorate(xlabel='Year',
ylabel='World population (millions)',
title='Prehistoric population estimates')
```
We can use `xlim` to zoom in on everything after Year 0.
```
plot_prehistory(table1)
decorate(xlim=[0, 2000], xlabel='Year',
ylabel='World population (millions)',
title='Prehistoric population estimates')
```
See if you can find a model that fits these data well from Year -1000 to 1940, or from Year 1 to 1940.
How well does your best model predict actual population growth from 1950 to the present?
```
# Solution
def update_func_prop(pop, t, system):
"""Compute the population next year with proportional growth.
pop: current population
t: current year
system: system object containing parameters of the model
returns: population next year
"""
net_growth = system.alpha * pop
return pop + net_growth
# Solution
t_0 = 1
p_0 = table1.biraben[t_0]
prehistory = System(t_0=t_0,
t_end=2016,
p_0=p_0,
alpha=0.0011)
# Solution
def run_simulation(system, update_func):
"""Simulate the system using any update function.
system: System object
update_func: function that computes the population next year
returns: TimeSeries
"""
results = TimeSeries()
results[system.t_0] = system.p_0
for t in linrange(system.t_0, system.t_end):
results[t+1] = update_func(results[t], t, system)
return results
# Solution
results = run_simulation(prehistory, update_func_prop)
plot_prehistory(table1)
plot(results, color='gray', label='model')
decorate(xlim=[0, 2000], xlabel='Year',
ylabel='World population (millions)',
title='Prehistoric population estimates')
# Solution
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results / 1000, color='gray', label='model')
decorate(xlim=[1950, 2016], xlabel='Year',
ylabel='World population (billions)',
title='Prehistoric population estimates')
```
```
import pandas as pd
bx_ratings = pd.read_csv('BX-Book-Ratings.csv')
bx_books = pd.read_csv('BX-Books.csv')
bx_users = pd.read_csv('BX-Users.csv')
bx_ratings.head()
print(len(bx_ratings), len(bx_books), len(bx_users))
print(len(bx_ratings))
#bx_ratings = bx_ratings[ bx_ratings['Book-Rating'] != 0]
print(len(bx_ratings))
map_books = {}
for book in bx_ratings['ISBN']:
if( map_books.get(book) == None):
map_books[book] = 1
else:
map_books[book] += 1
count = 0
for key in map_books.keys():
    if map_books[key] >= 20:
        count += 1
print(count)
map_users = {}
for book in bx_ratings['User-ID']:
if( map_users.get(book) == None):
map_users[book] = 1
else:
map_users[book] += 1
count = 0
for key in map_users.keys():
    if map_users[key] >= 5:
        count += 1
print(count)
for key in list(map_books.keys()):  # list() so keys can be popped while iterating
    if map_books[key] < 20:
        map_books.pop(key)
print(len(map_books))
bx_ratings = bx_ratings[ bx_ratings.apply( lambda x: x['ISBN'] in map_books , axis = 1) ]
len(bx_ratings)
map_users = {}
for book in bx_ratings['User-ID']:
if( map_users.get(book) == None):
map_users[book] = 1
else:
map_users[book] += 1
count = 0
for key in map_users.keys():
    if map_users[key] >= 5:
        count += 1
print(count)
for key in list(map_users.keys()):  # list() so keys can be popped while iterating
    if map_users[key] < 5:
        map_users.pop(key)
print(len(map_users))
bx_ratings = bx_ratings[ bx_ratings.apply( lambda x: x['User-ID'] in map_users, axis = 1) ]
print(len(bx_ratings), len(map_users), len(map_books))
average_user = {}
bx_ratings.head()
i = 0
for row in bx_ratings.iterrows():
#print row[1][0], row[1][1] , row[1][2] , "\n"
if( row[1][2] > 5 ):
if( average_user.get(row[1][0]) == None ):
average_user[row[1][0]] = [row[1][2], 1]
else:
average_user[row[1][0]][0] += row[1][2]
average_user[row[1][0]][1] += 1
final_average = {}
i = 0
for item in average_user.keys():
    final_average[item] = average_user[item][0] * 1.0 / average_user[item][1]
    if i <= 10:
        print(type(item), final_average[item])
    i += 1
cnt1 = 0
cnt2 = 0
for i, row in bx_ratings.iterrows():
    if row['Book-Rating'] == 0:
        if final_average.get(row['User-ID']) is not None:
            bx_ratings.at[i, 'Book-Rating'] = int(final_average[row['User-ID']])
            cnt1 += 1
    cnt2 += 1
print(cnt1)
cnt = 0
for row in bx_ratings.iterrows():
    if row[1][2] == 0:
        cnt += 1
print(cnt)
bx_ratings = bx_ratings[ bx_ratings.apply( lambda x: x['Book-Rating'] != 0 , axis = 1) ]
cnt = 0
for row in bx_ratings.iterrows():
    if row[1][2] == 0:
        cnt += 1
print(cnt)
bx_ratings.to_csv("newData/df_ratings.csv", sep=',',index=False)
user_list = {}
book_list = {}
for i,item in bx_ratings.iterrows():
user_list[item['User-ID']] = 1
book_list[item['ISBN'] ] = 1
bx_books.head()
bx_books = bx_books[ bx_books.apply( lambda x: book_list.get( x['ISBN']) != None , axis = 1) ]
bx_users = bx_users[ bx_users.apply( lambda x: user_list.get( x['User-ID']) != None ,axis =1 )]
bx_books.head()
bx_users.head()
print(len(bx_ratings), len(bx_books), len(bx_users))
bx_users = bx_users.drop( [ 'Unnamed: 3' , 'Unnamed: 4' , 'Unnamed: 5' ] , axis = 1)
bx_users.head()
for i, row in bx_users.iterrows():
    tmp = row['Location'].split(',')[-1]
    bx_users.at[i, 'Location'] = tmp
bx_users.head()
country_cnt = {}
country_average = {}
for i, row in bx_users.iterrows():
    if str(row['Age']) == "nan":  # skip missing ages
        continue
    if country_average.get(row['Location']) is not None:
        country_average[row['Location']] += int(row['Age'])
        country_cnt[row['Location']] += 1
    else:
        country_average[row['Location']] = int(row['Age'])
        country_cnt[row['Location']] = 1
for key in country_average.keys():
    country_average[key] /= country_cnt[key]
for i, row in bx_users.iterrows():
    if str(row['Age']) == "nan":
        if country_average.get(row['Location']) is None:
            bx_users.at[i, 'Age'] = 25
        else:
            bx_users.at[i, 'Age'] = int(country_average[row['Location']])
bx_users.head()
for i, item in bx_users.iterrows():
    tmp = int(item['Age'])
    bx_users.at[i, 'Age'] = tmp
bx_users['Age'].value_counts()
set( bx_users['Age'] )
for i, item in bx_users.iterrows():
    if item['Location'] == "":
        bx_users.at[i, 'Location'] = "Global"
bx_users.to_csv("newData/df_userss.csv", sep=',',index=False)
bx_books.to_csv("newData/df_bookss.csv" , sep = ',' , index = False)
```
# Autonomous driving - Car detection
Welcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: [Redmon et al., 2016](https://arxiv.org/abs/1506.02640) and [Redmon and Farhadi, 2016](https://arxiv.org/abs/1612.08242).
**You will learn to**:
- Use object detection on a car detection dataset
- Deal with bounding boxes
## <font color='darkblue'>Updates</font>
#### If you were working on the notebook before this update...
* The current notebook is version "3a".
* You can find your original work saved in the notebook with the previous version name ("v3")
* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.
#### List of updates
* Clarified "YOLO" instructions preceding the code.
* Added details about anchor boxes.
* Added explanation of how score is calculated.
* `yolo_filter_boxes`: added additional hints. Clarify syntax for argmax and max.
* `iou`: clarify instructions for finding the intersection.
* `iou`: give variable names for all 8 box vertices, for clarity. Adds `width` and `height` variables for clarity.
* `iou`: add test cases to check handling of non-intersecting boxes, intersection at vertices, or intersection at edges.
* `yolo_non_max_suppression`: clarify syntax for tf.image.non_max_suppression and keras.gather.
* "convert output of the model to usable bounding box tensors": Provides a link to the definition of `yolo_head`.
* `predict`: hint on calling sess.run.
* Spelling, grammar, wording and formatting updates to improve clarity.
## Import libraries
Run the following cell to load the packages and dependencies that you will find useful as you build the object detector!
```
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body
%matplotlib inline
```
**Important Note**: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: `K.function(...)`.
## 1 - Problem Statement
You are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around.
<center>
<video width="400" height="200" src="nb_images/road_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Pictures taken from a car-mounted camera while driving around Silicon Valley. <br> We thank [drive.ai](https://www.drive.ai/) for providing this dataset.
</center></caption>
You've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like.
<img src="nb_images/box_label.png" style="width:500px;height:250px;">
<caption><center> <u> **Figure 1** </u>: **Definition of a box**<br> </center></caption>
If you have 80 classes that you want the object detector to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step.
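Converting between the two representations takes only a couple of lines of NumPy; in this quick illustration the class index is 0-based, as is usual for array indexing:

```python
import numpy as np

c = 2                      # integer class label (0-based for array indexing)
one_hot = np.zeros(80)     # the 80-dimensional one-hot representation
one_hot[c] = 1.0
print(int(one_hot.argmax()), int(one_hot.sum()))  # recovers the label: 2 1
```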
In this exercise, you will learn how "You Only Look Once" (YOLO) performs object detection, and then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use.
## 2 - YOLO
"You Only Look Once" (YOLO) is a popular algorithm because it achieves high accuracy while also being able to run in real-time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes.
### 2.1 - Model details
#### Inputs and outputs
- The **input** is a batch of images, and each image has the shape (m, 608, 608, 3)
- The **output** is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers.
#### Anchor Boxes
* Anchor boxes are chosen by exploring the training data to choose reasonable height/width ratios that represent the different classes. For this assignment, 5 anchor boxes were chosen for you (to cover the 80 classes), and stored in the file './model_data/yolo_anchors.txt'
* The dimension for anchor boxes is the second to last dimension in the encoding: $(m, n_H,n_W,anchors,classes)$.
* The YOLO architecture is: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).
#### Encoding
Let's look in greater detail at what this encoding represents.
<img src="nb_images/architecture.png" style="width:700px;height:400;">
<caption><center> <u> **Figure 2** </u>: **Encoding architecture for YOLO**<br> </center></caption>
If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object.
Since we are using 5 anchor boxes, each of the 19x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.
For simplicity, we will flatten the last two dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425).
<img src="nb_images/flatten.png" style="width:700px;height:400;">
<caption><center> <u> **Figure 3** </u>: **Flattening the last two dimensions**<br> </center></caption>
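The flattening step is just a reshape that merges the anchor dimension with the per-box features:

```python
import numpy as np

encoding = np.zeros((19, 19, 5, 85))          # per-image output of the deep CNN
flattened = encoding.reshape(19, 19, 5 * 85)  # merge the last two dimensions
```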
#### Class score
Now, for each box (of each cell) we will compute the following element-wise product and extract a probability that the box contains a certain class.
The class score is $score_{c,i} = p_{c} \times c_{i}$: the probability that there is an object $p_{c}$ times the probability that the object is a certain class $c_{i}$.
<img src="nb_images/probability_extraction.png" style="width:700px;height:400;">
<caption><center> <u> **Figure 4** </u>: **Find the class detected by each box**<br> </center></caption>
##### Example of figure 4
* In figure 4, let's say for box 1 (cell 1), the probability that an object exists is $p_{1}=0.60$. So there's a 60% chance that an object exists in box 1 (cell 1).
* The probability that the object is the class "category 3 (a car)" is $c_{3}=0.73$.
* The score for box 1 and for category "3" is $score_{1,3}=0.60 \times 0.73 = 0.44$.
* Let's say we calculate the score for all 80 classes in box 1, and find that the score for the car class (class 3) is the maximum. So we'll assign the score 0.44 and class "3" to this box "1".
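The arithmetic of the figure 4 example can be reproduced directly (class "3" is index 2 when counting from 0):

```python
import numpy as np

p_c = 0.60             # probability that box 1 contains some object
class_probs = np.zeros(80)
class_probs[2] = 0.73  # class "3" (the car class), zero-based index 2

scores = p_c * class_probs           # element-wise product: one score per class
best_class = int(np.argmax(scores))  # index of the highest-scoring class
best_score = scores[best_class]      # 0.60 * 0.73 = 0.438, i.e. ~0.44
```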
#### Visualizing classes
Here's one way to visualize what YOLO is predicting on an image:
- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across the 80 classes, one maximum for each of the 5 anchor boxes).
- Color that grid cell according to what object that grid cell considers the most likely.
Doing this results in this picture:
<img src="nb_images/proba_map.png" style="width:300px;height:300;">
<caption><center> <u> **Figure 5** </u>: Each one of the 19x19 grid cells is colored according to which class has the largest predicted probability in that cell.<br> </center></caption>
Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm.
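The per-cell coloring above amounts to taking a max over the anchor and class dimensions; a dummy-data sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random((19, 19, 5, 80))  # dummy class scores per cell and anchor

# For each cell: the best score across the 5 anchors and 80 classes,
# and the class index of that best box (flatten order is anchor*80 + class).
best_per_cell = scores.max(axis=(2, 3))                  # shape (19, 19)
best_class = scores.reshape(19, 19, -1).argmax(-1) % 80  # shape (19, 19)
```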
#### Visualizing bounding boxes
Another way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this:
<img src="nb_images/anchor_map.png" style="width:200px;height:200;">
<caption><center> <u> **Figure 6** </u>: Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. <br> </center></caption>
#### Non-Max suppression
In the figure above, we plotted only boxes for which the model had assigned a high probability, but this is still too many boxes. You'd like to reduce the algorithm's output to a much smaller number of detected objects.
To do so, you'll use **non-max suppression**. Specifically, you'll carry out these steps:
- Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class; either due to the low probability of any object, or low probability of this particular class).
- Select only one box when several boxes overlap with each other and detect the same object.
### 2.2 - Filtering with a threshold on class scores
You are going to first apply a filter by thresholding. You would like to get rid of any box for which the class "score" is less than a chosen threshold.
The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It is convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables:
- `box_confidence`: tensor of shape $(19 \times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.
- `boxes`: tensor of shape $(19 \times 19, 5, 4)$ containing the midpoint and dimensions $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes in each cell.
- `box_class_probs`: tensor of shape $(19 \times 19, 5, 80)$ containing the "class probabilities" $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell.
#### **Exercise**: Implement `yolo_filter_boxes()`.
1. Compute box scores by doing the elementwise product as described in Figure 4 ($p \times c$).
The following code may help you choose the right operator:
```python
a = np.random.randn(19*19, 5, 1)
b = np.random.randn(19*19, 5, 80)
c = a * b # shape of c will be (19*19, 5, 80)
```
This is an example of **broadcasting** (element-wise operations between arrays of different shapes).
2. For each box, find:
- the index of the class with the maximum box score
- the corresponding box score
**Useful references**
* [Keras argmax](https://keras.io/backend/#argmax)
* [Keras max](https://keras.io/backend/#max)
**Additional Hints**
* For the `axis` parameter of `argmax` and `max`, if you want to select the **last** axis, one way to do so is to set `axis=-1`. This is similar to Python array indexing, where you can select the last position of an array using `arrayname[-1]`.
* Applying `max` normally collapses the axis over which the maximum is taken. `keepdims=False` is the default, so that dimension is removed. We don't need to keep the last dimension after applying the maximum here.
* Even though the documentation shows `keras.backend.argmax`, use `K.argmax`. Similarly, use `K.max`.
3. Create a mask by using a threshold. As a reminder: on an array, `([0.9, 0.3, 0.4, 0.5, 0.1] < 0.4)` returns `[False, True, False, False, True]`. The mask should be True for the boxes you want to keep, so you will compare against the threshold with `>=`.
4. Use TensorFlow to apply the mask to `box_class_scores`, `boxes` and `box_classes` to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep.
**Useful reference**:
* [boolean mask](https://www.tensorflow.org/api_docs/python/tf/boolean_mask)
**Additional Hints**:
* For the `tf.boolean_mask`, we can keep the default `axis=None`.
**Reminder**: to call a Keras function, you should use `K.function(...)`.
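NumPy's boolean indexing behaves analogously to `tf.boolean_mask`, which may help build intuition for steps 3 and 4 (an illustrative sketch, not the graded code):

```python
import numpy as np

box_class_scores = np.array([0.9, 0.3, 0.4, 0.5, 0.1])
filtering_mask = box_class_scores >= 0.4        # True for the boxes we keep
kept_scores = box_class_scores[filtering_mask]  # analogue of tf.boolean_mask
```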
```
# GRADED FUNCTION: yolo_filter_boxes
def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
"""Filters YOLO boxes by thresholding on object and class confidence.
Arguments:
box_confidence -- tensor of shape (19, 19, 5, 1)
boxes -- tensor of shape (19, 19, 5, 4)
box_class_probs -- tensor of shape (19, 19, 5, 80)
threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
Returns:
scores -- tensor of shape (None,), containing the class probability score for selected boxes
boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes
Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
For example, the actual output size of scores would be (10,) if there are 10 boxes.
"""
# Step 1: Compute box scores
### START CODE HERE ### (≈ 1 line)
box_scores = box_confidence * box_class_probs
### END CODE HERE ###
# Step 2: Find the box_classes using the max box_scores, keep track of the corresponding score
### START CODE HERE ### (≈ 2 lines)
box_classes = K.argmax(box_scores, axis=-1)
box_class_scores = K.max(box_scores, axis=-1)
### END CODE HERE ###
# Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
# same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
### START CODE HERE ### (≈ 1 line)
filtering_mask = box_class_scores >= threshold
### END CODE HERE ###
# Step 4: Apply the mask to box_class_scores, boxes and box_classes
### START CODE HERE ### (≈ 3 lines)
scores = tf.boolean_mask(box_class_scores, filtering_mask)
boxes = tf.boolean_mask(boxes, filtering_mask)
classes = tf.boolean_mask(box_classes, filtering_mask)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_a:
box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.shape))
print("boxes.shape = " + str(boxes.shape))
print("classes.shape = " + str(classes.shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
10.7506
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 8.42653275 3.27136683 -0.5313437 -4.94137383]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
7
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(?,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(?, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(?,)
</td>
</tr>
</table>
**Note** In the test for `yolo_filter_boxes`, we're using random numbers to test the function. In real data, the `box_class_probs` would contain non-zero values between 0 and 1 for the probabilities. The box coordinates in `boxes` would also be chosen so that lengths and heights are non-negative.
### 2.3 - Non-max suppression ###
Even after filtering by thresholding over the class scores, you still end up with a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS).
<img src="nb_images/non-max-suppression.png" style="width:500px;height:400;">
<caption><center> <u> **Figure 7** </u>: In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probability) of the 3 boxes. <br> </center></caption>
Non-max suppression uses the very important function called **"Intersection over Union"**, or IoU.
<img src="nb_images/iou.png" style="width:500px;height:400;">
<caption><center> <u> **Figure 8** </u>: Definition of "Intersection over Union". <br> </center></caption>
#### **Exercise**: Implement iou(). Some hints:
- In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) is the lower-right corner. In other words, the (0,0) origin starts at the top left corner of the image. As x increases, we move to the right. As y increases, we move down.
- For this exercise, we define a box using its two corners: upper left $(x_1, y_1)$ and lower right $(x_2,y_2)$, instead of using the midpoint, height and width. (This makes it a bit easier to calculate the intersection).
- To calculate the area of a rectangle, multiply its height $(y_2 - y_1)$ by its width $(x_2 - x_1)$. (Since $(x_1, y_1)$ is the top left and $(x_2, y_2)$ is the bottom right, these differences should be non-negative.)
- To find the **intersection** of the two boxes $(xi_{1}, yi_{1}, xi_{2}, yi_{2})$:
- Feel free to draw some examples on paper to clarify this conceptually.
- The top left corner of the intersection $(xi_{1}, yi_{1})$ is found by comparing the top left corners $(x_1, y_1)$ of the two boxes and finding a vertex that has an x-coordinate that is closer to the right, and y-coordinate that is closer to the bottom.
- The bottom right corner of the intersection $(xi_{2}, yi_{2})$ is found by comparing the bottom right corners $(x_2,y_2)$ of the two boxes and finding a vertex whose x-coordinate is closer to the left, and the y-coordinate that is closer to the top.
- The two boxes **may have no intersection**. You can detect this if the intersection coordinates you calculate end up being the top right and/or bottom left corners of an intersection box. Another way to think of this is if you calculate the height $(y_2 - y_1)$ or width $(x_2 - x_1)$ and find that at least one of these lengths is negative, then there is no intersection (intersection area is zero).
- The two boxes may intersect at the **edges or vertices**, in which case the intersection area is still zero. This happens when either the height or width (or both) of the calculated intersection is zero.
**Additional Hints**
- `xi1` = **max**imum of the x1 coordinates of the two boxes
- `yi1` = **max**imum of the y1 coordinates of the two boxes
- `xi2` = **min**imum of the x2 coordinates of the two boxes
- `yi2` = **min**imum of the y2 coordinates of the two boxes
- `inter_area` = You can use `max(height, 0)` and `max(width, 0)`
```
# GRADED FUNCTION: iou
def iou(box1, box2):
"""Implement the intersection over union (IoU) between box1 and box2
Arguments:
    box1 -- first box, list object with coordinates (box1_x1, box1_y1, box1_x2, box1_y2)
box2 -- second box, list object with coordinates (box2_x1, box2_y1, box2_x2, box2_y2)
"""
# Assign variable names to coordinates for clarity
(box1_x1, box1_y1, box1_x2, box1_y2) = box1
(box2_x1, box2_y1, box2_x2, box2_y2) = box2
# Calculate the (yi1, xi1, yi2, xi2) coordinates of the intersection of box1 and box2. Calculate its Area.
### START CODE HERE ### (≈ 7 lines)
xi1 = max(box1_x1, box2_x1)
yi1 = max(box1_y1, box2_y1)
xi2 = min(box1_x2, box2_x2)
yi2 = min(box1_y2, box2_y2)
inter_width = max(xi2 - xi1, 0)
inter_height = max(yi2 - yi1, 0)
inter_area = inter_width * inter_height
### END CODE HERE ###
# Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
### START CODE HERE ### (≈ 3 lines)
box1_area = (box1_y2 - box1_y1) * (box1_x2 - box1_x1)
box2_area = (box2_y2 - box2_y1) * (box2_x2 - box2_x1)
union_area = box1_area + box2_area - inter_area
### END CODE HERE ###
# compute the IoU
### START CODE HERE ### (≈ 1 line)
iou = inter_area / union_area
### END CODE HERE ###
return iou
## Test case 1: boxes intersect
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou for intersecting boxes = " + str(iou(box1, box2)))
## Test case 2: boxes do not intersect
box1 = (1,2,3,4)
box2 = (5,6,7,8)
print("iou for non-intersecting boxes = " + str(iou(box1,box2)))
## Test case 3: boxes intersect at vertices only
box1 = (1,1,2,2)
box2 = (2,2,3,3)
print("iou for boxes that only touch at vertices = " + str(iou(box1,box2)))
## Test case 4: boxes intersect at edge only
box1 = (1,1,3,3)
box2 = (2,3,3,4)
print("iou for boxes that only touch at edges = " + str(iou(box1,box2)))
```
**Expected Output**:
```
iou for intersecting boxes = 0.14285714285714285
iou for non-intersecting boxes = 0.0
iou for boxes that only touch at vertices = 0.0
iou for boxes that only touch at edges = 0.0
```
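Working through test case 1 by hand confirms the first expected value:

```python
# Test case 1 by hand: box1 = (2, 1, 4, 3), box2 = (1, 2, 3, 4).
xi1, yi1 = max(2, 1), max(1, 2)  # (2, 2): top-left of the intersection
xi2, yi2 = min(4, 3), min(3, 4)  # (3, 3): bottom-right of the intersection
inter_area = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)  # 1 * 1 = 1
box1_area = (4 - 2) * (3 - 1)                       # 4
box2_area = (3 - 1) * (4 - 2)                       # 4
union_area = box1_area + box2_area - inter_area     # 4 + 4 - 1 = 7
iou_value = inter_area / union_area                 # 1/7 = 0.142857...
```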
#### YOLO non-max suppression
You are now ready to implement non-max suppression. The key steps are:
1. Select the box that has the highest score.
2. Compute the overlap of this box with all other boxes, and remove boxes that overlap significantly (iou >= `iou_threshold`).
3. Go back to step 1 and iterate until there are no more boxes left to process.
This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain.
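For intuition, these three steps can be sketched in pure Python (illustrative only; the graded function below relies on TensorFlow's built-in instead):

```python
def iou(b1, b2):
    # b1, b2 are (x1, y1, x2, y2) corner tuples.
    xi1, yi1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    xi2, yi2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)
    area1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    area2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (area1 + area2 - inter)

def nms(boxes, scores, iou_threshold=0.5):
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)   # step 1: highest-scoring remaining box
        keep.append(best)
        # step 2: discard boxes that overlap the selected box too much
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep               # step 3: repeat until no boxes remain

# Box 1 heavily overlaps box 0, so only boxes 0 and 2 survive.
kept = nms([(0, 0, 2, 2), (0.1, 0.1, 2, 2), (3, 3, 5, 5)], [0.9, 0.8, 0.7])
```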
**Exercise**: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your `iou()` implementation):
** Reference documentation **
- [tf.image.non_max_suppression()](https://www.tensorflow.org/api_docs/python/tf/image/non_max_suppression)
```
tf.image.non_max_suppression(
boxes,
scores,
max_output_size,
iou_threshold=0.5,
name=None
)
```
Note that in the version of TensorFlow used here, there is no parameter `score_threshold` (it's shown in the documentation for the latest version), so trying to set this value will result in an error message: *got an unexpected keyword argument 'score_threshold'*.
- [K.gather()](https://www.tensorflow.org/api_docs/python/tf/keras/backend/gather)
Even though the documentation shows `tf.keras.backend.gather()`, you can use `K.gather()`.
```
K.gather(
    reference,
    indices
)
```
```
# GRADED FUNCTION: yolo_non_max_suppression
def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
"""
Applies Non-max suppression (NMS) to set of boxes
Arguments:
scores -- tensor of shape (None,), output of yolo_filter_boxes()
boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
classes -- tensor of shape (None,), output of yolo_filter_boxes()
max_boxes -- integer, maximum number of predicted boxes you'd like
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
    scores -- tensor of shape (None,), predicted score for each box
    boxes -- tensor of shape (None, 4), predicted box coordinates
    classes -- tensor of shape (None,), predicted class for each box
    Note: The "None" dimension of the output tensors is at most max_boxes, since at most max_boxes boxes are kept.
"""
max_boxes_tensor = K.variable(max_boxes, dtype='int32') # tensor to be used in tf.image.non_max_suppression()
K.get_session().run(tf.variables_initializer([max_boxes_tensor])) # initialize variable max_boxes_tensor
# Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
### START CODE HERE ### (≈ 1 line)
nms_indices = tf.image.non_max_suppression(boxes, scores, max_boxes, iou_threshold)
### END CODE HERE ###
# Use K.gather() to select only nms_indices from scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = K.gather(scores, nms_indices)
boxes = K.gather(boxes, nms_indices)
classes = K.gather(classes, nms_indices)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
6.9384
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[-5.299932 3.13798141 4.45036697 0.95942086]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
-2.24527
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
### 2.4 - Wrapping up the filtering
It's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented.
**Exercise**: Implement `yolo_eval()` which takes the output of the YOLO encoding and filters the boxes using score thresholding and NMS. There's just one last implementation detail you have to know. There are a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided):
```python
boxes = yolo_boxes_to_corners(box_xy, box_wh)
```
which converts the yolo box coordinates (x,y,w,h) to box corners' coordinates (x1, y1, x2, y2) to fit the input of `yolo_filter_boxes`
```python
boxes = scale_boxes(boxes, image_shape)
```
YOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image--for example, the car detection dataset had 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image.
Don't worry about these two functions; we'll show you where they need to be called.
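As a rough mental model only (the actual helpers in `keras_yolo.py` may use a different coordinate ordering and normalization), the two conversions amount to:

```python
import numpy as np

def boxes_to_corners_sketch(box_xy, box_wh):
    # Midpoint/size -> two-corner form: corner = center -/+ half the size.
    mins = box_xy - box_wh / 2.0
    maxes = box_xy + box_wh / 2.0
    return np.concatenate([mins, maxes], axis=-1)  # (x1, y1, x2, y2)

def scale_boxes_sketch(boxes, image_shape):
    # Assuming (x1, y1, x2, y2) expressed as fractions of the model frame,
    # stretch the coordinates to pixel units of the original image.
    height, width = image_shape
    return boxes * np.array([width, height, width, height])
```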
```
# GRADED FUNCTION: yolo_eval
def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
"""
Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.
Arguments:
yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
box_confidence: tensor of shape (None, 19, 19, 5, 1)
box_xy: tensor of shape (None, 19, 19, 5, 2)
box_wh: tensor of shape (None, 19, 19, 5, 2)
box_class_probs: tensor of shape (None, 19, 19, 5, 80)
    image_shape -- tensor of shape (2,) containing the input image's dimensions; in this notebook we use (720., 1280.) (has to be float32 dtype)
max_boxes -- integer, maximum number of predicted boxes you'd like
score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None, ), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
"""
### START CODE HERE ###
# Retrieve outputs of the YOLO model (≈1 line)
box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs
# Convert boxes to be ready for filtering functions (convert boxes box_xy and box_wh to corner coordinates)
boxes = yolo_boxes_to_corners(box_xy, box_wh)
# Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, score_threshold)
# Scale boxes back to original image shape.
boxes = scale_boxes(boxes, image_shape)
# Use one of the functions you've implemented to perform Non-max suppression with
# maximum number of boxes set to max_boxes and a threshold of iou_threshold (≈1 line)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes, iou_threshold)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
scores, boxes, classes = yolo_eval(yolo_outputs)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
```
**Expected Output**:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
138.791
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 1292.32971191 -278.52166748 3876.98925781 -835.56494141]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
54
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
## Summary for YOLO:
- Input image (608, 608, 3)
- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output.
- After flattening the last two dimensions, the output is a volume of shape (19, 19, 425):
- Each cell in a 19x19 grid over the input image gives 425 numbers.
- 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture.
- 85 = 5 + 80 where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and 80 is the number of classes we'd like to detect
- You then select only a few boxes based on:
- Score-thresholding: throw away boxes that have detected a class with a score less than the threshold
- Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes
- This gives you YOLO's final output.
## 3 - Test YOLO pre-trained model on images
In this part, you are going to use a pre-trained model and test it on the car detection dataset. We'll need a session to execute the computation graph and evaluate the tensors.
```
sess = K.get_session()
```
### 3.1 - Defining classes, anchors and image shape.
* Recall that we are trying to detect 80 classes, and are using 5 anchor boxes.
* We have gathered the information on the 80 classes and 5 boxes in two files "coco_classes.txt" and "yolo_anchors.txt".
* We'll read class names and anchors from text files.
* The car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.
```
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)
```
### 3.2 - Loading a pre-trained model
* Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes.
* You are going to load an existing pre-trained Keras YOLO model stored in "yolo.h5".
* These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but we will simply refer to it as "YOLO" in this notebook.
Run the cell below to load the model from this file.
```
yolo_model = load_model("model_data/yolo.h5")
```
This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
```
yolo_model.summary()
```
**Note**: On some computers, you may see a warning message from Keras. Don't worry about it if you do--it is fine.
**Reminder**: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2).
### 3.3 - Convert output of the model to usable bounding box tensors
The output of `yolo_model` is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.
If you are curious about how `yolo_head` is implemented, you can find the function definition in the file ['keras_yolo.py'](https://github.com/allanzelener/YAD2K/blob/master/yad2k/models/keras_yolo.py). The file is located in your workspace in this path 'yad2k/models/keras_yolo.py'.
```
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
```
You added `yolo_outputs` to your graph. This set of 4 tensors is ready to be used as input by your `yolo_eval` function.
### 3.4 - Filtering boxes
`yolo_outputs` gave you all the predicted boxes of `yolo_model` in the correct format. You're now ready to perform filtering and select only the best boxes. Let's now call `yolo_eval`, which you had previously implemented, to do this.
```
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
```
### 3.5 - Run the graph on an image
Let the fun begin. You have created a graph that can be summarized as follows:
1. <font color='purple'> yolo_model.input </font> is given to `yolo_model`. The model is used to compute the output <font color='purple'> yolo_model.output </font>
2. <font color='purple'> yolo_model.output </font> is processed by `yolo_head`. It gives you <font color='purple'> yolo_outputs </font>
3. <font color='purple'> yolo_outputs </font> goes through a filtering function, `yolo_eval`. It outputs your predictions: <font color='purple'> scores, boxes, classes </font>
**Exercise**: Implement predict() which runs the graph to test YOLO on an image.
You will need to run a TensorFlow session, to have it compute `scores, boxes, classes`.
The code below also uses the following function:
```python
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
```
which outputs:
- image: a python (PIL) representation of your image used for drawing boxes. You won't need to use it.
- image_data: a numpy-array representing the image. This will be the input to the CNN.
**Important note**: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}.
#### Hint: Using the TensorFlow Session object
* Recall that above, we called `K.get_session()` and saved the Session object in `sess`.
* To evaluate a list of tensors, we call `sess.run()` like this:
```
sess.run(fetches=[tensor1, tensor2, tensor3],
         feed_dict={yolo_model.input: the_input_variable,
                    K.learning_phase(): 0
                   })
```
* Notice that the variables `scores, boxes, classes` are not passed into the `predict` function, but these are global variables that you will use within the `predict` function.
```
def predict(sess, image_file):
"""
Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the predictions.
Arguments:
sess -- your tensorflow/Keras session containing the YOLO graph
image_file -- name of an image stored in the "images" folder.
Returns:
out_scores -- tensor of shape (None, ), scores of the predicted boxes
out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
out_classes -- tensor of shape (None, ), class index of the predicted boxes
Note: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes.
"""
# Preprocess your image
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
# Run the session with the correct tensors and choose the correct placeholders in the feed_dict.
# You'll need to use feed_dict={yolo_model.input: ... , K.learning_phase(): 0})
### START CODE HERE ### (≈ 1 line)
feed_dict={ yolo_model.input: image_data, K.learning_phase(): 0 }
out_scores, out_boxes, out_classes = sess.run(fetches=[scores, boxes, classes], feed_dict=feed_dict)
### END CODE HERE ###
# Print predictions info
print('Found {} boxes for {}'.format(len(out_boxes), image_file))
# Generate colors for drawing bounding boxes.
colors = generate_colors(class_names)
# Draw bounding boxes on the image file
draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
# Save the predicted bounding box on the image
image.save(os.path.join("out", image_file), quality=90)
# Display the results in the notebook
output_image = scipy.misc.imread(os.path.join("out", image_file))
imshow(output_image)
return out_scores, out_boxes, out_classes
```
Run the following cell on the "test.jpg" image to verify that your function is correct.
```
out_scores, out_boxes, out_classes = predict(sess, "test.jpg")
```
**Expected Output**:
<table>
<tr>
<td>
**Found 7 boxes for test.jpg**
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.60 (925, 285) (1045, 374)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.66 (706, 279) (786, 350)
</td>
</tr>
<tr>
<td>
**bus**
</td>
<td>
0.67 (5, 266) (220, 407)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.70 (947, 324) (1280, 705)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.74 (159, 303) (346, 440)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.80 (761, 282) (942, 412)
</td>
</tr>
<tr>
<td>
**car**
</td>
<td>
0.89 (367, 300) (745, 648)
</td>
</tr>
</table>
The model you've just run is actually able to detect 80 different classes listed in "coco_classes.txt". To test the model on your own images:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the code cell above
4. Run the code and see the output of the algorithm!
If you were to run your session in a for loop over all your images, here's what you would get:
<center>
<video width="400" height="200" src="nb_images/pred_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Predictions of the YOLO model on pictures taken from a camera while driving around the Silicon Valley <br> Thanks [drive.ai](https://www.drive.ai/) for providing this dataset! </center></caption>
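A minimal sketch of such a loop, assuming the `sess` and `predict` defined above and a hypothetical `images` folder (the helper below is not part of the original notebook):

```python
import os

def list_images(folder, exts=(".jpg", ".jpeg", ".png")):
    # collect image filenames with common extensions, sorted for a stable order
    return sorted(f for f in os.listdir(folder) if f.lower().endswith(exts))

# hypothetical driver loop, reusing `sess` and `predict` from the cells above:
# for image_file in list_images("images"):
#     out_scores, out_boxes, out_classes = predict(sess, image_file)
```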
## <font color='darkblue'>What you should remember:
- YOLO is a state-of-the-art object detection model that is fast and accurate
- It runs an input image through a CNN which outputs a 19x19x5x85 dimensional volume.
- The encoding can be seen as a grid where each of the 19x19 cells contains information about 5 boxes.
- You filter through all the boxes using non-max suppression. Specifically:
- Score thresholding on the probability of detecting a class to keep only accurate (high probability) boxes
- Intersection over Union (IoU) thresholding to eliminate overlapping boxes
- Because training a YOLO model from randomly initialized weights is non-trivial and requires a large dataset as well as a lot of computation, we used previously trained model parameters in this exercise. If you wish, you can also try fine-tuning the YOLO model with your own dataset, though this would be a fairly non-trivial exercise.
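The IoU test at the heart of non-max suppression fits in a few lines; this is a simplified sketch (the assignment's own `iou` helper, defined earlier in the notebook, may differ in detail):

```python
def iou(box1, box2):
    # boxes given as (x1, y1, x2, y2) corner coordinates
    xi1, yi1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    xi2, yi2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)  # zero if the boxes are disjoint
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter)

iou((0, 0, 2, 2), (1, 1, 3, 3))  # intersection 1, union 7 -> 1/7
```

Boxes whose IoU with a higher-scoring box exceeds the threshold are treated as duplicate detections and discarded.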
**References**: The ideas presented in this notebook came primarily from the two YOLO papers. The implementation here also took significant inspiration and used many components from Allan Zelener's GitHub repository. The pre-trained weights used in this exercise came from the official YOLO website.
- Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi - [You Only Look Once: Unified, Real-Time Object Detection](https://arxiv.org/abs/1506.02640) (2015)
- Joseph Redmon, Ali Farhadi - [YOLO9000: Better, Faster, Stronger](https://arxiv.org/abs/1612.08242) (2016)
- Allan Zelener - [YAD2K: Yet Another Darknet 2 Keras](https://github.com/allanzelener/YAD2K)
- The official YOLO website (https://pjreddie.com/darknet/yolo/)
**Car detection dataset**:
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">The Drive.ai Sample Dataset</span> (provided by drive.ai) is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. We are grateful to Brody Huval, Chih Hu and Rahul Patel for providing this data.
```
import os
import sys
import torch
import gpytorch
from tqdm.auto import tqdm
import timeit
if os.path.abspath('..') not in sys.path:
sys.path.insert(0, os.path.abspath('..'))
from gpytorch_lattice_kernel import RBFLattice as BilateralKernel
# device = "cuda" if torch.cuda.is_available() else "cpu"
device = "cpu"
N_vals = torch.linspace(100, 10000000, 10).int().tolist()
D_vals = torch.linspace(1, 100, 10).int().tolist()
```
# Matmul
```
N_vary = []
for N in tqdm(N_vals):
D = 1
x = torch.randn(N, D).to(device)
K = BilateralKernel().to(device)(x)
v = torch.randn(N, 1).to(device)
def matmul():
return K @ v
time = timeit.timeit(matmul, number=10)
N_vary.append([N, D, time])
del x
del K
del v
del matmul
D_vary = []
for D in tqdm(D_vals):
N = 1000
x = torch.randn(N, D).to(device)
K = BilateralKernel().to(device)(x)
v = torch.randn(N, 1).to(device)
def matmul():
return K @ v
time = timeit.timeit(matmul, number=10)
D_vary.append([N, D, time])
del x
del K
del v
del matmul
import pandas as pd
N_vary = pd.DataFrame(N_vary, columns=["N", "D", "Time"])
D_vary = pd.DataFrame(D_vary, columns=["N", "D", "Time"])
import seaborn as sns
ax = sns.lineplot(data=N_vary, x="N", y="Time")
ax.set(title="Matmul (D=1)")
from sklearn.linear_model import LinearRegression
import numpy as np
regr = LinearRegression()
regr.fit(np.log(D_vary["D"].to_numpy()[:, None]), np.log(D_vary["Time"]))
print('Coefficients: \n', regr.coef_)
pred_time = regr.predict(np.log(D_vary["D"].to_numpy()[:, None]))
ax = sns.lineplot(data=D_vary, x="D", y="Time")
ax.set(title="Matmul (N=1000)", xscale="log", yscale="log")
ax.plot(D_vary["D"].to_numpy(), np.exp(pred_time))
```
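The log-log regression in the cell above estimates an empirical scaling exponent: for t ∝ D^k, the slope of log t against log D recovers k. The idea in isolation, with synthetic quadratic timings (no real benchmark data):

```python
import numpy as np

D = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
t = 0.01 * D**2                                 # synthetic timings growing as D^2
slope = np.polyfit(np.log(D), np.log(t), 1)[0]  # fitted exponent, ~2.0
```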
# Gradient
```
N_vary = []
for N in tqdm(N_vals):
D = 1
x = torch.randn(N, D, requires_grad=True).to(device)
K = BilateralKernel().to(device)(x)
v = torch.randn(N, 1, requires_grad=True).to(device)
sum = (K @ v).sum()
def gradient():
torch.autograd.grad(sum, [x, v], retain_graph=True)
x.grad = None
v.grad = None
return
time = timeit.timeit(gradient, number=10)
N_vary.append([N, D, time])
del x
del K
del v
del gradient
D_vary = []
for D in tqdm(D_vals):
N = 1000
x = torch.randn(N, D, requires_grad=True).to(device)
K = BilateralKernel().to(device)(x)
v = torch.randn(N, 1, requires_grad=True).to(device)
sum = (K @ v).sum()
def gradient():
torch.autograd.grad(sum, [x, v], retain_graph=True)
x.grad = None
v.grad = None
return
time = timeit.timeit(gradient, number=10)
D_vary.append([N, D, time])
del x
del K
del v
del gradient
import pandas as pd
N_vary = pd.DataFrame(N_vary, columns=["N", "D", "Time"])
D_vary = pd.DataFrame(D_vary, columns=["N", "D", "Time"])
import seaborn as sns
ax = sns.lineplot(data=N_vary, x="N", y="Time")
ax.set(title="Gradient computation of (K@v).sum() (D=1)")
from sklearn.linear_model import LinearRegression
import numpy as np
regr = LinearRegression()
regr.fit(np.log(D_vary["D"].to_numpy()[:, None]), np.log(D_vary["Time"]))
print('Coefficients: \n', regr.coef_)
pred_time = regr.predict(np.log(D_vary["D"].to_numpy()[:, None]))
ax = sns.lineplot(data=D_vary, x="D", y="Time")
ax.set(title="Gradient computation of (K@v).sum() (N=1000)", xscale="log", yscale="log")
ax.plot(D_vary["D"].to_numpy(), np.exp(pred_time))
```
# Checking the OA status of an author's papers
```
import json
import orcid
import requests
import sys
import time
from IPython.display import HTML, display
```
ORCID of the author you want to check
```
ORCID = '0000-0001-5318-3910'
```
Sadly, to make requests to the ORCID and BASE APIs, you will need to register with them; I cannot share my identifiers
```
orcid_api_id = 'APP-XXXXXXXXXXXXXXXX'
orcid_api_secret = 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX'
base_api_ua = 'XXXXXXXXXX@XXXXXXXXXXXXXXXXXXXXXX'
orcid_api_email = 'XXXXXXXXX@XXXXXXXXX'
```
Now we retrieve the papers for this author:
```
def findDOI(paper):
for i in paper['external-ids']['external-id']:
if i['external-id-type'] == 'doi':
return i['external-id-value']
return None
api = orcid.PublicAPI(orcid_api_id, orcid_api_secret)
token = api.get_search_token_from_orcid()
record = api.read_record_public(ORCID, 'record', token)
print(f'ORCID: {record["orcid-identifier"]["path"]}')
name = record['person']['name']
print(f'Name: {name["given-names"]["value"]} {name["family-name"]["value"]}')
works = api.read_record_public(ORCID, 'works', token)['group']
print(f'Number of papers in ORCID: {len(works)}\n')
dois = []
for paper in works:
doi = findDOI(paper)
if doi:
dois.append((doi, paper['work-summary'][0]['title']['title']['value']))
else:
title = paper['work-summary'][0]['title']['title']['value']
print(f'No DOI available for paper: {title}')
print(f'\nNumber of papers with DOI: {len(dois)}')
def base_search(doi):
r = requests.get('https://api.base-search.net/cgi-bin/BaseHttpSearchInterface.fcgi',
params={'func': 'PerformSearch', 'query': f'dcdoi:{doi}', 'boost': 'oa', 'format': 'json'},
headers={'user-agent': 'fx.coudert@chimieparistech.psl.eu'})
docs = r.json()['response']['docs']
for doc in docs:
if doc['dcoa'] == 1:
return True
return False
def unpaywall(doi):
r = requests.get(f'https://api.unpaywall.org/v2/{doi}',
params={"email": "fxcoudert@gmail.com"})
r = r.json()
if 'error' in r:
return False
if r['is_oa']:
return True
return False
res = []
print('Be patient, this step takes time (2 to 3 seconds per paper)')
for doi, title in dois:
res.append((doi, title, base_search(doi), unpaywall(doi)))
time.sleep(1)
def YesNoCell(b):
if b:
return '<td style="background-color: #60FF60">Yes</td>'
else:
return '<td style="background-color: #FF6060">No</td>'
s = ''
tot_base = 0
tot_unpaywall = 0
for doi, title, base, unpaywall in res:
s += f'<tr><td><a href="https://doi.org/{doi}" target="_blank">{title}</a></td>{YesNoCell(base)}{YesNoCell(unpaywall)}</tr>'
if base:
tot_base += 1
if unpaywall:
tot_unpaywall += 1
s += f'<tr><td>Total</td><td>{100 * tot_base/ len(res):.1f}%</td><td>{100 * tot_unpaywall/ len(res):.1f}%</td></tr>'
header = '<tr><td>Paper title</td><td>OA in BASE</td><td>OA in Unpaywall</td></tr>'
display(HTML('<table>' + header + s + '</table>'))
```
# Homework 1: Wine Rating with Pandas and Sklearn
### 1. Read the syllabus in its entirety. Mark “Yes” below.
_____ I have read and understood the syllabus for this class.
### 2. MTC (MegaTelCo) has decided to use supervised learning to address its problem of churn in its wireless phone business. As a consultant to MTC, you realize that a main task in the business understanding/data understanding phases of the data mining process is to define the target variable. In one or two sentences each, please suggest a definition for the target variable. Suggest 3 features and explain why you think they will give information on the target variable. Be as precise as possible—someone else will be implementing your suggestion. (Remember: your formulation should make sense from a business point of view, and it should be reasonable that MTC would have data available to know the value of the target variable and features for historical customers.)
___
For the remainder of this homework, we explore the chemical characteristics of wine and how those characteristics relate to ratings given to wine by expert wine tasters. The data is described at a high level [here](https://archive.ics.uci.edu/ml/datasets/Wine+Quality). For simplicity, for this assignment, we'll restrict our analyses to white wines.
Tasks for this assignment will extend what we've done in the first lecture, and include data manipulation, analysis, visualization, and modeling.
```
# consider the following url referencing white wines
wine_url = "https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv"
```
### **3.** Load the data referenced by the above url into a pandas data frame. What are the names of the columns that are present? Hint: columns of this data are delimited by a ;
### 4. What is the distribution of different quality ratings? Print the number of occurrences for each quality rating in the data frame you've created
### 5. Plot the above distribution as a histogram inline in this notebook
### 6. Notice the above distribution is highly imbalanced, some ratings occur much more frequently than others, making it difficult to fully compare ratings to each other. One common way to deal with this kind of imbalance in data visualization is by plotting a histogram of the _log_ of counts. Plot another histogram so that the data is presented on a log scale
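A sketch of the log-count idea on made-up ratings (hypothetical data, not the wine frame; the log counts can then go to a bar plot, or use e.g. `plot.hist(log=True)` for a log-scaled y axis):

```python
import numpy as np
import pandas as pd

quality = pd.Series([5, 5, 5, 6, 6, 6, 6, 6, 7, 8])  # hypothetical ratings
counts = quality.value_counts().sort_index()
log_counts = np.log(counts)  # compresses the gap between frequent and rare ratings
```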
### 7. Show descriptive statistics for all the columns "pH", "density", "chlorides" and "fixed acidity".
### 8. Measures of correlation, such as Pearson's correlation coefficient, show whether one numeric variable gives information on another numeric variable. Pandas allows us to compute the Pearson correlation coefficient between all pairs of columns in our dataframe. Display the correlations between all pairs of columns, for those rows with a quality score of at least 5.
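A minimal sketch of the filter-then-correlate pattern on a tiny made-up frame (not the wine data):

```python
import pandas as pd

df = pd.DataFrame({
    "quality": [4, 5, 6, 7],
    "pH":      [3.2, 3.1, 3.0, 2.9],
    "density": [1.00, 0.99, 0.98, 0.97],
})
# keep only rows with quality >= 5, then correlate every pair of columns
corr = df[df["quality"] >= 5].corr(method="pearson")  # 3x3 symmetric matrix
```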
### 9. Is this the only sort of correlation that Pandas lets us compute? If so, repeat exercise 6 with an alternate correlation. If not, name another and explain the difference.
### 10. Heatmaps are a tool for conveniently interpreting correlation data. Plot these correlations as a seaborn heatmap. Which pairs of variables are most closely correlated? Which variable gives the most information on wine quality?
### 11. Following the example in class, build a linear model to predict the quality of wine using the chemical info available. Generate predictions and compare predicted quality to the actual value in a scatter plot
### 12. There are many different types of predictive models, each with their own plusses and minuses. For this task, repeat your modeling performed in step 8, but using a `sklearn.ensemble.RandomForestRegressor`. How does the scatter plot compare with the prior results?
```
import SimpleITK as sitk
import numpy as np
import os
def normalize_one_volume(volume):
new_volume = np.zeros(volume.shape)
location = np.where(volume != 0)
mean = np.mean(volume[location])
var = np.std(volume[location])
new_volume[location] = (volume[location] - mean) / var
return new_volume
def merge_volumes(*volumes):
return np.stack(volumes, axis=0)
def get_volume(root, patient, desired_depth,
desired_height, desired_width,
normalize_flag,
flip=0):
flair_suffix = "_flair.nii.gz"
t1_suffix = "_t1.nii.gz"
t1ce_suffix = "_t1ce.nii.gz"
t2_suffix = "_t2.nii.gz"
path_flair = os.path.join(root, patient, patient + flair_suffix)
path_t1 = os.path.join(root, patient, patient + t1_suffix)
path_t2 = os.path.join(root, patient, patient + t2_suffix)
path_t1ce = os.path.join(root, patient, patient + t1ce_suffix)
flair = sitk.GetArrayFromImage(sitk.ReadImage(path_flair))
t1 = sitk.GetArrayFromImage(sitk.ReadImage(path_t1))
t2 = sitk.GetArrayFromImage(sitk.ReadImage(path_t2))
t1ce = sitk.GetArrayFromImage(sitk.ReadImage(path_t1ce))
if desired_depth > 155:
flair = np.concatenate([flair, np.zeros((desired_depth - 155, 240, 240))], axis=0)
t1 = np.concatenate([t1, np.zeros((desired_depth - 155, 240, 240))], axis=0)
t2 = np.concatenate([t2, np.zeros((desired_depth - 155, 240, 240))], axis=0)
t1ce = np.concatenate([t1ce, np.zeros((desired_depth - 155, 240, 240))], axis=0)
if normalize_flag == True:
out = merge_volumes(normalize_one_volume(flair), normalize_one_volume(t2), normalize_one_volume(t1ce),
normalize_one_volume(t1))
else:
out = merge_volumes(flair, t2, t1ce, t1)
if flip == 1:
out = out[:, ::-1, :, :]
elif flip == 2:
out = out[:, :, ::-1, :]
elif flip == 3:
out = out[:, :, :, ::-1]
elif flip == 4:
out = out[:, :, ::-1, ::-1]
elif flip == 5:
out = out[:, ::-1, ::-1, ::-1]
return np.expand_dims(out, axis=0)
import sys
import numpy as np
sys.path.remove('/home/sentic/.local/lib/python3.6/site-packages')
import torch
torch.backends.cudnn.benchmark=True
device_id = 0
torch.cuda.set_device(device_id)
root = "/home/sentic/MICCAI/data/train/"
use_gpu = True
n_epochs = 300
batch_size = 1
use_amp = False
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import torch.optim as optim
import torch.optim.lr_scheduler as lr_scheduler
from torch.optim.lr_scheduler import LambdaLR
from torch.utils.data import DataLoader
from model import LargeCascadedModel
from dataset import BraTS
from losses import DiceLoss, DiceLossLoss
from tqdm import tqdm_notebook, tqdm
paths_model = ["./checkpoints/checkpoint_190.pt", "./checkpoints/checkpoint_191.pt", "./checkpoints/checkpoint_192.pt",
"./checkpoints/checkpoint_193.pt", "./checkpoints/checkpoint_194.pt", "./checkpoints/checkpoint_195.pt",
"./checkpoints/checkpoint_196.pt", "./checkpoints/checkpoint_197.pt", "./checkpoints/checkpoint_198.pt",
"./checkpoints/checkpoint_199.pt"]
diceLoss = DiceLossLoss()
intresting_patients = ['BraTS20_Validation_067', 'BraTS20_Validation_068', 'BraTS20_Validation_069', 'BraTS20_Validation_072',
'BraTS20_Validation_083', 'BraTS20_Validation_077', 'BraTS20_Validation_076', 'BraTS20_Validation_074',
'BraTS20_Validation_085', 'BraTS20_Validation_087', 'BraTS20_Validation_088', 'BraTS20_Validation_089',
'BraTS20_Validation_091', 'BraTS20_Validation_092', 'BraTS20_Validation_099', 'BraTS20_Validation_103']
patients_path = "/home/sentic/MICCAI/data/train/"
for patient_name in tqdm(os.listdir(patients_path)):
if patient_name.startswith('BraTS'):
output = np.zeros((3, 155, 240, 240))
for path_model in paths_model:
model = LargeCascadedModel(inplanes_encoder_1=4, channels_encoder_1=16, num_classes_1=3,
inplanes_encoder_2=7, channels_encoder_2=32, num_classes_2=3)
model.load_state_dict(torch.load(path_model, map_location='cuda:0')['state_dict'])
if use_gpu:
model = model.to("cuda")
model.eval()
with torch.no_grad():
for flip in range(0, 6):
volume = get_volume(patients_path, patient_name, 160, 240, 240, True, flip)
volume = torch.FloatTensor(volume.copy())
if use_gpu:
volume = volume.to("cuda")
_, _, decoded_region3, _ = model(volume)
decoded_region3 = decoded_region3.detach().cpu().numpy()
decoded_region3 = decoded_region3.squeeze()
if flip == 1:
decoded_region3 = decoded_region3[:, ::-1, :, :]
elif flip == 2:
decoded_region3 = decoded_region3[:, :, ::-1, :]
elif flip == 3:
decoded_region3 = decoded_region3[:, :, :, ::-1]
elif flip == 4:
decoded_region3 = decoded_region3[:, :, ::-1, ::-1]
elif flip == 5:
decoded_region3 = decoded_region3[:, ::-1, ::-1, ::-1]
output += decoded_region3[:, :155, :, :]
np_array = output
np_array = np_array / (6.0 * 10.0)  # average over 6 flips x 10 model checkpoints
np.save("./val_masks_np/" + patient_name + ".np", np_array)
import os
import numpy as np
import SimpleITK as sitk
threshold_wt = 0.7
threshold_tc = 0.6
threshold_et = 0.7
low_threshold_et = 0.6
threshold_num_pixels_et = 150
patients_path = "/home/sentic/MICCAI/data/train/"
intresting_patients = ['BraTS20_Validation_067', 'BraTS20_Validation_068', 'BraTS20_Validation_069', 'BraTS20_Validation_072',
'BraTS20_Validation_083', 'BraTS20_Validation_077', 'BraTS20_Validation_076', 'BraTS20_Validation_074',
'BraTS20_Validation_085', 'BraTS20_Validation_087', 'BraTS20_Validation_088', 'BraTS20_Validation_089',
'BraTS20_Validation_091', 'BraTS20_Validation_092', 'BraTS20_Validation_099', 'BraTS20_Validation_103']
for patient_name in os.listdir(patients_path):
if patient_name.startswith('BraTS'):
path_big_volume = os.path.join(patients_path, patient_name, patient_name + "_flair.nii.gz")
np_array = np.load("./val_masks_np/" + patient_name + ".np.npy")
image = sitk.ReadImage(path_big_volume)
direction = image.GetDirection()
spacing = image.GetSpacing()
origin = image.GetOrigin()
seg_image = np.zeros((155, 240, 240))
label_1 = np_array[2, :, :, :] # where the enhanced tumor is
location_pixels_et = np.where(label_1 >= threshold_et)
num_pixels_et = location_pixels_et[0].shape[0]
label_2 = np_array[1, :, :, :] # location of labels 1 and 4
label_3 = np_array[0, :, :, :] # location of labels 1 + 2 + 4
if patient_name in intresting_patients:
print(patient_name, "--->", num_pixels_et)
else:
print(patient_name, "***->", num_pixels_et)
if num_pixels_et > threshold_num_pixels_et: # if there are at least num of pixels
label_1[label_1 >= threshold_et] = 1 # put them in et category
else:
label_1[label_1 >= threshold_et] = 0 # don't put them
label_2[location_pixels_et] = 1 # but put them on tumor core
label_2[(label_1 < threshold_et) & (label_1 >= low_threshold_et)] = 1
label_1[label_1 < threshold_et] = 0
location_1 = np.where(label_1 != 0)
seg_image[location_1] = 4
label_2[label_2 >= threshold_tc] = 1
label_2[label_2 < threshold_tc] = 0
location_2 = np.where((label_2 != 0) & (label_1 == 0))
seg_image[location_2] = 1
label_3[label_3 >= threshold_wt] = 1
label_3[label_3 < threshold_wt] = 0
location_3 = np.where((label_3 != 0) & (label_2 == 0))
seg_image[location_3] = 2
out_image = sitk.GetImageFromArray(seg_image)
out_image.SetDirection(direction)
out_image.SetSpacing(spacing)
out_image.SetOrigin(origin)
sitk.WriteImage(out_image, os.path.join("./final_masks", patient_name + ".nii.gz"))
```
<h1> Getting started with TensorFlow </h1>
In this notebook, you play around with the TensorFlow Python API.
```
import tensorflow as tf
import numpy as np
print(tf.__version__)
```
<h2> Adding two tensors </h2>
First, let's try doing this using numpy, the Python numeric package. numpy code is immediately evaluated.
```
a = np.array([5, 3, 8])
b = np.array([3, -1, 2])
c = np.add(a, b)
print(c)
```
The equivalent code in TensorFlow consists of two steps:
<p>
<h3> Step 1: Build the graph </h3>
```
a = tf.constant([5, 3, 8])
b = tf.constant([3, -1, 2])
c = tf.add(a, b)
print(c)
```
c is an Op ("Add") that returns a tensor of shape (3,) and holds int32. The shape is inferred from the computation graph.
Try the following in the cell above:
<ol>
<li> Change the 5 to 5.0, and similarly the other five numbers. What happens when you run this cell? </li>
<li> Add an extra number to a, but leave b at the original (3,) shape. What happens when you run this cell? </li>
<li> Change the code back to a version that works </li>
</ol>
<p/>
<h3> Step 2: Run the graph </h3>
```
with tf.Session() as sess:
result = sess.run(c)
print(result)
```
<h2> Using a feed_dict </h2>
Same graph, but without hardcoding inputs at build stage
```
a = tf.placeholder(dtype=tf.int32, shape=(None,)) # batchsize x scalar
b = tf.placeholder(dtype=tf.int32, shape=(None,))
c = tf.add(a, b)
with tf.Session() as sess:
result = sess.run(c, feed_dict={
a: [3, 4, 5],
b: [-1, 2, 3]
})
print(result)
```
<h2> Heron's Formula in TensorFlow </h2>
The area of triangle whose three sides are $(a, b, c)$ is $\sqrt{s(s-a)(s-b)(s-c)}$ where $s=\frac{a+b+c}{2}$
Look up the available operations at https://www.tensorflow.org/api_docs/python/tf
```
def compute_area(sides):
# slice the input to get the sides
a = sides[:,0] # 5.0, 2.3
b = sides[:,1] # 3.0, 4.1
c = sides[:,2] # 7.1, 4.8
# Heron's formula
s = (a + b + c) * 0.5 # (a + b) is a short-cut to tf.add(a, b)
areasq = s * (s - a) * (s - b) * (s - c) # (a * b) is a short-cut to tf.multiply(a, b), not tf.matmul(a, b)
return tf.sqrt(areasq)
with tf.Session() as sess:
# pass in two triangles
area = compute_area(tf.constant([
[5.0, 3.0, 7.1],
[2.3, 4.1, 4.8]
]))
result = sess.run(area)
print(result)
```
<h2> Placeholder and feed_dict </h2>
More common is to define the input to a program as a placeholder and then to feed in the inputs. The difference between the code below and the code above is whether the "area" graph is coded up with the input values or whether the "area" graph is coded up with a placeholder through which inputs will be passed in at run-time.
```
with tf.Session() as sess:
sides = tf.placeholder(tf.float32, shape=(None, 3)) # batchsize number of triangles, 3 sides
area = compute_area(sides)
result = sess.run(area, feed_dict = {
sides: [
[5.0, 3.0, 7.1],
[2.3, 4.1, 4.8]
]
})
print(result)
```
## tf.eager
tf.eager allows you to avoid the build-then-run stages. However, most production code will follow the lazy evaluation paradigm because the lazy evaluation paradigm is what allows for multi-device support and distribution.
<p>
One thing you could do is to develop using tf.eager and then comment out the eager execution and add in the session management code.
<b>You may need to restart the session to try this out.</b>
```
import tensorflow as tf
tf.enable_eager_execution()
def compute_area(sides):
# slice the input to get the sides
a = sides[:,0] # 5.0, 2.3
b = sides[:,1] # 3.0, 4.1
c = sides[:,2] # 7.1, 4.8
# Heron's formula
s = (a + b + c) * 0.5 # (a + b) is a short-cut to tf.add(a, b)
areasq = s * (s - a) * (s - b) * (s - c) # (a * b) is a short-cut to tf.multiply(a, b), not tf.matmul(a, b)
return tf.sqrt(areasq)
area = compute_area(tf.constant([
[5.0, 3.0, 7.1],
[2.3, 4.1, 4.8]
]))
print(area)
```
## Challenge Exercise
Use TensorFlow to find the roots of a fourth-degree polynomial using [Halley's Method](https://en.wikipedia.org/wiki/Halley%27s_method). The five coefficients (i.e. $a_0$ to $a_4$) of
<p>
$f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4$
<p>
will be fed into the program, as will the initial guess $x_0$. Your program will start from that initial guess and then iterate one step using the formula:
<img src="https://wikimedia.org/api/rest_v1/media/math/render/svg/142614c0378a1d61cb623c1352bf85b6b7bc4397" />
<p>
If you got the above easily, try iterating indefinitely until the change between $x_n$ and $x_{n+1}$ is less than some specified tolerance. Hint: Use [tf.while_loop](https://www.tensorflow.org/api_docs/python/tf/while_loop)
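The graded solution should use TensorFlow ops, but the Halley update itself, $x_{n+1} = x_n - \frac{2 f(x_n) f'(x_n)}{2 f'(x_n)^2 - f(x_n) f''(x_n)}$, can be sanity-checked in plain NumPy first (a sketch on a made-up quartic, not the exercise solution):

```python
import numpy as np
from numpy.polynomial import Polynomial

def halley_step(coeffs, x):
    # one Halley update for f(x) = a0 + a1*x + ... + a4*x**4
    p = Polynomial(coeffs)
    f, df, ddf = p(x), p.deriv(1)(x), p.deriv(2)(x)
    return x - (2 * f * df) / (2 * df**2 - f * ddf)

x = 3.0                                    # initial guess x0
for _ in range(20):
    x = halley_step([-16, 0, 0, 0, 1], x)  # f(x) = x**4 - 16, root at x = 2
```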
Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
### Linear SCM simulations with variance shift noise interventions in Section 5.2.2
Variance shift instead of mean shift.
| Sim Num | name | better estimator | baseline |
| :-----------: | :--------------------------------|:----------------:| :-------:|
| (viii) | Single source anti-causal DA without Y interv + variance shift | DIP-std+ | DIP |
| (ix) | Multiple source anti-causal DA with Y interv + variance shift | CIRMweigh-std+ | OLSPool |
```
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
plt.rcParams['axes.facecolor'] = 'lightgray'
np.set_printoptions(precision=3)
sns.set(style="darkgrid")
```
#### Helper functions
```
def boxplot_all_methods(plt_handle, res_all, title='', names=[], colors=[], ylim_option=0):
res_all_df = pd.DataFrame(res_all.T)
res_all_df.columns = names
res_all_df_melt = res_all_df.melt(var_name='methods', value_name='MSE')
res_all_mean = np.mean(res_all, axis=1)
plt_handle.set_title(title, fontsize=20)
plt_handle.axhline(res_all_mean[1], ls='--', color='b')
plt_handle.axhline(res_all_mean[0], ls='--', color='r')
ax = sns.boxplot(x="methods", y="MSE", data=res_all_df_melt,
palette=colors,
ax=plt_handle)
ax.set_xticklabels(ax.get_xticklabels(), rotation=-70, ha='left', fontsize=20)
ax.tick_params(labelsize=20)
# ax.yaxis.set_major_formatter(matplotlib.ticker.FormatStrFormatter('%.2f'))
ax.yaxis.grid(False) # Hide the horizontal gridlines
ax.xaxis.grid(True) # Show the vertical gridlines
# ax.xaxis.set_visible(False)
ax.set_xlabel("")
ax.set_ylabel("MSE", fontsize=20)
if ylim_option == 1:
lower_ylim = res_all_mean[0] - (res_all_mean[1] - res_all_mean[0]) *0.3
# upper_ylim = max(res_all_mean[1] + (res_all_mean[1] - res_all_mean[0]) *0.3, res_all_mean[0]*1.2)
upper_ylim = res_all_mean[1] + (res_all_mean[1] - res_all_mean[0]) *0.3
# get the boxes that are outside of the plot
outside_index = np.where(res_all_mean > upper_ylim)[0]
for oindex in outside_index:
ax.annotate("box\nbeyond\ny limit", xy=(oindex - 0.3, upper_ylim - (upper_ylim-lower_ylim)*0.15 ), fontsize=15)
plt_handle.set_ylim(lower_ylim, upper_ylim)
def scatterplot_two_methods(plt_handle, res_all, index1, index2, names, colors=[], title="", ylimmax = -1):
plt_handle.scatter(res_all[index1], res_all[index2], alpha=1.0, marker='+', c = np.array(colors[index2]).reshape(1, -1), s=100)
plt_handle.set_xlabel(names[index1], fontsize=20)
plt_handle.set_ylabel(names[index2], fontsize=20)
plt_handle.tick_params(labelsize=20)
#
if ylimmax <= 0:
# set ylim automatically
# ylimmax = np.max((np.max(res_all[index1]), np.max(res_all[index2])))
ylimmax = np.percentile(np.concatenate((res_all[index1], res_all[index2])), 90)
print(ylimmax)
plt_handle.plot([0, ylimmax],[0, ylimmax], 'k--', alpha=0.5)
# plt.axis('equal')
plt_handle.set_xlim(0.0, ylimmax)
plt_handle.set_ylim(0.0, ylimmax)
plt_handle.set_title(title, fontsize=20)
```
#### 8. Single source anti-causal DA without Y interv + variance shift - boxplots
Boxplots showing that DIP-std+ and DIP-MMD work
```
names_short = ["OLSTar", "OLSSrc[1]", "DIP[1]-mean", "DIP[1]-std+", "DIP[1]-MMD"]
COLOR_PALETTE1 = sns.color_palette("Set1", 9, desat=1.)
COLOR_PALETTE2 = sns.color_palette("Set1", 9, desat=.7)
COLOR_PALETTE3 = sns.color_palette("Set1", 9, desat=.5)
COLOR_PALETTE4 = sns.color_palette("Set1", 9, desat=.3)
# this corresponds to the methods in names_short
COLOR_PALETTE = [COLOR_PALETTE2[0], COLOR_PALETTE2[1], COLOR_PALETTE2[3], COLOR_PALETTE2[3], COLOR_PALETTE3[3], COLOR_PALETTE4[3]]
sns.palplot(COLOR_PALETTE)
interv_type = 'sv1'
M = 2
lamL2 = 0.
lamL1 = 0.
epochs = 20000
lr = 1e-4
n = 5000
prefix_template_exp8_box = "simu_results/sim_exp8_box_r0%sd31020_%s_lamMatch%s_n%d_epochs%d_repeats%d"
save_dir = 'paper_figures'
nb_ba = 4
results_src_ba = np.zeros((3, M-1, nb_ba, 2, 10))
results_tar_ba = np.zeros((3, 1, nb_ba, 2, 10))
savefilename_prefix = prefix_template_exp8_box %(interv_type, 'baseline', 1.,
n, epochs, 10)
res_all_ba = np.load("%s.npy" %savefilename_prefix, allow_pickle=True)
for i in range(3):
for j in range(1):
results_src_ba[i, :], results_tar_ba[i, :] = res_all_ba.item()[i, j]
lamMatches = [10.**(k) for k in (np.arange(10)-5)]
print(lamMatches)
nb_damean = 2 # DIP, DIPOracle
results_src_damean = np.zeros((3, len(lamMatches), M-1, nb_damean, 2, 10))
results_tar_damean = np.zeros((3, len(lamMatches), 1, nb_damean, 2, 10))
for k, lam in enumerate(lamMatches):
savefilename_prefix = prefix_template_exp8_box %(interv_type, 'DAmean', lam,
n, epochs, 10)
res_all_damean = np.load("%s.npy" %savefilename_prefix, allow_pickle=True)
for i in range(3):
for j in range(1):
results_src_damean[i, k, :], results_tar_damean[i, k, 0, :] = res_all_damean.item()[i, j]
nb_dastd = 2 # DIP-std, DIP-std+
results_src_dastd = np.zeros((3, len(lamMatches), M-1, nb_dastd, 2, 10))
results_tar_dastd = np.zeros((3, len(lamMatches), 1, nb_dastd, 2, 10))
for k, lam in enumerate(lamMatches):
savefilename_prefix = prefix_template_exp8_box %(interv_type, 'DAstd', lam,
n, epochs, 10)
res_all_dastd = np.load("%s.npy" %savefilename_prefix, allow_pickle=True)
for i in range(3):
for j in range(1):
# run methods on data generated from sem
results_src_dastd[i, k, :], results_tar_dastd[i, k, 0, :] = res_all_dastd.item()[i, j]
nb_dammd = 1 # DIP-MMD
results_src_dammd = np.zeros((3, len(lamMatches), M-1, nb_dammd, 2, 10))
results_tar_dammd = np.zeros((3, len(lamMatches), 1, nb_dammd, 2, 10))
for k, lam in enumerate(lamMatches):
savefilename_prefix = prefix_template_exp8_box %(interv_type, 'DAMMD', lam,
n, 2000, 10)
res_all_dammd = np.load("%s.npy" %savefilename_prefix, allow_pickle=True)
for i in range(3):
for j in range(1):
# run methods on data generated from sem
results_src_dammd[i, k, :], results_tar_dammd[i, k, 0, :] = res_all_dammd.item()[i, j]
# now add the methods
results_tar_plot = {}
lamMatchIndex = 6
print("Fixed lambda choice: lambda = ", lamMatches[lamMatchIndex])
for i in range(3):
results_tar_plot[i] = np.concatenate((results_tar_ba[i, 0, :2, 0, :],
results_tar_damean[i, lamMatchIndex, 0, 0, 0, :].reshape(1, -1),
results_tar_dastd[i, lamMatchIndex, 0, 1, 0, :].reshape(1, -1),
results_tar_dammd[i, lamMatchIndex, 0, 0, 0, :].reshape(1, -1)), axis=0)
ds = [3, 10, 20]
fig, axs = plt.subplots(1, 3, figsize=(20,5))
for i in range(3):
boxplot_all_methods(axs[i], results_tar_plot[i],
title="linear SCM: d=%d" %(ds[i]), names=names_short, colors=COLOR_PALETTE[:len(names_short)])
plt.subplots_adjust(top=0.9, bottom=0.1, left=0.1, right=0.9, hspace=0.6,
wspace=0.2)
plt.savefig("%s/sim_6_2_exp_%s.pdf" %(save_dir, interv_type), bbox_inches="tight")
plt.show()
fig, axs = plt.subplots(1, 1, figsize=(5,5))
boxplot_all_methods(axs, results_tar_plot[1],
title="linear SCM: d=%d" %(10), names=names_short, colors=COLOR_PALETTE[:len(names_short)])
plt.subplots_adjust(top=0.9, bottom=0.1, left=0.1, right=0.9, hspace=0.6,
wspace=0.2)
plt.savefig("%s/sim_6_2_exp_%s_single10.pdf" %(save_dir, interv_type), bbox_inches="tight")
plt.show()
```
#### 8. Single source anti-causal DA without Y interv + variance shift - scatterplots
Scatterplots showing that DIP-std+ and DIP-MMD work
```
interv_type = 'sv1'
M = 2
lamL2 = 0.
lamL1 = 0.
epochs = 20000
lr = 1e-4
n = 5000
prefix_template_exp8_scat = "simu_results/sim_exp8_scat_r0%sd10_%s_lamMatch%s_n%d_epochs%d_seed%d"
nb_ba = 4
repeats = 100
results_scat_src_ba = np.zeros((M-1, nb_ba, 2, repeats))
results_scat_tar_ba = np.zeros((1, nb_ba, 2, repeats))
for myseed in range(100):
savefilename_prefix = prefix_template_exp8_scat %(interv_type, 'baseline', 1.,
n, epochs, myseed)
res_all_ba = np.load("%s.npy" %savefilename_prefix, allow_pickle=True)
results_scat_src_ba[0, :, :, myseed] = res_all_ba.item()['src'][:, :, 0]
results_scat_tar_ba[0, :, :, myseed] = res_all_ba.item()['tar'][:, :, 0]
lamMatches = [10.**(k) for k in (np.arange(10)-5)]
nb_damean = 2 # DIP, DIPOracle
results_scat_src_damean = np.zeros((len(lamMatches), M-1, nb_damean, 2, 100))
results_scat_tar_damean = np.zeros((len(lamMatches), 1, nb_damean, 2, 100))
for myseed in range(100):
for k, lam in enumerate(lamMatches):
savefilename_prefix = prefix_template_exp8_scat %(interv_type, 'DAmean', lam,
n, epochs, myseed)
res_all_damean = np.load("%s.npy" %savefilename_prefix, allow_pickle=True)
results_scat_src_damean[k, 0, :, :, myseed] = res_all_damean.item()['src'][:, :, 0]
results_scat_tar_damean[k, 0, :, :, myseed] = res_all_damean.item()['tar'][:, :, 0]
nb_dastd = 2 # DIP-std, DIP-std+
results_scat_src_dastd = np.zeros((len(lamMatches), M-1, nb_dastd, 2, 100))
results_scat_tar_dastd = np.zeros((len(lamMatches), 1, nb_dastd, 2, 100))
for myseed in range(100):
for k, lam in enumerate(lamMatches):
savefilename_prefix = prefix_template_exp8_scat %(interv_type, 'DAstd', lam,
n, epochs, myseed)
res_all_dastd = np.load("%s.npy" %savefilename_prefix, allow_pickle=True)
results_scat_src_dastd[k, 0, :, :, myseed] = res_all_dastd.item()['src'][:, :, 0]
results_scat_tar_dastd[k, 0, :, :, myseed] = res_all_dastd.item()['tar'][:, :, 0]
nb_dammd = 1 # DIP-MMD
results_scat_src_dammd = np.zeros((len(lamMatches), M-1, nb_dammd, 2, 100))
results_scat_tar_dammd = np.zeros((len(lamMatches), 1, nb_dammd, 2, 100))
for myseed in range(100):
for k, lam in enumerate(lamMatches):
savefilename_prefix = prefix_template_exp8_scat %(interv_type, 'DAMMD', lam,
n, 2000, myseed)
res_all_dammd = np.load("%s.npy" %savefilename_prefix, allow_pickle=True)
results_scat_src_dammd[k, 0, :, :, myseed] = res_all_dammd.item()['src'][:, :, 0]
results_scat_tar_dammd[k, 0, :, :, myseed] = res_all_dammd.item()['tar'][:, :, 0]
# now add the methods
results_tar_plot = {}
lamMatchIndex = 6
print("Fix lambda choice: lambda = ", lamMatches[lamMatchIndex])
results_scat_tar_plot = np.concatenate((results_scat_tar_ba[0, :2, 0, :],
results_scat_tar_damean[lamMatchIndex, 0, 0, 0, :].reshape(1, -1),
results_scat_tar_dastd[lamMatchIndex, 0, 0, 0, :].reshape(1, -1),
results_scat_tar_dammd[lamMatchIndex, 0, 0, 0, :].reshape(1, -1)), axis=0)
for index1, name1 in enumerate(names_short):
for index2, name2 in enumerate(names_short):
if index2 > index1:
fig, axs = plt.subplots(1, 1, figsize=(5,5))
nb_below_diag = np.sum(results_scat_tar_plot[index1, :] >= results_scat_tar_plot[index2, :])
scatterplot_two_methods(axs, results_scat_tar_plot, index1, index2, names_short, COLOR_PALETTE[:len(names_short)],
title="%d%% of pts below the diagonal" % (nb_below_diag),
ylimmax = -1)
plt.savefig("%s/sim_6_2_exp_%s_single_repeat_%s_vs_%s.pdf" %(save_dir, interv_type, name1, name2), bbox_inches="tight")
plt.show()
```
#### 9 Multiple source anti-causal DA with Y interv + variance shift - scatterplots
Scatterplots showing that CIRM-std+ and CIRM-MMD work
```
interv_type = 'smv1'
M = 15
lamL2 = 0.
lamL1 = 0.
epochs = 20000
lr = 1e-4
n = 5000
prefix_template_exp9 = "simu_results/sim_exp9_scat_r0%sd20x4_%s_lamMatch%s_lamCIP%s_n%d_epochs%d_seed%d"
nb_ba = 4 # OLSTar, SrcPool, OLSTar, SrcPool
results9_src_ba = np.zeros((M-1, nb_ba, 2, 100))
results9_tar_ba = np.zeros((1, nb_ba, 2, 100))
for myseed in range(100):
savefilename_prefix = prefix_template_exp9 %(interv_type, 'baseline',
1., 0.1, n, epochs, myseed)
res_all_ba = np.load("%s.npy" %savefilename_prefix, allow_pickle=True)
results9_src_ba[0, :, :, myseed] = res_all_ba.item()['src'][0, :, 0]
results9_tar_ba[0, :, :, myseed] = res_all_ba.item()['tar'][:, :, 0]
lamMatches = [10.**(k) for k in (np.arange(10)-5)]
nb_damean = 5 # DIP, DIPOracle, DIPweigh, CIP, CIRMweigh
results9_src_damean = np.zeros((len(lamMatches), M-1, nb_damean, 2, 100))-1
results9_tar_damean = np.zeros((len(lamMatches), 1, nb_damean, 2, 100))-1
for myseed in range(100):
for k, lam in enumerate(lamMatches):
savefilename_prefix = prefix_template_exp9 %(interv_type, 'DAmean',
lam, 0.1, 5000, epochs, myseed)
res_all_damean = np.load("%s.npy" %savefilename_prefix, allow_pickle=True)
results9_src_damean[k, :, :, :, myseed] = res_all_damean.item()['src'][:, :, :, 0]
results9_tar_damean[k, 0, :, :, myseed] = res_all_damean.item()['tar'][:, :, 0]
nb_dastd = 4 # DIP-std+, DIPweigh-std+, CIP-std+, CIRMweigh-std+
results9_src_dastd = np.zeros((len(lamMatches), M-1, nb_dastd, 2, 100))-1
results9_tar_dastd = np.zeros((len(lamMatches), 1, nb_dastd, 2, 100))-1
for myseed in range(100):
for k, lam in enumerate(lamMatches):
savefilename_prefix = prefix_template_exp9 %(interv_type, 'DAstd',
lam, 0.1, 5000, epochs, myseed)
res_all_dastd = np.load("%s.npy" %savefilename_prefix, allow_pickle=True)
results9_src_dastd[k, :, :, :, myseed] = res_all_dastd.item()['src'][:, :, :, 0]
results9_tar_dastd[k, 0, :, :, myseed] = res_all_dastd.item()['tar'][:, :, 0]
nb_dammd = 4 # DIP-MMD, DIPweigh-MMD, CIP-MMD, CIRMweigh-MMD
results9_src_dammd = np.zeros((len(lamMatches), M-1, nb_dammd, 2, 100))-1
results9_tar_dammd = np.zeros((len(lamMatches), 1, nb_dammd, 2, 100))-1
for myseed in range(100):
for k, lam in enumerate(lamMatches):
savefilename_prefix = prefix_template_exp9 %(interv_type, 'DAMMD',
lam, 0.1, 5000, 2000, myseed)
res_all_dammd = np.load("%s.npy" %savefilename_prefix, allow_pickle=True)
results9_src_dammd[k, :, :, :, myseed] = res_all_dammd.item()['src'][:, :, :, 0]
results9_tar_dammd[k, 0, :, :, myseed] = res_all_dammd.item()['tar'][:, :, 0]
# now add the methods
names_short = ["Tar", "SrcPool", "CIRMweigh-mean", "CIRMweigh-std+", "DIPweigh-MMD", "CIRMweigh-MMD"]
COLOR_PALETTE1 = sns.color_palette("Set1", 9, desat=1.)
COLOR_PALETTE = [COLOR_PALETTE1[k] for k in [0, 1, 2, 3, 4, 7, 6]]
COLOR_PALETTE = [COLOR_PALETTE[k] for k in [0, 1, 6, 6, 4, 6]]
sns.palplot(COLOR_PALETTE)
lamMatchIndex = 6
print("Fix lambda choice: lambda = ", lamMatches[lamMatchIndex])
results9_tar_plot = np.concatenate((results9_tar_ba[0, :2, 0, :],
results9_tar_damean[lamMatchIndex, 0, 4, 0, :].reshape(1, -1),
results9_tar_dastd[lamMatchIndex, 0, 3, 0, :].reshape(1, -1),
results9_tar_dammd[lamMatchIndex, 0, 1, 0, :].reshape(1, -1),
results9_tar_dammd[lamMatchIndex, 0, 3, 0, :].reshape(1, -1)), axis=0)
for index1, name1 in enumerate(names_short):
for index2, name2 in enumerate(names_short):
if index2 > index1:
fig, axs = plt.subplots(1, 1, figsize=(5,5))
nb_below_diag = np.sum(results9_tar_plot[index1, :] >= results9_tar_plot[index2, :])
scatterplot_two_methods(axs, results9_tar_plot, index1, index2, names_short, COLOR_PALETTE[:len(names_short)],
title="%d%% of pts below the diagonal" % (nb_below_diag),
ylimmax = -1)
plt.savefig("%s/sim_6_2_exp_%s_y_shift_single_repeat_%s_vs_%s.pdf" %(save_dir, interv_type, name1, name2), bbox_inches="tight")
plt.show()
```
| github_jupyter |
# General Imports
!! IMPORTANT !!
If you did NOT install opengrid with pip,
make sure the path to the opengrid folder is added to your PYTHONPATH
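If you prefer doing this from within the notebook itself, a minimal sketch is shown below; the `~/opengrid` path is just an assumed checkout location, so adjust it to wherever you cloned the repository:

```python
import os
import sys

# hypothetical checkout location: adjust to wherever you cloned opengrid
opengrid_path = os.path.expanduser('~/opengrid')
if opengrid_path not in sys.path:
    # prepend so this checkout wins over any installed version
    sys.path.insert(0, opengrid_path)
```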
```
import os
import inspect
import sys
import pandas as pd
import charts
from opengrid_dev.library import houseprint
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 16,8
```
## Houseprint
```
hp = houseprint.Houseprint()
# for testing:
# hp = houseprint.Houseprint(spreadsheet='unit and integration test houseprint')
hp
hp.sites[:5]
hp.get_devices()[:4]
hp.get_sensors('water')[:3]
```
A Houseprint object can be saved as a pickle. It loses its tmpo session, however, as open connections cannot be pickled.
```
hp.save('new_houseprint.pkl')
hp = houseprint.load_houseprint_from_file('new_houseprint.pkl')
```
### TMPO
The houseprint, sites, devices and sensors all have a `get_data` method. In order to get these working for the Flukso sensors, the houseprint creates a tmpo session.
```
hp.init_tmpo()
hp._tmpos.debug = False
hp.sync_tmpos()
```
## Lookup sites, devices, sensors based on key
These methods return a single object
```
hp.find_site(1)
hp.find_device('FL03001562')
sensor = hp.find_sensor('d5a747b86224834f745f4c9775d70241')
print(sensor.site)
print(sensor.unit)
```
## Lookup sites, devices, sensors based on search criteria
These methods return a list with objects satisfying the criteria
```
hp.search_sites(inhabitants=5)
hp.search_sensors(type='electricity', direction='Import')
```
### Get Data
```
hp.sync_tmpos()
head = pd.Timestamp('20150101')
tail = pd.Timestamp('20160101')
df = hp.get_data(sensortype='water', head=head, tail=tail, diff=True, resample='min', unit='l/min')
#charts.plot(df, stock=True, show='inline')
df.info()
```
## Site
```
site = hp.find_site(1)
site
print(site.size)
print(site.inhabitants)
print(site.postcode)
print(site.construction_year)
print(site.k_level)
print(site.e_level)
print(site.epc_cert)
site.devices
site.get_sensors('electricity')
head = pd.Timestamp('20150617')
tail = pd.Timestamp('20150628')
df=site.get_data(sensortype='electricity', head=head,tail=tail, diff=True, unit='kW')
charts.plot(df, stock=True, show='inline')
```
## Device
```
device = hp.find_device('FL03001552')
device
device.key
device.get_sensors('gas')
head = pd.Timestamp('20151101')
tail = pd.Timestamp('20151104')
df = hp.get_data(sensortype='gas', head=head,tail=tail, diff=True, unit='kW')
charts.plot(df, stock=True, show='inline')
```
## Sensor
```
sensor = hp.find_sensor('53b1eb0479c83dee927fff10b0cb0fe6')
sensor
sensor.key
sensor.type
sensor.description
sensor.system
sensor.unit
head = pd.Timestamp('20150617')
tail = pd.Timestamp('20150618')
df=sensor.get_data(head,tail,diff=True, unit='W')
charts.plot(df, stock=True, show='inline')
```
## Getting data for a selection of sensors
```
sensors = hp.search_sensors(type='electricity', system='solar')
print(sensors)
df = hp.get_data(sensors=sensors, head=head, tail=tail, diff=True, unit='W')
charts.plot(df, stock=True, show='inline')
```
## Dynamically loading data sensor per sensor
A call to `hp.get_data()` is eager: it creates a big list of pandas Series (one per sensor) and concatenates them. This can take a while, especially when you need many sensors and a large time span.
Often you don't use the big DataFrame as a whole; rather, you re-divide it with a for loop and look at each sensor individually.
By using `hp.get_data_dynamic()`, data is fetched from tmpo one sensor at a time, just when you need it.
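The mechanism can be sketched with a plain Python generator. This is only an illustration of the pattern, not opengrid code; the `fetch` callable is a hypothetical stand-in for the actual tmpo lookup:

```python
def get_data_lazily(sensor_ids, fetch):
    """Yield each sensor's series only when the caller asks for it."""
    for sid in sensor_ids:
        yield fetch(sid)

# fake data store standing in for tmpo
fake_store = {'sensor_a': [1.0, 2.0], 'sensor_b': [3.0, 4.0]}
gen = get_data_lazily(['sensor_a', 'sensor_b'], fake_store.__getitem__)
first = next(gen)  # at this point only 'sensor_a' has been fetched
```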
```
dyn_data = hp.get_data_dynamic(sensortype='electricity', head=head, tail=tail)
ts = next(dyn_data)
df = pd.DataFrame(ts)
charts.plot(df, stock=True, show='inline')
```
You can run the cell above multiple times; each time the next sensor will be fetched.
| github_jupyter |
# Transfer Learning
In this notebook, you'll learn how to use pre-trained networks to solve challenging problems in computer vision. Specifically, you'll use networks trained on [ImageNet](http://www.image-net.org/) [available from torchvision](http://pytorch.org/docs/0.3.0/torchvision/models.html).
ImageNet is a massive dataset with over 1 million labeled images in 1000 categories. It's used to train deep neural networks built from convolutional layers. I'm not going to get into the details of convolutional networks here, but if you want to learn more about them, please [watch this](https://www.youtube.com/watch?v=2-Ol7ZB0MmU).
Once trained, these models work astonishingly well as feature detectors for images they weren't trained on. Using a pre-trained network on images not in the training set is called transfer learning. Here we'll use transfer learning to train a network that can classify our cat and dog photos with near perfect accuracy.
With `torchvision.models` you can download these pre-trained networks and use them in your applications. We'll include `models` in our imports now.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms, models
```
Most of the pretrained models require the input to be 224x224 images. Also, we'll need to match the normalization used when the models were trained. Each color channel was normalized separately, the means are `[0.485, 0.456, 0.406]` and the standard deviations are `[0.229, 0.224, 0.225]`.
```
data_dir = 'Cat_Dog_data'
# TODO: Define transforms for the training data and testing data
train_transforms = transforms.Compose([
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
test_transforms = transforms.Compose([
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)
trainloader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)
testloader = torch.utils.data.DataLoader(test_data, batch_size=32)
```
We can load in a model such as [DenseNet](http://pytorch.org/docs/0.3.0/torchvision/models.html#id5). Let's print out the model architecture so we can see what's going on.
```
model = models.densenet121(pretrained=True)
model
```
This model is built out of two main parts, the features and the classifier. The features part is a stack of convolutional layers and overall works as a feature detector that can be fed into a classifier. The classifier part is a single fully-connected layer `(classifier): Linear(in_features=1024, out_features=1000)`. This layer was trained on the ImageNet dataset, so it won't work for our specific problem. That means we need to replace the classifier, but the features will work perfectly on their own. In general, I think about pre-trained networks as amazingly good feature detectors that can be used as the input for simple feed-forward classifiers.
```
# Freeze parameters so we don't backprop through them
for param in model.parameters():
param.requires_grad = False
from collections import OrderedDict
classifier = nn.Sequential(OrderedDict([
('fc1', nn.Linear(1024, 500)),
('relu', nn.ReLU()),
('fc2', nn.Linear(500, 2)),
('output', nn.LogSoftmax(dim=1))
]))
model.classifier = classifier
```
With our model built, we need to train the classifier. However, now we're using a **really deep** neural network. If you try to train this on a CPU like normal, it will take a long, long time. Instead, we're going to use the GPU to do the calculations. The linear algebra computations are done in parallel on the GPU leading to 100x increased training speeds. It's also possible to train on multiple GPUs, further decreasing training time.
PyTorch, along with pretty much every other deep learning framework, uses [CUDA](https://developer.nvidia.com/cuda-zone) to efficiently compute the forward and backwards passes on the GPU. In PyTorch, you move your model parameters and other tensors to the GPU memory using `model.to('cuda')`. You can move them back from the GPU with `model.to('cpu')` which you'll commonly do when you need to operate on the network output outside of PyTorch. As a demonstration of the increased speed, I'll compare how long it takes to perform a forward and backward pass with and without a GPU.
```
import time
#for device in ['cpu', 'cuda']:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
criterion = nn.NLLLoss()
# Only train the classifier parameters, feature parameters are frozen
optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)
model.to(device)
for ii, (inputs, labels) in enumerate(trainloader):
# Move input and label tensors to the GPU
inputs, labels = inputs.to(device), labels.to(device)
start = time.time()
outputs = model.forward(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
if ii==3:
break
print(f"Device = {device}; Time per batch: {(time.time() - start)/3:.3f} seconds")
# Test out your network!
import helper
model.eval()
model.to('cpu')  # move the model back to the CPU so it matches the (CPU) test images below
dataiter = iter(testloader)
images, labels = next(dataiter)
img = images[0]
print(images[0].size())
helper.imshow(img)
# Add a batch dimension: the convolutional network expects input of shape (1, 3, 224, 224)
img = img.unsqueeze(0)
# Calculate the class probabilities (softmax) for img
with torch.no_grad():
output = model.forward(img)
ps = torch.exp(output)
# Plot the image and probabilities
helper.view_classify(img.view(3, 224, 224), ps, version='Fashion')
```
You can write device agnostic code which will automatically use CUDA if it's enabled like so:
```python
# at beginning of the script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
...
# then whenever you get a new Tensor or Module
# this won't copy if they are already on the desired device
input = data.to(device)
model = MyModule(...).to(device)
```
From here, I'll let you finish training the model. The process is the same as before except now your model is much more powerful. You should get better than 95% accuracy easily.
>**Exercise:** Train a pretrained model to classify the cat and dog images. Continue with the DenseNet model, or try ResNet; it's also a good model to try out first. Make sure you are only training the classifier and the parameters for the features part are frozen.
```
# TODO: Train a model with a pre-trained network
```
| github_jupyter |
# Demonstration of PET OSEM reconstruction with SIRF
This demonstration shows how to use OSEM as implemented in SIRF. It also suggests some exercises for reconstruction with and without attenuation etc.
The notebook is currently set-up to use prepared data with a single slice of an XCAT phantom, with a low resolution scanner, such that all results can be obtained easily on a laptop. Of course, the code will work exactly the same for any sized data.
Authors: Kris Thielemans and Evgueni Ovtchinnikov
First version: June 2021
CCP SyneRBI Synergistic Image Reconstruction Framework (SIRF).
Copyright 2015 - 2018, 2021 University College London.
This is software developed for the Collaborative Computational Project in Synergistic Reconstruction for Biomedical Imaging (http://www.ccpsynerbi.ac.uk/).
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# What is OSEM?
The following is just a very brief explanation of the concepts behind OSEM.
PET reconstruction is commonly based on the *Maximum Likelihood Estimation (MLE)* principle. The *likelihood* is the probability to observe some measured data given a (known) image. MLE attempts to find the image that maximises this likelihood. This needs to be done iteratively as the system of equations is very non-linear.
A common iterative method uses *Expectation Maximisation*, which we will not explain here. The resulting algorithm is called *MLEM* (or sometimes *EMML*). However, it is rather slow. The most popular method to increase computation speed is to compute every image update based on only a subset of the data. Subsets are nearly always chosen in terms of the "views" (or azimuthal angles). The *Ordered Subsets Expectation Maximisation (OSEM)* cycles through the subsets. More on this in another notebook, but here we just show how to use the SIRF implementation of OSEM.
OSEM is (still) the most common algorithm in use in clinical PET.
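To make the update rule concrete, here is a toy NumPy implementation of the (OS)EM update on a tiny dense system. This is purely illustrative and is not how SIRF/STIR implements it; in particular, the "subsets" here are just row blocks of a matrix rather than views:

```python
import numpy as np

def osem(A, y, n_iter=20, n_subsets=1, eps=1e-12):
    """Toy (OS)EM for Poisson data y ~ Poisson(A @ x), A a dense non-negative matrix."""
    x = np.ones(A.shape[1])                       # uniform initial image
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for rows in subsets:                      # one image update per subset
            A_s = A[rows]
            sens = A_s.sum(axis=0)                # subset "sensitivity": A_s^T 1
            ratio = y[rows] / np.maximum(A_s @ x, eps)
            x = x * (A_s.T @ ratio) / np.maximum(sens, eps)
    return x
```

With `n_subsets=1` this is plain MLEM; larger values cycle through the subsets, which is why OSEM reaches a useful image in far fewer full passes over the data.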
# Initial set-up
```
#%% make sure figures appears inline and animations works
%matplotlib notebook
# Setup the working directory for the notebook
import notebook_setup
from sirf_exercises import cd_to_working_dir
cd_to_working_dir('PET', 'OSEM_reconstruction')
#%% Initial imports etc
import numpy
from numpy.linalg import norm
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import os
import sys
import shutil
#import scipy
#from scipy import optimize
import sirf.STIR as pet
from sirf.Utilities import examples_data_path
from sirf_exercises import exercises_data_path
# define the directory with input files for this notebook
data_path = os.path.join(examples_data_path('PET'), 'thorax_single_slice')
#%% our usual handy function definitions
def plot_2d_image(idx,vol,title,clims=None,cmap="viridis"):
"""Customized version of subplot to plot 2D image"""
plt.subplot(*idx)
plt.imshow(vol,cmap=cmap)
if not clims is None:
plt.clim(clims)
plt.colorbar(shrink=.6)
plt.title(title)
plt.axis("off")
```
## We will first create some simulated data from ground-truth images
see previous notebooks for more information.
```
#%% Read in images
image = pet.ImageData(os.path.join(data_path, 'emission.hv'))
attn_image = pet.ImageData(os.path.join(data_path, 'attenuation.hv'))
#%% display
im_slice = image.dimensions()[0]//2
plt.figure(figsize=(9, 4))
plot_2d_image([1,2,1],image.as_array()[im_slice,:,:,], 'emission image')
plot_2d_image([1,2,2],attn_image.as_array()[im_slice,:,:,], 'attenuation image')
plt.tight_layout()
#%% save max for future displays
cmax = image.max()*.6
#%% create acquisition model
acq_model = pet.AcquisitionModelUsingRayTracingMatrix()
template = pet.AcquisitionData(os.path.join(data_path, 'template_sinogram.hs'))
acq_model.set_up(template, image)
#%% simulate data using forward projection
acquired_data=acq_model.forward(image)
#%% Display bitmaps of a middle sinogram
acquired_data.show(im_slice,title='Forward projection')
```
# Reconstruction via a SIRF reconstruction class
While you can write your own reconstruction algorithm by using `AcquisitionModel` etc (see other notebooks), SIRF provides a few reconstruction classes. We show how to use the OSEM implementation here.
## step 1: create the objective function
In PET, the iterative algorithms in SIRF rely on an objective function (i.e. the function to maximise).
This is normally the Poisson log-likelihood. (We will see later about adding prior information.)
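For reference, up to terms that do not depend on the image, the Poisson log-likelihood being maximised has the form

$$ L(x) = \sum_i \left( y_i \log \bar{y}_i(x) - \bar{y}_i(x) \right), \qquad \bar{y}(x) = A x + b, $$

where $y$ is the acquired data, $A$ the acquisition model and $b$ the background term.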
```
obj_fun = pet.make_Poisson_loglikelihood(acquired_data)
```
We could set the acquisition model explicitly, but the default (ray-tracing) is fine in this case. See below for more information. You could do this as follows.
```
obj_fun.set_acquisition_model(acq_model)
```
We could also add a prior, but we will not do that here (although the rest of the exercise would still work).
## step 2: create OSMAPOSL reconstructor
The `sirf.STIR.OSMAPOSLReconstructor` class implements the *Ordered Subsets Maximum A-Posteriori One Step Late algorithm*. That's quite a mouthful! We will get round to the "OSL" part, which is used to incorporate prior information. However, without a prior, this algorithm is identical to *Ordered Subsets Expectation Maximisation* (OSEM).
```
recon = pet.OSMAPOSLReconstructor()
recon.set_objective_function(obj_fun)
# use 4 subset and 60 image updates. This is not too far from clinical practice.
recon.set_num_subsets(4)
num_subiters=60
recon.set_num_subiterations(num_subiters)
```
## step 3: use this reconstructor!
We first create an initial image. Passing this image automatically gives the dimensions of the output image.
It is common practice to initialise OSEM with a uniform image. Here we use a value which is roughly of the correct scale, although this value doesn't matter too much (see discussion in the OSEM_DIY notebook).
Then we need to set-up the reconstructor. That will do various checks and initial computations.
And then finally we call the `reconstruct` method.
```
#initialisation
initial_image=image.get_uniform_copy(cmax / 4)
recon.set_current_estimate(initial_image)
# set up the reconstructor
recon.set_up(initial_image)
# do actual recon
recon.process()
reconstructed_image=recon.get_output()
```
display of images
```
plt.figure(figsize=(9, 4))
plot_2d_image([1,2,1],image.as_array()[im_slice,:,:,],'ground truth image',[0,cmax*1.2])
plot_2d_image([1,2,2],reconstructed_image.as_array()[im_slice,:,:,],'reconstructed image',[0,cmax*1.2])
plt.tight_layout();
```
## step 4: write to file
You can ask the `OSMAPOSLReconstructor` to write images to file every few sub-iterations, but this is by default disabled. We can however write the image to file from SIRF.
For each "engine" its default file format is used, which for STIR is Interfile.
```
reconstructed_image.write('OSEM_result.hv')
```
You can also use the `write_par` member to specify a STIR parameter file to write in a different file format, but this is out of scope for this exercise.
# Including a more realistic acquisition model
The above steps were appropriate for an acquisition without attenuation etc. This is of course not appropriate for measured data.
Let us use some things we've learned from the [image_creation_and_simulation notebook](image_creation_and_simulation.ipynb). First thing is to create a new acquisition model, then we need to use it to simulate new data, and finally to use it for the reconstruction.
```
# create attenuation
acq_model_for_attn = pet.AcquisitionModelUsingRayTracingMatrix()
asm_attn = pet.AcquisitionSensitivityModel(attn_image, acq_model_for_attn)
asm_attn.set_up(template)
attn_factors = asm_attn.forward(template.get_uniform_copy(1))
asm_attn = pet.AcquisitionSensitivityModel(attn_factors)
# create acquisition model
acq_model = pet.AcquisitionModelUsingRayTracingMatrix()
# we will increase the number of rays used for every Line-of-Response (LOR) as an example
# (it is not required for the exercise of course)
acq_model.set_num_tangential_LORs(5)
acq_model.set_acquisition_sensitivity(asm_attn)
# set-up
acq_model.set_up(template,image)
# simulate data
acquired_data = acq_model.forward(image)
# let's add a background term of a reasonable scale
background_term = acquired_data.get_uniform_copy(acquired_data.max()/10)
acq_model.set_background_term(background_term)
acquired_data = acq_model.forward(image)
# create reconstructor
obj_fun = pet.make_Poisson_loglikelihood(acquired_data)
obj_fun.set_acquisition_model(acq_model)
recon = pet.OSMAPOSLReconstructor()
recon.set_objective_function(obj_fun)
recon.set_num_subsets(4)
recon.set_num_subiterations(60)
# initialisation and reconstruction
recon.set_current_estimate(initial_image)
recon.set_up(initial_image)
recon.process()
reconstructed_image=recon.get_output()
# display
plt.figure(figsize=(9, 4))
plot_2d_image([1,2,1],image.as_array()[im_slice,:,:,],'ground truth image',[0,cmax*1.2])
plot_2d_image([1,2,2],reconstructed_image.as_array()[im_slice,:,:,],'reconstructed image',[0,cmax*1.2])
plt.tight_layout();
```
# Exercise: write a function to do an OSEM reconstruction
The above lines still are quite verbose. So, your task is now to create a function that includes these steps, such that you can avoid writing all those lines all over again.
For this, you need to know a bit about Python, but mostly you can copy-paste lines from above.
Let's make a function that creates an acquisition model, given some input. Then we can write OSEM function that does the reconstruction.
Below is a skeleton implementation. Look at the code above to fill in the details.
To debug your code, it might be helpful to look at any messages that STIR writes. By default these are written to the terminal, but this is not helpful when running in a jupyter notebook. The line below will redirect all messages to files which you can open via the `File>Open` menu.
```
msg_red = pet.MessageRedirector('info.txt', 'warnings.txt', 'errors.txt')
```
Note that they will be located in the current directory.
```
%pwd
def create_acq_model(attn_image, background_term):
'''create a PET acquisition model.
Arguments:
attn_image: the mu-map
background_term: background term as a sirf.STIR.AcquisitionData
'''
# acq_model_for_attn = ...
# asm_model = ...
acq_model = pet.AcquisitionModelUsingRayTracingMatrix();
# acq_model.set_...
return acq_model
def OSEM(acq_data, acq_model, initial_image, num_subiterations, num_subsets=1):
'''run OSEM
Arguments:
acq_data: the (measured) data
acq_model: the acquisition model
initial_image: used for initialisation (and sets voxel-sizes etc)
num_subiterations: number of sub-iterations (or image updates)
num_subsets: number of subsets (defaults to 1, i.e. MLEM)
'''
#obj_fun = ...
#obj_fun.set...
recon = pet.OSMAPOSLReconstructor()
#recon.set_objective_function(...)
#recon.set...
recon.set_current_estimate(initial_image)
recon.set_up(initial_image)
recon.process()
return recon.get_output()
```
Now test it with the above data.
```
acq_model = create_acq_model(attn_image, background_term)
my_reconstructed_image = OSEM(acquired_data, acq_model, image.get_uniform_copy(cmax), 30, 4)
```
# Exercise: reconstruct with and without attenuation
In some cases, it can be useful to reconstruct the emission data without taking attenuation (or even background terms) into account. One common example is to align the attenuation image to the emission image.
It is easy to do such a *no-attenuation-correction (NAC)* reconstruction in SIRF. You need to create an `AcquisitionModel` that does not include the attenuation factors, and use that for the reconstruction. (Of course, in a simulation context, you would still use the full model to do the simulation.)
Implement that here and reconstruct the data with and without attenuation to see visually what the difference is in the reconstructed images. If you have completed the previous exercise, you can use your own functions to do this.
Hint: the easiest way would be to take the existing attenuation image, and use `get_uniform_copy(0)` to create an image where all $\mu$-values are 0. Another (and more efficient) way would be to avoid creating the `AcquisitionSensitivityModel` at all.
# Final remarks
In these exercises we have used attenuation images of the same size as the emission image. This is easier to code and for display etc, but it is not a requirement of SIRF.
In addition, we have simulated and reconstructed the data with the same `AcquisitionModel` (and preserved image sizes). This is also convenient, but not a requirement (as you've seen in the NAC exercise). In fact, do not write your next paper using this "inverse crime". The problem is too
| github_jupyter |
```
import matplotlib
matplotlib.use('Qt5Agg')
import matplotlib.pyplot as plt
from src.utils import *
from src.InstrumentalVariable import InstrumentalVariable
from tqdm.notebook import tnrange
def run_manifold_tests(X, y, min_p_value=80, max_p_value=95, bootstrap=True, min_l2_reg=0,
max_l2_reg=50, n_tests=100):
n_samples, n_features, _ = X.shape
experiment_coefs = np.zeros((n_tests, n_features))
for i in tnrange(n_tests):
p_value = np.random.uniform(min_p_value, max_p_value)
if max_l2_reg > 0:
l2_reg = np.random.uniform(min_l2_reg, max_l2_reg)
else:
l2_reg = None
iv_model = InstrumentalVariable(p_value, l2_reg)
feature_size = np.random.randint(8, 20)
feature_inds = np.random.choice(n_features, feature_size, replace=False)
if bootstrap:
bootstrap_inds = np.random.choice(len(X), len(X))
X_train, y_train = X[bootstrap_inds], y[bootstrap_inds]
else:
X_train = X
y_train = y
X_train = X_train[:, feature_inds]
iv_model.fit(X_train, y_train)
np.put(experiment_coefs[i], feature_inds, iv_model.coef_)
return experiment_coefs
def filter_metrics(coefs):
positive_coefs = np.apply_along_axis(lambda feature: len(np.where(feature > 0)[0]), 0, coefs)
negative_coefs = np.apply_along_axis(lambda feature: len(np.where(feature < 0)[0]), 0, coefs)
print(positive_coefs)
print(negative_coefs)
filtered_coef_inds = []
for i, feature_coefs in enumerate(coefs.T):
pos = positive_coefs[i]
neg = negative_coefs[i]
if pos + neg == 0:
continue
if pos == 0 or neg == 0 or min(pos/neg, neg/pos) < 0.2:
filtered_coef_inds.append(i)
return np.array(filtered_coef_inds)
def plot_coefficients(coefs, metric_map=None):
n_tests, n_features = coefs.shape
fig, axes = plt.subplots(nrows=n_features, sharex=True)
fig.suptitle("3_metric_stability")
collections = []
for i, metric_coefs in enumerate(coefs.T):
ax = axes[i]
ax.set_title('Weights for short_term_' + str(metric_map[i]), loc='left')
ax.plot([0, 0], [-1, 1], 'r')
metric_coefs = metric_coefs[metric_coefs != 0]
n_tests = len(metric_coefs)
col = ax.scatter(metric_coefs, np.random.rand(n_tests) * 2 - 1,
cmap=plt.get_cmap("RdBu"), picker=5, s=50)
collections.append(col)
plt.show()
short_metrics_p, long_metrics_p = read_data(dataset_name='feed_top_ab_tests_pool_big_dataset.csv', shift=True)
short_metrics = short_metrics_p[:, :, 0]
long_metrics = long_metrics_p[:, :, 0]
target_metric_p = long_metrics_p[:, 3, :] # <--- here you can choose target (0, 1, 2, 3)
target_metric = target_metric_p[:, 0]
# Main part of the sandbox: here you can change the constraints
coefs = run_manifold_tests(short_metrics_p, target_metric,
min_l2_reg=0, max_l2_reg=0.001,
min_p_value=50, max_p_value=95, n_tests=1000)
clear_metrics = filter_metrics(coefs)
filtered_coefs = coefs[:, clear_metrics]
plot_coefficients(coefs, range(np.shape(coefs)[1]))
```
# Linear algebra
Linear algebra is the branch of mathematics that deals with **vector spaces**.
```
import re, math, random # regexes, math functions, random numbers
import matplotlib.pyplot as plt # pyplot
from collections import defaultdict, Counter
from functools import partial, reduce
```
# Vectors
Vectors are points in some finite-dimensional space.
```
v = [1, 2]
w = [2, 1]
vectors = [v, w]
def vector_add(v, w):
"""adds two vectors componentwise"""
return [v_i + w_i for v_i, w_i in zip(v,w)]
vector_add(v, w)
def vector_subtract(v, w):
"""subtracts two vectors componentwise"""
return [v_i - w_i for v_i, w_i in zip(v,w)]
vector_subtract(v, w)
def vector_sum(vectors):
return reduce(vector_add, vectors)
vector_sum(vectors)
def scalar_multiply(c, v):
# c is a number, v is a vector
return [c * v_i for v_i in v]
scalar_multiply(2.5, v)
def vector_mean(vectors):
"""compute the vector whose i-th element is the mean of the
i-th elements of the input vectors"""
n = len(vectors)
return scalar_multiply(1/n, vector_sum(vectors))
vector_mean(vectors)
def dot(v, w):
"""v_1 * w_1 + ... + v_n * w_n"""
return sum(v_i * w_i for v_i, w_i in zip(v, w))
dot(v, w)
```
The dot product measures how far the vector v extends in the w direction.
- For example, if w = [1, 0] then dot(v, w) is just the first component of v.
Another way of saying this: it is the length of the vector you'd get if you projected v onto w (when w has magnitude 1).
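As a small illustration, projection can be computed directly from `dot` (a sketch; the two helpers are repeated from above so the cell runs on its own):

```python
def dot(v, w):
    """v_1 * w_1 + ... + v_n * w_n (as defined above)"""
    return sum(v_i * w_i for v_i, w_i in zip(v, w))

def scalar_multiply(c, v):
    """multiply every component of v by the number c (as defined above)"""
    return [c * v_i for v_i in v]

def project(v, w):
    """the projection of v onto the direction of w"""
    c = dot(v, w) / dot(w, w)  # relative length of the projection
    return scalar_multiply(c, w)

project([1, 2], [1, 0])  # keeps only the first component: [1.0, 0.0]
```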
```
def sum_of_squares(v):
"""v_1 * v_1 + ... + v_n * v_n"""
return dot(v, v)
sum_of_squares(v)
def magnitude(v):
return math.sqrt(sum_of_squares(v))
magnitude(v)
def squared_distance(v, w):
return sum_of_squares(vector_subtract(v, w))
squared_distance(v, w)
def distance(v, w):
return math.sqrt(squared_distance(v, w))
distance(v, w)
```
Using lists as vectors
- is great for exposition
- but terrible for performance
- in production code you would want to use the NumPy library.
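For comparison, the same operations in NumPy (a quick sketch; assumes `numpy` is installed):

```python
import numpy as np

v = np.array([1, 2])
w = np.array([2, 1])

v + w              # vector_add      -> array([3, 3])
v - w              # vector_subtract -> array([-1, 1])
2.5 * v            # scalar_multiply -> array([2.5, 5. ])
v.dot(w)           # dot             -> 4
np.linalg.norm(v)  # magnitude       -> sqrt(5)
```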
# Matrices
A matrix is a two-dimensional collection of numbers.
- We will represent matrices as lists of lists
- If A is a matrix, then A[i][j] is the element in the ith row and the jth column.
```
A = [[1, 2, 3],
[4, 5, 6]]
B = [[1, 2],
[3, 4],
[5, 6]]
def shape(A):
num_rows = len(A)
num_cols = len(A[0]) if A else 0
return num_rows, num_cols
shape(A)
def get_row(A, i):
return A[i]
get_row(A, 1)
def get_column(A, j):
return [A_i[j] for A_i in A]
get_column(A, 2)
def make_matrix(num_rows, num_cols, entry_fn):
"""returns a num_rows x num_cols matrix
whose (i,j)-th entry is entry_fn(i, j),
entry_fn is a function for generating matrix elements."""
return [[entry_fn(i, j)
for j in range(num_cols)]
for i in range(num_rows)]
def entry_add(i, j):
"""a function for generating matrix elements. """
return i+j
make_matrix(5, 5, entry_add)
def is_diagonal(i, j):
"""1's on the 'diagonal',
0's everywhere else"""
return 1 if i == j else 0
identity_matrix = make_matrix(5, 5, is_diagonal)
identity_matrix
```
### Matrices will be important.
- using a matrix to represent a dataset
- using an n × k matrix to represent a linear function that maps k-dimensional vectors to n-dimensional vectors.
- using a matrix to represent binary relationships.
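For instance, applying the linear function an n × k matrix represents is just matrix-vector multiplication (a sketch; `dot` is repeated from the vectors section so the cell is self-contained):

```python
def dot(v, w):
    """v_1 * w_1 + ... + v_n * w_n (as defined above)"""
    return sum(v_i * w_i for v_i, w_i in zip(v, w))

def matrix_vector_multiply(A, v):
    """apply the linear map represented by the n x k matrix A
    to a k-dimensional vector v, giving an n-dimensional vector"""
    return [dot(row, v) for row in A]

A = [[1, 2, 3],
     [4, 5, 6]]                       # 2 x 3: maps 3-vectors to 2-vectors
matrix_vector_multiply(A, [1, 0, 1])  # -> [4, 10]
```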
```
friendships = [(0, 1),
(0, 2),
(1, 2),
(1, 3),
(2, 3),
(3, 4),
(4, 5),
(5, 6),
(5, 7),
(6, 8),
(7, 8),
(8, 9)]
friendships = [[0, 1, 1, 0, 0, 0, 0, 0, 0, 0], # user 0
[1, 0, 1, 1, 0, 0, 0, 0, 0, 0], # user 1
[1, 1, 0, 1, 0, 0, 0, 0, 0, 0], # user 2
[0, 1, 1, 0, 1, 0, 0, 0, 0, 0], # user 3
[0, 0, 0, 1, 0, 1, 0, 0, 0, 0], # user 4
[0, 0, 0, 0, 1, 0, 1, 1, 0, 0], # user 5
[0, 0, 0, 0, 0, 1, 0, 0, 1, 0], # user 6
[0, 0, 0, 0, 0, 1, 0, 0, 1, 0], # user 7
[0, 0, 0, 0, 0, 0, 1, 1, 0, 1], # user 8
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0]] # user 9
friendships[0][2] == 1 # True, 0 and 2 are friends
def matrix_add(A, B):
if shape(A) != shape(B):
raise ArithmeticError("cannot add matrices with different shapes")
num_rows, num_cols = shape(A)
def entry_fn(i, j): return A[i][j] + B[i][j]
return make_matrix(num_rows, num_cols, entry_fn)
A = make_matrix(5, 5, is_diagonal)
B = make_matrix(5, 5, entry_add)
matrix_add(A, B)
v = [2, 1]
w = [math.sqrt(.25), math.sqrt(.75)]
c = dot(v, w)
vonw = scalar_multiply(c, w)
o = [0,0]
plt.figure(figsize=(4, 5), dpi = 100)
plt.arrow(0, 0, v[0], v[1],
width=0.002, head_width=.1, length_includes_head=True)
plt.annotate("v", v, xytext=[v[0] + 0.01, v[1]])
plt.arrow(0 ,0, w[0], w[1],
width=0.002, head_width=.1, length_includes_head=True)
plt.annotate("w", w, xytext=[w[0] - 0.1, w[1]])
plt.arrow(0, 0, vonw[0], vonw[1], length_includes_head=True)
plt.annotate(u"(v•w)w", vonw, xytext=[vonw[0] - 0.1, vonw[1] + 0.02])
plt.arrow(v[0], v[1], vonw[0] - v[0], vonw[1] - v[1],
linestyle='dotted', length_includes_head=True)
plt.scatter(*zip(v,w,o),marker='.')
plt.axis('equal')
plt.show()
```
```
from scipy.cluster.hierarchy import linkage, fcluster
import matplotlib.pyplot as plt
import seaborn as sns, pandas as pd
x_coords = [80.1, 93.1, 86.6, 98.5, 86.4, 9.5, 15.2, 3.4, 10.4, 20.3, 44.2, 56.8, 49.2, 62.5]
y_coords = [87.2, 96.1, 95.6, 92.4, 92.4, 57.7, 49.4, 47.3, 59.1, 55.5, 25.6, 2.1, 10.9, 24.1]
df = pd.DataFrame({"x_coord" : x_coords, "y_coord": y_coords})
df.head()
Z = linkage(df, "ward")
df["cluster_labels"] = fcluster(Z, 3, criterion="maxclust")
df.head(3)
sns.scatterplot(x="x_coord", y="y_coord", hue="cluster_labels", data=df)
plt.show()
```
### K-means clustering in SciPy
#### two steps of k-means clustering:
* Define cluster centers with the kmeans() function.
* It has two required arguments: observations and number of clusters.
* Assign cluster labels with the vq() function.
* It has two required arguments: observations and cluster centers.
```
from scipy.cluster.vq import kmeans, vq
import random
# Generate cluster centers
cluster_centers, distortion = kmeans(comic_con[["x_scaled","y_scaled"]], 2)
# Assign cluster labels
comic_con['cluster_labels'], distortion_list = vq(comic_con[["x_scaled","y_scaled"]], cluster_centers)
# Plot clusters
sns.scatterplot(x='x_scaled', y='y_scaled',
hue='cluster_labels', data = comic_con)
plt.show()
random.seed((1000,2000))
centroids, _ = kmeans(df, 3)
df["cluster_labels_kmeans"], _ = vq(df, centroids)
sns.scatterplot(x="x_coord", y="y_coord", hue="cluster_labels_kmeans", data=df)
plt.show()
```
### Normalization of Data
```
# Process of rescaling data to a standard deviation of 1
# x_new = x / std(x)
from scipy.cluster.vq import whiten
data = [5, 1, 3, 3, 2, 3, 3, 8, 1, 2, 2, 3, 5]
scaled_data = whiten(data)
scaled_data
plt.figure(figsize=(12,4))
plt.subplot(131)
plt.plot(data, label="original")
plt.legend()
plt.subplot(132)
plt.plot(scaled_data, label="scaled")
plt.legend()
plt.subplot(133)
plt.plot(data, label="original")
plt.plot(scaled_data, label="scaled")
plt.legend()
plt.show()
```
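As the comment above states, `whiten()` just divides each feature by its standard deviation (the population std, `ddof=0`); a quick check:

```python
import numpy as np
from scipy.cluster.vq import whiten

data = np.array([5, 1, 3, 3, 2, 3, 3, 8, 1, 2, 2, 3, 5], dtype=float)
scaled = whiten(data)

np.allclose(scaled, data / data.std())  # -> True: whiten(x) == x / std(x)
np.isclose(scaled.std(), 1.0)           # -> True: rescaled to unit std
```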
### Normalization of small numbers
```
# Prepare data
rate_cuts = [0.0025, 0.001, -0.0005, -0.001, -0.0005, 0.0025, -0.001, -0.0015, -0.001, 0.0005]
# Use the whiten() function to standardize the data
scaled_rate_cuts = whiten(rate_cuts)
plt.figure(figsize=(12,4))
plt.subplot(131)
plt.plot(rate_cuts, label="original")
plt.legend()
plt.subplot(132)
plt.plot(scaled_rate_cuts, label="scaled")
plt.legend()
plt.subplot(133)
plt.plot(rate_cuts, label='original')
plt.plot(scaled_rate_cuts, label='scaled')
plt.legend()
plt.show()
```
#### Hierarchical clustering: ward method
```
# Import the fcluster and linkage functions
from scipy.cluster.hierarchy import fcluster, linkage
# Use the linkage() function
distance_matrix = linkage(comic_con[['x_scaled', 'y_scaled']], method = "ward", metric = 'euclidean')
# Assign cluster labels
comic_con['cluster_labels'] = fcluster(distance_matrix, 2, criterion='maxclust')
# Plot clusters
sns.scatterplot(x='x_scaled', y='y_scaled',
hue='cluster_labels', data = comic_con)
plt.show()
```
#### Hierarchical clustering: single method
```
# Use the linkage() function
distance_matrix = linkage(comic_con[["x_scaled", "y_scaled"]], method = "single", metric = "euclidean")
# Assign cluster labels
comic_con['cluster_labels'] = fcluster(distance_matrix, 2, criterion="maxclust")
# Plot clusters
sns.scatterplot(x='x_scaled', y='y_scaled',
hue='cluster_labels', data = comic_con)
plt.show()
```
#### Hierarchical clustering: complete method
```
# Import the fcluster and linkage functions
from scipy.cluster.hierarchy import linkage, fcluster
# Use the linkage() function
distance_matrix = linkage(comic_con[["x_scaled", "y_scaled"]], method = "complete", metric = "euclidean")
# Assign cluster labels
comic_con['cluster_labels'] = fcluster(distance_matrix, 2, criterion="maxclust")
# Plot clusters
sns.scatterplot(x='x_scaled', y='y_scaled',
hue='cluster_labels', data = comic_con)
plt.show()
```
### Visualizing Data
```
# Import the pyplot class
import matplotlib.pyplot as plt
# Define a colors dictionary for clusters
colors = {1:'red', 2:'blue'}
# Plot a scatter plot
comic_con.plot.scatter(x="x_scaled",
y="y_scaled",
c=comic_con['cluster_labels'].apply(lambda x: colors[x]))
plt.show()
# Import the seaborn module
import seaborn as sns
# Plot a scatter plot using seaborn
sns.scatterplot(x="x_scaled",
y="y_scaled",
hue="cluster_labels",
data = comic_con)
plt.show()
```
### Dendrogram
```
from scipy.cluster.hierarchy import dendrogram
Z = linkage(df[['x_whiten', 'y_whiten']], method='ward', metric='euclidean')
dn = dendrogram(Z)
plt.show()
```
### Timing using %timeit
```
%timeit sum([1, 3, 5])
```
### Finding optimum "k" Elbow Method
```
distortions = []
num_clusters = range(1, 7)
# Create a list of distortions from the kmeans function
for i in num_clusters:
cluster_centers, distortion = kmeans(comic_con[["x_scaled","y_scaled"]], i)
distortions.append(distortion)
# Create a data frame with two lists - num_clusters, distortions
elbow_plot_data = pd.DataFrame({'num_clusters': num_clusters, 'distortions': distortions})
# Create a line plot of num_clusters and distortions
sns.lineplot(x="num_clusters", y="distortions", data = elbow_plot_data)
plt.xticks(num_clusters)
plt.show()
```
### Elbow method on uniform data
```
# Let us now see how the elbow plot looks on a data set with uniformly distributed points.
distortions = []
num_clusters = range(2, 7)
# Create a list of distortions from the kmeans function
for i in num_clusters:
cluster_centers, distortion = kmeans(uniform_data[["x_scaled","y_scaled"]], i)
distortions.append(distortion)
# Create a data frame with two lists - number of clusters and distortions
elbow_plot = pd.DataFrame({'num_clusters': num_clusters, 'distortions': distortions})
# Create a line plot of num_clusters and distortions
sns.lineplot(x="num_clusters", y="distortions", data=elbow_plot)
plt.xticks(num_clusters)
plt.show()
```
### Impact of seeds on distinct clusters
Notice that k-means is unable to capture the three visible clusters cleanly, and the two clusters towards the top have absorbed some points along the boundary. This happens because the k-means algorithm minimizes distortion, which biases it towards clusters of similar spatial extent.
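Because kmeans() starts from random initial centers, the seed can change the outcome, especially on data without clear structure. A small sketch (synthetic uniform data; the seeds are arbitrary):

```python
import numpy as np
from scipy.cluster.vq import kmeans

# Uniformly distributed points: no real cluster structure,
# so the result depends strongly on initialization
rng = np.random.default_rng(0)
points = rng.uniform(0, 10, size=(200, 2))

np.random.seed(0)
centers_a, distortion_a = kmeans(points, 3)
np.random.seed(99)
centers_b, distortion_b = kmeans(points, 3)
# centers_a and centers_b need not coincide
```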
### Dominant Colors in Images
#### Extracting RGB values from image
There are broadly three steps to find the dominant colors in an image:
* Extract RGB values into three lists.
* Perform k-means clustering on scaled RGB values.
* Display the colors of cluster centers.
To extract RGB values, we use the imread() function from matplotlib's image module.
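As an aside, the per-pixel loop below works but is slow; the same extraction can be done in one step by reshaping the image array. A sketch with a synthetic array standing in for the JPEG:

```python
import numpy as np
import pandas as pd

# A synthetic 4 x 3 RGB "image" standing in for the array returned by imread()
image = np.arange(36).reshape(4, 3, 3)

# Flatten (height, width, 3) into (n_pixels, 3) instead of a double loop
pixels = image.reshape(-1, 3)
pixel_df = pd.DataFrame(pixels, columns=["red", "green", "blue"])
pixel_df.shape  # -> (12, 3)
```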
```
# Import image class of matplotlib
import matplotlib.image as img
from matplotlib.pyplot import imshow
# Read batman image and print dimensions
sea_horizon = img.imread("../00_DataSets/img/sea_horizon.jpg")
print(sea_horizon.shape)
imshow(sea_horizon)
# Store RGB values of all pixels in lists r, g and b
r, g, b = [], [], []
for row in sea_horizon:
for temp_r, temp_g, temp_b in row:
r.append(temp_r)
g.append(temp_g)
b.append(temp_b)
sea_horizon_df = pd.DataFrame({'red': r, 'blue': b, 'green': g})
sea_horizon_df.head()
distortions = []
num_clusters = range(1, 7)
# Create a list of distortions from the kmeans function
for i in num_clusters:
cluster_centers, distortion = kmeans(sea_horizon_df[["red", "blue", "green"]], i)
distortions.append(distortion)
# Create a data frame with two lists, num_clusters and distortions
elbow_plot_data = pd.DataFrame({"num_clusters":num_clusters, "distortions":distortions})
# Create a line plot of num_clusters and distortions
sns.lineplot(x="num_clusters", y="distortions", data = elbow_plot_data)
plt.xticks(num_clusters)
plt.show()
# scaling the data
sea_horizon_df["scaled_red"] = whiten(sea_horizon_df["red"])
sea_horizon_df["scaled_blue"] = whiten(sea_horizon_df["blue"])
sea_horizon_df["scaled_green"] = whiten(sea_horizon_df["green"])
distortions = []
num_clusters = range(1, 7)
# Create a list of distortions from the kmeans function
for i in num_clusters:
cluster_centers, distortion = kmeans(sea_horizon_df[["scaled_red", "scaled_blue", "scaled_green"]], i)
distortions.append(distortion)
# Create a data frame with two lists, num_clusters and distortions
elbow_plot_data = pd.DataFrame({"num_clusters":num_clusters, "distortions":distortions})
# Create a line plot of num_clusters and distortions
sns.lineplot(x="num_clusters", y="distortions", data = elbow_plot_data)
plt.xticks(num_clusters)
plt.show()
```
#### Show Dominant colors
To display the dominant colors, convert the colors of the cluster centers back to raw values and then scale them to the range 0-1 using the following formula: converted_pixel = standardized_pixel * pixel_std / 255
```
# Get standard deviations of each color
r_std, g_std, b_std = sea_horizon_df[['red', 'green', 'blue']].std()
colors = []
for cluster_center in cluster_centers:
scaled_red, scaled_green, scaled_blue = cluster_center
# Convert each standardized value to scaled value
colors.append((
scaled_red * r_std / 255,
scaled_green * g_std / 255,
scaled_blue * b_std / 255
))
# Display colors of cluster centers
plt.imshow([colors])
plt.show()
```
### Document clustering
```
# TF-IDF of movie plots
from sklearn.feature_extraction.text import TfidfVectorizer
# Import TfidfVectorizer class from sklearn
from sklearn.feature_extraction.text import TfidfVectorizer
# Initialize TfidfVectorizer
tfidf_vectorizer = TfidfVectorizer(max_df=0.75, min_df=0.1, max_features=50, tokenizer=remove_noise)
# Use the .fit_transform() method on the list plots
tfidf_matrix = tfidf_vectorizer.fit_transform(plots)
num_clusters = 2
# Generate cluster centers through the kmeans function
cluster_centers, distortion = kmeans(tfidf_matrix.todense(), num_clusters)
# Generate terms from the tfidf_vectorizer object
terms = tfidf_vectorizer.get_feature_names()
for i in range(num_clusters):
# Sort the terms and print top 3 terms
center_terms = dict(zip(terms, list(cluster_centers[i])))
sorted_terms = sorted(center_terms, key=center_terms.get, reverse=True)
print(sorted_terms[:3])
```
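To make the tf-idf idea concrete without scikit-learn, here is a minimal hand-rolled sketch on a toy corpus (the smoothed-idf formula `log((1 + n) / (1 + df)) + 1` is one common convention; scikit-learn additionally L2-normalizes each row):

```python
import math
from collections import Counter

docs = [
    "hero saves the city",
    "hero fights the villain",
    "a quiet love story",
]
tokenized = [d.split() for d in docs]
vocab = sorted({w for doc in tokenized for w in doc})
n = len(tokenized)

def tfidf_row(doc):
    counts = Counter(doc)
    row = []
    for w in vocab:
        tf = counts[w] / len(doc)               # term frequency
        df = sum(w in d for d in tokenized)     # document frequency
        idf = math.log((1 + n) / (1 + df)) + 1  # smoothed inverse doc freq
        row.append(tf * idf)
    return row

matrix = [tfidf_row(doc) for doc in tokenized]
# words shared by many documents ("the") get a lower weight than rare ones
```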
# Gradient Descent
:label:`sec_gd`
In this section we are going to introduce the basic concepts underlying gradient descent. This is brief by necessity. See e.g., :cite:`Boyd.Vandenberghe.2004` for an in-depth introduction to convex optimization. Although the latter is rarely used directly in deep learning, an understanding of gradient descent is key to understanding stochastic gradient descent algorithms. For instance, the optimization problem might diverge due to an overly large learning rate. This phenomenon can already be seen in gradient descent. Likewise, preconditioning is a common technique in gradient descent and carries over to more advanced algorithms. Let us start with a simple special case.
## Gradient Descent in One Dimension
Gradient descent in one dimension is an excellent example to explain why the gradient descent algorithm may reduce the value of the objective function. Consider some continuously differentiable real-valued function $f: \mathbb{R} \rightarrow \mathbb{R}$. Using a Taylor expansion (:numref:`sec_single_variable_calculus`) we obtain that
$$f(x + \epsilon) = f(x) + \epsilon f'(x) + \mathcal{O}(\epsilon^2).$$
:eqlabel:`gd-taylor`
That is, in first approximation $f(x+\epsilon)$ is given by the function value $f(x)$ and the first derivative $f'(x)$ at $x$. It is not unreasonable to assume that for small $\epsilon$ moving in the direction of the negative gradient will decrease $f$. To keep things simple we pick a fixed step size $\eta > 0$ and choose $\epsilon = -\eta f'(x)$. Plugging this into the Taylor expansion above we get
$$f(x - \eta f'(x)) = f(x) - \eta f'^2(x) + \mathcal{O}(\eta^2 f'^2(x)).$$
If the derivative $f'(x) \neq 0$ does not vanish we make progress since $\eta f'^2(x)>0$. Moreover, we can always choose $\eta$ small enough for the higher order terms to become irrelevant. Hence we arrive at
$$f(x - \eta f'(x)) \lessapprox f(x).$$
This means that, if we use
$$x \leftarrow x - \eta f'(x)$$
to iterate $x$, the value of function $f(x)$ might decline. Therefore, in gradient descent we first choose an initial value $x$ and a constant $\eta > 0$ and then use them to continuously iterate $x$ until the stop condition is reached, for example, when the magnitude of the gradient $|f'(x)|$ is small enough or the number of iterations has reached a certain value.
For simplicity we choose the objective function $f(x)=x^2$ to illustrate how to implement gradient descent. Although we know that $x=0$ is the solution to minimize $f(x)$, we still use this simple function to observe how $x$ changes. As always, we begin by importing all required modules.
```
%mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.7.0-SNAPSHOT
%maven org.slf4j:slf4j-api:1.7.26
%maven org.slf4j:slf4j-simple:1.7.26
%maven ai.djl.mxnet:mxnet-engine:0.7.0-SNAPSHOT
%maven ai.djl.mxnet:mxnet-native-auto:1.7.0-b
%load ../utils/plot-utils
%load ../utils/Functions.java
import ai.djl.ndarray.*;
import tech.tablesaw.plotly.traces.ScatterTrace;
import java.lang.Math;
Function<Float, Float> f = x -> x * x; // Objective Function
Function<Float, Float> gradf = x -> 2 * x; // Its Derivative
NDManager manager = NDManager.newBaseManager();
```
Next, we use $x=10$ as the initial value and assume $\eta=0.2$. Using gradient descent to iterate $x$ for 10 times we can see that, eventually, the value of $x$ approaches the optimal solution.
```
public float[] gd(float eta) {
float x = 10f;
float[] results = new float[11];
results[0] = x;
for (int i = 0; i < 10; i++) {
x -= eta * gradf.apply(x);
results[i + 1] = x;
}
System.out.printf("epoch 10, x: %f\n", x);
return results;
}
float[] res = gd(0.2f);
```
The progress of optimizing over $x$ can be plotted as follows.
```
/* Saved in GradDescUtils.java */
public void plotGD(float[] x, float[] y, float[] segment, Function<Float, Float> func,
int width, int height) {
// Function Line
ScatterTrace trace = ScatterTrace.builder(Functions.floatToDoubleArray(x),
Functions.floatToDoubleArray(y))
.mode(ScatterTrace.Mode.LINE)
.build();
// GD Line
ScatterTrace trace2 = ScatterTrace.builder(Functions.floatToDoubleArray(segment),
Functions.floatToDoubleArray(Functions.callFunc(segment, func)))
.mode(ScatterTrace.Mode.LINE)
.build();
// GD Points
ScatterTrace trace3 = ScatterTrace.builder(Functions.floatToDoubleArray(segment),
Functions.floatToDoubleArray(Functions.callFunc(segment, func)))
.build();
Layout layout = Layout.builder()
.height(height)
.width(width)
.showLegend(false)
.build();
display(new Figure(layout, trace, trace2, trace3));
}
/* Saved in GradDescUtils.java */
public void showTrace(float[] res) {
float n = 0;
for (int i = 0; i < res.length; i++) {
if (Math.abs(res[i]) > n) {
n = Math.abs(res[i]);
}
}
NDArray fLineND = manager.arange(-n, n, 0.01f);
float[] fLine = fLineND.toFloatArray();
plotGD(fLine, Functions.callFunc(fLine, f), res, f, 500, 400);
}
showTrace(res);
```
### Learning Rate
:label:`section_gd-learningrate`
The learning rate $\eta$ can be set by the algorithm designer. If we use a learning rate that is too small, it will cause $x$ to update very slowly, requiring more iterations to get a better solution. To show what happens in such a case, consider the progress in the same optimization problem for $\eta = 0.05$. As we can see, even after 10 steps we are still very far from the optimal solution.
```
showTrace(gd(0.05f));
```
Conversely, if we use an excessively high learning rate, $\left|\eta f'(x)\right|$ might be too large for the first-order Taylor expansion formula. That is, the term $\mathcal{O}(\eta^2 f'^2(x))$ in :eqref:`gd-taylor` might become significant. In this case, we cannot guarantee that the iteration of $x$ will be able to lower the value of $f(x)$. For example, when we set the learning rate to $\eta=1.1$, $x$ overshoots the optimal solution $x=0$ and gradually diverges.
```
showTrace(gd(1.1f));
```
### Local Minima
To illustrate what happens for nonconvex functions consider the case of $f(x) = x \cdot \cos(cx)$. This function has infinitely many local minima. Depending on our choice of learning rate and depending on how well conditioned the problem is, we may end up with one of many solutions. The example below illustrates how an (unrealistically) high learning rate will lead to a poor local minimum.
```
float c = (float)(0.15f * Math.PI);
Function<Float, Float> f = x -> x * (float)Math.cos(c * x);
Function<Float, Float> gradf = x -> (float)(Math.cos(c * x) - c * x * Math.sin(c * x));
showTrace(gd(2));
```
## Multivariate Gradient Descent
Now that we have a better intuition of the univariate case, let us consider the situation where $\mathbf{x} \in \mathbb{R}^d$. That is, the objective function $f: \mathbb{R}^d \to \mathbb{R}$ maps vectors into scalars. Correspondingly its gradient is multivariate, too. It is a vector consisting of $d$ partial derivatives:
$$\nabla f(\mathbf{x}) = \bigg[\frac{\partial f(\mathbf{x})}{\partial x_1}, \frac{\partial f(\mathbf{x})}{\partial x_2}, \ldots, \frac{\partial f(\mathbf{x})}{\partial x_d}\bigg]^\top.$$
Each partial derivative element $\partial f(\mathbf{x})/\partial x_i$ in the gradient indicates the rate of change of $f$ at $\mathbf{x}$ with respect to the input $x_i$. As before in the univariate case we can use the corresponding Taylor approximation for multivariate functions to get some idea of what we should do. In particular, we have that
$$f(\mathbf{x} + \mathbf{\epsilon}) = f(\mathbf{x}) + \mathbf{\epsilon}^\top \nabla f(\mathbf{x}) + \mathcal{O}(\|\mathbf{\epsilon}\|^2).$$
:eqlabel:`gd-multi-taylor`
In other words, up to second order terms in $\mathbf{\epsilon}$ the direction of steepest descent is given by the negative gradient $-\nabla f(\mathbf{x})$. Choosing a suitable learning rate $\eta > 0$ yields the prototypical gradient descent algorithm:
$$\mathbf{x} \leftarrow \mathbf{x} - \eta \nabla f(\mathbf{x}).$$
To see how the algorithm behaves in practice let us construct an objective function $f(\mathbf{x})=x_1^2+2x_2^2$ with a two-dimensional vector $\mathbf{x} = [x_1, x_2]^\top$ as input and a scalar as output. The gradient is given by $\nabla f(\mathbf{x}) = [2x_1, 4x_2]^\top$. We will observe the trajectory of $\mathbf{x}$ by gradient descent from the initial position $[-5, -2]$. We need two more helper functions. The first uses an update function and applies it $20$ times to the initial value. The second helper visualizes the trajectory of $\mathbf{x}$.
We also create a `Weights` class to make it easier to store the weight parameters and return them in functions.
```
/* Saved in GradDescUtils.java */
public class Weights {
public float x1, x2;
public Weights(float x1, float x2) {
this.x1 = x1;
this.x2 = x2;
}
}
/* Saved in GradDescUtils.java */
/* Optimize a 2D objective function with a customized trainer. */
public ArrayList<Weights> train2d(Function<Float[], Float[]> trainer, int steps) {
// s1 and s2 are internal state variables and will
// be used later in the chapter
float x1 = -5f, x2 = -2f, s1 = 0f, s2 = 0f;
ArrayList<Weights> results = new ArrayList<>();
results.add(new Weights(x1, x2));
for (int i = 1; i < steps + 1; i++) {
Float[] step = trainer.apply(new Float[]{x1, x2, s1, s2});
x1 = step[0];
x2 = step[1];
s1 = step[2];
s2 = step[3];
results.add(new Weights(x1, x2));
System.out.printf("epoch %d, x1 %f, x2 %f\n", i, x1, x2);
}
return results;
}
import java.util.function.BiFunction;
/* Saved in GradDescUtils.java */
/* Show the trace of 2D variables during optimization. */
public void showTrace2d(BiFunction<Float, Float, Float> f, ArrayList<Weights> results) {
// TODO: add when tablesaw adds support for contour and meshgrids
}
```
Next, we observe the trajectory of the optimization variable $\mathbf{x}$ for learning rate $\eta = 0.1$. We can see that after 20 steps the value of $\mathbf{x}$ approaches its minimum at $[0, 0]$. Progress is fairly well-behaved albeit rather slow.
```
float eta = 0.1f;
BiFunction<Float, Float, Float> f = (x1, x2) -> x1 * x1 + 2 * x2 * x2; // Objective
BiFunction<Float, Float, Float[]> gradf = (x1, x2) -> new Float[]{2 * x1, 4 * x2}; // Gradient
Function<Float[], Float[]> gd = (state) -> {
Float x1 = state[0];
Float x2 = state[1];
Float[] g = gradf.apply(x1, x2); // Compute Gradient
Float g1 = g[0];
Float g2 = g[1];
return new Float[]{x1 - eta * g1, x2 - eta * g2, 0f, 0f}; // Update Variables
};
showTrace2d(f, train2d(gd, 20));
```

## Adaptive Methods
As we could see in :numref:`section_gd-learningrate`, getting the learning rate $\eta$ "just right" is tricky. If we pick it too small, we make no progress. If we pick it too large, the solution oscillates and in the worst case it might even diverge. What if we could determine $\eta$ automatically or get rid of having to select a step size at all? Second order methods that look not only at the value and gradient of the objective but also at its *curvature* can help in this case. While these methods cannot be applied to deep learning directly due to the computational cost, they provide useful intuition into how to design advanced optimization algorithms that mimic many of the desirable properties of the algorithms outlined below.
### Newton's Method
Reviewing the Taylor expansion of $f$ there is no need to stop after the first term. In fact, we can write it as
$$f(\mathbf{x} + \mathbf{\epsilon}) = f(\mathbf{x}) + \mathbf{\epsilon}^\top \nabla f(\mathbf{x}) + \frac{1}{2} \mathbf{\epsilon}^\top \nabla \nabla^\top f(\mathbf{x}) \mathbf{\epsilon} + \mathcal{O}(\|\mathbf{\epsilon}\|^3).$$
:eqlabel:`gd-hot-taylor`
To avoid cumbersome notation we define $H_f := \nabla \nabla^\top f(\mathbf{x})$ to be the *Hessian* of $f$. This is a $d \times d$ matrix. For small $d$ and simple problems $H_f$ is easy to compute. For deep networks, on the other hand, $H_f$ may be prohibitively large, due to the cost of storing $\mathcal{O}(d^2)$ entries. Furthermore it may be too expensive to compute via backprop as we would need to apply backprop to the backpropagation call graph. For now let us ignore such considerations and look at what algorithm we'd get.
After all, the minimum of $f$ satisfies $\nabla f(\mathbf{x}) = 0$. Taking derivatives of :eqref:`gd-hot-taylor` with regard to $\mathbf{\epsilon}$ and ignoring higher order terms we arrive at
$$\nabla f(\mathbf{x}) + H_f \mathbf{\epsilon} = 0 \text{ and hence }
\mathbf{\epsilon} = -H_f^{-1} \nabla f(\mathbf{x}).$$
That is, we need to invert the Hessian $H_f$ as part of the optimization problem.
For $f(x) = \frac{1}{2} x^2$ we have $\nabla f(x) = x$ and $H_f = 1$. Hence for any $x$ we obtain $\epsilon = -x$. In other words, a single step is sufficient to converge perfectly without the need for any adjustment! Alas, we got a bit lucky here since the Taylor expansion was exact. Let us see what happens in other problems.
```
float c = 0.5f;
Function<Float, Float> f = x -> (float)Math.cosh(c * x); // Objective
Function<Float, Float> gradf = x -> c * (float)Math.sinh(c * x); // Derivative
Function<Float, Float> hessf = x -> c * c * (float)Math.cosh(c * x); // Hessian
// Hide learning rate for now
public float[] newton(float eta) {
float x = 10f;
float[] results = new float[11];
results[0] = x;
for (int i = 0; i < 10; i++) {
x -= eta * gradf.apply(x) / hessf.apply(x);
results[i + 1] = x;
}
System.out.printf("epoch 10, x: %f\n", x);
return results;
}
showTrace(newton(1));
```
Now let us see what happens when we have a *nonconvex* function, such as $f(x) = x \cos(c x)$. After all, note that in Newton's method we end up dividing by the Hessian. This means that if the second derivative is *negative* we would walk into the direction of *increasing* $f$. That is a fatal flaw of the algorithm. Let us see what happens in practice.
```
c = 0.15f * (float)Math.PI;
Function<Float, Float> f = x -> x * (float)Math.cos(c * x);
Function<Float, Float> gradf = x -> (float)(Math.cos(c * x) - c * x * Math.sin(c * x));
Function<Float, Float> hessf = x -> (float)(-2 * c * Math.sin(c * x) -
x * c * c * Math.cos(c * x));
showTrace(newton(1));
```
This went spectacularly wrong. How can we fix it? One way would be to "fix" the Hessian by taking its absolute value instead. Another strategy is to bring back the learning rate. This seems to defeat the purpose, but not quite. Having second order information allows us to be cautious whenever the curvature is large and to take longer steps whenever the objective is flat. Let us see how this works with a slightly smaller learning rate, say $\eta = 0.5$. As we can see, we have quite an efficient algorithm.
```
showTrace(newton(0.5f));
```
### Convergence Analysis
We only analyze the convergence rate for convex and three times differentiable $f$, where at its minimum $x^*$ the second derivative is nonzero, i.e., where $f''(x^*) > 0$. The multivariate proof is a straightforward extension of the argument below and omitted since it doesn't help us much in terms of intuition.
Denote by $x_k$ the value of $x$ at the $k$-th iteration and let $e_k := x_k - x^*$ be the distance from optimality. By Taylor series expansion we have that the condition $f'(x^*) = 0$ can be written as
$$0 = f'(x_k - e_k) = f'(x_k) - e_k f''(x_k) + \frac{1}{2} e_k^2 f'''(\xi_k).$$
This holds for some $\xi_k \in [x_k - e_k, x_k]$. Recall that we have the update $x_{k+1} = x_k - f'(x_k) / f''(x_k)$. Dividing the above expansion by $f''(x_k)$ yields
$$e_k - f'(x_k) / f''(x_k) = \frac{1}{2} e_k^2 f'''(\xi_k) / f''(x_k).$$
Plugging in the update equation leads to the bound $e_{k+1} = \frac{1}{2} e_k^2 f'''(\xi_k) / f''(x_k)$. Consequently, whenever we are in a region of bounded $f'''(\xi_k) / (2 f''(x_k)) \leq c$, we have a quadratically decreasing error $e_{k+1} \leq c e_k^2$.
As an aside, optimization researchers call this *quadratic* convergence, whereas a condition such as $e_{k+1} \leq \alpha e_k$ for some $\alpha < 1$ would be called a *linear* rate of convergence.
Note that this analysis comes with a number of caveats: We do not really have much of a guarantee when we will reach the region of rapid convergence. Instead, we only know that once we reach it, convergence will be very quick. Second, this requires that $f$ is well-behaved up to higher order derivatives. It comes down to ensuring that $f$ does not have any "surprising" properties in terms of how it might change its values.
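This quadratic error decay is easy to observe numerically. The following Python sketch is our own illustration, not part of the chapter's code; the test function $f(x) = e^x - x$ is chosen because it is convex with minimum $x^* = 0$, $f''(0) = 1 > 0$, and $f'''(0) \neq 0$:

```python
import math

def newton(grad, hess, x0, steps=6):
    """Plain Newton's method in 1D; returns the iterate trajectory."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - grad(x) / hess(x))
    return xs

# f(x) = exp(x) - x: f'(x) = exp(x) - 1, f''(x) = exp(x)
traj = newton(lambda x: math.exp(x) - 1.0, lambda x: math.exp(x), x0=1.0)
errors = [abs(x) for x in traj]  # e_k = |x_k - x*| since x* = 0
```

Printing `errors` shows the number of correct digits roughly doubling per iteration, i.e., $e_{k+1} \leq e_k^2$ once the iterates are close enough.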
### Preconditioning
Quite unsurprisingly, computing and storing the full Hessian is very expensive. It is thus desirable to find alternatives. One way to improve matters is to avoid computing the Hessian in its entirety and compute only its *diagonal* entries. While this is not quite as good as the full Newton method, it is still much better than not using it. Moreover, estimates for the main diagonal elements are what drives some of the innovation in stochastic gradient descent optimization algorithms. This leads to update algorithms of the form
$$\mathbf{x} \leftarrow \mathbf{x} - \eta \, \mathrm{diag}(H_f)^{-1} \nabla f(\mathbf{x}).$$
To see why this might be a good idea consider a situation where one variable denotes height in millimeters and the other one denotes height in kilometers. Assuming that for both the natural scale is in meters we have a terrible mismatch in parameterizations. Using preconditioning removes this. Effectively preconditioning with gradient descent amounts to selecting a different learning rate for each coordinate.
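The effect can be seen on a small example. The sketch below (an illustration of ours; the quadratic objective and its scales are invented stand-ins for the millimeter-vs-kilometer mismatch) compares plain gradient descent with the diagonally preconditioned update above:

```python
import numpy as np

# Ill-conditioned quadratic f(x) = 0.5 * (a * x1^2 + b * x2^2), a << b.
a, b = 0.1, 10.0
grad = lambda x: np.array([a * x[0], b * x[1]])
diag_hess = np.array([a, b])  # the Hessian here is exactly diagonal

x_gd = np.array([1.0, 1.0])   # plain gradient descent iterate
x_pre = np.array([1.0, 1.0])  # preconditioned iterate
eta = 0.1
for _ in range(100):
    x_gd = x_gd - eta * grad(x_gd)
    x_pre = x_pre - eta * grad(x_pre) / diag_hess  # per-coordinate rescaling
```

The preconditioned iterate contracts every coordinate by the same factor $(1 - \eta)$ per step, while plain gradient descent crawls along the poorly scaled coordinate.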
### Gradient Descent with Line Search
One of the key problems in gradient descent was that we might overshoot the goal or make insufficient progress. A simple fix for the problem is to use line search in conjunction with gradient descent. That is, we use the direction given by $\nabla f(\mathbf{x})$ and then perform binary search as to which step length $\eta$ minimizes $f(\mathbf{x} - \eta \nabla f(\mathbf{x}))$.
This algorithm converges rapidly (for an analysis and proof see e.g., :cite:`Boyd.Vandenberghe.2004`). However, for the purpose of deep learning this is not quite so feasible, since each step of the line search would require us to evaluate the objective function on the entire dataset. This is way too costly to accomplish.
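The search over the step length can be sketched as follows. This is an illustrative Python implementation of ours, using ternary search on $\eta$ (valid when $f$ is convex along the descent direction); the quadratic objective and interval bound `eta_max` are our own choices:

```python
import numpy as np

def line_search_step(f, grad, x, eta_max=10.0, iters=60):
    """One descent step, with the step size chosen by ternary search
    over eta in [0, eta_max] along the direction -grad(x)."""
    d = grad(x)
    lo, hi = 0.0, eta_max
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(x - m1 * d) < f(x - m2 * d):
            hi = m2
        else:
            lo = m1
    return x - 0.5 * (lo + hi) * d

scales = np.array([1.0, 4.0])              # mildly ill-conditioned quadratic
f = lambda x: 0.5 * np.dot(x, scales * x)
grad = lambda x: scales * x
x = np.array([5.0, 1.0])
for _ in range(30):
    x = line_search_step(f, grad, x)
```

Each outer step requires many evaluations of `f`, which is exactly why this is impractical when every evaluation means a full pass over the dataset.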
## Summary
* Learning rates matter. Too large and we diverge, too small and we do not make progress.
* Gradient descent can get stuck in local minima.
* In high dimensions, adjusting the learning rate is complicated.
* Preconditioning can help with scale adjustment.
* Newton's method is a lot faster *once* it has started working properly in convex problems.
* Beware of using Newton's method without any adjustments for nonconvex problems.
## Exercises
1. Experiment with different learning rates and objective functions for gradient descent.
1. Implement line search to minimize a convex function in the interval $[a, b]$.
    * Do you need derivatives for binary search, i.e., to decide whether to pick $[a, (a+b)/2]$ or $[(a+b)/2, b]$?
* How rapid is the rate of convergence for the algorithm?
    * Implement the algorithm and apply it to minimizing $\log(\exp(x) + \exp(-2x - 3))$.
1. Design an objective function defined on $\mathbb{R}^2$ where gradient descent is exceedingly slow. Hint - scale different coordinates differently.
1. Implement the lightweight version of Newton's method using preconditioning:
* Use diagonal Hessian as preconditioner.
    * Use the absolute values of the diagonal entries rather than the actual (possibly signed) values.
* Apply this to the problem above.
1. Apply the algorithm above to a number of objective functions (convex or not). What happens if you rotate coordinates by $45$ degrees?
## Demo 4: HKR multiclass and fooling
[](https://colab.research.google.com/github/deel-ai/deel-lip/blob/master/doc/notebooks/demo4.ipynb)
This notebook will show how to train a Lipschitz network in a multiclass setup.
The HKR loss is extended to multiclass using a one-vs-all setup. The notebook goes through
the process of designing and training the network. It also shows how to create robustness certificates from the output of the network. Finally, these
certificates are checked by attacking the network.
### installation
First, we install the required libraries. `Foolbox` will allow us to perform adversarial attacks on the trained network.
```
# pip install deel-lip foolbox -qqq
from deel.lip.layers import (
SpectralDense,
SpectralConv2D,
ScaledL2NormPooling2D,
ScaledAveragePooling2D,
FrobeniusDense,
)
from deel.lip.model import Sequential
from deel.lip.activations import GroupSort, FullSort
from deel.lip.losses import MulticlassHKR, MulticlassKR
from deel.lip.callbacks import CondenseCallback
from tensorflow.keras.layers import Input, Flatten
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.datasets import mnist, fashion_mnist, cifar10
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import numpy as np
```
For this example, the dataset `fashion_mnist` will be used. In order to keep things simple, no data augmentation will be performed.
```
# load data
(x_train, y_train_ord), (x_test, y_test_ord) = fashion_mnist.load_data()
# standardize and reshape the data
x_train = np.expand_dims(x_train, -1) / 255
x_test = np.expand_dims(x_test, -1) / 255
# one hot encode the labels
y_train = to_categorical(y_train_ord)
y_test = to_categorical(y_test_ord)
```
Let's build the network.
### the architecture
The original one-vs-all setup would require 10 different networks (one per class). In practice, we instead use a network with
a common body and 10 one-Lipschitz heads. Experiments have shown that this setup doesn't affect the network's performance. To ease the creation of such a network, the `FrobeniusDense` layer has a parameter for this: when `disjoint_neurons=True` it acts as a stack of 10 single-neuron heads. Note that, although each head is a 1-Lipschitz function, the overall network is not 1-Lipschitz (concatenation is not 1-Lipschitz). We will see later how this affects certificate creation.
### the loss
The multiclass loss can be found in `MulticlassHKR`. The loss has two parameters: `alpha` and `min_margin`. Decreasing `alpha` and increasing `min_margin` improve robustness (at the cost of accuracy). Note also that, in the case of Lipschitz networks, more robustness requires more parameters. For more information see [our paper](https://arxiv.org/abs/2006.06520).
In this setup, choosing `alpha=100` and `min_margin=.25` provides good robustness without hurting the accuracy too much.
Finally, the `MulticlassKR()` metric indicates the robustness of the network (a proxy of the average certificate).
```
# Sequential (resp Model) from deel.model has the same properties as any lipschitz model.
# It act only as a container, with features specific to lipschitz
# functions (condensation, vanilla_exportation...)
model = Sequential(
[
Input(shape=x_train.shape[1:]),
# Lipschitz layers preserve the API of their superclass ( here Conv2D )
# an optional param is available: k_coef_lip which control the lipschitz
# constant of the layer
SpectralConv2D(
filters=16,
kernel_size=(3, 3),
activation=GroupSort(2),
use_bias=True,
kernel_initializer="orthogonal",
),
# usual pooling layer are implemented (avg, max...), but new layers are also available
ScaledL2NormPooling2D(pool_size=(2, 2), data_format="channels_last"),
SpectralConv2D(
filters=32,
kernel_size=(3, 3),
activation=GroupSort(2),
use_bias=True,
kernel_initializer="orthogonal",
),
ScaledL2NormPooling2D(pool_size=(2, 2), data_format="channels_last"),
# our layers are fully interoperable with existing keras layers
Flatten(),
SpectralDense(
64,
activation=GroupSort(2),
use_bias=True,
kernel_initializer="orthogonal",
),
FrobeniusDense(
y_train.shape[-1], activation=None, use_bias=False, kernel_initializer="orthogonal"
),
],
# similary model has a parameter to set the lipschitz constant
# to set automatically the constant of each layer
k_coef_lip=1.0,
name="hkr_model",
)
# HKR (Hinge-Kantorovich-Rubinstein) optimizes robustness along with accuracy
model.compile(
# decreasing alpha and increasing min_margin improve robustness (at the cost of accuracy)
# note also in the case of lipschitz networks, more robustness require more parameters.
loss=MulticlassHKR(alpha=100, min_margin=.25),
optimizer=Adam(1e-4),
metrics=["accuracy", MulticlassKR()],
)
model.summary()
```
### notes about constraint enforcement
There are currently three ways to enforce a constraint in a network:
1. regularization
2. weight reparametrization
3. weight projection
The first doesn't provide the required guarantees, which is why `deel-lip` focuses on the latter two. Weight reparametrization is done directly in the layers (parameter `niter_bjorck`); this trick allows performing arbitrary gradient updates without breaking the constraint. However, this is done in the graph, increasing resource consumption. The last method projects the weights between batches, ensuring the constraint at a more affordable computational cost. It can be done in `deel-lip` using the `CondenseCallback`. The main drawback of this method is a reduced efficiency of each update.
As a rule of thumb, when reparametrization is used alone, setting `niter_bjorck` to at least 15 is advised. However, when combined with weight projection, this setting can be lowered greatly.
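To illustrate the idea of weight projection (this is a hypothetical numpy sketch of ours, not deel-lip's actual power-iteration/Björck routine), one can project a weight matrix onto the unit spectral-norm ball by clipping its singular values:

```python
import numpy as np

def project_spectral(w, target=1.0):
    """Project w onto {W : ||W||_2 <= target} by clipping singular values.
    SVD-based sketch for illustration; a per-batch callback would apply
    this to each constrained layer's kernel."""
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    return u @ np.diag(np.minimum(s, target)) @ vt

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32))   # a random (unconstrained) weight matrix
w_proj = project_spectral(w)    # now 1-Lipschitz as a linear map
```

The projected matrix keeps the singular vectors (and all singular values already below the target) untouched, which is why projecting between batches perturbs training relatively little.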
```
# fit the model
model.fit(
x_train,
y_train,
batch_size=4096,
epochs=100,
validation_data=(x_test, y_test),
shuffle=True,
verbose=1,
)
```
### model exportation
Once training is finished, the model can be optimized for inference by using the `vanilla_export()` method.
```
# once training is finished you can convert
# SpectralDense layers into Dense layers and SpectralConv2D into Conv2D
# which optimize performance for inference
vanilla_model = model.vanilla_export()
```
### certificates generation and adversarial attacks
```
import foolbox as fb
from tensorflow import convert_to_tensor
import matplotlib.pyplot as plt
import tensorflow as tf
# we will test it on 10 samples one of each class
nb_adv = 10
hkr_fmodel = fb.TensorFlowModel(vanilla_model, bounds=(0., 1.), device="/GPU:0")
```
In order to test the robustness of the model, the first correctly classified element of each class is selected.
```
# strategy: first
# we select a sample from each class.
images_list = []
labels_list = []
# keep only the first 300 elements of the test set
sub_y_test_ord = y_test_ord[:300]
sub_x_test = x_test[:300]
# keep only correctly classified elements
well_classified_mask = tf.equal(tf.argmax(vanilla_model.predict(sub_x_test), axis=-1), sub_y_test_ord)
sub_x_test = sub_x_test[well_classified_mask]
sub_y_test_ord = sub_y_test_ord[well_classified_mask]
# now we will build a list with input image for each element of the matrix
for i in range(10):
    # select the first element with the i-th label
    label_mask = sub_y_test_ord == i
    x = sub_x_test[label_mask][0]
    y = sub_y_test_ord[label_mask][0]
    # convert to tensors for use with foolbox
    images = convert_to_tensor(x.astype("float32"), dtype="float32")
    labels = convert_to_tensor(y, dtype="int64")
    images_list.append(images)
    labels_list.append(labels)
images = convert_to_tensor(images_list)
labels = convert_to_tensor(labels_list)
```
In order to build a certificate, we take for each sample the top-2 outputs and apply this formula:
$$ \epsilon \geq \frac{\text{top}_1 - \text{top}_2}{2} $$
where $\epsilon$ is the robustness radius for the considered sample.
```
values, classes = tf.math.top_k(hkr_fmodel(images), k=2)
certificates = (values[:, 0] - values[:, 1]) / 2
certificates
```
Now we will attack the model to check whether the certificates are respected. In this setup `L2CarliniWagnerAttack` is used, but in practice, as these kinds of networks are gradient-norm preserving, other attacks give very similar results.
```
attack = fb.attacks.L2CarliniWagnerAttack(binary_search_steps=6, steps=8000)
imgs, advs, success = attack(hkr_fmodel, images, labels, epsilons=None)
dist_to_adv = np.sqrt(np.sum(np.square(images - advs), axis=(1,2,3)))
dist_to_adv
```
As we can see, the certificates are respected.
```
tf.assert_less(certificates, dist_to_adv)
```
Finally we can take a visual look at the obtained examples.
We first start with utility functions for display.
```
class_mapping = {
0: "T-shirt/top",
1: "Trouser",
2: "Pullover",
3: "Dress",
4: "Coat",
5: "Sandal",
6: "Shirt",
7: "Sneaker",
8: "Bag",
9: "Ankle boot",
}
def adversarial_viz(model, images, advs, class_mapping):
"""
This functions shows for each sample:
- the original image
- the adversarial image
- the difference map
- the certificate and the observed distance to adversarial
"""
scale = 1.5
kwargs={}
nb_imgs = images.shape[0]
# compute certificates
values, classes = tf.math.top_k(model(images), k=2)
certificates = (values[:, 0] - values[:, 1]) / 2
# compute difference distance to adversarial
dist_to_adv = np.sqrt(np.sum(np.square(images - advs), axis=(1,2,3)))
# find classes labels for imgs and advs
orig_classes = [class_mapping[i] for i in tf.argmax(model(images), axis=-1).numpy()]
advs_classes = [class_mapping[i] for i in tf.argmax(model(advs), axis=-1).numpy()]
# compute differences maps
if images.shape[-1] != 3:
diff_pos = np.clip(advs - images, 0, 1.)
diff_neg = np.clip(images - advs, 0, 1.)
diff_map = np.concatenate([diff_neg, diff_pos, np.zeros_like(diff_neg)], axis=-1)
else:
diff_map = np.abs(advs - images)
# expands image to be displayed
if images.shape[-1] != 3:
images = np.repeat(images, 3, -1)
if advs.shape[-1] != 3:
advs = np.repeat(advs, 3, -1)
# create plot
figsize = (3 * scale, nb_imgs * scale)
fig, axes = plt.subplots(
ncols=3,
nrows=nb_imgs,
figsize=figsize,
squeeze=False,
constrained_layout=True,
**kwargs,
)
for i in range(nb_imgs):
ax = axes[i][0]
ax.set_title(orig_classes[i])
ax.set_xticks([])
ax.set_yticks([])
ax.axis("off")
ax.imshow(images[i])
ax = axes[i][1]
ax.set_title(advs_classes[i])
ax.set_xticks([])
ax.set_yticks([])
ax.axis("off")
ax.imshow(advs[i])
ax = axes[i][2]
ax.set_title(f"certif: {certificates[i]:.2f}, obs: {dist_to_adv[i]:.2f}")
ax.set_xticks([])
ax.set_yticks([])
ax.axis("off")
ax.imshow(diff_map[i]/diff_map[i].max())
```
When looking at the adversarial examples, we can see that the network has interesting properties:
#### predictability
By looking at the certificates, we can predict whether the adversarial example will be close or not.
#### disparity among classes
As we can see, the attacks are very efficient on similar classes (e.g. T-shirt/top and Shirt). This denotes that not all classes are equal regarding robustness.
#### explainability
The network is more explainable: attacks can be used as counterfactuals.
We can tell that removing the inscription on a T-shirt turns it into a shirt, which makes sense. Non-robust examples reveal that the network relies on textures rather than shapes to make its decision.
```
adversarial_viz(hkr_fmodel, images, advs, class_mapping)
```
```
using Tensorflow;
using static Tensorflow.Binding;
using PlotNET;
using NumSharp;
int training_epochs = 1000;
// Parameters
float learning_rate = 0.01f;
int display_step = 50;
NumPyRandom rng = np.random;
NDArray train_X, train_Y;
int n_samples;
train_X = np.array(3.3f, 4.4f, 5.5f, 6.71f, 6.93f, 4.168f, 9.779f, 6.182f, 7.59f, 2.167f,
7.042f, 10.791f, 5.313f, 7.997f, 5.654f, 9.27f, 3.1f);
train_Y = np.array(1.7f, 2.76f, 2.09f, 3.19f, 1.694f, 1.573f, 3.366f, 2.596f, 2.53f, 1.221f,
2.827f, 3.465f, 1.65f, 2.904f, 2.42f, 2.94f, 1.3f);
n_samples = train_X.shape[0];
// tf Graph Input
var X = tf.placeholder(tf.float32);
var Y = tf.placeholder(tf.float32);
// Set model weights
// We can set a fixed init value in order to debug
// var rnd1 = rng.randn<float>();
// var rnd2 = rng.randn<float>();
var W = tf.Variable(-0.06f, name: "weight");
var b = tf.Variable(-0.73f, name: "bias");
// Construct a linear model
var pred = tf.add(tf.multiply(X, W), b);
// Mean squared error
var cost = tf.reduce_sum(tf.pow(pred - Y, 2.0f)) / (2.0f * n_samples);
// Gradient descent
// Note, minimize() knows to modify W and b because Variable objects are trainable=True by default
var optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost);
// Initialize the variables (i.e. assign their default value)
var init = tf.global_variables_initializer();
// Start training
using (var sess = tf.Session())
{
// Run the initializer
sess.run(init);
// Fit all training data
for (int epoch = 0; epoch < training_epochs; epoch++)
{
foreach (var (x, y) in zip<float>(train_X, train_Y))
{
sess.run(optimizer,
new FeedItem(X, x),
new FeedItem(Y, y));
}
// Display logs per epoch step
if ((epoch + 1) % display_step == 0)
{
var c = sess.run(cost,
new FeedItem(X, train_X),
new FeedItem(Y, train_Y));
Console.WriteLine($"Epoch: {epoch + 1} cost={c} " + $"W={sess.run(W)} b={sess.run(b)}");
}
}
Console.WriteLine("Optimization Finished!");
var training_cost = sess.run(cost,
new FeedItem(X, train_X),
new FeedItem(Y, train_Y));
var plotter = new Plotter();
plotter.Plot(
train_X,
train_Y,
"Original data", ChartType.Scatter,"markers");
plotter.Plot(
train_X,
sess.run(W) * train_X + sess.run(b),
"Fitted line", ChartType.Scatter, "Fitted line");
plotter.Show();
// Testing example
var test_X = np.array(6.83f, 4.668f, 8.9f, 7.91f, 5.7f, 8.7f, 3.1f, 2.1f);
var test_Y = np.array(1.84f, 2.273f, 3.2f, 2.831f, 2.92f, 3.24f, 1.35f, 1.03f);
Console.WriteLine("Testing... (Mean square loss Comparison)");
var testing_cost = sess.run(tf.reduce_sum(tf.pow(pred - Y, 2.0f)) / (2.0f * test_X.shape[0]),
new FeedItem(X, test_X),
new FeedItem(Y, test_Y));
Console.WriteLine($"Testing cost={testing_cost}");
var diff = Math.Abs((float)training_cost - (float)testing_cost);
Console.WriteLine($"Absolute mean square loss difference: {diff}");
plotter.Plot(
test_X,
test_Y,
"Testing data", ChartType.Scatter, "markers");
plotter.Plot(
train_X,
sess.run(W) * train_X + sess.run(b),
"Fitted line", ChartType.Scatter);
plotter.Show();
return diff < 0.01;
}
```
# Representing Qubit States
You now know something about bits, and about how our familiar digital computers work. All the complex variables, objects and data structures used in modern software are basically all just big piles of bits. Those of us who work on quantum computing call these *classical variables.* The computers that use them, like the one you are using to read this article, we call *classical computers*.
In quantum computers, our basic variable is the _qubit:_ a quantum variant of the bit. These have exactly the same restrictions as normal bits do: they can store only a single binary piece of information, and can only ever give us an output of `0` or `1`. However, they can also be manipulated in ways that can only be described by quantum mechanics. This gives us new gates to play with, allowing us to find new ways to design algorithms.
To fully understand these new gates, we first need to understand how to write down qubit states. For this we will use the mathematics of vectors, matrices and complex numbers. Though we will introduce these concepts as we go, it would be best if you are comfortable with them already. If you need a more in-depth explanation or refresher, you can find a guide [here](../ch-prerequisites/linear_algebra.html).
## Contents
1. [Classical vs Quantum Bits](#cvsq)
1.1 [Statevectors](#statevectors)
1.2 [Qubit Notation](#notation)
1.3 [Exploring Qubits with Qiskit](#exploring-qubits)
2. [The Rules of Measurement](#rules-measurement)
2.1 [A Very Important Rule](#important-rule)
2.2 [The Implications of this Rule](#implications)
3. [The Bloch Sphere](#bloch-sphere)
3.1 [Describing the Restricted Qubit State](#bloch-sphere-1)
3.2 [Visually Representing a Qubit State](#bloch-sphere-2)
## 1. Classical vs Quantum Bits <a id="cvsq"></a>
### 1.1 Statevectors<a id="statevectors"></a>
In quantum physics we use _statevectors_ to describe the state of our system. Say we wanted to describe the position of a car along a track, this is a classical system so we could use a number $x$:

$$ x=4 $$
Alternatively, we could instead use a collection of numbers in a vector called a _statevector._ Each element in the statevector contains the probability of finding the car in a certain place:

$$
|x\rangle = \begin{bmatrix} 0\\ \vdots \\ 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}
\begin{matrix} \\ \\ \\ \leftarrow \\ \\ \\ \\ \end{matrix}
\begin{matrix} \\ \\ \text{Probability of} \\ \text{car being at} \\ \text{position 4} \\ \\ \\ \end{matrix}
$$
This isn’t limited to position: we could also keep a statevector of all the possible speeds the car could have, and all the possible colours the car could be. With classical systems (like the car example above), this is a silly thing to do as it requires keeping huge vectors when we only really need one number. But as we will see in this chapter, statevectors happen to be a very good way of keeping track of quantum systems, including quantum computers.
### 1.2 Qubit Notation <a id="notation"></a>
Classical bits always have a completely well-defined state: they are either `0` or `1` at every point during a computation. There is no more detail we can add to the state of a bit than this. So to write down the state of a classical bit (`c`), we can just use these two binary values. For example:
c = 0
This restriction is lifted for quantum bits. Whether we get a `0` or a `1` from a qubit only needs to be well-defined when a measurement is made to extract an output. At that point, it must commit to one of these two options. At all other times, its state will be something more complex than can be captured by a simple binary value.
To see how to describe these, we can first focus on the two simplest cases. As we saw in the last section, it is possible to prepare a qubit in a state for which it definitely gives the outcome `0` when measured.
We need a name for this state. Let's be unimaginative and call it $0$ . Similarly, there exists a qubit state that is certain to output a `1`. We'll call this $1$. These two states are completely mutually exclusive. Either the qubit definitely outputs a ```0```, or it definitely outputs a ```1```. There is no overlap. One way to represent this with mathematics is to use two orthogonal vectors.
$$
|0\rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix} \, \, \, \, |1\rangle =\begin{bmatrix} 0 \\ 1 \end{bmatrix}.
$$
This is a lot of notation to take in all at once. First, let's unpack the weird $|$ and $\rangle$. Their job is essentially just to remind us that we are talking about the vectors that represent qubit states labelled $0$ and $1$. This helps us distinguish them from things like the bit values ```0``` and ```1``` or the numbers 0 and 1. It is part of the bra-ket notation, introduced by Dirac.
If you are not familiar with vectors, you can essentially just think of them as lists of numbers which we manipulate using certain rules. If you are familiar with vectors from your high school physics classes, you'll know that these rules make vectors well-suited for describing quantities with a magnitude and a direction. For example, the velocity of an object is described perfectly with a vector. However, the way we use vectors for quantum states is slightly different from this, so don't hold on too hard to your previous intuition. It's time to do something new!
With vectors we can describe more complex states than just $|0\rangle$ and $|1\rangle$. For example, consider the vector
$$
|q_0\rangle = \begin{bmatrix} \tfrac{1}{\sqrt{2}} \\ \tfrac{i}{\sqrt{2}} \end{bmatrix} .
$$
To understand what this state means, we'll need to use the mathematical rules for manipulating vectors. Specifically, we'll need to understand how to add vectors together and how to multiply them by scalars.
<p>
<details>
<summary>Reminder: Matrix Addition and Multiplication by Scalars (Click here to expand)</summary>
<p>To add two vectors, we add their elements together:
$$|a\rangle = \begin{bmatrix}a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix}, \quad
|b\rangle = \begin{bmatrix}b_0 \\ b_1 \\ \vdots \\ b_n \end{bmatrix}$$
$$|a\rangle + |b\rangle = \begin{bmatrix}a_0 + b_0 \\ a_1 + b_1 \\ \vdots \\ a_n + b_n \end{bmatrix} $$
</p>
<p>And to multiply a vector by a scalar, we multiply each element by the scalar:
$$x|a\rangle = \begin{bmatrix}x \times a_0 \\ x \times a_1 \\ \vdots \\ x \times a_n \end{bmatrix}$$
</p>
<p>These two rules are used to rewrite the vector $|q_0\rangle$ (as shown above):
$$
\begin{aligned}
|q_0\rangle & = \tfrac{1}{\sqrt{2}}|0\rangle + \tfrac{i}{\sqrt{2}}|1\rangle \\
& = \tfrac{1}{\sqrt{2}}\begin{bmatrix}1\\0\end{bmatrix} + \tfrac{i}{\sqrt{2}}\begin{bmatrix}0\\1\end{bmatrix}\\
& = \begin{bmatrix}\tfrac{1}{\sqrt{2}}\\0\end{bmatrix} + \begin{bmatrix}0\\\tfrac{i}{\sqrt{2}}\end{bmatrix}\\
& = \begin{bmatrix}\tfrac{1}{\sqrt{2}} \\ \tfrac{i}{\sqrt{2}} \end{bmatrix}\\
\end{aligned}
$$
</details>
</p>
<p>
<details>
<summary>Reminder: Orthonormal Bases (Click here to expand)</summary>
<p>
It was stated before that the two vectors $|0\rangle$ and $|1\rangle$ are orthonormal, this means they are both <i>orthogonal</i> and <i>normalised</i>. Orthogonal means the vectors are at right angles:
</p><p><img src="images/basis.svg"></p>
<p>And normalised means their magnitudes (length of the arrow) is equal to 1. The two vectors $|0\rangle$ and $|1\rangle$ are <i>linearly independent</i>, which means we cannot describe $|0\rangle$ in terms of $|1\rangle$, and vice versa. However, using both the vectors $|0\rangle$ and $|1\rangle$, and our rules of addition and multiplication by scalars, we can describe all possible vectors in 2D space:
</p><p><img src="images/basis2.svg"></p>
<p>Because the vectors $|0\rangle$ and $|1\rangle$ are linearly independent, and can be used to describe any vector in 2D space using vector addition and scalar multiplication, we say the vectors $|0\rangle$ and $|1\rangle$ form a <i>basis</i>. In this case, since they are both orthogonal and normalised, we call it an <i>orthonormal basis</i>.
</details>
</p>
Since the states $|0\rangle$ and $|1\rangle$ form an orthonormal basis, we can represent any 2D vector with a combination of these two states. This allows us to write the state of our qubit in the alternative form:
$$ |q_0\rangle = \tfrac{1}{\sqrt{2}}|0\rangle + \tfrac{i}{\sqrt{2}}|1\rangle $$
This vector, $|q_0\rangle$, is called the qubit's _statevector;_ it tells us everything we could possibly know about this qubit. For now, we are only able to draw a few simple conclusions about this particular example of a statevector: it is not entirely $|0\rangle$ and not entirely $|1\rangle$. Instead, it is described by a linear combination of the two. In quantum mechanics, we typically describe linear combinations such as this using the word 'superposition'.
Though our example state $|q_0\rangle$ can be expressed as a superposition of $|0\rangle$ and $|1\rangle$, it is no less a definite and well-defined qubit state than they are. To see this, we can begin to explore how a qubit can be manipulated.
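The construction of $|q_0\rangle$ from the basis vectors can be reproduced with a quick sketch in plain numpy (our own illustration, independent of any quantum library), using exactly the scalar-multiplication and addition rules above:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)  # |0>
ket1 = np.array([0, 1], dtype=complex)  # |1>

# |q0> = 1/sqrt(2)|0> + i/sqrt(2)|1>, built from the basis states
q0 = ket0 / np.sqrt(2) + 1j * ket1 / np.sqrt(2)

# a valid statevector must have its squared amplitudes sum to 1
norm_sq = np.sum(np.abs(q0) ** 2)
```

The check that `norm_sq` equals 1 is the normalisation condition we will rely on when measurement probabilities are introduced below.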
### 1.3 Exploring Qubits with Qiskit <a id="exploring-qubits"></a>
First, we need to import all the tools we will need:
```
from qiskit import QuantumCircuit, execute, Aer
from qiskit.visualization import plot_histogram, plot_bloch_vector
from math import sqrt, pi
```
In Qiskit, we use the `QuantumCircuit` object to store our circuits; this is essentially a list of the quantum gates in our circuit and the qubits they are applied to.
```
qc = QuantumCircuit(1) # Create a quantum circuit with one qubit
```
In our quantum circuits, our qubits always start out in the state $|0\rangle$. We can use the `initialize()` method to transform this into any state. We give `initialize()` the vector we want in the form of a list, and tell it which qubit(s) we want to initialise in this state:
```
qc = QuantumCircuit(1) # Create a quantum circuit with one qubit
initial_state = [0,1] # Define initial_state as |1>
qc.initialize(initial_state, 0) # Apply initialisation operation to the 0th qubit
qc.draw('text') # Let's view our circuit (text drawing is required for the 'Initialize' gate due to a known bug in qiskit)
```
We can then use one of Qiskit’s simulators to view the resulting state of our qubit. To begin with we will use the statevector simulator, but we will explain the different simulators and their uses later.
```
backend = Aer.get_backend('statevector_simulator') # Tell Qiskit how to simulate our circuit
```
To get the results from our circuit, we use `execute` to run our circuit, giving the circuit and the backend as arguments. We then use `.result()` to get the result of this:
```
qc = QuantumCircuit(1) # Create a quantum circuit with one qubit
initial_state = [0,1] # Define initial_state as |1>
qc.initialize(initial_state, 0) # Apply initialisation operation to the 0th qubit
result = execute(qc,backend).result() # Do the simulation, returning the result
```
from `result`, we can then get the final statevector using `.get_statevector()`:
```
qc = QuantumCircuit(1) # Create a quantum circuit with one qubit
initial_state = [0,1] # Define initial_state as |1>
qc.initialize(initial_state, 0) # Apply initialisation operation to the 0th qubit
result = execute(qc,backend).result() # Do the simulation, returning the result
out_state = result.get_statevector()
print(out_state) # Display the output state vector
```
**Note:** Python uses `j` to represent $i$ in complex numbers. We see a vector with two complex elements: `0.+0.j` = 0, and `1.+0.j` = 1.
Let’s now measure our qubit as we would in a real quantum computer and see the result:
```
qc.measure_all()
qc.draw('text')
```
This time, instead of the statevector we will get the counts for the `0` and `1` results using `.get_counts()`:
```
result = execute(qc,backend).result()
counts = result.get_counts()
plot_histogram(counts)
```
We can see that we (unsurprisingly) have a 100% chance of measuring $|1\rangle$. This time, let’s instead put our qubit into a superposition and see what happens. We will use the state $|q_0\rangle$ from earlier in this section:
$$ |q_0\rangle = \tfrac{1}{\sqrt{2}}|0\rangle + \tfrac{i}{\sqrt{2}}|1\rangle $$
We need to add these amplitudes to a python list. To add a complex amplitude we use `complex`, giving the real and imaginary parts as arguments:
```
initial_state = [1/sqrt(2), 1j/sqrt(2)] # Define state |q>
```
And we then repeat the steps for initialising the qubit as before:
```
qc = QuantumCircuit(1) # Must redefine qc
qc.initialize(initial_state, 0) # Initialise the 0th qubit in the state `initial_state`
state = execute(qc,backend).result().get_statevector() # Execute the circuit
print(state) # Print the result
results = execute(qc,backend).result().get_counts()
plot_histogram(results)
```
We can see we have an equal probability of measuring either $|0\rangle$ or $|1\rangle$. To explain this, we need to talk about measurement.
## 2. The Rules of Measurement <a id="rules-measurement"></a>
### 2.1 A Very Important Rule <a id="important-rule"></a>
There is a simple rule for measurement. To find the probability of measuring a state $|\psi \rangle$ in the state $|x\rangle$ we do:
$$p(|x\rangle) = | \langle x| \psi \rangle|^2$$
The symbols $\langle$ and $|$ tell us $\langle x |$ is a row vector. In quantum mechanics we call the column vectors _kets_ and the row vectors _bras._ Together they make up _bra-ket_ notation. Any ket $|a\rangle$ has a corresponding bra $\langle a|$, and we convert between them using the conjugate transpose.
<details>
<summary>Reminder: The Inner Product (Click here to expand)</summary>
<p>There are different ways to multiply vectors, here we use the <i>inner product</i>. The inner product is a generalisation of the <i>dot product</i> which you may already be familiar with. In this guide, we use the inner product between a bra (row vector) and a ket (column vector), and it follows this rule:
$$\langle a| = \begin{bmatrix}a_0^*, & a_1^*, & \dots & a_n^* \end{bmatrix}, \quad
|b\rangle = \begin{bmatrix}b_0 \\ b_1 \\ \vdots \\ b_n \end{bmatrix}$$
$$\langle a|b\rangle = a_0^* b_0 + a_1^* b_1 + \dots + a_n^* b_n$$
</p>
<p>We can see that the inner product of two vectors always gives us a scalar. A useful thing to remember is that the inner product of two orthogonal vectors is 0, for example if we have the orthogonal vectors $|0\rangle$ and $|1\rangle$:
$$\langle1|0\rangle = \begin{bmatrix} 0 , & 1\end{bmatrix}\begin{bmatrix}1 \\ 0\end{bmatrix} = 0$$
</p>
<p>Additionally, remember that the vectors $|0\rangle$ and $|1\rangle$ are also normalised (magnitudes are equal to 1):
$$
\begin{aligned}
\langle0|0\rangle & = \begin{bmatrix} 1 , & 0\end{bmatrix}\begin{bmatrix}1 \\ 0\end{bmatrix} = 1 \\
\langle1|1\rangle & = \begin{bmatrix} 0 , & 1\end{bmatrix}\begin{bmatrix}0 \\ 1\end{bmatrix} = 1
\end{aligned}
$$
</p>
</details>
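These conventions are easy to check numerically. A minimal numpy sketch (illustrative, not part of the textbook code) converting a ket to its bra with the conjugate transpose and reproducing the inner products above:
```
import numpy as np

ket = np.array([[1/np.sqrt(2)], [1j/np.sqrt(2)]])  # |q0> as a column vector
bra = ket.conj().T                                  # <q0| = conjugate transpose

ket0 = np.array([[1], [0]])  # |0>
ket1 = np.array([[0], [1]])  # |1>

print((ket1.conj().T @ ket0).item())  # <1|0> = 0 (orthogonal)
print((ket0.conj().T @ ket0).item())  # <0|0> = 1 (normalised)
print((bra @ ket).item())             # <q0|q0> is approximately 1 (normalised)
```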
In the equation above, $|x\rangle$ can be any qubit state. To find the probability of measuring $|x\rangle$, we take the inner product of $|x\rangle$ and the state we are measuring (in this case $|\psi\rangle$), then square the magnitude. This may seem a little convoluted, but it will soon become second nature.
If we look at the state $|q_0\rangle$ from before, we can see the probability of measuring $|0\rangle$ is indeed $0.5$:
$$
\begin{aligned}
|q_0\rangle & = \tfrac{1}{\sqrt{2}}|0\rangle + \tfrac{i}{\sqrt{2}}|1\rangle \\
\langle 0| q_0 \rangle & = \tfrac{1}{\sqrt{2}}\langle 0|0\rangle + \tfrac{i}{\sqrt{2}}\langle 0|1\rangle \\
& = \tfrac{1}{\sqrt{2}}\cdot 1 + \tfrac{i}{\sqrt{2}} \cdot 0\\
& = \tfrac{1}{\sqrt{2}}\\
|\langle 0| q_0 \rangle|^2 & = \tfrac{1}{2}
\end{aligned}
$$
You should verify the probability of measuring $|1\rangle$ as an exercise.
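The derivation above can be checked in a couple of lines of numpy (an illustrative sketch, not part of the textbook code):
```
import numpy as np

q0 = np.array([1/np.sqrt(2), 1j/np.sqrt(2)])  # |q0>
ket0 = np.array([1, 0])                       # |0>

amplitude = ket0.conj() @ q0   # the inner product <0|q0>
p0 = abs(amplitude)**2         # p(|0>) = |<0|q0>|^2
print(p0)                      # approximately 0.5
```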
This rule governs how we get information out of quantum states. It is therefore very important for everything we do in quantum computation. It also immediately implies several important facts.
### 2.2 The Implications of this Rule <a id="implications"></a>
### #1 Normalisation
The rule shows us that amplitudes are related to probabilities. If we want the probabilities to add up to 1 (which they should!), we need to ensure that the statevector is properly normalised. Specifically, we need the magnitude of the state vector to be 1.
$$ \langle\psi|\psi\rangle = 1 \\ $$
Thus if:
$$ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle $$
Then:
$$ \sqrt{|\alpha|^2 + |\beta|^2} = 1 $$
This explains the factors of $\sqrt{2}$ you have seen throughout this chapter. In fact, if we try to give `initialize()` a vector that isn’t normalised, it will give us an error:
```
vector = [1,1]
qc.initialize(vector, 0)
```
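A simple way around the error (a numpy sketch, assuming we want to keep the equal amplitudes) is to divide the vector by its magnitude before calling `initialize()`:
```
import numpy as np

vector = np.array([1, 1], dtype=complex)
normalised = vector / np.linalg.norm(vector)  # becomes [1/sqrt(2), 1/sqrt(2)]

print(np.linalg.norm(normalised))  # approximately 1.0, safe for qc.initialize(normalised, 0)
```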
#### Quick Exercise
1. Create a state vector that will give a $1/3$ probability of measuring $|0\rangle$.
2. Create a different state vector that will give the same measurement probabilities.
3. Verify that the probability of measuring $|1\rangle$ for these two states is $2/3$.
You can check your answer in the widget below (you can use 'pi' and 'sqrt' in the vector):
```
# Run the code in this cell to interact with the widget
from qiskit_textbook.widgets import state_vector_exercise
state_vector_exercise(target=1/3)
```
### #2 Alternative measurement
The measurement rule gives us the probability $p(|x\rangle)$ that a state $|\psi\rangle$ is measured as $|x\rangle$. Nowhere does it tell us that $|x\rangle$ can only be either $|0\rangle$ or $|1\rangle$.
The measurements we have considered so far are in fact only one of an infinite number of possible ways to measure a qubit. For any orthogonal pair of states, we can define a measurement that would cause a qubit to choose between the two.
This possibility will be explored more in the next section. For now, just bear in mind that $|x\rangle$ is not limited to being simply $|0\rangle$ or $|1\rangle$.
### #3 Global Phase
We know that measuring the state $|1\rangle$ will give us the output `1` with certainty. But we are also able to write down states such as
$$\begin{bmatrix}0 \\ i\end{bmatrix} = i|1\rangle.$$
To see how this behaves, we apply the measurement rule.
$$ |\langle x| (i|1\rangle) |^2 = | i \langle x|1\rangle|^2 = |\langle x|1\rangle|^2 $$
Here we find that the factor of $i$ disappears once we take the magnitude of the complex number. This effect is completely independent of the measured state $|x\rangle$. It does not matter what measurement we are considering, the probabilities for the state $i|1\rangle$ are identical to those for $|1\rangle$. Since measurements are the only way we can extract any information from a qubit, this implies that these two states are equivalent in all ways that are physically relevant.
More generally, we refer to any overall factor $\gamma$ on a state for which $|\gamma|=1$ as a 'global phase'. States that differ only by a global phase are physically indistinguishable.
$$ |\langle x| ( \gamma |a\rangle) |^2 = | \gamma \langle x|a\rangle|^2 = |\langle x|a\rangle|^2 $$
Note that this is distinct from the phase difference _between_ terms in a superposition, which is known as the 'relative phase'. This becomes relevant once we consider different types of measurements and multiple qubits.
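We can confirm numerically that a global phase drops out for any measurement state $|x\rangle$ (an illustrative numpy sketch):
```
import numpy as np

ket1 = np.array([0, 1], dtype=complex)
phased = 1j * ket1                           # i|1>: same state up to global phase

x = np.array([1/np.sqrt(2), 1/np.sqrt(2)])   # an arbitrary measurement state, here |+>
p_plain = abs(x.conj() @ ket1)**2
p_phased = abs(x.conj() @ phased)**2

print(p_plain, p_phased)  # identical probabilities
```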
### #4 The Observer Effect
We know that the amplitudes contain information about the probability of us finding the qubit in a specific state, but once we have measured the qubit, we know with certainty what the state of the qubit is. For example, if we measure a qubit in the state:
$$ |q\rangle = \alpha|0\rangle + \beta|1\rangle$$
And find it in the state $|0\rangle$, if we measure again, there is a 100% chance of finding the qubit in the state $|0\rangle$. This means the act of measuring _changes_ the state of our qubits.
$$ |q\rangle = \begin{bmatrix} \alpha \\ \beta \end{bmatrix} \xrightarrow{\text{Measure }|0\rangle} |q\rangle = |0\rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$
We sometimes refer to this as _collapsing_ the state of the qubit. It is a potent effect, and so one that must be used wisely. For example, were we to constantly measure each of our qubits to keep track of their value at each point in a computation, they would always simply be in a well-defined state of either $|0\rangle$ or $|1\rangle$. As such, they would be no different from classical bits and our computation could be easily replaced by a classical computation. To achieve truly quantum computation we must allow the qubits to explore more complex states. Measurements are therefore only used when we need to extract an output. This means that we often place all measurements at the end of our quantum circuit.
We can demonstrate this using Qiskit’s statevector simulator. Let's initialise a qubit in superposition:
```
qc = QuantumCircuit(1) # Redefine qc
initial_state = [0.+1.j/sqrt(2),1/sqrt(2)+0.j]
qc.initialize(initial_state, 0)
qc.draw('text')
```
This should initialise our qubit in the state:
$$ |q\rangle = \tfrac{i}{\sqrt{2}}|0\rangle + \tfrac{1}{\sqrt{2}}|1\rangle $$
We can verify this using the simulator:
```
state = execute(qc, backend).result().get_statevector()
print("Qubit State = " + str(state))
```
We can see here the qubit is initialised in the state `[0.+0.70710678j 0.70710678+0.j]`, which is the state we expected.
Let’s now measure this qubit:
```
qc.measure_all()
qc.draw('text')
```
When we simulate this entire circuit, we can see that one of the amplitudes is _always_ 0:
```
state = execute(qc, backend).result().get_statevector()
print("State of Measured Qubit = " + str(state))
```
You can re-run this cell a few times to reinitialise the qubit and measure it again. You will notice that either outcome is equally probable, but that the state of the qubit is never a superposition of $|0\rangle$ and $|1\rangle$. Somewhat interestingly, the global phase on the state $|0\rangle$ survives, but since this is global phase, we can never measure it on a real quantum computer.
### A Note about Quantum Simulators
We can see that writing down a qubit’s state requires keeping track of two complex numbers, but when using a real quantum computer we will only ever receive a yes-or-no (`0` or `1`) answer for each qubit. The output of a 10-qubit quantum computer will look like this:
`0110111110`
Just 10 bits, no superposition or complex amplitudes. When using a real quantum computer, we cannot see the states of our qubits mid-computation, as this would destroy them! This behaviour is not ideal for learning, so Qiskit provides different quantum simulators: the `qasm_simulator` behaves as if you are interacting with a real quantum computer and will not allow you to use `.get_statevector()`; alternatively, the `statevector_simulator` (which we have been using in this chapter) does allow peeking at the quantum states before measurement, as we have seen.
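Under the hood, the counts from a shot-based simulator are just samples drawn with probabilities equal to the squared amplitude magnitudes; a pure-numpy sketch of that idea (illustrative only, not the Qiskit implementation):
```
import numpy as np

state = np.array([1j/np.sqrt(2), 1/np.sqrt(2)])  # the superposition from earlier
probs = np.abs(state)**2                         # measurement probabilities

rng = np.random.default_rng(0)                   # fixed seed for reproducibility
shots = rng.choice(['0', '1'], size=1024, p=probs)
values, freqs = np.unique(shots, return_counts=True)
counts = dict(zip(values, freqs.tolist()))
print(counts)  # roughly 512 each of '0' and '1'
```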
## 3. The Bloch Sphere <a id="bloch-sphere"></a>
### 3.1 Describing the Restricted Qubit State <a id="bloch-sphere-1"></a>
We saw earlier in this chapter that the general state of a qubit ($|q\rangle$) is:
$$
|q\rangle = \alpha|0\rangle + \beta|1\rangle
$$
$$
\alpha, \beta \in \mathbb{C}
$$
(The second line tells us $\alpha$ and $\beta$ are complex numbers). The first two implications in section 2 tell us that we cannot differentiate between some of these states. This means we can be more specific in our description of the qubit.
Firstly, since we cannot measure global phase, we can only measure the difference in phase between the states $|0\rangle$ and $|1\rangle$. Instead of having $\alpha$ and $\beta$ be complex, we can confine them to the real numbers and add a term to tell us the relative phase between them:
$$
|q\rangle = \alpha|0\rangle + e^{i\phi}\beta|1\rangle
$$
$$
\alpha, \beta, \phi \in \mathbb{R}
$$
Finally, since the qubit state must be normalised, i.e.
$$
\sqrt{\alpha^2 + \beta^2} = 1
$$
we can use the trigonometric identity:
$$
\sqrt{\sin^2{x} + \cos^2{x}} = 1
$$
to describe the real $\alpha$ and $\beta$ in terms of one variable, $\theta$:
$$
\alpha = \cos{\tfrac{\theta}{2}}, \quad \beta=\sin{\tfrac{\theta}{2}}
$$
From this we can describe the state of any qubit using the two variables $\phi$ and $\theta$:
$$
|q\rangle = \cos{\tfrac{\theta}{2}}|0\rangle + e^{i\phi}\sin{\tfrac{\theta}{2}}|1\rangle
$$
$$
\theta, \phi \in \mathbb{R}
$$
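Going the other way, given a normalised statevector we can recover $\theta$ and $\phi$ up to a global phase. A numpy sketch using the state $|q_0\rangle$ from earlier:
```
import numpy as np

alpha, beta = 1/np.sqrt(2), 1j/np.sqrt(2)  # amplitudes of |q0>

theta = 2 * np.arccos(abs(alpha))          # from alpha = cos(theta/2)
phi = np.angle(beta) - np.angle(alpha)     # relative phase between the terms

print(theta / np.pi, phi / np.pi)  # both approximately 0.5, i.e. theta = phi = pi/2
```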
### 3.2 Visually Representing a Qubit State <a id="bloch-sphere-2"></a>
We want to plot our general qubit state:
$$
|q\rangle = \cos{\tfrac{\theta}{2}}|0\rangle + e^{i\phi}\sin{\tfrac{\theta}{2}}|1\rangle
$$
If we interpret $\theta$ and $\phi$ as spherical co-ordinates ($r = 1$, since the magnitude of the qubit state is $1$), we can plot any qubit state on the surface of a sphere, known as the _Bloch sphere._
Below we have plotted a qubit in the state $|{+}\rangle$. In this case, $\theta = \pi/2$ and $\phi = 0$.
(Qiskit has a function to plot a Bloch sphere, `plot_bloch_vector()`, but at the time of writing it only takes cartesian coordinates. We have included a function that does the conversion automatically).
```
from qiskit_textbook.widgets import plot_bloch_vector_spherical
coords = [pi/2,0,1] # [Theta, Phi, Radius]
plot_bloch_vector_spherical(coords) # Bloch Vector with spherical coordinates
```
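If you would rather call `plot_bloch_vector()` directly with cartesian coordinates, the standard spherical-to-cartesian conversion (with $r = 1$) is straightforward; a small sketch (the function name here is our own, not a Qiskit API):
```
import numpy as np

def spherical_to_cartesian(theta, phi, r=1):
    # theta measured from the +z axis, phi in the x-y plane (physics convention)
    return [r * np.sin(theta) * np.cos(phi),
            r * np.sin(theta) * np.sin(phi),
            r * np.cos(theta)]

print(spherical_to_cartesian(np.pi/2, 0))  # the |+> state, approximately [1, 0, 0]
```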
#### Warning!
When first learning about qubit states, it's easy to confuse a qubit's _statevector_ with its _Bloch vector_. Remember the statevector is the vector discussed in [1.1](#notation), that holds the amplitudes for the two states our qubit can be in. The Bloch vector is a visualisation tool that maps the 2D, complex statevector onto real, 3D space.
#### Quick Exercise
Use `plot_bloch_vector()` or `plot_bloch_vector_spherical()` to plot a qubit in the states:
1. $|0\rangle$
2. $|1\rangle$
3. $\tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$
4. $\tfrac{1}{\sqrt{2}}(|0\rangle - i|1\rangle)$
5. $\tfrac{1}{\sqrt{2}}\begin{bmatrix}i\\1\end{bmatrix}$
We have also included below a widget that converts from spherical co-ordinates to cartesian, for use with `plot_bloch_vector()`:
```
from qiskit_textbook.widgets import bloch_calc
bloch_calc()
import qiskit
qiskit.__qiskit_version__
```
# Informer
### Uses the Informer model to predict the future.
```
import os, sys
from tqdm import tqdm
from subseasonal_toolkit.utils.notebook_util import isnotebook
if isnotebook():
# Autoreload packages that are modified
%load_ext autoreload
%autoreload 2
else:
from argparse import ArgumentParser
import pandas as pd
import numpy as np
from scipy.spatial.distance import cdist, euclidean
from datetime import datetime, timedelta
from ttictoc import tic, toc
from subseasonal_data.utils import get_measurement_variable
from subseasonal_toolkit.utils.general_util import printf
from subseasonal_toolkit.utils.experiments_util import get_id_name, get_th_name, get_first_year, get_start_delta
from subseasonal_toolkit.utils.models_util import (get_submodel_name, start_logger, log_params, get_forecast_filename,
save_forecasts)
from subseasonal_toolkit.utils.eval_util import get_target_dates, mean_rmse_to_score, save_metric
from sklearn.linear_model import *
from subseasonal_data import data_loaders
#
# Specify model parameters
#
if not isnotebook():
# If notebook run as a script, parse command-line arguments
parser = ArgumentParser()
parser.add_argument("pos_vars",nargs="*") # gt_id and horizon
parser.add_argument('--target_dates', '-t', default="std_test")
args, opt = parser.parse_known_args()
# Assign variables
gt_id = get_id_name(args.pos_vars[0]) # "contest_precip" or "contest_tmp2m"
horizon = get_th_name(args.pos_vars[1]) # "12w", "34w", or "56w"
target_dates = args.target_dates
else:
# Otherwise, specify arguments interactively
gt_id = "contest_tmp2m"
horizon = "34w"
target_dates = "std_contest"
#
# Process model parameters
#
# One can subtract this number from a target date to find the last viable training date.
start_delta = timedelta(days=get_start_delta(horizon, gt_id))
# Record model and submodel name
model_name = "informer"
submodel_name = get_submodel_name(model_name)
FIRST_SAVE_YEAR = 2007 # Don't save forecasts from years prior to FIRST_SAVE_YEAR
if not isnotebook():
# Save output to log file
logger = start_logger(model=model_name,submodel=submodel_name,gt_id=gt_id,
horizon=horizon,target_dates=target_dates)
# Store parameter values in log
params_names = ['gt_id', 'horizon', 'target_dates']
params_values = [eval(param) for param in params_names]
log_params(params_names, params_values)
printf('Loading target variable and dropping extraneous columns')
tic()
var = get_measurement_variable(gt_id)
gt = data_loaders.get_ground_truth(gt_id).loc[:,["start_date","lat","lon",var]]
toc()
printf('Pivoting dataframe to have one column per lat-lon pair and one row per start_date')
tic()
gt = gt.set_index(['lat','lon','start_date']).squeeze().unstack(['lat','lon'])
toc()
#
# Make predictions for each target date
#
from fbprophet import Prophet
from pandas.tseries.offsets import DateOffset
def get_first_fourth_month(date):
targets = {(1, 31), (5, 31), (9, 30)}
while (date.month, date.day) not in targets:
date = date - DateOffset(days=1)
return date
def get_predictions(date):
    # snap to the most recent of (1/31, 5/31, 9/30) on or before the date
true_date = get_first_fourth_month(date)
true_date_str = true_date.strftime("%Y-%m-%d")
cmd = f"python -u main_informer.py --model informer --data gt-{gt_id}-14d-{horizon} \
--attn prob --features S --start-date {true_date_str} --freq 'd' \
--train_epochs 20 --gpu 0 &"
os.system(cmd) # comment to not run the actual program.
# open the file where this is outputted.
folder_name = f"results/gt-{gt_id}-14d-{horizon}_{true_date_str}_informer_gt-{gt_id}-14d-{horizon}_ftM_sl192_ll96_pl48_dm512_nh8_el3_dl2_df1024_atprob_ebtimeF_dtTrue_test_0/"
# return the answer.
dates = np.load(folder_name + "dates.npy")
preds = np.load(folder_name + "preds.npy")
idx = -1
    for i in range(len(dates)):
        if dates[i] == date:
            idx = i
            break
    return preds[idx]
tic()
target_date_objs = pd.Series(get_target_dates(date_str=target_dates,horizon=horizon))
rmses = pd.Series(index=target_date_objs, dtype=np.float64)
preds = pd.DataFrame(index = target_date_objs, columns = gt.columns,
dtype=np.float64)
preds.index.name = "start_date"
# Sort target_date_objs by day of week
target_date_objs = target_date_objs[target_date_objs.dt.weekday.argsort(kind='stable')]
toc()
for target_date_obj in target_date_objs:
tic()
target_date_str = datetime.strftime(target_date_obj, '%Y%m%d')
# Find the last observable training date for this target
last_train_date = target_date_obj - start_delta
if not last_train_date in gt.index:
        printf(f'-Warning: no informer prediction for {target_date_str}; skipping')
continue
    printf(f'Forming informer prediction for {target_date_obj}')
# key logic here:
preds.loc[target_date_obj,:] = get_predictions(target_date_obj)
# Save prediction to file in standard format
if target_date_obj.year >= FIRST_SAVE_YEAR:
save_forecasts(
preds.loc[[target_date_obj],:].unstack().rename("pred").reset_index(),
model=model_name, submodel=submodel_name,
gt_id=gt_id, horizon=horizon,
target_date_str=target_date_str)
# Evaluate and store error if we have ground truth data
if target_date_obj in gt.index:
rmse = np.sqrt(np.square(preds.loc[target_date_obj,:] - gt.loc[target_date_obj,:]).mean())
rmses.loc[target_date_obj] = rmse
print("-rmse: {}, score: {}".format(rmse, mean_rmse_to_score(rmse)))
mean_rmse = rmses.mean()
print("-mean rmse: {}, running score: {}".format(mean_rmse, mean_rmse_to_score(mean_rmse)))
toc()
printf("Save rmses in standard format")
rmses = rmses.sort_index().reset_index()
rmses.columns = ['start_date','rmse']
save_metric(rmses, model=model_name, submodel=submodel_name, gt_id=gt_id, horizon=horizon, target_dates=target_dates, metric="rmse")
```
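As a quick sanity check of the date-snapping helper defined above, here is the same logic run standalone (an illustrative sketch, assuming pandas is available):
```
import pandas as pd
from pandas.tseries.offsets import DateOffset

def get_first_fourth_month(date):
    # Walk back day by day to the most recent of Jan 31, May 31 or Sep 30
    targets = {(1, 31), (5, 31), (9, 30)}
    while (date.month, date.day) not in targets:
        date = date - DateOffset(days=1)
    return date

print(get_first_fourth_month(pd.Timestamp("2018-07-15")).date())  # 2018-05-31
```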
```
import os
import glob
import json
import pandas as pd
def load_gpu_util(dlprof_summary_file):
with open(dlprof_summary_file) as json_file:
summary = json.load(json_file)
gpu_util_raw = summary["Summary Report"]
gpu_util = {
"sm_util": float(100 - gpu_util_raw["GPU Idle %"][0]),
"tc_util": float(gpu_util_raw["Tensor Core Kernel Utilization %"][0])
}
return gpu_util
def parse_pl_timings(pl_profile_file):
lines = [line.rstrip("\n") for line in open(pl_profile_file)]
mean_timings = {}
for l in lines[7:]:
if "|" in l:
l = l.split("|")
l = [i.strip() for i in l]
mean_timings[l[0]] = float(l[1])
return mean_timings
gpu_names = [
"v100-16gb-300w",
"a100-40gb-400w"
]
compute_types = [
"amp"
]
model_names = [
"distilroberta-base",
"roberta-base",
"roberta-large"
]
columns = ["gpu", "compute", "model", "seq_len", "batch_size",
"cpu_time", "forward", "backward", "train_loss",
"vram_usage", "vram_io",
"sm_util", "tc_util", ]
rows = []
cpu_time_sections = ["get_train_batch", "on_batch_start", "on_train_batch_start",
"training_step_end", "on_after_backward",
"on_batch_end", "on_train_batch_end"]
for gn in gpu_names:
for ct in compute_types:
for mn in model_names:
path = "/".join(["./results", gn, ct, mn])+"/*"
configs = glob.glob(path)
configs.sort(reverse=True)
for c in configs:
print(c)
try:
seq_len, batch_size = c.split("/")[-1].split("-")
row_1 = [gn, ct, mn, int(seq_len), int(batch_size)]
pl_timings = parse_pl_timings(c+"/pl_profile.txt")
cpu_time = sum([pl_timings[k] for k in cpu_time_sections])
metrics_0 = pd.read_csv(c+"/version_0/metrics.csv")
metrics_1 = pd.read_csv(c+"/version_1/metrics.csv")
sm_util = metrics_0["gpu_id: 0/utilization.gpu (%)"].mean()
vram_usage = metrics_0["gpu_id: 0/memory.used (MB)"].mean()
vram_io = metrics_0["gpu_id: 0/utilization.memory (%)"].mean()
test_loss = (metrics_0["train_loss"].mean() + metrics_1["train_loss"].mean())/2
row_2 = [cpu_time, pl_timings["model_forward"], pl_timings["model_backward"], test_loss]
util_data = load_gpu_util(c+"/dlprof_summary.json")
sm_util = (sm_util + util_data["sm_util"])/2
row_3 = [vram_usage, vram_io, sm_util, util_data["tc_util"]]
row = row_1 + row_2 + row_3
print(row)
rows.append(row)
except Exception as e:
print(e)
df = pd.DataFrame(rows, columns=columns)
df.head(20)
df.to_csv("./results.csv")
```
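To illustrate the `|`-delimited layout that `parse_pl_timings` expects (hypothetical sample rows, not real profiler output), the same parsing logic can be exercised on in-memory lines:
```
def parse_timing_lines(lines):
    # Same logic as parse_pl_timings above: skip the 7-line header,
    # keep '|'-delimited rows as {action name: mean duration}.
    mean_timings = {}
    for l in lines[7:]:
        if "|" in l:
            parts = [p.strip() for p in l.split("|")]
            mean_timings[parts[0]] = float(parts[1])
    return mean_timings

sample = ["header line"] * 7 + [
    "model_forward | 0.012 | 1200",
    "model_backward | 0.034 | 1200",
]
print(parse_timing_lines(sample))  # {'model_forward': 0.012, 'model_backward': 0.034}
```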
# COMP305: 2-median problem on Optimal Placement of 2 Hospitals
## Imports
```
import time
import heapq
import numpy as np
from collections import defaultdict
from collections import Counter
from random import choice
from random import randint
```
## Data Read
```
#with open("tests/test1_new.txt") as f:
# test2 = f.read().splitlines()
#with open("tests/test2_new.txt") as f:
# test2 = f.read().splitlines()
#with open("tests/test3_new.txt") as f:
#test3 = f.read().splitlines()
#with open("tests/test1_aycan.txt") as f:
# test01 = f.read().splitlines()
#with open("tests/test_aycan.txt") as f:
# test001 = f.read().splitlines()
```
# txt -> Graph
```
with open("tests/test2_new.txt") as f:
test2 = f.read().splitlines()
lines=test2
number_of_vertices = int(lines[0])
number_of_edges = int(lines[1])
vertices = lines[2:2+number_of_vertices]
edges = lines[2+number_of_vertices:]
ids_and_populations = [tuple(map(int, vertices[i].split(" "))) for i in range(len(vertices))]
populations = dict(sorted(dict(ids_and_populations).items())) #redundant sort
mydict = lambda: defaultdict(lambda: defaultdict())
G = mydict()
for i in range(len(edges)):
source, target, weight = map(int, edges[i].split(" "))
G[source][target] = weight
G[target][source] = weight
```
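For reference, the input layout implied by the parser above can be exercised on a miniature hand-made example (the numbers here are made up):
```
from collections import defaultdict

# line 0: vertex count, line 1: edge count,
# then "id population" per vertex, then "source target weight" per edge
lines = ["3", "2", "0 100", "1 50", "2 75", "0 1 4", "1 2 6"]

number_of_vertices = int(lines[0])
vertices = lines[2:2 + number_of_vertices]
edges = lines[2 + number_of_vertices:]
populations = dict(tuple(map(int, v.split(" "))) for v in vertices)

G = defaultdict(dict)
for e in edges:
    source, target, weight = map(int, e.split(" "))
    G[source][target] = weight
    G[target][source] = weight

print(dict(G[1]))  # {0: 4, 2: 6}: vertex 1 touches both edges
```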
## Randomly spawned k-th neighbor subgraph expansion
```
def dijkstra_path(G, population_dict, source):
costs = dict()
for key in G:
costs[key] = np.inf
costs[source] = 0
#display(source,costs)
pq = []
for node in G:
heapq.heappush(pq, (node, costs[node]))
while len(pq) != 0:
current_node, current_node_distance = heapq.heappop(pq)
for neighbor_node in G[current_node]:
#print(current_node,costs[source])
weight = G[current_node][neighbor_node]
distance = current_node_distance + weight
if distance < costs[neighbor_node]:
#if source==neighbor_node:
#print('here')
costs[neighbor_node] = distance
heapq.heappush(pq, (neighbor_node, distance))
sorted_costs_lst=list(dict(sorted(costs.items())).values())
sorted_populations_lst = list(dict(sorted(population_dict.items())).values())
#print(np.array(sorted_costs_lst) ,np.array(sorted_populations_lst))
return np.array(sorted_costs_lst) * np.array(sorted_populations_lst)
#return list(dict(sorted(costs.items())).values())
# V4 because runs in V^4
def V4(G):
APSP = np.zeros((number_of_vertices,number_of_vertices))
population_dict = dict(sorted([(k, populations[k]) for k in G.keys()]))
for vertex in vertices:
vertex= int(vertex.split()[0])
APSP[vertex] = [e for e in dijkstra_path(G, population_dict,vertex)]
global glob
res = {}
n = len(APSP)
temp_arr = APSP.copy()
count=0
count2=0
for first in range(n):
for second in range(first+1,n):
if first==second:
continue
count+=1
#print(count)
#print(first,second)
temp_arr = APSP.copy()
for row in temp_arr:
if row[first]<row[second]:
row[second]=0
else:
row[first]=0
#print(temp_arr,count)
to_be_summed = temp_arr[:,[first,second]]
summed = sum(sum(to_be_summed))
res[(first,second)]=summed
ret=min(res, key=res.get)
#display(len(res))
#display(res)
#print('pick {}th and {}th vertices to place hospitals!'.format(ret[0],ret[1]))
return ret, res[ret], res
```
# ML
```
class Vertex:
def __init__(self, id, weight):
self.key = 0
self.id = id
self.visited = False
self.weight = weight
self.neighbour_count = 0
self.neighbour_weight = 0
self.distance = 0
self.sum = 0
self.neighbour_neighbour_count = 0
self.neighbour_neighbour_weight = 0
self.neighbour_distance = 0
self.neighbour_sum = 0
self.neighbour_list = []
def add_neighbour(self, Vertex, distance):
self.neighbour_list += [Vertex]
self.neighbour_count = self.neighbour_count + 1
self.distance = self.distance + distance
self.neighbour_weight = self.neighbour_weight + Vertex.weight
self.sum = self.sum + distance * Vertex.weight
self.visited = True
def add_neighbour_neighbour(self, Vertex):
self.neighbour_neighbour_count += Vertex.neighbour_count
self.neighbour_neighbour_weight += Vertex.neighbour_weight
self.neighbour_distance += Vertex.distance
self.neighbour_sum += Vertex.sum
self.visited = True
def calculate(self, a1, a2, a3):
self.key = self.weight * a1 + self.sum * a2 + self.neighbour_sum * a3
return self.key
def get_key(self):
return self.key
def __str__(self):
return "Key: " + self.key.__str__() + " Id: " + self.id.__str__() + \
" Visited: " + self.visited.__str__() + " Weight: " + self.weight.__str__() + \
" Neighbour_count: " + self.neighbour_count.__str__() + " Neighbour_weight: " + self.neighbour_weight.__str__() + \
" Distance: " + self.distance.__str__() + " Sum: " + self.sum.__str__() + \
" N_N_count: " + self.neighbour_neighbour_count.__str__() + " N_N_weight: " + self.neighbour_neighbour_weight.__str__() + \
" N_Distance: " + self.neighbour_distance.__str__() + " N_Sum: " + self.neighbour_sum.__str__() + "\n"
answer = V4(G)
answer=answer[2]
vertex_list = []
for i in range(len(populations)):
v = Vertex(i, populations[i])
vertex_list += [v]
for i in range(len(edges)):
index1, index2, distance = map(int, edges[i].split(" "))
v0 = vertex_list[index1]
v1 = vertex_list[index2]
v0.add_neighbour(v1, distance)
v1.add_neighbour(v0, distance)
for i in range(len(vertex_list)):
Vlist = vertex_list[i].neighbour_list
v0 = vertex_list[i]
for j in range(len(Vlist)):
        v0.add_neighbour_neighbour(Vlist[j])
ret=answer[min(answer.keys())]
ret_val=min(answer.keys())
dict2 = {}
arr = []
for (key, value) in answer.items():
dict2[value] = key
arr += [value]
with open("tests/test2_new.txt") as f:
lines = f.read().splitlines()
number_of_vertices = int(lines[0])
number_of_edges = int(lines[1])
vertices = lines[2:2+number_of_vertices]
edges = lines[2+number_of_vertices:]
ids_and_populations = [tuple(map(int, vertices[i].split(" "))) for i in range(len(vertices))]
populations = dict(ids_and_populations)
mydict = lambda: defaultdict(lambda: defaultdict())
G = mydict()
for i in range(len(edges)):
source, target, weight = map(int, edges[i].split(" "))
G[source][target] = weight
G[target][source] = weight
vertex_list = []
for i in range(len(populations)):
v = Vertex(i, populations[i])
vertex_list += [v]
for i in range(len(edges)):
index1, index2, distance = map(int, edges[i].split(" "))
v0 = vertex_list[index1]
v1 = vertex_list[index2]
v0.add_neighbour(v1, distance)
v1.add_neighbour(v0, distance)
for i in range(len(vertex_list)):
Vlist = vertex_list[i].neighbour_list
v0 = vertex_list[i]
for j in range(len(Vlist)):
        v0.add_neighbour_neighbour(Vlist[j])
ret=answer[min(answer.keys())]
ret_val=min(answer.keys())
dict2 = {}
arr = []
for (key, value) in answer.items():
dict2[value] = key
arr += [value]
learning_rate = 1.0e-8
epochs = 10
bias = 1
sum = 0
def Preceptator(Vertex1, Vertex2, a1, a2, a3, output):
outputP = Vertex1.calculate(a1, a2, a3) + Vertex2.calculate(a1, a2, a3)
error1 = learning_rate * (output - outputP) * a1
error2 = learning_rate * (output - outputP) * a2
error3 = learning_rate * (output - outputP) * a3
a1 += error1
a2 += error2
a3 += error3
return abs(output - outputP)
def getClosestValue(k):
b = int(k)
lst = list(range(b - 1, b + 2, 1)) # lst = [b-1,b,b+1]
return lst[min(range(len(lst)), key=lambda i: abs(lst[i] - k))]
def Predict(Vertex1, Vertex2, a1, a2, a3):
value = Vertex1.calculate(a1, a2, a3) + Vertex2.calculate(a1, a2, a3)
if (value < 0):
print("value : ", int(value) - 1)
elif value == 0:
print("value : ", value)
else:
print("value : ", int(value))
print("y = ", getClosestValue(w[0]), " + ", getClosestValue(w[1]), " * x1 + ", getClosestValue(w[2]), " * x2")
def calculate(Vertex, a1, a2, a3, a4, a5, a6, a7, a8, a9):
return Vertex.weight * a1 + Vertex.neighbour_count * a2 + \
Vertex.neighbour_weight * a3 + Vertex.distance * a4 + \
Vertex.sum * a5 + Vertex.neighbour_neighbour_count * a6 + \
Vertex.neighbour_neighbour_weight * a7 + Vertex.neighbour_distance * a8 + \
Vertex.neighbour_sum * a9
def sigmoid(X, w, w0):
return (1 / (1 + np.exp(-(np.matmul(X, w) + w0))))
def gradient_W(X, y_truth, y_predicted):
return (np.asarray(
[-np.sum(np.repeat((y_truth[:, c] - y_predicted[:, c])[:, None], X.shape[1], axis=1) * X, axis=0) for c in
range(K)]).transpose()) / 3
def gradient_w0(Y_truth, Y_predicted):
return (-np.sum(Y_truth - Y_predicted, axis=0))
# training sets
#for i in range(epochs):
a1 = np.random.rand(1) * 100
a2 = np.random.rand(1) * 10
a3 = np.random.rand(1) * 1
count = 0
for i in range(len(vertex_list)):
for j in range(int(len(vertex_list))):
if i>j:
v0 = vertex_list[i]
v1 = vertex_list[j]
value = arr[count]
count += 1
for j in range(epochs):
val = Preceptator(v0, v1, a1, a2, a3, value)
if j == epochs-1:
sum += val
print(sum/(count)/max(arr))
#Normalizing the variables
coef_sum = a1 + a2 + a3
a1 = a1 / coef_sum
a2 = a2 / coef_sum
a3 = a3 / coef_sum
#with open("tests/test3_new.txt") as f:
#lines = f.read().splitlines()
with open("tests/test3_new.txt") as f:
lines = f.read().splitlines()
number_of_vertices = int(lines[0])
number_of_edges = int(lines[1])
vertices = lines[2:2+number_of_vertices]
edges = lines[2+number_of_vertices:]
ids_and_populations = [tuple(map(int, vertices[i].split(" "))) for i in range(len(vertices))]
populations = dict(ids_and_populations)
mydict = lambda: defaultdict(lambda: defaultdict())
G = mydict()
for i in range(len(edges)):
source, target, weight = map(int, edges[i].split(" "))
G[source][target] = weight
G[target][source] = weight
start = time.time()
vertex_list = []
for i in range(len(populations)):
v = Vertex(i, populations[i])
vertex_list += [v]
for i in range(len(edges)):
index1, index2, distance = map(int, edges[i].split(" "))
v0 = vertex_list[index1]
v1 = vertex_list[index2]
v0.add_neighbour(v1, distance)
v1.add_neighbour(v0, distance)
for i in range(len(vertex_list)):
Vlist = vertex_list[i].neighbour_list
v0 = vertex_list[i]
for j in range(len(Vlist)):
        v0.add_neighbour_neighbour(Vlist[j])
a1 = 92.72
a2 = 9.87
a3 = 0.89
for i in range(len(vertex_list)):
v0 = vertex_list[i]
v0.calculate(a1, a2, a3)
K_arr = []
for i in range(len(vertex_list)):
K_arr += [vertex_list[i].get_key()]
min_val = min(K_arr)
K_arr.sort()
index1 = 0
index2 = 0
for i in range(len(vertex_list)):
v = vertex_list[i]
if K_arr[0] == v.get_key():
index1 = i
if K_arr[1] == v.get_key():
index2 = i
diff = time.time()-start
print('time took: '+str(diff))
print('first node:{}, second node:{} ; associated cost:{}'.format(index1,index2,min_val))
def select_neighbors(G, sub_graph, current_node, k):
if k == 0:
return sub_graph
for j in G[current_node].items():
sub_graph[current_node][j[0]] = j[1]
sub_graph[j[0]][current_node] = j[1]
sub_graph = select_neighbors(G, sub_graph, j[0], k - 1)
return sub_graph
def merge_graph(dict1, dict2):
for key, value in dict2.items():
for subkey, subvalue in value.items():
dict1[key][subkey] = subvalue
def dijkstra_q_impl(G, populations, source):
costs = dict()
for key in G:
costs[key] = np.inf
costs[source] = 0
pq = []
for node in G:
pq.append((node, costs[node]))
while len(pq) != 0:
current_node, current_node_distance = pq.pop(0)
for neighbor_node in G[current_node]:
weight = G[current_node][neighbor_node]
distance = current_node_distance + weight
if distance < costs[neighbor_node]:
costs[neighbor_node] = distance
pq.append((neighbor_node, distance))
#return (costs.values(),population_dict[])
sorted_costs_lst=list(dict(sorted(costs.items())).values())
populations_values_lst = list(dict(sorted(populations.items())).values())
return np.sum(np.array(sorted_costs_lst) * np.array(populations_values_lst))
def random_start(G):
res = [choice(list(G.keys())), choice(list(G.keys()))]
    if res[0] == res[1]:
return random_start(G)
print(f"Random start: {res}")
return res
#return [929940, 301820]
#//2 * O((V+E)*logV) = O(E*logV) //
def allocation_cost(G, population_dict, i,j):
return [dijkstra_q_impl(G,population_dict, i),dijkstra_q_impl(G,population_dict, j)]
# V times Dijkstra
def sub_graph_apsp(G, dijkstra_func):
population_dict = dict(sorted([(k, populations[k]) for k in G.keys()]))
selected_vertex = choice(list(G.keys()))
selected_cost = dijkstra_func(G,population_dict, selected_vertex)
for node in G.keys():
if node is not selected_vertex:
this_cost = dijkstra_func(G, population_dict, node)
if this_cost < selected_cost:
selected_cost = this_cost
selected_vertex = node
return selected_vertex, selected_cost
def algorithm_sub_graph_apsp(G, starting_node, k, hop_list, dijkstra_func):
sub_graph = lambda: defaultdict(lambda: defaultdict())
sub_graph = sub_graph()
sub_graph = select_neighbors(G, sub_graph, current_node=starting_node, k=k)
next_node, cost = sub_graph_apsp(sub_graph, dijkstra_func)
#print(next_node)
if len(hop_list) > 0 and next_node == hop_list[-1][0]:
return next_node, cost
hop_list.append((next_node, cost))
return algorithm_sub_graph_apsp(G, next_node, k, hop_list, dijkstra_func)
# 2*O(V)*O(E*logV) = O(E*V*logV) #
def Greedy_Heuristic_Add_Drop(G, dijkstra_func):
population_dict = dict(sorted([(k, populations[k]) for k in G.keys()]))
#population_dict = [populations[i] for i in G.keys()]
selected_vertices = random_start(G)
selected_costs = allocation_cost(G,population_dict, selected_vertices[0],selected_vertices[1])
for not_selected in G.keys():
if not_selected not in selected_vertices:
bigger = max(selected_costs)
this_cost = dijkstra_func(G,population_dict, not_selected)
if this_cost < bigger:
bigger_index = selected_costs.index(bigger)
selected_costs[bigger_index] = this_cost
selected_vertices[bigger_index] = not_selected
return(selected_vertices,selected_costs)
def Greedy_Heuristic_Subgraph_Expansion(G, k, dijkstra_func, bootstrap_cnt=10):
nodes = []
costs = []
for i in range(bootstrap_cnt):
#print("iter")
node, cost = algorithm_sub_graph_apsp(G, choice(list(G.keys())), k, [], dijkstra_func=dijkstra_func)
nodes.append(node)
costs.append(cost)
counter = Counter(nodes)
most_commons = counter.most_common(2)
target_nodes = (most_commons[0][0], most_commons[1][0])
sub_graph1 = lambda: defaultdict(lambda: defaultdict())
sub_graph1 = sub_graph1()
sub_graph1 = select_neighbors(G, sub_graph1, target_nodes[0], k=k)
sub_graph2 = lambda: defaultdict(lambda: defaultdict())
sub_graph2 = sub_graph2()
sub_graph2 = select_neighbors(G, sub_graph2, target_nodes[1], k=k)
merge_graph(sub_graph1, sub_graph2)
points, costs = Greedy_Heuristic_Add_Drop(sub_graph1, dijkstra_func)
if np.inf in costs:
print("INF")
sub_graph1 = lambda: defaultdict(lambda: defaultdict())
sub_graph1 = sub_graph1()
sub_graph1 = select_neighbors(G, sub_graph1, current_node=points[0], k=k+1)
sub_graph2 = lambda: defaultdict(lambda: defaultdict())
sub_graph2 = sub_graph2()
sub_graph2 = select_neighbors(G, sub_graph2, current_node=points[1], k=k+1)
merge_graph(sub_graph1, sub_graph2)
points, costs = Greedy_Heuristic_Add_Drop(sub_graph1, dijkstra_func)
if np.inf not in costs:
return points, costs
else:
print("Graphs are disconnected. Total cost is inf")
return points, costs
return points, costs
start = time.time()
res = Greedy_Heuristic_Subgraph_Expansion(G, 5, bootstrap_cnt=10, dijkstra_func=dijkstra_q_impl) #q for direct Queue based PQ impl (py's pop(0))
diff = time.time()-start
print('\npick cities #'+ str(res[0]) +' with costs '+ str(res[1]))
print('\ntotal time using our Queue-based PQ: '+ str(diff)+ ' sec')
```
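For comparison with the queue-based implementation above, the standard heap-based Dijkstra that the `O((V+E) log V)` comments refer to can be sketched as follows; `G_demo` is a made-up toy adjacency dict for illustration:

```python
import heapq

def dijkstra_heap(G, source):
    """Heap-based Dijkstra in O((V+E) log V).

    G is an adjacency dict {node: {neighbor: weight}}, as used above.
    """
    costs = {node: float('inf') for node in G}
    costs[source] = 0
    pq = [(0, source)]
    while pq:
        dist, node = heapq.heappop(pq)
        if dist > costs[node]:  # stale heap entry, skip
            continue
        for neighbor, weight in G[node].items():
            candidate = dist + weight
            if candidate < costs[neighbor]:
                costs[neighbor] = candidate
                heapq.heappush(pq, (candidate, neighbor))
    return costs

# toy graph
G_demo = {
    'a': {'b': 1, 'c': 4},
    'b': {'a': 1, 'c': 2},
    'c': {'a': 4, 'b': 2},
}
print(dijkstra_heap(G_demo, 'a'))  # → {'a': 0, 'b': 1, 'c': 3}
```

Instead of re-scanning FIFO entries, the heap always pops the currently cheapest node, which is what gives the logarithmic factor.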
| github_jupyter |
# Loading Medicare and Medicaid Claims data into i2b2
[CMS RIF][] docs
This notebook is on demographics.
[CMS RIF]: https://www.resdac.org/cms-data/file-availability#research-identifiable-files
## Python Data Science Tools
especially [pandas](http://pandas.pydata.org/pandas-docs/)
```
import pandas as pd
import numpy as np
import sqlalchemy as sqla
dict(pandas=pd.__version__, numpy=np.__version__, sqlalchemy=sqla.__version__)
```
## DB Access: Luigi Config, Logging
[luigi docs](https://luigi.readthedocs.io/en/stable/)
```
# Passwords are expected to be in the environment.
# Prompt if it's not already there.
def _fix_password():
    from os import environ
    import getpass
    keyname = getpass.getuser().upper() + '_SGROUSE'
    if keyname not in environ:
        environ[keyname] = getpass.getpass()
_fix_password()

import luigi

def _reset_config(path):
    '''Reach into luigi guts and reset the config.
    Don't ask.'''
    cls = luigi.configuration.LuigiConfigParser
    cls._instance = None  # KLUDGE
    cls._config_paths = [path]
    return cls.instance()

_reset_config('luigi-sgrouse.cfg')
luigi.configuration.LuigiConfigParser.instance()._config_paths

import cx_ora_fix
help(cx_ora_fix)
cx_ora_fix.patch_version()
import cx_Oracle as cx
dict(cx_Oracle=cx.__version__, version_for_sqlalchemy=cx.version)

import logging
concise = logging.Formatter(fmt='%(asctime)s %(levelname)s %(message)s',
                            datefmt='%H:%M:%S')  # standard strftime fields; %02H is not portable

def log_to_notebook(log,
                    formatter=concise):
    log.setLevel(logging.DEBUG)
    to_notebook = logging.StreamHandler()
    to_notebook.setFormatter(formatter)
    log.addHandler(to_notebook)
    return log

from cms_etl import CMSExtract

try:
    log.info('Already logging to notebook.')
except NameError:
    cms_rif_task = CMSExtract()
    log = log_to_notebook(logging.getLogger())
    log.info('We try to log non-trivial DB access.')

with cms_rif_task.connection() as lc:
    lc.log.info('first bene_id')
    first_bene_id = pd.read_sql('select min(bene_id) bene_id_first from %s.%s' % (
        cms_rif_task.cms_rif, cms_rif_task.table_eg), lc._conn)
first_bene_id
```
## Demographics: MBSF_AB_SUMMARY, MAXDATA_PS
### Breaking work into groups by beneficiary
```
from cms_etl import BeneIdSurvey
from cms_pd import MBSFUpload
survey_d = BeneIdSurvey(source_table=MBSFUpload.table_name)
chunk_m0 = survey_d.results()[0]
chunk_m0 = pd.Series(chunk_m0, index=chunk_m0.keys())
chunk_m0
dem = MBSFUpload(bene_id_first=chunk_m0.bene_id_first,
                 bene_id_last=chunk_m0.bene_id_last,
                 chunk_rows=chunk_m0.chunk_rows)
dem
```
## Column Info: Value Type, Level of Measurement
```
with dem.connection() as lc:
    col_data_d = dem.column_data(lc)
col_data_d.head(3)

colprops_d = dem.column_properties(col_data_d)
colprops_d.sort_values(['valtype_cd', 'column_name'])

with dem.connection() as lc:
    for x, pct_in in dem.obs_data(lc, upload_id=100):
        break
pct_in
x.sort_values(['instance_num', 'valtype_cd']).head(50)
```
### MAXDATA_PS: skip custom for now
```
from cms_pd import MAXPSUpload
survey_d = BeneIdSurvey(source_table=MAXPSUpload.table_name)
chunk_ps0 = survey_d.results()[0]
chunk_ps0 = pd.Series(chunk_ps0, index=chunk_ps0.keys())
chunk_ps0
dem2 = MAXPSUpload(bene_id_first=chunk_ps0.bene_id_first,
                   bene_id_last=chunk_ps0.bene_id_last,
                   chunk_rows=chunk_ps0.chunk_rows)
dem2

with dem2.connection() as lc:
    col_data_d2 = dem2.column_data(lc)
col_data_d2.head(3)
```
`maxdata_ps` has many groups of columns with names ending in `_1`, `_2`, `_3`, and so on:
```
col_groups = col_data_d2[col_data_d2.column_name.str.match(r'.*_\d+$')]
col_groups.tail()

pd.DataFrame([dict(all_cols=len(col_data_d2),
                   cols_in_groups=len(col_groups),
                   plain_cols=len(col_data_d2) - len(col_groups))])

from cms_pd import col_valtype

def _cprop(cls, valtype_override, info: pd.DataFrame) -> pd.DataFrame:
    info['valtype_cd'] = [col_valtype(c).value for c in info.column.values]
    for cd, pat in valtype_override:
        info.valtype_cd = info.valtype_cd.where(~ info.column_name.str.match(pat), cd)
    info.loc[info.column_name.isin(cls.i2b2_map.values()), 'valtype_cd'] = np.nan
    return info.drop(columns='column')

_vo = [
    ('@', r'.*race_code_\d$'),
    ('@custom_postpone', r'.*_\d+$'),
]
#dem2.column_properties(col_data_d2)
colprops_d2 = _cprop(dem2.__class__, _vo, col_data_d2)
colprops_d2.query('valtype_cd != "@custom_postpone"').sort_values(['valtype_cd', 'column_name'])
colprops_d2.dtypes
```
## Patient, Encounter Mapping
```
obs_facts = obs_dx.append(obs_cd).append(obs_num).append(obs_txt).append(obs_dt)
with cc.connection('patient map') as lc:
    pmap = cc.patient_mapping(lc, (obs_facts.bene_id.min(), obs_facts.bene_id.max()))

from etl_tasks import I2B2ProjectCreate

obs_patnum = obs_facts.merge(pmap, on='bene_id')
obs_patnum.sort_values('start_date').head()[[
    col.name for col in I2B2ProjectCreate.observation_fact_columns
    if col.name in obs_patnum.columns.values]]

with cc.connection() as lc:
    emap = cc.encounter_mapping(lc, (obs_dx.bene_id.min(), obs_dx.bene_id.max()))
emap.head()

'medpar_id' in obs_patnum.columns.values
obs_pmap_emap = cc.pat_day_rollup(obs_patnum, emap)
x = obs_pmap_emap
(x[(x.encounter_num > 0) | (x.encounter_num % 8 == 0)][::5]
 .reset_index().set_index(['patient_num', 'start_date', 'encounter_num']).sort_index()
 .head(15)[['medpar_id', 'start_day', 'admsn_dt', 'dschrg_dt', 'concept_cd']])
```
### Provider etc. done?
```
obs_mapped = cc.with_mapping(obs_dx, pmap, emap)
obs_mapped.columns
[col.name for col in I2B2ProjectCreate.observation_fact_columns
 if not col.nullable and col.name not in obs_mapped.columns.values]

test_run = False
if test_run:
    cc.run()
```
## Drugs: PDE
```
from cms_pd import DrugEventUpload
du = DrugEventUpload(bene_id_first=bene_chunks.iloc[0].bene_id_first,
                     bene_id_last=bene_chunks.iloc[0].bene_id_last,
                     chunk_rows=bene_chunks.iloc[0].chunk_rows,
                     chunk_size=1000)

with du.connection() as lc:
    du_cols = du.column_data(lc)
du.column_properties(du_cols).sort_values('valtype_cd')

with du.connection() as lc:
    for x, pct_in in du.obs_data(lc, upload_id=100):
        break
x.sort_values(['instance_num', 'valtype_cd']).head(50)
```
## Performance Results
```
bulk_migrate = '''
insert /*+ parallel(24) append */ into dconnolly.observation_fact
select * from dconnolly.observation_fact_2440
'''
with cc.connection() as lc:
    lc.execute('truncate table my_plan_table')
    print(lc._conn.engine.url.query)
    print(pd.read_sql('select count(*) from my_plan_table', lc._conn))
    lc._conn.execute('explain plan into my_plan_table for ' + bulk_migrate)
    plan = pd.read_sql('select * from my_plan_table', lc._conn)
plan

with cc.connection() as lc:
    lc.execute('truncate table my_plan_table')
    print(pd.read_sql('select * from my_plan_table', lc._conn))
    db = lc._conn.engine
    cx = db.dialect.dbapi
    dsn = cx.makedsn(db.url.host, db.url.port, db.url.database)
    conn = cx.connect(db.url.username, db.url.password, dsn,
                      threaded=True, twophase=True)
    cur = conn.cursor()
    cur.execute('explain plan into my_plan_table for ' + bulk_migrate)
    cur.close()
    conn.commit()
    conn.close()
    plan = pd.read_sql('select * from my_plan_table', lc._conn)
plan
```
```
-- Ad-hoc Oracle monitoring queries:
select /*+ parallel(24) */ max(bene_enrollmt_ref_yr)
from cms_deid.mbsf_ab_summary;

select * from upload_status
where upload_id >= 2799 -- and message is not null -- 2733
order by upload_id desc;
-- order by end_date desc;

select load_status, count(*), min(upload_id), max(upload_id), min(load_date), max(end_date)
     , to_char(sum(loaded_record), '999,999,999') loaded_record
     , round(sum(loaded_record) / 1000 / ((max(end_date) - min(load_date)) * 24 * 60)) krows_min
from (
    select upload_id, loaded_record, load_status, load_date, end_date, end_date - load_date elapsed
    from upload_status
    where upload_label like 'MBSFUp%'
)
group by load_status
;
```
## Reimport code into running notebook
```
import importlib
import cms_pd
import cms_etl
import etl_tasks
import eventlog
import script_lib
importlib.reload(script_lib)
importlib.reload(eventlog)
importlib.reload(cms_pd)
importlib.reload(cms_etl)
importlib.reload(etl_tasks);
```
| github_jupyter |
# Project - Feature Engineering on the Titanic
The titanic dataset has a few columns from which you can extract information using regex. Feature engineering involves using existing columns of data to create new columns of data. You will work on doing just that in these exercises. Read it in and then answer the following questions.
```
import pandas as pd
titanic = pd.read_csv('../data/titanic.csv')
titanic.head()
```
## Exercises
### Exercise 1
<span style="color:green; font-size:16px">Extract the first character of the `Ticket` column and save it as a new column `ticket_first`. Find the total number of survivors, the total number of passengers, and the percentage of those who survived **by this column**. Next find the total survival rate for the entire dataset. Does this new column help predict who survived?</span>
### Exercise 2
<span style="color:green; font-size:16px">If you did Exercise 1 correctly, you should see that only 7% of the people with tickets that began with 'A' survived. Find the survival rate for all those 'A' tickets by `Sex`.</span>
### Exercise 3
<span style="color:green; font-size:16px">Find the survival rate by the last letter of the ticket. Is there any predictive power here?</span>
### Exercise 4
<span style="color:green; font-size:16px">Find the length of each passenger's name and assign it to the `name_len` column. What are the minimum and maximum name lengths?</span>
### Exercise 5
<span style="color:green; font-size:16px">Pass the `name_len` column to the `pd.cut` function. Also, pass a list of equal-sized cut points to the `bins` parameter. Assign the resulting Series to the `name_len_cat` column. Find the frequency count of each bin in this column.</span>
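As a quick reminder of the `pd.cut` API used in this exercise (a toy sketch with made-up lengths, not the exercise solution):

```python
import pandas as pd

# Toy example: bin a few hypothetical name lengths into intervals
lengths = pd.Series([12, 19, 25, 33, 47, 58])
bins = [10, 25, 40, 60]          # cut points, shortest to longest
cats = pd.cut(lengths, bins=bins)
print(cats.value_counts().sort_index())
```

Each value falls into a half-open interval `(left, right]`; `value_counts` then gives the frequency per bin.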
### Exercise 6
<span style="color:green; font-size:16px">Is name length a good predictor of survival?</span>
### Exercise 7
<span style="color:green; font-size:16px">Why do you think people with longer names had a better chance at survival?</span>
### Exercise 8
<span style="color:green; font-size:16px">Using the titanic dataset, do your best to extract the title from a person's name. Examples of title are 'Mr.', 'Dr.', 'Miss', etc... Save this to a column called `title`. Find the frequency count of the titles.</span>
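As a starting point, `str.extract` with a capture group can pull the title out of strings in the `Last, Title. First` format (a sketch on made-up names, not the full solution):

```python
import pandas as pd

# Hypothetical names in the same "Last, Title. First" format as the Titanic data
names = pd.Series(['Braund, Mr. Owen', 'Heikkinen, Miss. Laina', 'Dodge, Dr. Washington'])
# capture everything between the comma and the first period
titles = names.str.extract(r',\s*([^\.]*)\.', expand=False)
print(titles.tolist())  # → ['Mr', 'Miss', 'Dr']
```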
### Exercise 9
<span style="color:green; font-size:16px">Does the title have good predictive value of survival?</span>
### Exercise 10
<span style="color:green; font-size:16px">Create a pivot table of survival by title and sex. Use two aggregation functions, mean and size</span>
### Exercise 11
<span style="color:green; font-size:16px">Attempt to extract the first name of each passenger into the column `first_name`. Are there males and females with the same first name?</span>
### Exercise 12
<span style="color:green; font-size:16px">The exercises have been an exercise in feature engineering. Several new features (columns) have been created from existing columns. Come up with your own feature and test it out on survival.</span>
| github_jupyter |
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Join/inverted_joins.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Join/inverted_joins.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Join/inverted_joins.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess

try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])

# Checks whether this notebook is running on Google Colab
try:
    import google.colab
    import geemap.eefolium as geemap
except ImportError:
    import geemap

# Authenticates and initializes Earth Engine
import ee
try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Maps`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/basemaps.py) can be added using the `Map.add_basemap()` function.
```
Map = geemap.Map(center=[40,-100], zoom=4)
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
# Load a Landsat 8 image collection at a point of interest.
collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
.filterBounds(ee.Geometry.Point(-122.09, 37.42))
# Define start and end dates with which to filter the collections.
april = '2014-04-01'
may = '2014-05-01'
june = '2014-06-01'
july = '2014-07-01'
# The primary collection is Landsat images from April to June.
primary = collection.filterDate(april, june)
# The secondary collection is Landsat images from May to July.
secondary = collection.filterDate(may, july)
# Use an equals filter to define how the collections match.
filter = ee.Filter.equals(**{
'leftField': 'system:index',
'rightField': 'system:index'
})
# Define the join.
invertedJoin = ee.Join.inverted()
# Apply the join.
invertedJoined = invertedJoin.apply(primary, secondary, filter)
# Display the result.
print('Inverted join: ', invertedJoined.getInfo())
```
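Conceptually, an inverted join keeps exactly those primary elements that have *no* match in the secondary collection under the filter. A plain-Python sketch of the same idea, using made-up image ids in place of `system:index` values:

```python
# Hypothetical image ids (stand-ins for 'system:index' values)
primary_ids = ['LC08_apr', 'LC08_may_a', 'LC08_may_b']
secondary_ids = ['LC08_may_a', 'LC08_may_b', 'LC08_jun']

# Inverted join: primary elements with no equal-key match in secondary
inverted = [p for p in primary_ids if p not in set(secondary_ids)]
print(inverted)  # → ['LC08_apr']
```

Here only the April image survives, mirroring how the Earth Engine join above keeps April-to-June images that do not reappear in the May-to-July collection.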
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
| github_jupyter |
```
# Copyright 2021 NVIDIA Corporation. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# =======
```
# ETL with NVTabular
In this notebook we are going to generate synthetic data and then create sequential features with [NVTabular](https://github.com/NVIDIA-Merlin/NVTabular). Such data will be used in the next notebook to train a session-based recommendation model.
NVTabular is a feature engineering and preprocessing library for tabular data designed to quickly and easily manipulate terabyte scale datasets used to train deep learning based recommender systems. It provides a high level abstraction to simplify code and accelerates computation on the GPU using the RAPIDS cuDF library.
### Import required libraries
```
import os
import glob
import numpy as np
import pandas as pd
import cudf
import cupy as cp
import nvtabular as nvt
```
### Define Input/Output Path
```
INPUT_DATA_DIR = os.environ.get("INPUT_DATA_DIR", "/workspace/data/")
```
## Create a Synthetic Input Data
```
NUM_ROWS = 100000
long_tailed_item_distribution = np.clip(np.random.lognormal(3., 1., NUM_ROWS).astype(np.int32), 1, 50000)
# generate random item interaction features
df = pd.DataFrame(np.random.randint(70000, 80000, NUM_ROWS), columns=['session_id'])
df['item_id'] = long_tailed_item_distribution
# generate category mapping for each item-id
df['category'] = pd.cut(df['item_id'], bins=334, labels=np.arange(1, 335)).astype(np.int32)
df['timestamp/age_days'] = np.random.uniform(0, 1, NUM_ROWS)
df['timestamp/weekday/sin']= np.random.uniform(0, 1, NUM_ROWS)
# generate day mapping for each session
map_day = dict(zip(df.session_id.unique(), np.random.randint(1, 10, size=(df.session_id.nunique()))))
df['day'] = df.session_id.map(map_day)
```
- Visualize a couple of rows of the synthetic dataset
```
df.head()
```
## Feature Engineering with NVTabular
Deep Learning models require dense input features. Categorical features are sparse, and need to be represented by dense embeddings in the model. To allow for that, categorical features first need to be encoded as contiguous integers `(0, ..., |C|)`, where `|C|` is the feature cardinality (number of unique values), so that their embeddings can be efficiently stored in embedding layers. We will use NVTabular to preprocess the categorical features, so that all categorical columns are encoded as contiguous integers. Note that in the `Categorify` op we set `start_index=1`; the reason is that we want the encoded null values to start from `1` instead of `0`, because we reserve `0` for padding the sequence features.
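The contiguous-integer encoding that `Categorify` performs can be illustrated with plain pandas (a sketch of the idea only; the real op additionally handles nulls, out-of-vocabulary values, and GPU execution):

```python
import pandas as pd

items = pd.Series(['sofa', 'lamp', 'sofa', 'desk', 'lamp'])
# factorize assigns contiguous codes 0..|C|-1 in order of first appearance;
# shifting by 1 reserves 0 for sequence padding
codes = pd.factorize(items)[0] + 1
print(list(codes))  # → [1, 2, 1, 3, 2]
```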
Here our goal is to create sequential features. In this cell, we are creating temporal features and grouping them together at the session level, sorting the interactions by time. Note that we also trim each feature sequence in a session to a certain length. Here, we use the NVTabular library so that we can easily preprocess and create features on GPU with a few lines.
```
# Categorify categorical features
categ_feats = ['session_id', 'item_id', 'category'] >> nvt.ops.Categorify(start_index=1)
# Define Groupby Workflow
groupby_feats = categ_feats + ['day', 'timestamp/age_days', 'timestamp/weekday/sin']
# Groups interaction features by session and sorted by timestamp
groupby_features = groupby_feats >> nvt.ops.Groupby(
    groupby_cols=["session_id"],
    aggs={
        "item_id": ["list", "count"],
        "category": ["list"],
        "day": ["first"],
        "timestamp/age_days": ["list"],
        'timestamp/weekday/sin': ["list"],
    },
    name_sep="-")

# Select and truncate the sequential features
sequence_features_truncated = (
    groupby_features['category-list', 'item_id-list', 'timestamp/age_days-list', 'timestamp/weekday/sin-list']
    >> nvt.ops.ListSlice(0, 20)
    >> nvt.ops.Rename(postfix='_trim')
)
# Filter out sessions with length 1 (not valid for next-item prediction training and evaluation)
MINIMUM_SESSION_LENGTH = 2
selected_features = groupby_features['item_id-count', 'day-first', 'session_id'] + sequence_features_truncated
filtered_sessions = selected_features >> nvt.ops.Filter(f=lambda df: df["item_id-count"] >= MINIMUM_SESSION_LENGTH)
workflow = nvt.Workflow(filtered_sessions)
dataset = nvt.Dataset(df, cpu=False)
# Generating statistics for the features
workflow.fit(dataset)
# Applying the preprocessing and returning an NVTabular dataset
sessions_ds = workflow.transform(dataset)
# Converting the NVTabular dataset to a Dask cuDF dataframe (`to_ddf()`) and then to cuDF dataframe (`.compute()`)
sessions_gdf = sessions_ds.to_ddf().compute()
sessions_gdf.head(3)
```
It is possible to save the preprocessing workflow. That is useful to apply the same preprocessing to other data (with the same schema) and also to deploy the session-based recommendation pipeline to Triton Inference Server.
```
workflow.save('workflow_etl')
```
## Export pre-processed data by day
In this example we are going to split the preprocessed parquet files by days, to allow for temporal training and evaluation. There will be a folder for each day and three parquet files within each day folder: train.parquet, validation.parquet and test.parquet
```
OUTPUT_FOLDER = os.environ.get("OUTPUT_FOLDER",os.path.join(INPUT_DATA_DIR, "sessions_by_day"))
!mkdir -p $OUTPUT_FOLDER
from transformers4rec.data.preprocessing import save_time_based_splits
save_time_based_splits(data=nvt.Dataset(sessions_gdf),
output_dir= OUTPUT_FOLDER,
partition_col='day-first',
timestamp_col='session_id',
)
```
## Checking the preprocessed outputs
```
TRAIN_PATHS = sorted(glob.glob(os.path.join(OUTPUT_FOLDER, "1", "train.parquet")))
gdf = cudf.read_parquet(TRAIN_PATHS[0])
gdf.head()
```
You have just created session-level features to train a session-based recommendation model using NVTabular. Now you can move on to the next notebook, `02-session-based-XLNet-with-PyT.ipynb`, to train a session-based recommendation model using [XLNet](https://arxiv.org/abs/1906.08237), one of the state-of-the-art NLP models.
| github_jupyter |
# Pixelwise Segmentation
Use the `elf.segmentation` module for feature based instance segmentation from pixels.
Note that this example is educational and there are easier and better-performing methods for the image used here. These segmentation methods are very suitable for pixel embeddings learned with neural networks, e.g. with methods like [Semantic Instance Segmentation with a Discriminative Loss Function](https://arxiv.org/abs/1708.02551).
## Image and Features
Load the relevant libraries. Then load an image from the skimage examples and compute per pixel features.
```
%gui qt5
import time
import numpy as np
# import napari for data visualisation
import napari
# import vigra to compute per pixel features
import vigra
# elf segmentation functionality we need for the problem setup
import elf.segmentation.features as feats
from elf.segmentation.utils import normalize_input
# we use the coins example image
from skimage.data import coins
image = coins()
# We use blurring and texture filters from vigra.filters computed for different scales to obtain pixel features.
# Note that it's certainly possible to compute better features for the segmentation problem at hand.
# But for our purposes, these features are good enough.
im_normalized = normalize_input(image)
scales = [4., 8., 12.]
image_features = [im_normalized[None]]  # start with the normalized image itself as a feature channel
for scale in scales:
    image_features.append(normalize_input(vigra.filters.gaussianSmoothing(im_normalized, scale))[None])
    feats1 = vigra.filters.hessianOfGaussianEigenvalues(im_normalized, scale)
    image_features.append(normalize_input(feats1[..., 0])[None])
    image_features.append(normalize_input(feats1[..., 1])[None])
    feats2 = vigra.filters.structureTensorEigenvalues(im_normalized, scale, 1.5 * scale)
    image_features.append(normalize_input(feats2[..., 0])[None])
    image_features.append(normalize_input(feats2[..., 1])[None])
image_features = np.concatenate(image_features, axis=0)
print("Feature shape:")
print(image_features.shape)

# visualize the image and the features with napari
viewer = napari.Viewer()
viewer.add_image(im_normalized)
viewer.add_image(image_features)
```
## Segmentation Problem
Set up a graph segmentation problem based on the image and features with elf functionality.
To this end, we construct a grid graph and compute edge features for the inter pixel edges in this graph.
```
# compute a grid graph for the image
shape = image.shape
grid_graph = feats.compute_grid_graph(shape)
# compute the edge features
# elf supports three different distance metrics to compute edge features
# from the image features:
# - 'l1': the l1 distance
# - 'l2': the l2 distance
# - 'cosine': the cosine distance (= 1. - cosine similarity)
# here, we use the l2 distance
distance_type = 'l2'
# 'compute_grid_graph_image_features' returns both the edges (= list of node ids connected by the edge)
# and the edge weights. Here, the edges are the same as grid_graph.uvIds()
edges, edge_weights = feats.compute_grid_graph_image_features(grid_graph, image_features, distance_type)
# we normalize the edge weights to the range [0, 1]
edge_weights = normalize_input(edge_weights)

# simple post-processing to ensure the background label is '0',
# filter small segments with a size of below 100 pixels
# and ensure that the segmentation ids are consecutive
def postprocess_segmentation(seg, shape, min_size=100):
    if seg.ndim == 1:
        seg = seg.reshape(shape)
    ids, sizes = np.unique(seg, return_counts=True)
    bg_label = ids[np.argmax(sizes)]
    if bg_label != 0:
        if 0 in seg:
            seg[seg == 0] = seg.max() + 1
        seg[seg == bg_label] = 0
    filter_ids = ids[sizes < min_size]
    seg[np.isin(seg, filter_ids)] = 0
    vigra.analysis.relabelConsecutive(seg, out=seg, start_label=1, keep_zeros=True)
    return seg
```
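For reference, the three distance metrics available for the edge features can be written out in a few lines of numpy (an illustrative sketch with made-up feature vectors `u` and `v`; elf computes these per edge internally):

```python
import numpy as np

u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 1.0, 2.0])

l1 = np.abs(u - v).sum()                    # 'l1' distance
l2 = np.sqrt(((u - v) ** 2).sum())          # 'l2' distance
cosine = 1.0 - u.dot(v) / (np.linalg.norm(u) * np.linalg.norm(v))  # 'cosine' distance
print(l1, l2, cosine)  # l1 = 2.0, l2 = sqrt(2), cosine = 0.2
```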
## Multicut
As the first segmentation method, we use Multicut segmentation, based on the grid graph and the edge weights we have just computed.
```
# the elf multicut functionality
import elf.segmentation.multicut as mc

# In order to apply multicut segmentation, we need to map the edge weights from their initial value range [0, 1]
# to [-inf, inf], where positive values represent attractive edges and negative values represent repulsive edges.
# When computing these "costs" for the multicut, we can set the threshold for when an edge is counted
# as repulsive with the so-called boundary bias, or beta, parameter.
# For values smaller than 0.5 the multicut segmentation will under-segment more,
# for values larger than 0.5 it will over-segment more.
beta = .75
costs = mc.compute_edge_costs(edge_weights, beta=beta)
print("Mapped edge weights in range", edge_weights.min(), edge_weights.max(), "to multicut costs in range", costs.min(), costs.max())

# compute the multicut segmentation
t = time.time()
mc_seg = mc.multicut_kernighan_lin(grid_graph, costs)
print("Computing the segmentation with multicut took", time.time() - t, "s")
mc_seg = postprocess_segmentation(mc_seg, shape)
# visualize the multicut segmentation
viewer = napari.Viewer()
viewer.add_image(image)
viewer.add_labels(mc_seg)
```
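A common way to implement the [0, 1] to (-inf, inf) mapping is a beta-shifted log-odds transform, sketched here with numpy. This is an assumption about the general technique; `compute_edge_costs` in elf may differ in its exact scaling:

```python
import numpy as np

def edge_costs_sketch(weights, beta=0.5, eps=1e-6):
    """Map edge weights in [0, 1] to signed multicut costs.

    Weights close to 0 (similar pixels)   -> positive (attractive) costs.
    Weights close to 1 (dissimilar pixels) -> negative (repulsive) costs.
    beta > 0.5 shifts all costs downward, so more edges count as
    repulsive and the segmentation tends toward over-segmentation.
    """
    p = np.clip(weights, eps, 1 - eps)
    return np.log((1 - p) / p) + np.log((1 - beta) / beta)

w = np.array([0.1, 0.5, 0.9])
print(edge_costs_sketch(w, beta=0.5))   # signs: +, 0, -
print(edge_costs_sketch(w, beta=0.75))  # everything shifted toward repulsive
```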
## Long-range Segmentation Problem
For now, we have only taken "local" information into account for the segmentation problem.
More specifically, we have only solved the Multicut with edges derived from nearest neighbor pixel transitions.
Next, we will use two algorithms, Mutex Watershed and Lifted Multicut, that can take long range edges into account. This has the advantage that feature differences are often more pronounced over larger distances, thus yielding much better information about label transitions.
Here, we extract this information by defining a "pixel offset pattern" and comparing the pixel features for these offsets. For details about this segmentation approach check out [The Mutex Watershed: Efficient, Parameter-Free Image Partitioning](https://openaccess.thecvf.com/content_ECCV_2018/html/Steffen_Wolf_The_Mutex_Watershed_ECCV_2018_paper.html).
```
# here, we define the following offset pattern:
# straight and diagonal transitions at a radius of 3, 9 and 27 pixels
# note that the offsets [-1, 0] and [0, -1] would correspond to the edges of the grid graph
offsets = [
    [-3, 0], [0, -3], [-3, 3], [3, 3],
    [-9, 0], [0, -9], [-9, 9], [9, 9],
    [-27, 0], [0, -27], [-27, 27], [27, 27]
]
# we have significantly more long range than normal edges.
# hence, we subsample the offsets for which actual long range edges will be computed by setting a stride factor
strides = [2, 2]
distance_type = 'l2'  # we again use the l2 distance
lr_edges, lr_edge_weights = feats.compute_grid_graph_image_features(grid_graph, image_features, distance_type,
                                                                    offsets=offsets, strides=strides,
                                                                    randomize_strides=False)
lr_edge_weights = normalize_input(lr_edge_weights)
print("Have computed", len(lr_edges), "long range edges, compared to", len(edges), "normal edges")
```
## Mutex Watershed
We use the Mutex Watershed to segment the image. This algorithm functions similarly to the (Lifted) Multicut, but is greedy and hence much faster. Despite its greedy nature, for many problems its solutions are of similar quality to the Multicut segmentation.
```
# elf mutex watershed functionality
import elf.segmentation.mutex_watershed as mws
t = time.time()
mws_seg = mws.mutex_watershed_clustering(edges, lr_edges, edge_weights, lr_edge_weights)
print("Computing the segmentation with mutex watershed took", time.time() - t, "s")
mws_seg = postprocess_segmentation(mws_seg, shape)
viewer = napari.Viewer()
viewer.add_image(image)
viewer.add_labels(mws_seg)
```
## Lifted Multicut
Finally, we use Lifted Multicut segmentation. The Lifted Multicut is an extension to the Multicut, which can incorporate long range edges.
```
# elf lifted multicut functionality
import elf.segmentation.lifted_multicut as lmc
# For the lifted multicut, we again need to transform the edge weights in [0, 1] to costs in [-inf, inf]
beta = .75 # we again use a boundary bias of 0.75
lifted_costs = mc.compute_edge_costs(lr_edge_weights, beta=beta)
t = time.time()
lmc_seg = lmc.lifted_multicut_kernighan_lin(grid_graph, costs, lr_edges, lifted_costs)
print("Computing the segmentation with lifted multicut took", time.time() - t, "s")
lmc_seg = postprocess_segmentation(lmc_seg, shape)
viewer = napari.Viewer()
viewer.add_image(image)
viewer.add_labels(lmc_seg)
```
## Comparing the segmentations
We can now compare the three different segmentations. Note that the comparison is not entirely fair here, because we used the beta parameter to bias the Multicut and Lifted Multicut towards over-segmentation, while the Mutex Watershed was applied to unbiased edge weights.
```
viewer = napari.Viewer()
viewer.add_image(image)
viewer.add_labels(mc_seg)
viewer.add_labels(mws_seg)
viewer.add_labels(lmc_seg)
```
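Independent of any particular library, agreement between two label images can be quantified with the Rand index: the fraction of pixel pairs on which two segmentations agree (same label vs. different labels). Below is a minimal numpy sketch; the toy arrays stand in for segmentations such as `mc_seg` and `mws_seg`, and `rand_index` is an illustrative helper, not part of elf:

```python
import numpy as np

def rand_index(seg_a, seg_b):
    """Rand index between two label arrays: fraction of element pairs
    on which the two segmentations agree (same/different label)."""
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    n = a.size
    # counts of co-occurring label pairs (contingency table entries)
    pairs = np.stack([a, b], axis=1)
    _, counts = np.unique(pairs, axis=0, return_counts=True)
    sum_comb = (counts * (counts - 1) // 2).sum()
    _, ca = np.unique(a, return_counts=True)
    _, cb = np.unique(b, return_counts=True)
    comb_a = (ca * (ca - 1) // 2).sum()
    comb_b = (cb * (cb - 1) // 2).sum()
    total = n * (n - 1) // 2
    # agreements = same-same pairs + different-different pairs
    return (total + 2 * sum_comb - comb_a - comb_b) / total

# toy stand-ins for two segmentations of the same image
seg1 = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1]])
seg2 = np.array([[0, 0, 1, 1],
                 [2, 2, 1, 1]])
print(rand_index(seg1, seg1))  # identical segmentations -> 1.0
print(rand_index(seg1, seg2))
```

Note that the Rand index is invariant to label permutations, so it compares the partitions themselves rather than the arbitrary segment ids.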
| github_jupyter |
# Vowpal Wabbit and LightGBM for a Regression Problem
This notebook shows how to build simple regression models by using
[Vowpal Wabbit (VW)](https://github.com/VowpalWabbit/vowpal_wabbit) and
[LightGBM](https://github.com/microsoft/LightGBM) with SynapseML.
We also compare the results with
[Spark MLlib Linear Regression](https://spark.apache.org/docs/latest/ml-classification-regression.html#linear-regression).
```
import os
if os.environ.get("AZURE_SERVICE", None) == "Microsoft.ProjectArcadia":
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
import math
from synapse.ml.train import ComputeModelStatistics
from synapse.ml.vw import VowpalWabbitRegressor, VowpalWabbitFeaturizer
from synapse.ml.lightgbm import LightGBMRegressor
import numpy as np
import pandas as pd
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression
from sklearn.datasets import load_boston
```
## Prepare Dataset
We use the [*Boston house price* dataset](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html).
The data was collected in 1978 from the Boston area and consists of 506 entries with 14 features, including the value of homes.
We use the `sklearn.datasets` module to download it easily, then split the set into training and testing sets with a 75/25 ratio.
```
boston = load_boston()
feature_cols = ['f' + str(i) for i in range(boston.data.shape[1])]
header = ['target'] + feature_cols
df = spark.createDataFrame(pd.DataFrame(data=np.column_stack((boston.target, boston.data)), columns=header)).repartition(1)
print("Dataframe has {} rows".format(df.count()))
display(df.limit(10).toPandas())
train_data, test_data = df.randomSplit([0.75, 0.25], seed=42)
```
The following is a summary of the training set.
```
display(train_data.summary().toPandas())
```
Plot feature distributions over different target values (house prices in our case).
```
features = train_data.columns[1:]
values = train_data.drop('target').toPandas()
ncols = 5
nrows = math.ceil(len(features) / ncols)
```
## Baseline - Spark MLlib Linear Regressor
First, we set a baseline performance by using Linear Regressor in Spark MLlib.
```
featurizer = VectorAssembler(inputCols=feature_cols, outputCol='features')
lr_train_data = featurizer.transform(train_data)['target', 'features']
lr_test_data = featurizer.transform(test_data)['target', 'features']
display(lr_train_data.limit(10).toPandas())
# By default, `maxIter` is 100. Other params you may want to change include: `regParam`, `elasticNetParam`, etc.
lr = LinearRegression(labelCol='target')
lr_model = lr.fit(lr_train_data)
lr_predictions = lr_model.transform(lr_test_data)
display(lr_predictions.limit(10).toPandas())
```
We evaluate the prediction result by using `synapse.ml.train.ComputeModelStatistics` which returns four metrics:
* [MSE (Mean Squared Error)](https://en.wikipedia.org/wiki/Mean_squared_error)
* [RMSE (Root Mean Squared Error)](https://en.wikipedia.org/wiki/Root-mean-square_deviation) = sqrt(MSE)
* [R squared](https://en.wikipedia.org/wiki/Coefficient_of_determination)
* [MAE (Mean Absolute Error)](https://en.wikipedia.org/wiki/Mean_absolute_error)
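These four metrics follow directly from their definitions; below is a small illustrative numpy sketch of how they are computed (the notebook itself obtains them via `ComputeModelStatistics`):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute MSE, RMSE, R squared and MAE from their definitions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)          # Mean Squared Error
    rmse = np.sqrt(mse)                            # Root Mean Squared Error
    ss_res = np.sum((y_true - y_pred) ** 2)        # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2) # total sum of squares
    r2 = 1.0 - ss_res / ss_tot                     # coefficient of determination
    mae = np.mean(np.abs(y_true - y_pred))         # Mean Absolute Error
    return {'MSE': mse, 'RMSE': rmse, 'R2': r2, 'MAE': mae}

m = regression_metrics([3.0, 5.0, 7.0], [2.5, 5.0, 8.0])
print(m)  # MSE ~ 0.4167, RMSE ~ 0.6455, R2 = 0.84375, MAE = 0.5
```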
```
metrics = ComputeModelStatistics(
evaluationMetric='regression',
labelCol='target',
scoresCol='prediction').transform(lr_predictions)
results = metrics.toPandas()
results.insert(0, 'model', ['Spark MLlib - Linear Regression'])
display(results)
```
## Vowpal Wabbit
Perform VW-style feature hashing. Many input types (numbers, strings, booleans, maps of string to number or string) are supported.
```
vw_featurizer = VowpalWabbitFeaturizer(
inputCols=feature_cols,
outputCol='features')
vw_train_data = vw_featurizer.transform(train_data)['target', 'features']
vw_test_data = vw_featurizer.transform(test_data)['target', 'features']
display(vw_train_data.limit(10).toPandas())
```
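The idea behind the featurizer is the hashing trick: each feature name is hashed into a fixed-size index space and its value is accumulated in that slot, so no vocabulary needs to be stored. The following is a minimal illustrative sketch of the general idea, not SynapseML's actual hashing scheme (the helper name and dimension are made up here):

```python
import hashlib
import numpy as np

def hash_features(named_values, dim=16):
    """Map {name: value} pairs into a fixed-length vector via hashing."""
    vec = np.zeros(dim)
    for name, value in named_values.items():
        # stable hash of the feature name selects the slot
        h = int(hashlib.md5(name.encode()).hexdigest(), 16)
        vec[h % dim] += value  # collisions simply add up
    return vec

row = {'f0': 0.3, 'f1': 1.5, 'f2': -2.0}
v = hash_features(row, dim=8)
print(v.shape)  # (8,)
```

Collisions are tolerated by design: with a large enough `dim` they are rare, and linear learners such as VW are robust to the small amount of noise they introduce.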
See [VW wiki](https://github.com/vowpalWabbit/vowpal_wabbit/wiki/Command-Line-Arguments) for command line arguments.
```
# Use the same number of iterations as Spark MLlib's Linear Regression (=100)
args = "--holdout_off --loss_function quantile -l 7 -q :: --power_t 0.3"
vwr = VowpalWabbitRegressor(
labelCol='target',
passThroughArgs=args,
numPasses=100)
# To reduce the number of partitions (which will affect performance), use `vw_train_data.repartition(1)`
vw_train_data_2 = vw_train_data.repartition(1).cache()
print(vw_train_data_2.count())
vw_model = vwr.fit(vw_train_data_2)
vw_predictions = vw_model.transform(vw_test_data)
display(vw_predictions.limit(10).toPandas())
metrics = ComputeModelStatistics(
evaluationMetric='regression',
labelCol='target',
scoresCol='prediction').transform(vw_predictions)
vw_result = metrics.toPandas()
vw_result.insert(0, 'model', ['Vowpal Wabbit'])
# DataFrame.append was removed in pandas 2.0; use pd.concat instead
results = pd.concat([results, vw_result], ignore_index=True)
display(results)
```
## LightGBM
```
lgr = LightGBMRegressor(
objective='quantile',
alpha=0.2,
learningRate=0.3,
numLeaves=31,
labelCol='target',
numIterations=100)
# Using one partition since the training dataset is very small
repartitioned_data = lr_train_data.repartition(1).cache()
print(repartitioned_data.count())
lg_model = lgr.fit(repartitioned_data)
lg_predictions = lg_model.transform(lr_test_data)
display(lg_predictions.limit(10).toPandas())
metrics = ComputeModelStatistics(
evaluationMetric='regression',
labelCol='target',
scoresCol='prediction').transform(lg_predictions)
lg_result = metrics.toPandas()
lg_result.insert(0, 'model', ['LightGBM'])
results = pd.concat([results, lg_result], ignore_index=True)
display(results)
```
The following figure shows the actual-vs.-predicted plots for the results:
<img width="1102" alt="lr-vw-lg" src="https://user-images.githubusercontent.com/42475935/64071975-4c3e9600-cc54-11e9-8b1f-9a1ee300f445.png" />
```
if os.environ.get("AZURE_SERVICE", None) != "Microsoft.ProjectArcadia":
from matplotlib.colors import ListedColormap, Normalize
from matplotlib.cm import get_cmap
import matplotlib.pyplot as plt
f, axes = plt.subplots(nrows, ncols, sharey=True, figsize=(30,10))
f.tight_layout()
yy = [r['target'] for r in train_data.select('target').collect()]
for irow in range(nrows):
axes[irow][0].set_ylabel('target')
for icol in range(ncols):
try:
feat = features[irow*ncols + icol]
xx = values[feat]
axes[irow][icol].scatter(xx, yy, s=10, alpha=0.25)
axes[irow][icol].set_xlabel(feat)
axes[irow][icol].get_yaxis().set_ticks([])
except IndexError:
f.delaxes(axes[irow][icol])
cmap = get_cmap('YlOrRd')
target = np.array(test_data.select('target').collect()).flatten()
model_preds = [
("Spark MLlib Linear Regression", lr_predictions),
("Vowpal Wabbit", vw_predictions),
("LightGBM", lg_predictions)]
f, axes = plt.subplots(1, len(model_preds), sharey=True, figsize=(18, 6))
f.tight_layout()
for i, (model_name, preds) in enumerate(model_preds):
preds = np.array(preds.select('prediction').collect()).flatten()
err = np.absolute(preds - target)
norm = Normalize()
clrs = cmap(np.asarray(norm(err)))[:, :-1]
axes[i].scatter(preds, target, s=60, c=clrs, edgecolors='#888888', alpha=0.75)
axes[i].plot((0, 60), (0, 60), linestyle='--', color='#888888')
axes[i].set_xlabel('Predicted values')
    if i == 0:
axes[i].set_ylabel('Actual values')
axes[i].set_title(model_name)
```
| github_jupyter |
# CC3501 - Aux 7: Finite Difference Method
#### **Instructor: Daniel Calderón**
#### **Teaching assistants: Diego Donoso and Pablo Pizarro**
#### **Helpers: Francisco Muñoz, Matías Rojas and Sebastián Contreras**
##### Date: 31/05/2019
---
#### Objectives:
* Practice the finite difference method in a practical application
* Learn to write equations as a linear system and solve it with numpy
* Visualize the solution using matplotlib/mayavi
[Markdowns for Jupyter Notebooks](https://medium.com/ibm-data-science-experience/markdown-for-jupyter-notebooks-cheatsheet-386c05aeebed)
_When you find out you can put gifs in Notebooks_
```
from IPython.display import HTML
HTML('<center><img src="https://media.giphy.com/media/2xRWvsvjyrO2k/giphy.gif"></center>')
```
#### Problems
1. Study the example ex_finite_differences.py. The figure ex_finite_differences.png complements that solution.
_Note: the %load filename command is used to load a file into the notebook_
```
# %load ex_finite_differences_laplace_neumann.py
%matplotlib inline
"""
Daniel Calderon, CC3501, 2019-1
Finite Differences for Partial Differential Equations
Solving the Laplace equation in 2D with Dirichlet and
Neumann border conditions over a square domain.
"""
import numpy as np
import matplotlib.pyplot as mpl
# Problem setup
H = 4
W = 3
F = 2
h = 0.1
# Boundary Dirichlet Conditions:
TOP = 20
BOTTOM = 0
LEFT = 5
RIGHT = 15
# Number of unknowns
# left, bottom and top sides are known (Dirichlet condition)
# right side is unknown (Neumann condition)
nh = int(W / h)
nv = int(H / h) - 1
print(nh, nv)
# In this case, the domain is just a rectangle
N = nh * nv
# We define a function to convert the indices from i,j to k and viceversa
# i,j indexes the discrete domain in 2D.
# k parametrize those i,j, this way we can tidy the unknowns
# in a column vector and use the standard algebra
def getK(i,j):
return j * nh + i
def getIJ(k):
i = k % nh
j = k // nh
return (i, j)
"""
# This code is useful to debug the indexation functions above
print("="*10)
print(getK(0,0), getIJ(0))
print(getK(1,0), getIJ(1))
print(getK(0,1), getIJ(2))
print(getK(1,1), getIJ(3))
print("="*10)
import sys
sys.exit(0)
"""
# In this matrix we will write all the coefficients of the unknowns
A = np.zeros((N,N))
# In this vector we will write all the right side of the equations
b = np.zeros((N,))
# Note: To write an equation is equivalent to write a row in the matrix system
# We iterate over each point inside the domain
# Each point has an equation associated
# The equation is different depending on the point location inside the domain
for i in range(0, nh):
for j in range(0, nv):
# We will write the equation associated with row k
k = getK(i,j)
# We obtain indices of the other coefficients
k_up = getK(i, j+1)
k_down = getK(i, j-1)
k_left = getK(i-1, j)
k_right = getK(i+1, j)
# Depending on the location of the point, the equation is different
# Interior
if 1 <= i and i <= nh - 2 and 1 <= j and j <= nv - 2:
A[k, k_up] = 1
A[k, k_down] = 1
A[k, k_left] = 1
A[k, k_right] = 1
A[k, k] = -4
b[k] = 0
# left side
elif i == 0 and 1 <= j and j <= nv - 2:
A[k, k_up] = 1
A[k, k_down] = 1
A[k, k_right] = 1
A[k, k] = -4
b[k] = -LEFT
# right side
elif i == nh - 1 and 1 <= j and j <= nv - 2:
A[k, k_up] = 1
A[k, k_down] = 1
A[k, k_left] = 2
A[k, k] = -4
b[k] = -2 * h * F
# bottom side
elif 1 <= i and i <= nh - 2 and j == 0:
A[k, k_up] = 1
A[k, k_left] = 1
A[k, k_right] = 1
A[k, k] = -4
b[k] = -BOTTOM
# top side
elif 1 <= i and i <= nh - 2 and j == nv - 1:
A[k, k_down] = 1
A[k, k_left] = 1
A[k, k_right] = 1
A[k, k] = -4
b[k] = -TOP
# corner lower left
elif (i, j) == (0, 0):
A[k, k] = 1
b[k] = (BOTTOM + LEFT) / 2
# corner lower right
elif (i, j) == (nh - 1, 0):
A[k, k] = 1
b[k] = BOTTOM
# corner upper left
elif (i, j) == (0, nv - 1):
A[k, k] = 1
b[k] = (TOP + LEFT) / 2
# corner upper right
elif (i, j) == (nh - 1, nv - 1):
A[k, k] = 1
b[k] = TOP
else:
print("Point (" + str(i) + ", " + str(j) + ") missed!")
print("Associated point index is " + str(k))
raise Exception()
# A quick view of a sparse matrix
#mpl.spy(A)
# Solving our system
x = np.linalg.solve(A, b)
# Now we return our solution to the 2d discrete domain
# In this matrix we will store the solution in the 2d domain
u = np.zeros((nh,nv))
for k in range(0, N):
i,j = getIJ(k)
u[i,j] = x[k]
# Adding the borders, as they have known values
ub = np.zeros((nh + 1, nv + 2))
ub[1:nh + 1, 1:nv + 1] = u[:,:]
# Dirichlet boundary condition
# top
ub[0:nh + 2, nv + 1] = TOP
# bottom
ub[0:nh + 2, 0] = BOTTOM
# left
ub[0, 1:nv + 1] = LEFT
# this visualization locates the (0,0) at the lower left corner
# given all the references used in this example.
fig, ax = mpl.subplots(1,1)
pcm = ax.pcolormesh(ub.T, cmap='RdBu_r')
fig.colorbar(pcm)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('Laplace equation solution.\n Neumann Condition at the right side.')
ax.set_aspect('equal', 'datalim')
# Note:
# imshow is also valid but it uses another coordinate system,
# a data transformation is required
#ax.imshow(ub.T)
mpl.show()
%run ex_finite_differences_laplace_neumann.py
from ex_finite_differences_laplace_neumann import *
print(getK(1, 2))
```
2. Add plots to visualize the solution as:
1. A surface
2. Contour (level) curves
> Use different color palettes and label each axis correctly.
_Hint: see the Laplace-Dirichlet guide or check the links_
_[Link](https://matplotlib.org/3.1.0/gallery/mplot3d/surface3d.html)_
[Link](https://matplotlib.org/3.1.0/api/_as_gen/matplotlib.pyplot.contour.html)
```
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import numpy as np
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in matplotlib >= 3.6
# Make data.
X = np.arange(0, nh + 1, 1)
Y = np.arange(0, nv + 2, 1)
X, Y = np.meshgrid(X, Y)
Z = ub.T
# Plot the surface.
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
plt.contour(X, Y, Z)
plt.xlabel('x')
plt.ylabel('y')
```
3. Modify the program so that it solves the system using h = 0.5, 0.1, 0.05. Attach plots of all your solutions.
_Note: for smaller spacings another way of storing the matrices becomes necessary; this is where sparse matrices come in_
_Note 2: with h = 0.01 it (probably) crashes_
```
def problem(h_p):
# Problem setup
H = 4
W = 3
F = 2
# Boundary Dirichlet Conditions:
TOP = 20
BOTTOM = 0
LEFT = 5
RIGHT = 15
# Number of unknowns
# left, bottom and top sides are known (Dirichlet condition)
# right side is unknown (Neumann condition)
nh_p = int(W / h_p) - 1
nv_p = int(H / h_p) - 1
print(nh_p, nv_p)
# In this case, the domain is just a rectangle
N_p = nh_p * nv_p
# We define a function to convert the indices from i,j to k and viceversa
# i,j indexes the discrete domain in 2D.
# k parametrize those i,j, this way we can tidy the unknowns
# in a column vector and use the standard algebra
def newgetK(i, j):
return j * nh_p + i
def newgetIJ(k):
i = k % nh_p
j = k // nh_p
return (i, j)
"""
# This code is useful to debug the indexation functions above
print("="*10)
print(getK(0,0), getIJ(0))
print(getK(1,0), getIJ(1))
print(getK(0,1), getIJ(2))
print(getK(1,1), getIJ(3))
print("="*10)
import sys
sys.exit(0)
"""
# In this matrix we will write all the coefficients of the unknowns
A_p = np.zeros((N_p, N_p))
# In this vector we will write all the right side of the equations
b_p = np.zeros((N_p,))
# Note: To write an equation is equivalent to write a row in the matrix system
# We iterate over each point inside the domain
# Each point has an equation associated
# The equation is different depending on the point location inside the domain
for i in range(0, nh_p):
for j in range(0, nv_p):
# We will write the equation associated with row k
k = newgetK(i, j)
# We obtain indices of the other coefficients
k_up = newgetK(i, j + 1)
k_down = newgetK(i, j - 1)
k_left = newgetK(i - 1, j)
k_right = newgetK(i + 1, j)
# Depending on the location of the point, the equation is different
# Interior
if 1 <= i and i <= nh_p - 2 and 1 <= j and j <= nv_p - 2:
A_p[k, k_up] = 1
A_p[k, k_down] = 1
A_p[k, k_left] = 1
A_p[k, k_right] = 1
A_p[k, k] = -4
b_p[k] = 0
# left side
elif i == 0 and 1 <= j and j <= nv_p - 2:
A_p[k, k_up] = 1
A_p[k, k_down] = 1
A_p[k, k_right] = 1
A_p[k, k] = -4
b_p[k] = -LEFT
# right side
elif i == nh_p - 1 and 1 <= j and j <= nv_p - 2:
A_p[k, k_up] = 1
A_p[k, k_down] = 1
A_p[k, k_left] = 2
A_p[k, k] = -4
b_p[k] = -2 * h_p * F
# bottom side
elif 1 <= i and i <= nh_p - 2 and j == 0:
A_p[k, k_up] = 1
A_p[k, k_left] = 1
A_p[k, k_right] = 1
A_p[k, k] = -4
b_p[k] = -BOTTOM
# top side
elif 1 <= i and i <= nh_p - 2 and j == nv_p - 1:
A_p[k, k_down] = 1
A_p[k, k_left] = 1
A_p[k, k_right] = 1
A_p[k, k] = -4
b_p[k] = -TOP
# corner lower left
elif (i, j) == (0, 0):
A_p[k, k] = 1
b_p[k] = (BOTTOM + LEFT) / 2
# corner lower right
elif (i, j) == (nh_p - 1, 0):
A_p[k, k] = 1
b_p[k] = BOTTOM
# corner upper left
elif (i, j) == (0, nv_p - 1):
A_p[k, k] = 1
b_p[k] = (TOP + LEFT) / 2
# corner upper right
elif (i, j) == (nh_p - 1, nv_p - 1):
A_p[k, k] = 1
b_p[k] = TOP
else:
print("Point (" + str(i) + ", " + str(j) + ") missed!")
print("Associated point index is " + str(k))
raise Exception()
# A quick view of a sparse matrix
# mpl.spy(A)
# Solving our system
x_p = np.linalg.solve(A_p, b_p)
# Now we return our solution to the 2d discrete domain
# In this matrix we will store the solution in the 2d domain
u_p = np.zeros((nh_p, nv_p))
for k in range(0, N_p):
i, j = newgetIJ(k)
u_p[i, j] = x_p[k]
# Adding the borders, as they have known values
ub_p = np.zeros((nh_p + 2, nv_p + 2))
ub_p[1:nh_p + 1, 1:nv_p + 1] = u_p[:, :]
# Dirichlet boundary condition
# top
ub_p[0:nh_p + 2, nv_p + 1] = TOP
# bottom
ub_p[0:nh_p + 2, 0] = BOTTOM
# left
ub_p[0, 1:nv_p + 1] = LEFT
# right
ub_p[nh_p + 1, 1:nv_p + 1] = RIGHT
# this visualization locates the (0,0) at the lower left corner
# given all the references used in this example.
return ub_p
res = []
hs = [0.5, 0.1, 0.05]
for hi in hs:
res.append(problem(hi))
```
4. Using Python's time module, record how long it takes to solve the problem for the spacings h given in the previous problem. Generate a plot relating h to the elapsed time.
```
import time
times = []
for hi in hs:
start = time.time()
problem(hi)
end = time.time()
times.append(end - start)
print("Times:", times)
plt.plot(hs, times)
```
5. Modify the program so that the modeled problem has only Dirichlet conditions:
1. Top edge: 10
2. Bottom edge: 5
3. Right edge: 0
4. Left edge: $f(y)=\sin(\pi\cdot y/H)$
5. y is measured upward from the lower-left corner.
> Present your solution using h=0.1
_For this question, all the Dirichlet conditions must be replaced with the ones given in the statement; for E), instead of using the constant, call a function evaluated at the height of the point_
6. Modify the example program so that it models the Poisson equation using $f(x,y) = \cos(x)\cdot\sin(y)$
_Now the equations must be of the form (in the interior; the edges are similar):_
$$U_{i-1, j} + U_{i+1, j} + U_{i, j-1} + U_{i, j+1} - 4\cdot U_{i, j} = h^2 f_{i, j}$$
7. Going back to the example program, modify it so that the horizontal and vertical spacings are different. That is, your program must use an hx and an hy.
8. How do your equations change if the left edge also has a Neumann condition? Implement this case, considering that the left edge satisfies the following boundary condition:
$$F(y) = \sin\left( 2 \pi \cdot \frac{y}{H}\right)$$
9. Modify the example program so that the represented domain is an L shape with exclusively Dirichlet boundary conditions. For this:
1. You will need to compute the number of unknowns correctly.
2. Allocate memory for the coefficient matrix A and the right-hand-side vector b.
3. Generate an indexing of the domain, i.e., associate an index k with each i,j in the interior of the domain. This is achieved by conveniently modifying the getIJ and getK functions.
4. A simple way to approach this problem is to build a table while traversing the domain. This table must store the value of i and j in each row k. This way, given a k, we find the associated i and j; conversely, a simple lookup lets you find the k given an i and j.
5. To generate the 2D plots, you will need to assign a value to points outside the domain. To prevent matplotlib from plotting them, you can use the value NaN (Not a Number).
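As a hedged sketch of the change problem 6 asks for: only the right-hand side of each interior equation changes, from 0 to $h^2 f_{i,j}$. Below is a self-contained miniature Poisson solver on a unit square with zero Dirichlet borders; the domain and names here are illustrative, not the exact setup of the example program:

```python
import numpy as np

# 5-point finite-difference solver for the Poisson equation
#   u_xx + u_yy = f(x, y)  on (0,1)x(0,1),  u = 0 on the border
n = 20                     # interior points per side
h = 1.0 / (n + 1)
f = lambda x, y: np.cos(x) * np.sin(y)

def k(i, j):               # flatten (i, j) -> row index
    return j * n + i

A = np.zeros((n * n, n * n))
b = np.zeros(n * n)
for i in range(n):
    for j in range(n):
        r = k(i, j)
        A[r, r] = -4
        # neighbors outside the grid are the (zero) Dirichlet border
        for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            ii, jj = i + di, j + dj
            if 0 <= ii < n and 0 <= jj < n:
                A[r, k(ii, jj)] = 1
        # Poisson: the stencil now equals h^2 * f at the grid point
        b[r] = h * h * f((i + 1) * h, (j + 1) * h)

u = np.linalg.solve(A, b)
print(u.shape)
```

The loop structure mirrors the example program; the only conceptual difference from the Laplace case is the `b[r] = h * h * f(...)` line.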
##### If you successfully completed all the previous activities: Congratulations, you are an expert in finite differences!
```
HTML('<center><img src="https://media.giphy.com/media/3o8doT9BL7dgtolp7O/giphy.gif"></center>')
```
Hints:
- matplotlib.pyplot.spy lets you quickly visualize the contents of a sparse matrix
| github_jupyter |
```
# Discretization example
# We will use the titanic dataset
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
# Load the dataset. We will only load 4 features.
# The class is "Survived" (0 or 1)
data = pd.read_csv('./data/titanic.csv', usecols=['Name', 'Sex', 'Age', 'Fare', 'Survived'])
# Ignore rows with null values
data = data.dropna()
# Keep data where Age >= 1
data = data.loc[data['Age'] >= 1]
data.head()
# We want to discretize "Age"
# So let's build a decision tree using the Age to predict Survive
# because the decision tree takes into account the class label
# Create a model with max depth = 2
tree_model = DecisionTreeClassifier(max_depth=2)
# Fit only using Age data
tree_model.fit(data.Age.to_frame(), data.Survived)
# And use the Age variable to predict the class
data['Age_DT']=tree_model.predict_proba(data.Age.to_frame())[:,1]
data.head(10)
# The "Age_DT" column contains the probability of the data point belonging to the corresponding class
# Check the unique values of the Age_DT attribute
data.Age_DT.unique()
# We have 4 unique values of probabilities.
# A tree of depth 2, makes 2 splits, therefore generating 4 buckets.
# That is why we see 4 different probabilities in the output above.
# Check the number of samples per probabilistic bucket
data.groupby(['Age_DT'])['Survived'].count()
# Check the age limits for each bucket
# i.e. let's see the boundaries of each bucket
pd.concat( [data.groupby(['Age_DT'])['Age'].min(),
data.groupby(['Age_DT'])['Age'].max()], axis=1)
# Visualize the tree
from six import StringIO
from IPython.display import Image
from sklearn.tree import export_graphviz
import pydotplus
dot_data = StringIO()
export_graphviz(tree_model, out_file=dot_data,
filled=True, rounded=True,
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
# We obtained 4 bins (the leaf nodes).
##########################################
# Let's see a classification example now
##########################################
# We will use the iris dataset.
# This is perhaps the best known database to be found in the pattern recognition literature.
# Classify the Iris plant based on the 4 features:
# sepal length in cm
# sepal width in cm
# petal length in cm
# petal width in cm
# To one of these classes:
# -- Iris Setosa
# -- Iris Versicolour
# -- Iris Virginica
from sklearn import datasets
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from helper_funcs import plot_decision_regions
# Load dataset (this comes directly in sklearn.datasets)
# We will only load 2 features: petal length and petal width
iris = datasets.load_iris()
X = iris.data[:, [2, 3]]
y = iris.target
print(f'X shape: {X.shape}')
print('Class labels:', np.unique(y))
# Split the data to training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
# Fit decision tree
tree = DecisionTreeClassifier(criterion='entropy', max_depth=3, random_state=0)
tree.fit(X_train, y_train)
X_combined = np.vstack((X_train, X_test))
y_combined = np.hstack((y_train, y_test))
plot_decision_regions(X_combined, y_combined,
classifier=tree, test_idx=range(105, 150))
plt.xlabel('petal length [cm]')
plt.ylabel('petal width [cm]')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
export_graphviz(tree,
out_file='tree.dot',
feature_names=['petal length', 'petal width'])
dot_data = export_graphviz(
tree,
out_file=None,
feature_names=['petal length', 'petal width'],
class_names=['setosa', 'versicolor', 'virginica'],
filled=True,
rounded=True)
graph = pydotplus.graph_from_dot_data(dot_data)
Image(graph.create_png())
# RULES EXTRACTION
from sklearn.tree import export_text
tree_rules = export_text(tree, feature_names=['petal length', 'petal width'])
print(tree_rules)
# 5 Rules:
# Just follow the paths:
# R1: IF petal width <= 0.75 THEN class=0 (setosa)
# R2: IF petal width > 0.75 AND petal length <= 4.95 AND petal width <= 1.65 THEN class = 1 (versicolor)
# R3: IF petal width > 0.75 AND petal length <= 4.95 AND petal width > 1.65 THEN class = 2 (virginica)
# R4: IF petal width > 0.75 AND petal length > 4.95 AND petal length <= 5.05 THEN class = 2 (virginica)
# R5: IF petal width > 0.75 AND petal length > 4.95 AND petal length > 5.05 THEN class = 2 (virginica)
```
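The bin boundaries found above come from the tree's split search. Under the hood, fitting a depth-1 tree amounts to an exhaustive scan for the threshold with maximal information gain; here is a self-contained numpy sketch of that scan (toy data, not the titanic set, and a simplified stand-in for sklearn's actual splitter):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of an integer label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def best_split(x, y):
    """Exhaustively search the threshold maximizing information gain,
    i.e. what a depth-1 decision tree does to pick one bin boundary."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    base = entropy(y)
    best_t, best_gain = None, -1.0
    for t in np.unique(x)[:-1]:          # candidate thresholds
        left, right = y[x <= t], y[x > t]
        w = len(left) / len(y)
        gain = base - (w * entropy(left) + (1 - w) * entropy(right))
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t, best_gain

# toy "Age vs Survived"-like data: low values mostly class 1, high mostly 0
x = np.array([2., 4., 6., 8., 30., 40., 50., 60.])
y = np.array([1, 1, 1, 1, 0, 0, 0, 1])
t, g = best_split(x, y)
print(t, g)   # the best threshold separates the young and old groups
```

Recursing the same scan on each resulting half is what produces the four buckets of the depth-2 tree above.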
| github_jupyter |
## test.ipynb: Test the training result and Evaluate model
```
# Import the necessary libraries
from sklearn.decomposition import PCA
import os
import scipy.io as sio
import numpy as np
from keras.models import load_model
from keras.utils import np_utils
from sklearn.metrics import classification_report, confusion_matrix
import itertools
import spectral
# Define the necessary functions for later use
# load the Indian pines dataset which is the .mat format
def loadIndianPinesData():
data_path = os.path.join(os.getcwd(),'data')
data = sio.loadmat(os.path.join(data_path, 'Indian_pines.mat'))['indian_pines']
labels = sio.loadmat(os.path.join(data_path, 'Indian_pines_gt.mat'))['indian_pines_gt']
return data, labels
# load the Indian pines dataset which is HSI format
# referenced from http://www.spectralpython.net/fileio.html
def loadHSIData():
data_path = os.path.join(os.getcwd(), 'HSI_data')
data = spectral.open_image(os.path.join(data_path, '92AV3C.lan')).load()
data = np.array(data).astype(np.int32)
labels = spectral.open_image(os.path.join(data_path, '92AV3GT.GIS')).load()
labels = np.array(labels).astype(np.uint8)
labels.shape = (145, 145)
return data, labels
# Get the model evaluation report,
# include classification report, confusion matrix, Test_Loss, Test_accuracy
target_names = ['Alfalfa', 'Corn-notill', 'Corn-mintill', 'Corn'
,'Grass-pasture', 'Grass-trees', 'Grass-pasture-mowed',
'Hay-windrowed', 'Oats', 'Soybean-notill', 'Soybean-mintill',
'Soybean-clean', 'Wheat', 'Woods', 'Buildings-Grass-Trees-Drives',
'Stone-Steel-Towers']
def reports(X_test,y_test):
Y_pred = model.predict(X_test)
y_pred = np.argmax(Y_pred, axis=1)
classification = classification_report(np.argmax(y_test, axis=1), y_pred, target_names=target_names)
confusion = confusion_matrix(np.argmax(y_test, axis=1), y_pred)
score = model.evaluate(X_test, y_test, batch_size=32)
Test_Loss = score[0]*100
Test_accuracy = score[1]*100
return classification, confusion, Test_Loss, Test_accuracy
# apply PCA preprocessing for data sets
def applyPCA(X, numComponents=75):
newX = np.reshape(X, (-1, X.shape[2]))
pca = PCA(n_components=numComponents, whiten=True)
newX = pca.fit_transform(newX)
newX = np.reshape(newX, (X.shape[0],X.shape[1], numComponents))
return newX, pca
def Patch(data,height_index,width_index):
#transpose_array = data.transpose((2,0,1))
#print transpose_array.shape
height_slice = slice(height_index, height_index+PATCH_SIZE)
width_slice = slice(width_index, width_index+PATCH_SIZE)
patch = data[height_slice, width_slice, :]
return patch
# Global Variables
windowSize = 5
numPCAcomponents = 30
testRatio = 0.50
# show current path
PATH = os.getcwd()
print (PATH)
# Read PreprocessedData from file
X_test = np.load("./predata/XtestWindowSize"
+ str(windowSize) + "PCA" + str(numPCAcomponents) + "testRatio" + str(testRatio) + ".npy")
y_test = np.load("./predata/ytestWindowSize"
+ str(windowSize) + "PCA" + str(numPCAcomponents) + "testRatio" + str(testRatio) + ".npy")
# X_test = np.load("./predata/XAllWindowSize"
# + str(windowSize) + "PCA" + str(numPCAcomponents) + "testRatio" + str(testRatio) + ".npy")
# y_test = np.load("./predata/yAllWindowSize"
# + str(windowSize) + "PCA" + str(numPCAcomponents) + "testRatio" + str(testRatio) + ".npy")
X_test = np.reshape(X_test, (X_test.shape[0], X_test.shape[3], X_test.shape[1], X_test.shape[2]))
y_test = np_utils.to_categorical(y_test)
# load the model architecture and weights
model = load_model('./model/HSI_model_epochs100.h5')
# calculate result, loss, accuray and confusion matrix
classification, confusion, Test_loss, Test_accuracy = reports(X_test,y_test)
classification = str(classification)
confusion_str = str(confusion)
# show result and save to file
print('Test loss {} (%)'.format(Test_loss))
print('Test accuracy {} (%)'.format(Test_accuracy))
print("classification result: ")
print('{}'.format(classification))
print("confusion matrix: ")
print('{}'.format(confusion_str))
file_name = './result/report' + "WindowSize" + str(windowSize) + "PCA" + str(numPCAcomponents) + "testRatio" + str(testRatio) +".txt"
with open(file_name, 'w') as x_file:
x_file.write('Test loss {} (%)'.format(Test_loss))
x_file.write('\n')
x_file.write('Test accuracy {} (%)'.format(Test_accuracy))
x_file.write('\n')
x_file.write('\n')
x_file.write(" classification result: \n")
x_file.write('{}'.format(classification))
x_file.write('\n')
x_file.write(" confusion matrix: \n")
x_file.write('{}'.format(confusion_str))
import matplotlib.pyplot as plt
%matplotlib inline
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.get_cmap("Blues")):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')
    print(cm)
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.colorbar()
plt.title(title)
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90)
plt.yticks(tick_marks, classes)
fmt = '.4f' if normalize else 'd'
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
thresh = cm[i].max() / 2.
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.figure(figsize=(10,10))
plot_confusion_matrix(confusion, classes=target_names, normalize=False,
title='Confusion matrix, without normalization')
plt.savefig("./result/confusion_matrix_without_normalization.svg")
plt.show()
plt.figure(figsize=(15,15))
plot_confusion_matrix(confusion, classes=target_names, normalize=True,
title='Normalized confusion matrix')
plt.savefig("./result/confusion_matrix_with_normalization.svg")
plt.show()
# load the original image
# X, y = loadIndianPinesData()
X, y = loadHSIData()
X, pca = applyPCA(X, numComponents=numPCAcomponents)
height = y.shape[0]
width = y.shape[1]
PATCH_SIZE = 5
numComponents = 30
# calculate the predicted image
outputs = np.zeros((height,width))
for i in range(height-PATCH_SIZE+1):
for j in range(width-PATCH_SIZE+1):
p = int(PATCH_SIZE/2)
# print(y[i+p][j+p])
# target = int(y[i+PATCH_SIZE/2, j+PATCH_SIZE/2])
target = y[i+p][j+p]
if target == 0 :
continue
else :
image_patch=Patch(X,i,j)
# print (image_patch.shape)
X_test_image = image_patch.reshape(1,image_patch.shape[2],image_patch.shape[0],image_patch.shape[1]).astype('float32')
prediction = (model.predict_classes(X_test_image))
outputs[i+p][j+p] = prediction+1
ground_truth = spectral.imshow(classes=y, figsize=(10, 10))
predict_image = spectral.imshow(classes=outputs.astype(int), figsize=(10, 10))
```
| github_jupyter |
# Pendulum Environment, OpenAI Gym
* Left force: -50N, Right force: 50N, Nothing: 0N, with some amount of noise added to the action
* Generate trajectories by starting upright, and then applying random forces.
* Failure if the pendulum exceeds +/- pi/2
* Setting this problem up: how to encode Newtons? I'm starting things upright- how do we determine success?
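The failure criterion and the noisy force encoding described above can be sketched as follows. This is a minimal illustration assuming the pole angle `theta` is measured in radians from upright; `PendulumEnv` itself may encode these details differently:

```python
import numpy as np

def is_failure(theta: float) -> bool:
    """Failure when the pendulum tilts past +/- pi/2 from upright (theta = 0)."""
    return abs(theta) > np.pi / 2

def noisy_force(direction: int, magnitude: float = 50.0, noise_std: float = 5.0) -> float:
    """Map a discrete choice (-1 left, 0 nothing, +1 right) to a force in Newtons plus noise."""
    return direction * magnitude + np.random.normal(0.0, noise_std)

print(is_failure(0.0), is_failure(2.0))  # an upright pole has not failed; a tilted one has
```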
```
import configargparse
import torch
import torch.optim as optim
import sys
sys.path.append('../')
from environments import PendulumEnv
from models.agents import NFQAgent
from models.networks import NFQNetwork, ContrastiveNFQNetwork
from util import get_logger, close_logger, load_models, make_reproducible, save_models
import matplotlib.pyplot as plt
import numpy as np
import itertools
import seaborn as sns
import tqdm
env = PendulumEnv()
rollouts = []
for i in range(10):
rollout, episode_cost = env.generate_rollout()
rollouts.extend(rollout)
rewards = [r[2] for r in rollouts]
sns.distplot(rewards)
def generate_data(init_experience=100, bg_only=False, agent=None):
env_bg = PendulumEnv(group=0)
env_fg = PendulumEnv(group=1)
bg_rollouts = []
fg_rollouts = []
if init_experience > 0:
for _ in range(init_experience):
rollout_bg, episode_cost = env_bg.generate_rollout(
agent, render=False, group=0
)
bg_rollouts.extend(rollout_bg)
if not bg_only:
rollout_fg, episode_cost = env_fg.generate_rollout(
agent, render=False, group=1
)
fg_rollouts.extend(rollout_fg)
bg_rollouts.extend(fg_rollouts)
all_rollouts = bg_rollouts.copy()
return all_rollouts, env_bg, env_fg
train_rollouts, train_env_bg, train_env_fg = generate_data(init_experience=200, bg_only=True)
test_rollouts, eval_env_bg, eval_env_fg = generate_data(init_experience=200, bg_only=True)
is_contrastive=True
epoch = 1000
hint_to_goal = False
if hint_to_goal:
goal_state_action_b_bg, goal_target_q_values_bg, group_bg = train_env_bg.get_goal_pattern_set(group=0)
goal_state_action_b_fg, goal_target_q_values_fg, group_fg = train_env_fg.get_goal_pattern_set(group=1)
goal_state_action_b_bg = torch.FloatTensor(goal_state_action_b_bg)
goal_target_q_values_bg = torch.FloatTensor(goal_target_q_values_bg)
goal_state_action_b_fg = torch.FloatTensor(goal_state_action_b_fg)
goal_target_q_values_fg = torch.FloatTensor(goal_target_q_values_fg)
nfq_net = ContrastiveNFQNetwork(state_dim=train_env_bg.state_dim, is_contrastive=is_contrastive, deep=False)
optimizer = optim.Adam(nfq_net.parameters(), lr=1e-1)
nfq_agent = NFQAgent(nfq_net, optimizer)
bg_success_queue = [0] * 3
fg_success_queue = [0] * 3
eval_fg = 0
evaluations = 5
for k, ep in enumerate(tqdm.tqdm(range(epoch + 1))):
state_action_b, target_q_values, groups = nfq_agent.generate_pattern_set(train_rollouts)
if hint_to_goal:
goal_state_action_b = torch.cat([goal_state_action_b_bg, goal_state_action_b_fg], dim=0)
goal_target_q_values = torch.cat([goal_target_q_values_bg, goal_target_q_values_fg], dim=0)
state_action_b = torch.cat([state_action_b, goal_state_action_b], dim=0)
target_q_values = torch.cat([target_q_values, goal_target_q_values], dim=0)
goal_groups = torch.cat([group_bg, group_fg], dim=0)
groups = torch.cat([groups, goal_groups], dim=0)
if not nfq_net.freeze_shared:
loss = nfq_agent.train((state_action_b, target_q_values, groups))
eval_episode_length_fg, eval_success_fg, eval_episode_cost_fg = 0, 0, 0
if nfq_net.freeze_shared:
eval_fg += 1
if eval_fg > 50:
loss = nfq_agent.train((state_action_b, target_q_values, groups))
(eval_episode_length_bg, eval_success_bg, eval_episode_cost_bg) = nfq_agent.evaluate_pendulum(eval_env_bg, render=False)
bg_success_queue = bg_success_queue[1:]
bg_success_queue.append(1 if eval_success_bg else 0)
(eval_episode_length_fg, eval_success_fg, eval_episode_cost_fg) = nfq_agent.evaluate_pendulum(eval_env_fg, render=False)
fg_success_queue = fg_success_queue[1:]
fg_success_queue.append(1 if eval_success_fg else 0)
    if sum(bg_success_queue) == 3 and not nfq_net.freeze_shared:
nfq_net.freeze_shared = True
print("FREEZING SHARED")
if is_contrastive:
for param in nfq_net.layers_shared.parameters():
param.requires_grad = False
for param in nfq_net.layers_last_shared.parameters():
param.requires_grad = False
for param in nfq_net.layers_fg.parameters():
param.requires_grad = True
for param in nfq_net.layers_last_fg.parameters():
param.requires_grad = True
else:
for param in nfq_net.layers_fg.parameters():
param.requires_grad = False
for param in nfq_net.layers_last_fg.parameters():
param.requires_grad = False
optimizer = optim.Adam(
itertools.chain(
nfq_net.layers_fg.parameters(),
nfq_net.layers_last_fg.parameters(),
),
lr=1e-1,
)
nfq_agent._optimizer = optimizer
if sum(fg_success_queue) == 3:
print("Done Training")
break
if ep % 300 == 0:
perf_bg = []
perf_fg = []
for it in range(evaluations):
(eval_episode_length_bg,eval_success_bg,eval_episode_cost_bg) = nfq_agent.evaluate_pendulum(eval_env_bg, render=False)
(eval_episode_length_fg,eval_success_fg,eval_episode_cost_fg) = nfq_agent.evaluate_pendulum(eval_env_fg, render=False)
perf_bg.append(eval_episode_cost_bg)
perf_fg.append(eval_episode_cost_fg)
train_env_bg.close()
train_env_fg.close()
eval_env_bg.close()
eval_env_fg.close()
print("Evaluation bg: " + str(perf_bg) + " Evaluation fg: " + str(perf_fg))
perf_bg = []
perf_fg = []
for it in range(evaluations*10):
    (eval_episode_length_bg,eval_success_bg,eval_episode_cost_bg) = nfq_agent.evaluate_pendulum(eval_env_bg, render=False)
    (eval_episode_length_fg,eval_success_fg,eval_episode_cost_fg) = nfq_agent.evaluate_pendulum(eval_env_fg, render=False)
perf_bg.append(eval_episode_cost_bg)
perf_fg.append(eval_episode_cost_fg)
eval_env_bg.close()
eval_env_fg.close()
print("Evaluation bg: " + str(sum(perf_bg)/len(perf_bg)) + " Evaluation fg: " + str(sum(perf_fg)/len(perf_fg)))
```
| github_jupyter |
# __Statistics concepts and introduction to statistical data analysis using Python__
```
#Import the required packages
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import pandas_profiling as pp
from joblib import load, dump
import statsmodels.api as sm
```
For this example we will use open crime databases recorded in the United States, specifically a subsample of the New York City crime database.
```
#Use pandas to read the data and store it as a data frame
df_NY = load( "./datos_NY_crimen_limpios.pkl")
#Check that the data were read correctly
df_NY.head()
```
Variable dictionary for the NY crime database.
1. Ciudad: place where the incident occurred
2. Fecha: year, month and day of the incident
3. Hora: time at which the incident occurred
4. Estatus: indicator of whether the incident was completed or not
5. Gravedad: level of the incident; violation, felony, misdemeanor
6. Lugar: setting of the incident; inside, behind, in front of or opposite to...
7. Lugar_especifico: specific place where the incident occurred; store, residence...
8. Crimen_tipo: description of the type of offense
9. Edad_sospechoso: age group of the suspect
10. Raza_sospechoso: race of the suspect
11. Sexo_sospechoso: sex of the suspect; M male, F female, U unknown
12. Edad_victima: age group of the victim
13. Raza_victima: race of the victim
14. Sexo_victima: sex of the victim; M male, F female, U unknown
## 1.0 __Descriptive statistics__
## 1.1 Descriptive statistics concepts:
**Population**: the set of all elements of interest (N).
**Parameters**: metrics obtained when working with a population.
**Sample**: a subgroup of the population (n).
**Statistics**: metrics obtained when working with samples.
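To make the population/sample distinction concrete, a small sketch with synthetic data (not the NY crime data):

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(loc=35, scale=10, size=100_000)   # all N elements of interest
sample = rng.choice(population, size=500, replace=False)  # a random subgroup of size n

# the parameter (population mean) and the statistic (sample mean) should be close
print(population.mean(), sample.mean())
```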

## 1.2 A sample must be:
**Representative**: a representative sample is a subgroup of the population that accurately reflects the members of the entire population.
**Random**: a random sample is collected when each member of the sample is chosen from the population strictly by chance.
*How do we know a sample is representative? How do we calculate the sample size?*
It depends on the following factors:
1. **Confidence level**: how sure do we need to be that our results did not occur by chance alone? Typically a confidence level of _95% to 99%_ is used.
2. **Size of the difference we want to detect**: the smaller the difference you want to detect, the larger the sample must be.
3. **Absolute value of the proportions where you want to detect differences**: this depends on the test we are working with. For example, detecting a difference between 50% and 51% requires a different sample size than detecting a difference between 80% and 81%. That is, the required sample size is a function of the baseline proportion.
4. **The distribution of the data (mainly of the outcome)**
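A sketch of a sample-size calculation with statsmodels' power tools, using hypothetical numbers: detecting a difference between 55% and 50% with 95% confidence and 80% power:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# effect size (Cohen's h) for the difference between proportions 0.55 and 0.50
effect = proportion_effectsize(0.55, 0.50)

# required sample size per group at alpha = 0.05 (95% confidence) and 80% power
n = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.8)
print(round(n))  # the smaller the difference, the larger this number
```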
## 1.3 What is a variable?
**Variable**: a characteristic, number or quantity that can be described, measured or quantified.
__Types of variables__:
1. Qualitative or categorical: ordinal and nominal
2. Quantitative or numerical: discrete and continuous
```
#ORDINAL
#
#NOMINAL
#
#DISCRETE
#
#CONTINUOUS
#
```
The variables in our database
```
df_NY.columns
```
## 1.4 How do we represent the different types of variables correctly?
__Categorical data:__ bar chart, pie chart, Pareto chart (which combines bars and percentages)
__Numerical data:__ histogram and scatterplot
## 1.5 Attributes of variables: measures of central tendency
Measures of central tendency: __mean, median and mode__
1. **Mean**: the most common measure; it is obtained by summing all the elements of a variable and dividing by their number. It is affected by extreme values.
2. **Median**: the value at the central position of the observations (in ascending order). It is not affected by extreme values.
3. **Mode**: the most frequent value (there can be more than one mode).
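The three measures can be computed directly with pandas (illustrative numbers, not the crime data):

```python
import pandas as pd

ages = pd.Series([18, 21, 21, 25, 30, 34, 90])  # 90 is an extreme value

print(ages.mean())           # pulled upward by the outlier
print(ages.median())         # 25: robust to the outlier
print(ages.mode().tolist())  # [21]: most frequent value(s)
```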

## 1.6 Attributes of variables: measures of asymmetry (skewness) or dispersion
__Skewness__: indicates whether the data are concentrated on one side of the curve.
For example:
1) when the mean is greater than the median, the data are concentrated on the left side of the curve, i.e. the outliers are on the right side of the distribution.
2) when the median is greater than the mean, most of the data are concentrated on the right side of the distribution and the outliers are on the left side.
In both cases the mode is the most represented measure.
__No skew__: when the median, mode and mean are equal, the distribution is symmetric.
__Skewness tells us where our data lie!__
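A quick sketch with scipy illustrating the first case, a mean greater than the median with outliers on the right (synthetic data):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
right_skewed = rng.exponential(scale=2.0, size=10_000)  # long right tail

print(right_skewed.mean() > np.median(right_skewed))  # mean pulled right by outliers
print(skew(right_skewed) > 0)                         # positive (right) skew
```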
## 1.7 Variance
__Variance__ is a measure of the dispersion of a group of data around the mean.
An easier way to "visualize" variance is through the __standard deviation__, which in most cases is more meaningful.
The __coefficient of variation__ equals the standard deviation divided by the mean.
The standard deviation is the most common measure of variability for a single data set. One of its main advantages is that its units are not squared, which makes it easier to interpret.
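A minimal sketch of the three dispersion measures with NumPy (toy data):

```python
import numpy as np

data = np.array([10.0, 12.0, 9.0, 11.0, 13.0, 10.0])

variance = data.var(ddof=1)  # sample variance, in squared units
std = data.std(ddof=1)       # standard deviation, in the original units
cv = std / data.mean()       # coefficient of variation, unitless

print(variance, std, cv)
```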
## 1.8 Relationships between variables
__Covariance and the linear correlation coefficient__
Covariance can be >0, =0 or <0:
1. >0: the two variables move together
2. <0: the two variables move in opposite directions
3. =0: the two variables are independent
The correlation coefficient ranges from -1 to 1.
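A short NumPy illustration of both measures (toy data):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2 * x + np.array([0.1, -0.2, 0.0, 0.3, -0.1])  # moves together with x

cov_xy = np.cov(x, y)[0, 1]        # > 0: the variables move together
corr_xy = np.corrcoef(x, y)[0, 1]  # always between -1 and 1

print(cov_xy > 0, -1 <= corr_xy <= 1)
```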
__To explore the attributes of each variable in our database we can build a profile report (we can cover all the descriptive statistics with a single command!). This report is the result of an analysis of each variable in the database. With it, we can verify the data type of each variable and obtain the measures of central tendency and skewness, in order to get a general idea of how our variables behave.
In addition, the profile report produces a correlation analysis between variables (see below), which tells us how strongly each pair of variables is related__.
```
#pp.ProfileReport(df_NY[['Ciudad', 'Fecha', 'Hora', 'Estatus', 'Gravedad', 'Lugar','Crimen_tipo', 'Lugar_especifico', 'Edad_sospechoso', 'Raza_sospechoso','Sexo_sospechoso', 'Edad_victima', 'Raza_victima', 'Sexo_victima']])
```
## __2.0 Inferential statistics__
## 2.1 Probability distributions
A __distribution__ is a function that shows the possible values of a variable and how often they occur.
In other words, the __frequency__ with which the possible values of a variable occur in an interval.
The most famous distribution in statistics (though not necessarily the most common) is the __normal distribution__, in which the mean, mode and median are equal, i.e. there is no skew.
Frequently, when the values of a variable do not follow a normal distribution, transformations or standardizations are used.
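A sketch illustrating both points with synthetic data: in a normal sample the mean and median coincide, and a log transform pulls a right-skewed variable toward normality:

```python
import numpy as np

rng = np.random.default_rng(2)
normal_data = rng.normal(loc=0.0, scale=1.0, size=100_000)

# in a normal distribution the mean and median coincide (no skew)
print(abs(normal_data.mean() - np.median(normal_data)) < 0.05)

# a common transformation toward normality for right-skewed positive data: the log
skewed = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
log_transformed = np.log(skewed)  # approximately normal after the transform
```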
## 2.2 Linear regression
A __linear regression__ is a mathematical model that approximates the dependence relationship between two variables, one independent and one dependent.
*The values of the dependent variable depend on the values of the independent variables.*
## 2.3 Analysis of variance
__Analysis of variance (ANOVA)__ is used to compare the means of two or more groups. An ANOVA test can tell you whether there is a difference in means between the groups. However, it does not tell us where the difference lies (between which groups). To resolve this, we can run a post-hoc test.
## __Analysis of an open database of crimes in NY__
### 1.0 Assessing the frequency of offenses
We can start by analyzing the types of crimes recorded, as well as the frequency of each crime type.
```
#Use value_counts in Pandas to count and sort the crime types
df_NY.Crimen_tipo.value_counts().iloc[:10]
```
Now let's plot the results to get a better view of the data.
```
df_NY.Crimen_tipo.value_counts().iloc[:10].plot(kind= "barh")
```
We can see that the most frequent offenses are "PETIT LARCENY" and "HARRASSMENT 2".
### 1.1 Assessing the frequency of a specific offense: for example "Harrassment"
```
df_NY.dropna(inplace=True)
acoso = df_NY[df_NY["Crimen_tipo"].str.contains("HARRASSMENT 2")]
acoso.head(5)
```
## 2.0 Relationships between a dependent and an independent variable (visually).
### 2.1 Analysis of __offense__ occurrence by __location__
Are there differences in the frequency of harassment across the different boroughs of NY? In other words, which places are more dangerous?
In this example, the dependent variable is the occurrence of the offense and the independent variable is the location.
To do this, we will use Pandas' __"groupby"__ function to group by location, and the __size__ function to check the number recorded in each one.
```
acoso.columns
acoso.head()
acoso.groupby("Ciudad").size().sort_values(ascending=False)
acoso.Ciudad.value_counts().iloc[:10].plot(kind= "barh")
```
Looking at the results, we can see which NY borough has the most harassment reports: Brooklyn has the highest number.
```
acoso.Lugar_especifico.value_counts().iloc[:10].plot(kind= "barh")
```
Harassment occurred most frequently inside houses and places of residence.
### 2.2. Analysis of offense occurrence over time
Suppose we want to know the frequency of the offense across different years (2004-2018) and months of the year.
Here the dependent variable is again the occurrence of the offense and the independent variable is time.
```
acoso.groupby("anio").size().plot(kind="bar")
```
We can see that most harassment reports occurred from 2016 to 2018. 2011 was the year with the fewest harassment reports.
### 2.3. Analysis of offense occurrence by sex of the victim and the aggressor
In this example, the dependent variable is the sex of the victim and the independent variable is the sex of the aggressor.
#### VICTIMS
```
acoso.groupby("Sexo_victima").size().sort_values(ascending=False)
acoso.Sexo_victima.value_counts().iloc[:10].plot(kind= "pie")
acoso.groupby("Edad_victima").size().sort_values(ascending=False)
acoso.Edad_victima.value_counts().iloc[:10].plot(kind= "pie")
```
#### SUSPECTS
```
acoso.groupby("Sexo_sospechoso").size().sort_values(ascending=False)
acoso.Sexo_sospechoso.value_counts().iloc[:10].plot(kind= "pie")
acoso.groupby("Edad_sospechoso").size().sort_values(ascending=False)
acoso.Edad_sospechoso.value_counts().iloc[:10].plot(kind= "pie")
```
### 2.4. Analysis of offense occurrence by race of the victim and the aggressor
In this last example of relationships between variables, the dependent variable is the race of the victim and the independent variable is the race of the aggressor.
#### VICTIMS
```
acoso.groupby("Raza_victima").size().sort_values(ascending=False)
acoso.Raza_victima.value_counts().iloc[:10].plot(kind= "pie")
```
#### SUSPECTS
```
acoso.groupby("Raza_sospechoso").size().sort_values(ascending=False)
acoso.Raza_sospechoso.value_counts().iloc[:10].plot(kind= "pie")
```
## 3.0 Linear regression
Let's test the relationship between a pair of variables, for example the victim's weight and the aggressor's weight. The relationship can be negative or positive.
```
import pandas as pd
import statsmodels.api as sm
from sklearn import datasets, linear_model
df_w = pd.read_csv('Weight.csv')
df_w.head()
# y and X were never defined; build them from the Weight.csv columns
# used in the regplot call below (victim weight as a function of aggressor weight)
X = df_w["AGRE_Weight"]
y = df_w["VIC_Weight"]
X = sm.add_constant(X)  # add an intercept term
model = sm.OLS(y, X).fit()
predictions = model.predict(X)
print_model = model.summary()
print(print_model)
from scipy.stats import shapiro
stat, p = shapiro(y)
print('statistics=%.3f, p=%.3f' % (stat, p))
alpha = 0.05
if p > alpha:
print('its Gaussian')
else:
print('not Gaussian')
import statsmodels.api as sm
import pylab
sm.qqplot(y, loc = 4, scale = 3, line = 's')
pylab.show()
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(color_codes = True)
sns.regplot(x = "AGRE_Weight", y = "VIC_Weight", data = df_w);
```
## 4.0 ANOVA
To run an analysis of variance on our data we first need to state a hypothesis. For example: there are differences in the age of the victims between the places where harassment occurs.
We can test our hypothesis statistically.
In this case we will generate extra columns of approximate continuous numerical data, "Edad_calculada_victima" and "Edad_calculada_sospechoso", for the analysis.
```
import pandas as pd
import scipy.stats as stats
import statsmodels.api as sm
from statsmodels.formula.api import ols
acoso["Edad_sospechoso"].unique()
from random import randint
def rango_a_random(s):
if type(s)==str:
s = s.split('-')
s = [int(i) for i in s]
s = randint(s[0],s[1]+1)
return s
acoso["Edad_calculada_victima"] = acoso["Edad_victima"]
acoso["Edad_calculada_victima"] = acoso["Edad_calculada_victima"].replace("65+","65-90").replace("<18","15-18").replace("UNKNOWN",np.nan)
acoso["Edad_calculada_victima"] = acoso["Edad_calculada_victima"].apply(rango_a_random)
acoso["Edad_calculada_sospechoso"] = acoso["Edad_sospechoso"]
acoso["Edad_calculada_sospechoso"] = acoso["Edad_calculada_sospechoso"].replace("65+","65-90").replace("<18","15-18").replace("UNKNOWN",np.nan)
acoso["Edad_calculada_sospechoso"] = acoso["Edad_calculada_sospechoso"].apply(rango_a_random)
acoso.head(5)
acoso = acoso.dropna()
results = ols('Edad_calculada_victima ~ C(Ciudad)', data = acoso).fit()
results.summary()
```
In an analysis of variance the two most important values are the F value (F-statistic) and the P value (Prob (F-statistic)). We must obtain a P value < 0.05 to accept our hypothesis.
In our example F = 4.129 and P = 0.002, so we can accept our hypothesis.
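The ANOVA above says whether the group means differ, but not which pairs differ; as noted in section 2.3, a post-hoc test locates the difference. A sketch with Tukey's HSD on synthetic data (it deliberately does not use the `acoso` dataframe, so it runs on its own):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "age": np.concatenate([rng.normal(30, 5, 100),   # two groups share a mean...
                           rng.normal(30, 5, 100),
                           rng.normal(40, 5, 100)]), # ...and one group differs
    "borough": ["BROOKLYN"] * 100 + ["QUEENS"] * 100 + ["BRONX"] * 100,
})

tukey = pairwise_tukeyhsd(endog=df["age"], groups=df["borough"], alpha=0.05)
print(tukey.summary())  # flags which specific pairs of groups differ
```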
| github_jupyter |
```
import json
import numpy as np
import tensorflow as tf
import collections
from sklearn.model_selection import train_test_split
with open('ctexts.json','r') as fopen:
ctexts = json.loads(fopen.read())[:200]
with open('headlines.json','r') as fopen:
headlines = json.loads(fopen.read())[:200]
def build_dataset(words, n_words):
count = [['GO', 0], ['PAD', 1], ['EOS', 2], ['UNK', 3]]
count.extend(collections.Counter(words).most_common(n_words))
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
    for word in words:
        index = dictionary.get(word, 3)  # unseen words map to UNK (index 3), not GO (index 0)
        if index == 3:
            unk_count += 1
        data.append(index)
    count[3][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
concat_from = ' '.join(ctexts).split()
vocabulary_size_from = len(list(set(concat_from)))
data_from, count_from, dictionary_from, rev_dictionary_from = build_dataset(concat_from, vocabulary_size_from)
print('vocab from size: %d'%(vocabulary_size_from))
print('Most common words', count_from[4:10])
print('Sample data', data_from[:10], [rev_dictionary_from[i] for i in data_from[:10]])
concat_to = ' '.join(headlines).split()
vocabulary_size_to = len(list(set(concat_to)))
data_to, count_to, dictionary_to, rev_dictionary_to = build_dataset(concat_to, vocabulary_size_to)
print('vocab to size: %d'%(vocabulary_size_to))
print('Most common words', count_to[4:10])
print('Sample data', data_to[:10], [rev_dictionary_to[i] for i in data_to[:10]])
for i in range(len(headlines)):
headlines[i] = headlines[i] + ' EOS'
headlines[0]
GO = dictionary_from['GO']
PAD = dictionary_from['PAD']
EOS = dictionary_from['EOS']
UNK = dictionary_from['UNK']
def str_idx(corpus, dic):
X = []
for i in corpus:
ints = []
for k in i.split():
try:
ints.append(dic[k])
except Exception as e:
print(e)
ints.append(UNK)
X.append(ints)
return X
X = str_idx(ctexts, dictionary_from)
Y = str_idx(headlines, dictionary_to)
train_X, test_X, train_Y, test_Y = train_test_split(X, Y, test_size = 0.2)
class Summarization:
def __init__(self, size_layer, num_layers, embedded_size,
from_dict_size, to_dict_size, batch_size):
def lstm_cell(reuse=False):
return tf.nn.rnn_cell.LSTMCell(size_layer, initializer=tf.orthogonal_initializer(),
reuse=reuse)
def attention(encoder_out, seq_len, reuse=False):
attention_mechanism = tf.contrib.seq2seq.LuongAttention(num_units = size_layer,
memory = encoder_out,
memory_sequence_length = seq_len)
return tf.contrib.seq2seq.AttentionWrapper(
cell = tf.nn.rnn_cell.MultiRNNCell([lstm_cell(reuse) for _ in range(num_layers)]),
attention_mechanism = attention_mechanism,
attention_layer_size = size_layer)
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.int32, [None, None])
self.X_seq_len = tf.placeholder(tf.int32, [None])
self.Y_seq_len = tf.placeholder(tf.int32, [None])
# encoder
encoder_embeddings = tf.Variable(tf.random_uniform([from_dict_size, embedded_size], -1, 1))
encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X)
encoder_cells = tf.nn.rnn_cell.MultiRNNCell([lstm_cell() for _ in range(num_layers)])
self.encoder_out, self.encoder_state = tf.nn.dynamic_rnn(cell = encoder_cells,
inputs = encoder_embedded,
sequence_length = self.X_seq_len,
dtype = tf.float32)
self.encoder_state = tuple(self.encoder_state[-1] for _ in range(num_layers))
main = tf.strided_slice(self.Y, [0, 0], [batch_size, -1], [1, 1])
decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1)
# decoder
decoder_embeddings = tf.Variable(tf.random_uniform([to_dict_size, embedded_size], -1, 1))
decoder_cell = attention(self.encoder_out, self.X_seq_len)
dense_layer = tf.layers.Dense(to_dict_size)
training_helper = tf.contrib.seq2seq.TrainingHelper(
inputs = tf.nn.embedding_lookup(decoder_embeddings, decoder_input),
sequence_length = self.Y_seq_len,
time_major = False)
training_decoder = tf.contrib.seq2seq.BasicDecoder(
cell = decoder_cell,
helper = training_helper,
initial_state = decoder_cell.zero_state(batch_size, tf.float32).clone(cell_state=self.encoder_state),
output_layer = dense_layer)
training_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
decoder = training_decoder,
impute_finished = True,
maximum_iterations = tf.reduce_max(self.Y_seq_len))
predicting_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
            embedding = decoder_embeddings,
start_tokens = tf.tile(tf.constant([GO], dtype=tf.int32), [batch_size]),
end_token = EOS)
predicting_decoder = tf.contrib.seq2seq.BasicDecoder(
cell = decoder_cell,
helper = predicting_helper,
initial_state = decoder_cell.zero_state(batch_size, tf.float32).clone(cell_state=self.encoder_state),
output_layer = dense_layer)
predicting_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
decoder = predicting_decoder,
impute_finished = True,
maximum_iterations = 2 * tf.reduce_max(self.X_seq_len))
self.training_logits = training_decoder_output.rnn_output
self.predicting_ids = predicting_decoder_output.sample_id
masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32)
self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.training_logits,
targets = self.Y,
weights = masks)
self.optimizer = tf.train.AdamOptimizer().minimize(self.cost)
size_layer = 128
num_layers = 2
embedded_size = 32
batch_size = 32
epoch = 5
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Summarization(size_layer, num_layers, embedded_size, len(dictionary_from),
len(dictionary_to), batch_size)
sess.run(tf.global_variables_initializer())
def pad_sentence_batch(sentence_batch, pad_int, maxlen=500):
padded_seqs = []
seq_lens = []
max_sentence_len = min(max([len(sentence) for sentence in sentence_batch]),maxlen)
for sentence in sentence_batch:
sentence = sentence[:maxlen]
padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))
seq_lens.append(len(sentence))
return padded_seqs, seq_lens
def check_accuracy(logits, Y):
acc = 0
for i in range(logits.shape[0]):
internal_acc = 0
for k in range(len(Y[i])):
try:
if logits[i][k] == -1 and Y[i][k] == 1:
internal_acc += 1
elif Y[i][k] == logits[i][k]:
internal_acc += 1
except:
continue
acc += (internal_acc / len(Y[i]))
return acc / logits.shape[0]
for i in range(epoch):
total_loss, total_accuracy = 0, 0
for k in range(0, (len(train_X) // batch_size) * batch_size, batch_size):
batch_x, seq_x = pad_sentence_batch(train_X[k: k+batch_size], PAD)
batch_y, seq_y = pad_sentence_batch(train_Y[k: k+batch_size], PAD)
        step = i * (len(train_X) // batch_size) + k // batch_size + 1  # the model defines no global_step, so derive it
        predicted, loss, _ = sess.run([model.predicting_ids, model.cost, model.optimizer],
                                      feed_dict={model.X:batch_x,
                                                 model.Y:batch_y,
                                                 model.X_seq_len:seq_x,
                                                 model.Y_seq_len:seq_y})
total_loss += loss
total_accuracy += check_accuracy(predicted,batch_y)
if step % 5 == 0:
rand = np.random.randint(0, len(test_Y)-batch_size)
batch_x, seq_x = pad_sentence_batch(test_X[rand:rand+batch_size], PAD)
batch_y, seq_y = pad_sentence_batch(test_Y[rand:rand+batch_size], PAD)
predicted, test_loss = sess.run([model.predicting_ids,model.cost], feed_dict={model.X:batch_x,
model.Y:batch_y,
model.X_seq_len:seq_x,
model.Y_seq_len:seq_y})
print('epoch %d, step %d, train loss %f, valid loss %f'%(i+1,step,loss,test_loss))
print('expected output:',' '.join([rev_dictionary_to[n] for n in batch_y[0] if n not in [-1,0,1,2,3]]))
        print('predicted output:',' '.join([rev_dictionary_to[n] for n in predicted[0] if n not in [-1,0,1,2,3]]),'\n')
total_loss /= (len(train_X) // batch_size)
total_accuracy /= (len(train_X) // batch_size)
print('epoch: %d, avg loss: %f, avg accuracy: %f'%(i+1, total_loss, total_accuracy))
```
| github_jupyter |
```
import random
from collections import deque
from copy import deepcopy
import gym
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torch.distributions import Categorical
from IPython.display import clear_output
SEED = 1
BATCH_SIZE = 256
LR = 0.0003
UP_COEF = 0.25
GAMMA = 0.99
EPS = 1e-6
GRAD_NORM = False
# set device
use_cuda = torch.cuda.is_available()
device = torch.device('cuda' if use_cuda else 'cpu')
# random seed
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
if use_cuda:
torch.cuda.manual_seed_all(SEED)
class DQN(nn.Module):
def __init__(self, obs_space, action_space):
super().__init__()
H = 32
self.head = nn.Sequential(
nn.Linear(obs_space, H),
nn.Tanh()
)
self.fc = nn.Sequential(
nn.Linear(H, H),
nn.Tanh(),
nn.Linear(H, action_space)
)
def forward(self, x):
out = self.head(x)
q = self.fc(out).reshape(out.shape[0], -1)
return q
losses = []
def learn(net, tgt_net, optimizer, rep_memory):
global action_space
net.train()
tgt_net.train()
train_data = random.sample(rep_memory, BATCH_SIZE)
dataloader = DataLoader(
train_data, batch_size=BATCH_SIZE, pin_memory=use_cuda)
for i, (s, a, r, _s, d) in enumerate(dataloader):
s_batch = s.to(device).float()
a_batch = a.to(device).long()
_s_batch = _s.to(device).float()
r_batch = r.to(device).float()
done_mask = 1 - d.to(device).float()
with torch.no_grad():
_q_batch_tgt = tgt_net(_s_batch)
_q_max = torch.max(_q_batch_tgt, dim=1)[0]
q_batch = net(s_batch)
q_acting = q_batch[range(BATCH_SIZE), a_batch]
# loss
loss = (r_batch + GAMMA * done_mask * _q_max - q_acting).pow(2).mean()
        losses.append(loss.item())  # store a plain float, not a graph-attached tensor
optimizer.zero_grad()
loss.backward()
if GRAD_NORM:
nn.utils.clip_grad_norm_(net.parameters(), max_norm=0.5)
optimizer.step()
def select_action(obs, tgt_net):
tgt_net.eval()
with torch.no_grad():
state = torch.tensor([obs]).to(device).float()
q = tgt_net(state)
action = torch.argmax(q)
return action.item()
def plot():
clear_output(True)
plt.figure(figsize=(16, 5))
plt.subplot(121)
plt.plot(rewards)
plt.title('Reward')
plt.subplot(122)
plt.plot(losses)
plt.title('Loss')
plt.show()
```
## Main
```
# make an environment
# env = gym.make('CartPole-v0')
env = gym.make('CartPole-v1')
# env = gym.make('MountainCar-v0')
# env = gym.make('LunarLander-v2')
env.seed(SEED)
obs_space = env.observation_space.shape[0]
action_space = env.action_space.n
# hyperparameter
n_episodes = 1000
learn_start = 1500
memory_size = 50000
update_frq = 1
use_eps_decay = False
epsilon = 0.001
eps_min = 0.001
decay_rate = 0.0001
n_eval = 10
# global values
total_steps = 0
learn_steps = 0
rewards = []
reward_eval = deque(maxlen=n_eval)
is_learned = False
is_solved = False
# make two neural networks
net = DQN(obs_space, action_space).to(device)
target_net = deepcopy(net)
# make optimizer
optimizer = optim.AdamW(net.parameters(), lr=LR, eps=EPS)
# make a memory
rep_memory = deque(maxlen=memory_size)
use_cuda
env.spec.max_episode_steps
env.spec.reward_threshold
# play
for i in range(1, n_episodes + 1):
obs = env.reset()
done = False
ep_reward = 0
while not done:
# env.render()
if np.random.rand() < epsilon:
action = env.action_space.sample()
else:
action = select_action(obs, target_net)
_obs, reward, done, _ = env.step(action)
rep_memory.append((obs, action, reward, _obs, done))
obs = _obs
total_steps += 1
ep_reward += reward
if use_eps_decay:
epsilon -= epsilon * decay_rate
epsilon = max(eps_min, epsilon)
if len(rep_memory) >= learn_start:
if len(rep_memory) == learn_start:
print('\n============ Start Learning ============\n')
learn(net, target_net, optimizer, rep_memory)
learn_steps += 1
if learn_steps == update_frq:
# target smoothing update
with torch.no_grad():
for t, n in zip(target_net.parameters(), net.parameters()):
t.data = UP_COEF * n.data + (1 - UP_COEF) * t.data
learn_steps = 0
if done:
rewards.append(ep_reward)
reward_eval.append(ep_reward)
plot()
# print('{:3} Episode in {:5} steps, reward {:.2f}'.format(
# i, total_steps, ep_reward))
if len(reward_eval) >= n_eval:
if np.mean(reward_eval) >= env.spec.reward_threshold:
print('\n{} is solved! {:3} Episode in {:3} steps'.format(
env.spec.id, i, total_steps))
torch.save(target_net.state_dict(),
f'./test/saved_models/{env.spec.id}_ep{i}_clear_model_dqn.pt')
break
env.close()
[
('CartPole-v0', 207, 0.25),
('CartPole-v1', 346, 0.25),
('MountainCar-v0', 304, 0.25),
('LunarLander-v2', 423, 0.25)
]
```
# Basic chemical, electrical, and thermodynamic principles
To develop a quantitative understanding of how these processes work, we start with a set of definitions of some of the quantities and concepts with which we are concerned. Specifically, this section reviews basic biochemical, thermodynamic, and related concepts that are particularly relevant to the quantitative analysis of mitochondrial ATP synthesis.
```{figure} Figure1.png
---
name: mitofig
---
Diagram of a mitochondrion with the cytosol, intermembrane space (IMS), and matrix indicated. *Inset from left to right:* Protein channels and complexes associated with oxidative phosphorylation in the cristae of the mitochondrion. Complex I (C1) catalyzes the oxidation of NADH$^{2-}$ to NAD$^{-}$ and reduction of ubiquinone (Q) to QH$_2$. Complex II (C2) catalyzes the oxidation of FADH$_2$ to FAD coupled to the reduction of Q. Complex III (C3) catalyzes the oxidation of QH$_2$ coupled to the reduction of cytochrome c (Cyt c). Complex IV (C4) catalyzes the oxidation of Cyt c coupled to the reduction of oxygen to water. These redox transfers drive pumping of H$^+$ ions out of the matrix, establishing the proton motive force across the inner mitochondrial membrane (IMM) that drives ATP synthesis at complex V, or the F$_0$F$_1$-ATPase (F$_0$F$_1$). The adenine nucleotide translocase (ANT) exchanges matrix ATP for IMS ADP. The inorganic phosphate cotransporter (PiC) brings protons and Pi from the IMS to the matrix. Lastly, there is a passive H$^{+}$ leak across the IMM. (Figure created with Biorender.com.)
```
## Mitochondrial anatomy
The mitochondrion is a membrane-bound, rod-shaped organelle that is responsible for generating most of the chemical energy needed to power the cell's biochemical reactions by respiration {cite}`Nicholls2013`. Mitochondria are composed of an outer and an inner membrane separated by the intermembrane space (IMS) ({numref}`mitofig`). The outer mitochondrial membrane is freely permeable to small molecules and ions. The inner mitochondrial membrane (IMM) folds inward to make cristae that extend into the matrix. Transmembrane channels called porins and the respiratory complexes involved in oxidative phosphorylation and ATP synthesis allow for more selective IMM permeability. The IMM encloses the mitochondrial matrix, which contains mitochondrial deoxyribonucleic acid (DNA), the majority of mitochondrial proteins, soluble metabolic intermediates including ATP, ADP, and Pi, and the enzymes catalyzing the tricarboxylic acid (TCA) cycle and $\beta$-oxidation.
## IMM capacitance
The IMM acts as an electrical capacitor to store energy in an electrostatic potential difference between the milieu on each side. Electrical capacitance of a membrane ($C_m$) is the proportionality between the rate of charge transport across the membrane, i.e. current ($I$), to the rate of membrane potential ($\Delta \Psi$) change, that is,
```{math}
C_m \dfrac{ {\rm d} {\Delta\Psi}}{{\rm d} t} = I.
```
In the model and associated calculations presented below, we express fluxes in units of moles per unit time per unit volume of mitochondria. Thus, it is convenient to obtain an estimate of $C_m$ in units of mole per volt per volume of mitochondria. Mitochondria take on a roughly ellipsoidal shape in vivo and a more spherical morphology in suspensions of purified mitochondria {cite}`Picard2011`. To estimate the mitochondrial surface area-to-volume ratio, we take a representative mitochondrion as a sphere with radius $r = 1 \ \mu\text{m}$ and obtain a surface area-to-volume ratio of $3 \ \mu\text{m}^{-1}$. Furthermore, we estimate that the IMM has ten-fold greater surface area than the outer membrane, yielding a surface area-to-volume ratio of $30 \ \mu\text{m}^{-1}$ for the IMM. Since the capacitance density of biological membranes ranges from $0.5\text{-}1.0 \ \mu\text{F cm}^{-2}$, or $0.5 \text{-} 1.0 \times 10^{-8} \ \mu\text{F} \ \mu\text{m}^{-2}$ {cite}`Nicholls2013`, $C_m$ is approximately $30 \times 10^{-8} \ \mu\text{F} \ \mu\text{m}^{-3} = 300 \ \text{F (L mito)}^{-1}$. To convert to the units used in the calculations below, we have
```{math}
C_m = 300 \ \frac{\rm F}{\rm L \ mito} = 300 \ \frac{\rm C}{\rm V \cdot L \, mito}\cdot
\frac{1}{F}\, \frac{\rm mol}{\rm C} =
3.1 \times 10^{-3} \,
\frac{\rm mol}{\rm V \cdot L \, mito}, \,
```
where $F = 96,485 \ \text{C mol}^{-1}$ is Faraday's constant.
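The unit conversion above can be reproduced in a few lines; the sphere radius, the ten-fold IMM area factor, and a capacitance density of $1 \ \mu\text{F cm}^{-2}$ (the upper end of the quoted range) are the same assumptions stated in the text:

```python
# Reproducing the capacitance estimate above.
r = 1.0                        # mitochondrial radius (um)
sa_to_vol = 3.0 / r            # sphere surface-area-to-volume ratio (um^-1)
imm_factor = 10.0              # IMM area relative to outer membrane
c_density = 1.0e-8             # capacitance density, uF per um^2 (= 1 uF/cm^2)
F = 96485.0                    # Faraday's constant (C/mol)

c_per_um3 = sa_to_vol * imm_factor * c_density   # uF per um^3 of mito
c_per_L = c_per_um3 * 1e-6 / 1e-15               # F per (L mito); 1 uF = 1e-6 F, 1 um^3 = 1e-15 L
C_m = c_per_L / F                                # mol V^-1 (L mito)^-1
print(c_per_L, C_m)            # 300 F (L mito)^-1, ~3.1e-3 mol V^-1 (L mito)^-1
```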
## Gibbs free energy
A *free energy* is a thermodynamic quantity that relates a change in the thermodynamic state of a system to an associated change in total entropy of the system plus its environment. Chemical reaction processes necessarily proceed in the direction associated with a reduction in free energy {cite}`Nicholls2013`. When free energy of a system is reduced, total entropy (of the universe) is increased. The form of free energy that is operative in constant-temperature and constant-pressure systems (most relevant for biochemistry) is the Gibbs free energy, or simply the *Gibbs energy*.
For a chemical reaction of reactants $A_i$ and products $B_j$,
```{math}
\sum_{i = 1}^M m_i A_i \rightleftharpoons \sum_{j = 1}^N n_j B_j
```
where $M$ and $N$ are the total number of reactants and products, respectively, and $m_i$ and $n_j$ are the coefficients of reactant $i$ and product $j$, respectively, the Gibbs energy can be expressed as
```{math}
:label: Delta_rG
\Delta_r G = \Delta_r G^\circ + R{\rm T} \ln \left( \dfrac{ \prod_{j = 1}^{N} [\text{B}_j]^{n_j}}{ \prod_{i = 1}^{M} [\text{A}_i]^{m_i}} \right),
```
where $\Delta_r G^\circ$ is the reference Gibbs energy for the reaction (a constant at given constant chemical conditions of temperature, pressure, ionic conditions, etc.), $R = 8.314 \ \text{J mol}^{-1} \ \text{K}^{-1}$ is the gas constant, and $\text{T} = 310.15 \ \text{K}$ is the temperature. The second term on the right-hand side of Equation {eq}`Delta_rG` governs how changes in the concentrations of species affect $\Delta_r G$. Applications of Equation {eq}`Delta_rG` to reactions in aqueous solution usually adopt the convention that all solute concentrations are measured relative to 1 Molar, ensuring that the argument of the logarithm is unitless regardless of the stoichiometry of the reaction.
A system is in chemical equilibrium when there is no thermodynamic driving force, that is, $\Delta_r G = 0$. Thus, for this chemical reaction the reference Gibbs energy is related to the equilibrium constant as
```{math}
K_{eq} = \left( \frac{\prod_{j = 1}^{N} [\text{B}_j]^{n_j}}{\prod_{i = 1}^{M} [\text{A}_i]^{m_i}} \right)_{eq}
= \exp\left\{ -\frac{\Delta_r G^\circ}{R{\rm T}} \right\} .
```
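For a concrete sense of scale, this relation can be evaluated numerically. Here we use the reference Gibbs energy of ATP hydrolysis quoted later in this chapter ($\Delta_r G^\circ = 4.99 \ \text{kJ mol}^{-1}$) purely as an illustrative input:

```python
import numpy as np

# Equilibrium constant from a reference Gibbs energy at body temperature.
R = 8.314      # J mol^-1 K^-1
T = 310.15     # K
DrGo = 4990.0  # J mol^-1 (illustrative value, from the ATP hydrolysis section)
K_eq = np.exp(-DrGo / (R * T))
print(K_eq)    # ~0.14: a positive reference Gibbs energy gives K_eq < 1
```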
## Membrane potential and proton motive force
Free energy associated with the oxidation of primary fuels is transduced to generate the chemical potential across the IMM known as the *proton motive force*, which is used to synthesize ATP in the matrix and transport ATP out of the matrix to the cytosol {cite}`Nicholls2013`. The thermodynamic driving force for translocation of hydrogen ions ($\text{H}^{+}$) across the IMM has two components: the difference in electrostatic potential across the membrane, $\Delta\Psi$ (V), and the difference in $\text{H}^{+}$ concentration (or activity) between the media on either side of the membrane, $\Delta\text{pH}$, that is
```{math}
:label: DG_H
\Delta G_{\rm H} &= -F\Delta\Psi + R{\rm T}\ln\left( [{\rm H}^+]_x/[{\rm H}^+]_c \right) \nonumber \\
&= -F\Delta\Psi - 2.3 R{\rm T} \, \Delta{\rm pH},
```
where the subscripts $x$ and $c$ indicate matrix and external (cytosol) spaces. $\Delta\Psi$ is defined as the cytosolic potential minus matrix potential, yielding a negative change in free energy for a positive potential. Membrane potential in respiring mitochondria is approximately $150 \text{-} 200 \ \text{mV}$, yielding a contribution to $\Delta G_{\rm H}$ on the order of $15 \text{-} 20 \ \text{kJ mol}^{-1}$ {cite}`Bazil2016`. Under in vitro conditions, $\Delta\text{pH}$ between the matrix and external buffer is on the order of $0.1 \ \text{pH}$ units {cite}`Bazil2016`. Thus, the contribution to proton motive force from a pH difference is less than $1 \ \text{kJ mol}^{-1}$ and substantially smaller than that from $\Delta\Psi$.
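Plugging in the representative numbers from the text (a mid-range potential of $175 \ \text{mV}$ and a $0.1$ pH unit gradient; both are assumed illustrative values, not model outputs) gives the relative sizes of the two contributions:

```python
import numpy as np

# Illustrative evaluation of Equation (DG_H).
R, T, F = 8.314, 310.15, 96485.0
DPsi = 0.175          # membrane potential (V)
DpH = 0.1             # pH_x - pH_c (matrix slightly more basic)
DG_H = -F * DPsi - 2.3 * R * T * DpH   # J per mol H+ moved into the matrix
print(DG_H / 1000)    # the DpH term contributes less than 1 kJ/mol of this
```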
## Thermodynamics of ATP synthesis/hydrolysis
Under physiological conditions the ATP hydrolysis reaction
```{math}
:label: ATP1
\text{ATP}^{4-} + \text{H}_2\text{O} \rightleftharpoons
\text{ADP}^{3-} + \text{HPO}_4^{2-} + \text{H}^{+}
```
is thermodynamically favored to proceed from the left-to-right direction. The Gibbs energy associated with turnover of this reaction is
```{math}
:label: DrG_ATP
\Delta_r G_{\rm ATP} = \Delta_r G^o_\text{ATP} + R{\rm T} \ln
\left( \frac{ [\text{ADP}^{3-}] [\text{HPO}_4^{2-}] [{\rm H}^{+}] }
{ [\text{ATP}^{4-}] }\right),
```
where the Gibbs energy for ATP hydrolysis under physiological conditions is approximately $\Delta_r G^o_\text{ATP} = 4.99 \ \text{kJ mol}^{-1}$ {cite}`Li2011`. Using the convention that all concentrations are formally defined as measured relative to 1 Molar, the argument of the logarithm in Equation {eq}`DrG_ATP` is unitless.
### Calculation of the ATP hydrolysis potential
Equation {eq}`DrG_ATP` expresses the Gibbs energy of chemical Equation {eq}`ATP1` in terms of its *chemical species*. In practice, biochemistry typically deals with biochemical *reactants*, which are comprised of sums of rapidly interconverting chemical species. We calculate the total ATP concentration, $[\Sigma \text{ATP}]$, in terms of its bound and unbound species, that is,
```{math}
:label: sumATP
[\Sigma \text{ATP}] &= [\text{ATP}^{4-}] + [\text{MgATP}^{2-}] + [\text{HATP}^{3-}] + [\text{KATP}^{3-}] \nonumber\\
&= [\text{ATP}^{4-}] + \frac{[\text{Mg}^{2+}] [\text{ATP}^{4-}]}{K_{\text{MgATP}}} + \frac{ [\text{H}^{+}] [\text{ATP}^{4-}]}{K_{\text{HATP}}} + \frac{ [\text{K}^{+}] [\text{ATP}^{4-}]}{K_{\text{KATP}}} \nonumber \\
&= [\text{ATP}^{4-}] \left( 1 + \frac{[\text{Mg}^{2+}]}{K_{\text{MgATP}}} + \frac{ [\text{H}^{+}]}{K_{\text{HATP}}} + \frac{ [\text{K}^{+}]}{K_{\text{KATP}}} \right) \nonumber \\
&= [\text{ATP}^{4-}] P_{\text{ATP}},
```
where $P_{\text{ATP}}$ is a *binding polynomial*. Here, we account for only the single cation-bound species. (Free $\text{H}^+$ in solution associates with water to form $\text{H}_3\text{O}^+$. Here we use [$\text{H}^+$] to indicate hydrogen ion activity, which is equal to $10^{-\text{pH}}$.) {numref}`table-dissociationconstants` lists the dissociation constants used in this study from {cite}`Li2011`. Similarly, total ADP, [$\Sigma \text{ADP}$], and inorganic phosphate, [$\Sigma \text{Pi}$], concentrations are
```{math}
:label: sumADP
[\Sigma {\rm ADP} ] &= [{\rm ADP}^{3-}]\left( 1 + \frac{[{\rm Mg}^{2+}]}{K_{\rm MgADP}} + \frac{ [{\rm H}^{+}]}{K_{\rm HADP}} + \frac{ [{\rm K}^{+}]}{K_{\rm KADP}} \right) \nonumber \\
&= [{\rm ADP}^{3-}]P_{\rm ADP}
```
and
```{math}
:label: sumPi
[\Sigma {\rm Pi} ] &= [{\rm HPO}_4^{2-}] \left( 1 + \frac{[{\rm Mg}^{2+}]}{K_{\rm MgPi}} + \frac{ [{\rm H}^{+}]}{K_{\rm HPi}} + \frac{ [{\rm K}^{+}]}{K_{\rm KPi}} \right) \nonumber \\
&= [{\rm HPO}_4^{2-}] P_{\rm Pi},
```
for binding polynomials $P_{\text{ADP}}$ and $P_{\text{Pi}}$.
Expressing the Gibbs energy of ATP hydrolysis in Equation {eq}`ATP1` in terms of biochemical reactant concentrations, we obtain
```{math}
:label: ATP2
\Delta_r G_{\rm ATP} &= \Delta_r G^o_\text{ATP} + R{\rm T} \ln \left(
\frac{[\Sigma{\rm ADP}][\Sigma{\rm Pi}]}
{[\Sigma{\rm ATP}]}\cdot\frac{[{\rm H}^+]P_{\rm ATP}}{P_{\rm ADP}P_{\rm Pi}}
\right) \nonumber \\
&= \Delta_r G^o_\text{ATP}
+ R{\rm T} \ln \left(\frac{[{\rm H}^+]P_{\rm ATP}}{P_{\rm ADP}P_{\rm Pi}} \right)
+ R{\rm T} \ln \left(\frac{[\Sigma{\rm ADP}][\Sigma{\rm Pi}]}
{[\Sigma{\rm ATP}]}\right) \nonumber \\
&= \Delta_r G'^o_\text{ATP}
+ R{\rm T} \ln \left(\frac{[\Sigma{\rm ADP}][\Sigma{\rm Pi}]}
{[\Sigma{\rm ATP}]}\right)
```
where $\Delta_r G'^o_\text{ATP}$ is a transformed, or *apparent*, reference Gibbs energy for the reaction.
```{list-table} Dissociation constants given as 10$^{-\text{p}K_a}$.
:header-rows: 2
:name: table-dissociationconstants
* -
-
- Ligand ($L$)
-
* -
- Mg$^{2+}$
- H$^{+}$
- K$^{+}$
* - $K_{L-\text{ATP}}$
- $10^{-3.88}$
- $10^{-6.33}$
- $10^{-1.02}$
* - $K_{L-\text{ADP}}$
- $10^{-3.00}$
- $10^{-6.26}$
- $10^{-0.89}$
* - $K_{L-\text{Pi}}$
- $10^{-1.66}$
- $10^{-6.62}$
- $10^{-0.42}$
```
The following code computes the apparent Gibbs energy with $\text{pH} = 7$, $[\text{K}^{+}] = 150 \ \text{mM}$, and $[\text{Mg}^{2+}] = 1 \ \text{mM}$. Biochemical reactant concentrations are set such that the total adenine nucleotide (TAN) pool inside the mitochondrion is $10 \ \text{mM}$, $[\Sigma \text{ATP}] = 0.5 \ \text{mM}$, $[\Sigma \text{ADP}] = 9.5 \ \text{mM}$, and $[\Sigma \text{Pi}] = 1 \ \text{mM}$. Here, we obtain a value of approximately $\text{-}45 \ \text{kJ mol}^{-1}$.
```
# Import numpy package for calculations
import numpy as np
# Dissociation constants
K_MgATP = 10**(-3.88)
K_MgADP = 10**(-3.00)
K_MgPi = 10**(-1.66)
K_HATP = 10**(-6.33)
K_HADP = 10**(-6.26)
K_HPi = 10**(-6.62)
K_KATP = 10**(-1.02)
K_KADP = 10**(-0.89)
K_KPi = 10**(-0.42)
# Gibbs energy under physiological conditions(J mol^(-1))
DrGo_ATP = 4990
# Thermochemical constants
R = 8.314 # J (mol * K)**(-1)
T = 310.15 # K
F = 96485 # C mol**(-1)
# Environment concentrations
pH = 7
H = 10**(-pH) # Molar
K = 150e-3 # Molar
Mg = 1e-3 # Molar
# Binding polynomials
P_ATP = 1 + H/K_HATP + K/K_KATP + Mg/K_MgATP # equation 6
P_ADP = 1 + H/K_HADP + K/K_KADP + Mg/K_MgADP # equation 7
P_Pi = 1 + H/K_HPi + K/K_KPi + Mg/K_MgPi # equation 8
# Total concentrations
sumATP = 0.5e-3 # Molar
sumADP = 9.5e-3 # Molar
sumPi = 1.0e-3 # Molar
# Reaction:
# ATP4− + H2O ⇌ ADP3− + HPO2−4 + H+
# Use equation 8 to calculate apparent reference Gibbs energy
DrG_ATP_apparent = DrGo_ATP + R * T * np.log(H * P_ATP / (P_ADP * P_Pi))
# Use equation 8 to calculate reaction Gibbs energy
DrG_ATP = DrG_ATP_apparent + R * T * np.log((sumADP * sumPi / sumATP))
print('Gibbs energy of ATP hydrolysis (kJ mol^(-1))')
print(DrG_ATP / 1000)
```
The reactant concentrations used in the above calculation represent reasonable values for concentrations in the mitochondrial matrix. In the cytosol, the ATP/ADP ratio is on the order of 100:1, yielding a $\Delta_r G_\text{ATP}$ of approximately $\text{-}64 \ \text{kJ mol}^{-1}$.
Note the large difference in magnitude of the estimated Gibbs energy of ATP hydrolysis inside ($-45 \ \text{kJ mol}^{-1}$) versus outside ($-64 \ \text{kJ mol}^{-1}$) of the mitochondrial matrix. The calculations and analyses presented below shed light on the mechanisms underlying this difference.
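As a sketch of that cytosolic estimate, the following repeats the calculation above with an assumed cytosolic composition of 8 mM ATP and 0.08 mM ADP (i.e. a 100:1 ratio; these particular pool sizes are illustrative) and 1 mM Pi, under the same ionic conditions:

```python
import numpy as np

R, T = 8.314, 310.15
DrGo_ATP = 4990.0  # J mol^-1
# Dissociation constants from the table above
K_MgATP, K_HATP, K_KATP = 10**(-3.88), 10**(-6.33), 10**(-1.02)
K_MgADP, K_HADP, K_KADP = 10**(-3.00), 10**(-6.26), 10**(-0.89)
K_MgPi, K_HPi, K_KPi = 10**(-1.66), 10**(-6.62), 10**(-0.42)

# Same ionic conditions as the matrix calculation (pH 7, 150 mM K+, 1 mM Mg2+)
H, K, Mg = 10**(-7), 150e-3, 1e-3
P_ATP = 1 + H/K_HATP + K/K_KATP + Mg/K_MgATP
P_ADP = 1 + H/K_HADP + K/K_KADP + Mg/K_MgADP
P_Pi = 1 + H/K_HPi + K/K_KPi + Mg/K_MgPi
DrGapp = DrGo_ATP + R*T*np.log(H * P_ATP / (P_ADP * P_Pi))

# Assumed (illustrative) cytosolic pools: ATP/ADP = 100, Pi = 1 mM
sumATP, sumADP, sumPi = 8e-3, 0.08e-3, 1e-3
DrG_cyto = DrGapp + R*T*np.log(sumADP * sumPi / sumATP)
print(DrG_cyto / 1000)  # roughly -64 to -65 kJ/mol
```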
### ATP synthesis in the mitochondrial matrix
The F$_0$F$_1$ ATP synthase catalyzes the synthesis of ATP from ADP and Pi by coupling to the translocation of $n_{\text{F}} = 8/3$ protons from the cytosol to the matrix via the combined reaction
```{math}
:label: ATP3
({\rm ADP}^{3-})_x + ({\rm HPO}_4^{2-})_x + ({\rm H}^+)_x + n_{\text{F}} (\text{H}^{+})_c
\rightleftharpoons
({\rm ATP}^{4-})_x + {\rm H_2O} + n_{\text{F}} (\text{H}^{+})_x \, .
```
Using the Gibbs energy of the reaction of Equation {eq}`ATP2` and the proton motive force in Equation {eq}`DG_H`, the overall Gibbs energy for the coupled process of ATP synthesis and proton transport via the F$_0$F$_1$ ATP synthase is
```{math}
:label: DG_F
\Delta G_{\text{F}} &= -\Delta_r G_{\rm ATP} + n_\text{F} \Delta G_{\rm H} \nonumber \\
&= -\Delta_r G'^o_\text{ATP} - R{\rm T} \ln \left(\frac{[\Sigma{\rm ADP}]_x[\Sigma{\rm Pi}]_x}
{[\Sigma{\rm ATP}]_x}\right) - n_\text{F} F \Delta \Psi + R{\rm T} \ln \left(
\frac{ [{\rm H}^{+}]_x }{ [{\rm H}^{+}]_c } \right)^{n_{\rm F}} .
```
Note that the negative sign before $\Delta_r G_\text{ATP}$ indicates that the reaction of Equation {eq}`ATP1` is reversed in Equation {eq}`ATP3`. The equilibrium concentration ratio occurs when $\Delta G_{\text{F}} = 0$. Solving for the concentration-ratio term in Equation {eq}`DG_F`, we calculate the apparent equilibrium constant for ATP synthesis as
```{math}
:label: Kapp_F
K_{eq,\text{F}}^\prime =
\left( \frac{[\Sigma{\rm ATP}]_x}{[\Sigma{\rm ADP}]_x[\Sigma{\rm Pi}]_x} \right)_{eq} = \exp\left\{\frac{ \Delta_rG'^o_{\rm ATP} + n_{\rm F} F \Delta\Psi}{R{\rm T}}\right\}
\left( \frac{[{\rm H^+}]_c}{[{\rm H^+}]_x} \right)^{n_{\rm F}}.
```
(modelATPsynthesis)=
### Mathematical modeling of ATP synthesis
A simple model of ATP synthesis kinetics can be constructed using the apparent equilibrium constant and mass-action kinetics in the form
```{math}
:label: J_F
J_{\text{F}} = X_{\text{F}} (K_{eq,\text{F}}^\prime [\Sigma \text{ADP}]_x [\Sigma \text{Pi}]_x - [\Sigma \text{ATP}]_x),
```
where $X_{\text{F}} = 1000 \ \text{mol s}^{-1} \ \text{(L mito)}^{-1}$ is a rate constant set to an arbitrarily high value that maintains the reaction in equilibrium in model simulations. To simulate ATP synthesis at a given membrane potential, matrix pH, cytosolic pH, and cation concentrations, we have
```{math}
:label: system-ATPase
\left\{
\renewcommand{\arraystretch}{2}
\begin{array}{rl}
\dfrac{ {\rm d} [\Sigma \text{ATP}]_x }{{\rm d} t} &= J_\text{F} / W_x \\
\dfrac{ {\rm d} [\Sigma \text{ADP}]_x }{{\rm d} t} &= -J_\text{F} / W_x \\
\dfrac{ {\rm d} [\Sigma \text{Pi}]_x }{{\rm d} t} &= -J_\text{F} / W_x,
\end{array}
\renewcommand{\arraystretch}{1}
\right.
```
where $W_x$ ((L matrix water) (L mito)$^{-1}$) is the ratio of the water volume of the mitochondrial matrix to the total volume of the mitochondrion. Dissociation constants are listed in {numref}`table-dissociationconstants` and all other parameters are listed in {numref}`table-biophysicalconstants`.
```{list-table} Parameters for ATP synthesis in vitro.
:header-rows: 1
:name: table-biophysicalconstants
* - Symbol
- Units
- Description
- Value
- Source
* - F$_0$F$_1$ ATP synthase constants
-
-
-
-
* - $n_{\text{F}}$
-
- Protons translocated
- $8/3 $
- {cite}`Nicholls2013`
* - $X_\text{F}$
- mol s$^{-1}$ (L mito)$^{-1}$
- Rate constant
- $1000 $
-
* - $\Delta_r G_\text{ATP}^\circ$
- kJ mol$^{-1}$
- Reference Gibbs energy
- $4.99 $
- {cite}`Li2011`
* - Biophysical constants
-
-
-
-
* - $R$
- J mol$^{-1}$ K$^{-1}$
- Gas constant
- $8.314 $
-
* - $T$
- K
- Temperature
- $310.15 $
-
* - $F$
- C mol$^{-1}$
- Faraday's constant
- $96485$
-
* - $C_m$
- mol V$^{-1}$ (L mito)$^{-1}$
- IMM capacitance
- $3.1\text{e-}3$
- {cite}`Beard2005`
* - Volume ratios
-
-
-
-
* - $V_c$
- (L cyto) (L cell)$^{-1}$
- Cyto to cell ratio
- $0.6601$
- {cite}`Bazil2016`
* - $V_m$
- (L mito) (L cell)$^{-1}$
- Mito to cell ratio
- $0.2882$
- {cite}`Bazil2016`
* - $V_{m2c}$
- (L mito) (L cyto)$^{-1}$
- Mito to cyto ratio
- $V_m / V_c$
-
* - $W_c$
- (L cyto water) (L cyto)$^{-1}$
- Cyto water space ratio
- $0.8425$
- {cite}`Bazil2016`
* - $W_m$
- (L mito water) (L mito)$^{-1}$
- Mito water space ratio
- $0.7238 $
- {cite}`Bazil2016`
* - $W_x$
- (L matrix water) (L mito)$^{-1}$
- Mito matrix water space ratio
- $0.9$ $W_m$
- {cite}`Bazil2016`
* - $W_i$
- (L IM water) (L mito)$^{-1}$
- IMS water space ratio
- $0.1$ $W_m$
- {cite}`Bazil2016`
```
The following code simulates steady state ATP, ADP, and Pi concentrations for $\Delta \Psi = 175 \ \text{mV}$. Here, a pH gradient is fixed across the IMM such that the pH in the matrix is slightly more basic than the cytosol, $\text{pH}_x = 7.4$ and $\text{pH}_c = 7.2$. All other conditions remain unchanged.
```
import matplotlib.pyplot as plt
import numpy as np
!pip install scipy
from scipy.integrate import solve_ivp
# Define system of ordinary differential equations from equation (13)
def dXdt(t, X, DPsi, pH_c):
# Unpack X state variable
sumATP, sumADP, sumPi = X
# Biophysical constants
R = 8.314 # J (mol * K)**(-1)
T = 310.15 # K
F = 96485 # C mol**(-1)
# F0F1 constants
n_F = 8/3
X_F = 1000 # mol (s * L mito)**(-1)
DrGo_F = 4990 # (J mol**(-1))
# Dissociation constants
K_MgATP = 10**(-3.88)
K_MgADP = 10**(-3.00)
K_MgPi = 10**(-1.66)
K_HATP = 10**(-6.33)
K_HADP = 10**(-6.26)
K_HPi = 10**(-6.62)
K_KATP = 10**(-1.02)
K_KADP = 10**(-0.89)
K_KPi = 10**(-0.42)
# Environment concentrations
pH_x = 7.4 # pH in matrix
H_x = 10**(-pH_x) # M
H_c = 10**(-pH_c) # M
K_x = 150e-3 # M
Mg_x = 1e-3 # M
# Volume ratios
W_m = 0.7238 # (L mito water) (L mito)**(-1)
W_x = 0.9 * W_m # (L matrix water) (L mito)**(-1)
# Binding polynomials
P_ATP = 1 + H_x/K_HATP + K_x/K_KATP + Mg_x/K_MgATP # equation 5
P_ADP = 1 + H_x/K_HADP + K_x/K_KADP + Mg_x/K_MgADP # equation 6
P_Pi = 1 + H_x/K_HPi + K_x/K_KPi + Mg_x/K_MgPi # equation 7
# Gibbs energy (equation 9)
DrGapp_F = DrGo_F + R * T * np.log(H_x * P_ATP / (P_ADP * P_Pi))
# Apparent equilibrium constant
Kapp_F = np.exp((DrGapp_F + n_F * F * DPsi)/ (R * T)) * (H_c / H_x) ** n_F
# Flux (mol (s * L mito)**(-1))
J_F = X_F * (Kapp_F * sumADP * sumPi - sumATP)
###### Differential equations (equation 13) ######
dATP = J_F / W_x
dADP = -J_F / W_x
dPi = -J_F / W_x
dX = (dATP, dADP, dPi)
return dX
# Simple steady state simulation at 175 mV membrane potential
# Initial conditions (M)
sumATP_0 = 0.5e-3
sumADP_0 = 9.5e-3
sumPi_0 = 1e-3
X_0 = np.array([sumATP_0, sumADP_0, sumPi_0])
# Inputs
DPsi = 175e-3 # Constant membrane potential (V)
pH_c = 7.2 # IMS/buffer pH
solutions = solve_ivp(dXdt, [0, 1], X_0, method = 'Radau', args = (DPsi,pH_c))
t = solutions.t
results = solutions.y
results = results * 1000
# Plot figure
plt.figure()
plt.plot(t, results[0,:], label = r'[$\Sigma$ATP]$_x$')
plt.plot(t, results[1,:], label = r'[$\Sigma$ADP]$_x$')
plt.plot(t, results[2,:], label = r'[$\Sigma$Pi]$_x$')
plt.legend()
plt.xlabel('Time (s)')
plt.ylabel('Concentration (mM)')
plt.ylim(0, 10)
plt.show()
```
**Figure 2:** Steady state solution from Equation {eq}`system-ATPase` for $\Delta \Psi = 175$ mV, $\text{pH}_x = 7.4$, and $\text{pH}_c = 7.2$.
The above simulation shows that under the clamped pH and $\Delta\Psi$ conditions simulated here, the model quickly approaches an equilibrium steady state. (Even though all reaction fluxes go to zero in the final steady state, the ATP hydrolysis potential attains a finite nonzero value because of the energy supplied by the clamped proton motive force.) Most of the adenine nucleotide remains in the form of ADP, and the final ATP/ADP ratio in the matrix is approximately $1$:$20$, with an inorganic phosphate concentration of approximately $1 \ \text{mM}$.
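Because the model relaxes to equilibrium, the steady state can be cross-checked analytically: setting $[\Sigma\text{ATP}]_x = K_{eq,\text{F}}^\prime [\Sigma\text{ADP}]_x [\Sigma\text{Pi}]_x$ together with the two conservation relations (total adenine nucleotide of 10 mM and a conserved ATP-plus-Pi pool of 1.5 mM) gives a quadratic for $[\Sigma\text{ATP}]_x$. A sketch under the same assumed conditions as the simulation:

```python
import numpy as np

R, T, F = 8.314, 310.15, 96485.0
n_F, DrGo_F = 8/3, 4990.0
DPsi, pH_x, pH_c = 175e-3, 7.4, 7.2
H_x, H_c = 10**(-pH_x), 10**(-pH_c)
K_x, Mg_x = 150e-3, 1e-3

# Binding polynomials at matrix conditions (dissociation constants from the table)
P_ATP = 1 + H_x/10**(-6.33) + K_x/10**(-1.02) + Mg_x/10**(-3.88)
P_ADP = 1 + H_x/10**(-6.26) + K_x/10**(-0.89) + Mg_x/10**(-3.00)
P_Pi = 1 + H_x/10**(-6.62) + K_x/10**(-0.42) + Mg_x/10**(-1.66)

# Apparent equilibrium constant for ATP synthesis (M^-1)
DrGapp_F = DrGo_F + R*T*np.log(H_x * P_ATP / (P_ADP * P_Pi))
Kapp_F = np.exp((DrGapp_F + n_F*F*DPsi) / (R*T)) * (H_c/H_x)**n_F

# Conservation: ATP + ADP = 10 mM and ATP + Pi = 1.5 mM;
# ATP = Kapp_F * ADP * Pi then becomes a quadratic in ATP.
TAN, TP = 10e-3, 1.5e-3
a, b, c = Kapp_F, -(1 + Kapp_F*(TAN + TP)), Kapp_F*TAN*TP
ATP = (-b - np.sqrt(b**2 - 4*a*c)) / (2*a)   # physically admissible root
ADP, Pi = TAN - ATP, TP - ATP
print(ATP*1000, ADP*1000, Pi*1000)  # mM; ATP/ADP ratio on the order of 1:20
```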
To explore how the equilibrium changes with membrane potential, the following code computes the predicted equilibrium steady state over a range of $\Delta\Psi$ from $100$ to $250 \ \text{mV}$.
```
### Simulate over a range of membrane potential from 100 mV to 250 mV ###
# Define array to iterate over
membrane_potential = np.linspace(100,250) # mV
# Constant external pH
pH_c = 7.2 # IMS/buffer pH
# Define arrays to store steady state results
ATP_steady_DPsi = np.zeros(len(membrane_potential))
ADP_steady_DPsi = np.zeros(len(membrane_potential))
Pi_steady_DPsi = np.zeros(len(membrane_potential))
# Iterate through range of membrane potentials
for i in range(len(membrane_potential)):
DPsi = membrane_potential[i] / 1000 # convert to V
temp_results = solve_ivp(dXdt, [0, 5], X_0, method = 'Radau', args = (DPsi, pH_c,)).y*1000 # Concentration in mM
ATP_steady_DPsi[i] = temp_results[0,-1]
ADP_steady_DPsi[i] = temp_results[1,-1]
Pi_steady_DPsi[i] = temp_results[2,-1]
# Concentration vs DPsi
plt.figure()
plt.plot(membrane_potential, ATP_steady_DPsi, label = r'[$\Sigma$ATP]$_x$')
plt.plot(membrane_potential, ADP_steady_DPsi, label = r'[$\Sigma$ADP]$_x$')
plt.plot(membrane_potential, Pi_steady_DPsi, label = r'[$\Sigma$Pi]$_x$')
plt.legend()
plt.xlabel('Membrane potential (mV)')
plt.ylabel('Concentration (mM)')
plt.xlim([100, 250])
plt.show()
```
**Figure 3:** Simulation of concentration versus $\Delta \Psi$ for Equation {eq}`system-ATPase` for $\Delta \Psi$ from $100$ to $250$ mV.
The above simulations show that under physiological levels of $\Delta$pH, matrix ATP concentrations become essentially zero for values of the membrane potential less than approximately $150 \ \text{mV}$. At higher levels of $\Delta\Psi$, all of the available phosphate is used to phosphorylate ADP to ATP. Since the initial $[\text{Pi}]$ and $[\text{ATP}]$ are $1 \ \text{mM}$ and $0.5 \ \text{mM}$, respectively, the maximum ATP obtained at the maximal $\Delta\Psi$ is $1.5 \ \text{mM}$.
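The phosphate-limited ceiling follows directly from mass conservation: since ${\rm d}[\Sigma \text{ATP}]_x/{\rm d}t = -{\rm d}[\Sigma \text{Pi}]_x/{\rm d}t$ in Equation {eq}`system-ATPase`, the sum $[\Sigma \text{ATP}]_x + [\Sigma \text{Pi}]_x$ is invariant:

```python
# Mass-conservation check: ATP synthesis consumes one Pi per ATP formed,
# so [ATP] + [Pi] is invariant under the model dynamics.
ATP_0, Pi_0 = 0.5, 1.0   # initial matrix concentrations (mM)
max_ATP = ATP_0 + Pi_0   # ceiling reached when all Pi has been consumed
print(max_ATP)           # 1.5 mM
```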
# Default of credit card clients Data Set
### Data Set Information:
This research aimed at the case of customers' default payments in Taiwan and compares the predictive accuracy of probability of default among six data mining methods. From the perspective of risk management, the result of predictive accuracy of the estimated probability of default will be more valuable than the binary result of classification - credible or not credible clients. Because the real probability of default is unknown, this study presented the novel "Sorting Smoothing Method" to estimate the real probability of default. With the real probability of default as the response variable (Y), and the predictive probability of default as the independent variable (X), the simple linear regression result (Y = A + BX) shows that the forecasting model produced by artificial neural network has the highest coefficient of determination; its regression intercept (A) is close to zero, and regression coefficient (B) to one. Therefore, among the six data mining techniques, artificial neural network is the only one that can accurately estimate the real probability of default.
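The validation scheme described above, regressing the estimated "real" default probabilities on the predicted ones and checking that the intercept is near zero and the slope near one, can be sketched with synthetic stand-in data (the numbers below are illustrative, not the study's):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-in: predicted probabilities X and "real" probabilities Y
# generated near the line Y = 0.02 + 0.95 * X with small noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 1))
Y = 0.02 + 0.95 * X[:, 0] + rng.normal(0, 0.02, size=200)

reg = LinearRegression().fit(X, Y)
print(reg.intercept_)  # A: near zero for a well-calibrated model
print(reg.coef_[0])    # B: near one
```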
### Attribute Information:
This research employed a binary variable, default payment (Yes = 1, No = 0), as the response variable. This study reviewed the literature and used the following 23 variables as explanatory variables:
X1: Amount of the given credit (NT dollar): it includes both the individual consumer credit and his/her family (supplementary) credit.
X2: Gender (1 = male; 2 = female).
X3: Education (1 = graduate school; 2 = university; 3 = high school; 4 = others).
X4: Marital status (1 = married; 2 = single; 3 = others).
X5: Age (year).
X6 - X11: History of past payment. We tracked the past monthly payment records (from April to September, 2005) as follows:
X6 = the repayment status in September, 2005;
X7 = the repayment status in August, 2005;
. . .;
X11 = the repayment status in April, 2005. The measurement scale for the repayment status is: -1 = pay duly; 1 = payment delay for one month; 2 = payment delay for two months; . . .; 8 = payment delay for eight months; 9 = payment delay for nine months and above.
X12-X17: Amount of bill statement (NT dollar).
X12 = amount of bill statement in September, 2005;
X13 = amount of bill statement in August, 2005;
. . .;
X17 = amount of bill statement in April, 2005.
X18-X23: Amount of previous payment (NT dollar).
X18 = amount paid in September, 2005;
X19 = amount paid in August, 2005;
. . .;
X23 = amount paid in April, 2005.
```
%matplotlib inline
import os
import json
import time
import pickle
import requests
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
URL = "http://archive.ics.uci.edu/ml/machine-learning-databases/00350/default%20of%20credit%20card%20clients.xls"
def fetch_data(fname='default_of_credit_card_clients.xls'):
"""
Helper method to retrieve the ML Repository dataset.
"""
response = requests.get(URL)
outpath = os.path.abspath(fname)
with open(outpath, 'wb') as f:  # write bytes; 'w' fails on binary content
f.write(response.content)
return outpath
# Fetch the data if required
# DATA = fetch_data()
# IMPORTANT - an issue saving the xls file needed to be fixed in fetch_data; using a valid manually downloaded file instead for this example.
DATA = "./default_of_credit_card_clients2.xls"
FEATURES = [
"ID",
"LIMIT_BAL",
"SEX",
"EDUCATION",
"MARRIAGE",
"AGE",
"PAY_0",
"PAY_2",
"PAY_3",
"PAY_4",
"PAY_5",
"PAY_6",
"BILL_AMT1",
"BILL_AMT2",
"BILL_AMT3",
"BILL_AMT4",
"BILL_AMT5",
"BILL_AMT6",
"PAY_AMT1",
"PAY_AMT2",
"PAY_AMT3",
"PAY_AMT4",
"PAY_AMT5",
"PAY_AMT6",
"label"
]
LABEL_MAP = {
1: "Yes",
0: "No",
}
# Read the data into a DataFrame
df = pd.read_excel(DATA,header=None, skiprows=2, names=FEATURES)
# Convert class labels into text
for k,v in LABEL_MAP.items():
df.loc[df.label == k, 'label'] = v
# Describe the dataset
print(df.describe())
df.head(5)
# Determine the shape of the data
print("{} instances with {} features\n".format(*df.shape))
# Determine the frequency of each class
print(df.groupby('label')['label'].count())
from pandas.plotting import scatter_matrix
scatter_matrix(df, alpha=0.2, figsize=(12, 12), diagonal='kde')
plt.show()
from pandas.plotting import parallel_coordinates
plt.figure(figsize=(12,12))
parallel_coordinates(df, 'label')
plt.show()
from pandas.plotting import radviz
plt.figure(figsize=(12,12))
radviz(df, 'label')
plt.show()
```
## Data Extraction
One way that we can structure our data for easy management is to save files on disk. The Scikit-Learn datasets are already structured this way, and when loaded into a `Bunch` (a class imported from the `datasets` module of Scikit-Learn) we can expose a data API that is very familiar to how we've trained on our toy datasets in the past. A `Bunch` object exposes some important properties:
- **data**: array of shape `n_samples` * `n_features`
- **target**: array of length `n_samples`
- **feature_names**: names of the features
- **target_names**: names of the targets
- **filenames**: names of the files that were loaded
- **DESCR**: contents of the readme
**Note**: This does not preclude database storage of the data; in fact, a database can easily be extended to expose the same `Bunch` API. Simply store the README and features in a dataset description table and load it from there. The filenames property will be redundant, but you could store a SQL statement that shows the data load.
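As a minimal illustration of the `Bunch` interface described above (using the modern import location `sklearn.utils`; the attribute names mirror the list, and the values are placeholders):

```python
import numpy as np
from sklearn.utils import Bunch

# Toy Bunch exposing the properties listed above.
toy = Bunch(
    data=np.array([[1.0, 2.0], [3.0, 4.0]]),  # n_samples x n_features
    target=np.array([0, 1]),                  # length n_samples
    feature_names=["f1", "f2"],
    target_names=["No", "Yes"],
    DESCR="toy dataset",
)
print(toy.data.shape)    # attribute access
print(toy["target"])     # dict-style access works too
```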
In order to manage our data set _on disk_, we'll structure our data as follows:
```
with open('./../data/cc_default/meta.json', 'w') as f:
meta = {'feature_names': FEATURES, 'target_names': LABEL_MAP}
json.dump(meta, f, indent=4)
from sklearn.utils import Bunch
DATA_DIR = os.path.abspath(os.path.join(".", "..", "data", "cc_default"))
# Show the contents of the data directory
for name in os.listdir(DATA_DIR):
if name.startswith("."): continue
print("- {}".format(name))
def load_data(root=DATA_DIR):
# Construct the `Bunch` for the credit card default dataset
filenames = {
'meta': os.path.join(root, 'meta.json'),
'rdme': os.path.join(root, 'README.md'),
'data_xls': os.path.join(root, 'default_of_credit_card_clients.xls'),
'data': os.path.join(root, 'default_of_credit_card_clients.csv'),
}
# Load the meta data from the meta json
with open(filenames['meta'], 'r') as f:
meta = json.load(f)
target_names = meta['target_names']
feature_names = meta['feature_names']
# Load the description from the README.
with open(filenames['rdme'], 'r') as f:
DESCR = f.read()
# Load the dataset from the EXCEL file.
df = pd.read_excel(filenames['data_xls'],header=None, skiprows=2, names=FEATURES)
df.to_csv(filenames['data'],header=False)
dataset = np.loadtxt(filenames['data'],delimiter=",")
# Extract the target from the data
data = dataset[:, 0:-1]
target = dataset[:, -1]
# Create the bunch object
return Bunch(
data=data,
target=target,
filenames=filenames,
target_names=target_names,
feature_names=feature_names,
DESCR=DESCR
)
# Save the dataset as a variable we can use.
dataset = load_data()
print(dataset.data.shape)
print(dataset.target.shape)
```
## Classification
Now that we have a dataset `Bunch` loaded and ready, we can begin the classification process. Let's attempt to build a classifier with kNN, SVM, and Random Forest classifiers.
```
from sklearn import metrics
from sklearn.model_selection import KFold
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
def fit_and_evaluate(dataset, model, label, **kwargs):
"""
Because of the Scikit-Learn API, we can create a function to
do all of the fit and evaluate work on our behalf!
"""
start = time.time() # Start the clock!
scores = {'precision':[], 'recall':[], 'accuracy':[], 'f1':[]}
for train, test in KFold(n_splits=12, shuffle=True).split(dataset.data):
X_train, X_test = dataset.data[train], dataset.data[test]
y_train, y_test = dataset.target[train], dataset.target[test]
estimator = model(**kwargs)
estimator.fit(X_train, y_train)
expected = y_test
predicted = estimator.predict(X_test)
# Append our scores to the tracker
scores['precision'].append(metrics.precision_score(expected, predicted, average="weighted"))
scores['recall'].append(metrics.recall_score(expected, predicted, average="weighted"))
scores['accuracy'].append(metrics.accuracy_score(expected, predicted))
scores['f1'].append(metrics.f1_score(expected, predicted, average="weighted"))
# Report
print("Build and Validation of {} took {:0.3f} seconds".format(label, time.time()-start))
print("Validation scores are as follows:\n")
print(pd.DataFrame(scores).mean())
# Write official estimator to disk
estimator = model(**kwargs)
estimator.fit(dataset.data, dataset.target)
outpath = label.lower().replace(" ", "-") + ".pickle"
with open(outpath, 'wb') as f:
pickle.dump(estimator, f)
print("\nFitted model written to:\n{}".format(os.path.abspath(outpath)))
# Perform SVC Classification
#fit_and_evaluate(dataset, SVC, "CC Default - SVM Classifier")
# Perform kNN Classification
fit_and_evaluate(dataset, KNeighborsClassifier, "CC Default - kNN Classifier", n_neighbors=12)
# Perform Random Forest Classification
fit_and_evaluate(dataset, RandomForestClassifier, "CC Default - Random Forest Classifier")
```
```
#IMPORT ALL LIBRARIES HERE
#IMPORT PANDAS LIBRARY
import pandas as pd
#IMPORT POSTGRESQL LIBRARY
import psycopg2
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT
#IMPORT CHARTING LIBRARY
from matplotlib import pyplot as plt
from matplotlib import style
#IMPORT PDF LIBRARY
from fpdf import FPDF
#IMPORT IO LIBRARY (USED FOR IN-MEMORY BUFFERS)
import io
#IMPORT BASE64 LIBRARY (IMAGE ENCODING)
import base64
#IMPORT NUMPY LIBRARY
import numpy as np
#IMPORT EXCEL LIBRARY
import xlsxwriter
#IMPORT SIMILARITY LIBRARY
import n0similarities as n0
#FUNCTION TO UPLOAD DATA FROM CSV TO POSTGRESQL
def uploadToPSQL(host, username, password, database, port, table, judul, filePath, name, subjudul, dataheader, databody):
#TEST THE DATABASE CONNECTION
try:
for t in range(0, len(table)):
#CONVERT THE DATA INTO A LIST
rawstr = [tuple(x) for x in zip(dataheader, databody[t])]
#CONNECT TO THE DATABASE
connection = psycopg2.connect(user=username,password=password,host=host,port=port,database=database)
cursor = connection.cursor()
connection.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT);
#CHECK WHETHER THE TABLE EXISTS
cursor.execute("SELECT * FROM information_schema.tables where table_name=%s", (table[t],))
exist = bool(cursor.rowcount)
#IF IT EXISTS, DROP IT FIRST AND RECREATE IT
if exist == True:
cursor.execute("DROP TABLE "+ table[t] + " CASCADE")
cursor.execute("CREATE TABLE "+table[t]+" (index SERIAL, tanggal date, total varchar);")
#IF IT DOES NOT EXIST, CREATE THE TABLE
else:
cursor.execute("CREATE TABLE "+table[t]+" (index SERIAL, tanggal date, total varchar);")
#INSERT THE DATA INTO THE NEWLY CREATED TABLE
cursor.execute('INSERT INTO '+table[t]+'(tanggal, total) values ' +str(rawstr)[1:-1])
#IF EVERYTHING SUCCEEDS, RETURN TRUE
return True
#IF THE CONNECTION FAILS
except (Exception, psycopg2.Error) as error :
return error
#CLOSE THE CONNECTION
finally:
if(connection):
cursor.close()
connection.close()
#FUNCTION TO BUILD THE CHARTS; DATA IS FETCHED FROM THE DATABASE ORDERED BY DATE WITH A LIMIT
#THIS FUNCTION ALSO CALLS THE MAKEEXCEL AND MAKEPDF FUNCTIONS
def makeChart(host, username, password, db, port, table, judul, filePath, name, subjudul, A2, B2, C2, D2, E2, F2, G2, H2,I2, J2, K2, limitdata, wilayah, tabledata, basePath):
try:
datarowsend = []
for t in range(0, len(table)):
#TEST THE DATABASE CONNECTION
connection = psycopg2.connect(user=username,password=password,host=host,port=port,database=db)
cursor = connection.cursor()
#FETCH DATA FROM THE DATABASE USING THE LIMIT PASSED IN BELOW
postgreSQL_select_Query = "SELECT * FROM "+table[t]+" ORDER BY tanggal DESC LIMIT " + str(limitdata)
cursor.execute(postgreSQL_select_Query)
mobile_records = cursor.fetchall()
uid = []
lengthx = []
lengthy = []
#STORE THE DATABASE ROWS IN VARIABLES
for row in mobile_records:
uid.append(row[0])
lengthx.append(row[1])
lengthy.append(row[2])
datarowsend.append(mobile_records)
#CHART TITLE
judulgraf = A2 + " " + wilayah[t]
#bar chart
style.use('ggplot')
fig, ax = plt.subplots()
#CHART DATA GOES HERE
ax.bar(uid, lengthy, align='center')
#CHART TITLE
ax.set_title(judulgraf)
ax.set_ylabel('Total')
ax.set_xlabel('Tanggal')
ax.set_xticks(uid)
ax.set_xticklabels((lengthx))
b = io.BytesIO()
#SAVE THE CHART AS PNG
plt.savefig(b, format='png', bbox_inches="tight")
#ENCODE THE CHART AS BASE64
barChart = base64.b64encode(b.getvalue()).decode("utf-8").replace("\n", "")
plt.show()
#line chart
#CHART DATA GOES HERE
plt.plot(lengthx, lengthy)
plt.xlabel('Tanggal')
plt.ylabel('Total')
#CHART TITLE
plt.title(judulgraf)
plt.grid(True)
l = io.BytesIO()
#SAVE THE CHART AS AN IMAGE
plt.savefig(l, format='png', bbox_inches="tight")
#ENCODE THE IMAGE AS BASE64
lineChart = base64.b64encode(l.getvalue()).decode("utf-8").replace("\n", "")
plt.show()
#pie chart
#CHART TITLE
plt.title(judulgraf)
#CHART DATA GOES HERE
plt.pie(lengthy, labels=lengthx, autopct='%1.1f%%',
shadow=True, startangle=180)
plt.plot(legend=None)
plt.axis('equal')
p = io.BytesIO()
#SAVE THE CHART AS AN IMAGE
plt.savefig(p, format='png', bbox_inches="tight")
#CONVERT THE CHART TO BASE64
pieChart = base64.b64encode(p.getvalue()).decode("utf-8").replace("\n", "")
plt.show()
#SAVE THE CHARTS TO THE DIRECTORY AS PNG FILES
#BAR CHART
bardata = base64.b64decode(barChart)
barname = basePath+'jupyter/CEIC/23. Sektor Petambangan dan Manufaktur/img/'+name+''+table[t]+'-bar.png'
with open(barname, 'wb') as f:
f.write(bardata)
#LINE CHART
linedata = base64.b64decode(lineChart)
linename = basePath+'jupyter/CEIC/23. Sektor Petambangan dan Manufaktur/img/'+name+''+table[t]+'-line.png'
with open(linename, 'wb') as f:
f.write(linedata)
#PIE CHART
piedata = base64.b64decode(pieChart)
piename = basePath+'jupyter/CEIC/23. Sektor Petambangan dan Manufaktur/img/'+name+''+table[t]+'-pie.png'
with open(piename, 'wb') as f:
f.write(piedata)
#CALL THE EXCEL FUNCTION
makeExcel(datarowsend, A2, B2, C2, D2, E2, F2, G2, H2,I2, J2, K2, name, limitdata, table, wilayah, basePath)
#CALL THE PDF FUNCTION
makePDF(datarowsend, judul, barChart, lineChart, pieChart, name, subjudul, A2, B2, C2, D2, E2, F2, G2, H2,I2, J2, K2, limitdata, table, wilayah, basePath)
#IF THE CONNECTION FAILS
except (Exception, psycopg2.Error) as error :
print (error)
#CLOSE THE CONNECTION
finally:
if(connection):
cursor.close()
connection.close()
#FUNCTION TO BUILD A PDF FROM DATABASE DATA, LAID OUT LIKE THE F2 EXCEL TABLE
#THE LIBRARY USED IS FPDF
def makePDF(datarow, judul, bar, line, pie, name, subjudul, A2, B2, C2, D2, E2, F2, G2, H2,I2, J2, K2, lengthPDF, table, wilayah, basePath):
#THE PDF IS SET TO A4 SIZE IN LANDSCAPE ORIENTATION
pdf = FPDF('L', 'mm', [210,297])
#ADD A PDF PAGE
pdf.add_page()
#SET THE FONT AND PADDING
pdf.set_font('helvetica', 'B', 20.0)
pdf.set_xy(145.0, 15.0)
#DISPLAY THE PDF TITLE
pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=judul, border=0)
#SET THE FONT AND PADDING
pdf.set_font('arial', '', 14.0)
pdf.set_xy(145.0, 25.0)
#DISPLAY THE PDF SUBTITLE
pdf.cell(ln=0, h=2.0, align='C', w=10.0, txt=subjudul, border=0)
#DRAW A LINE UNDER THE SUBTITLE
pdf.line(10.0, 30.0, 287.0, 30.0)
pdf.set_font('times', '', 10.0)
pdf.set_xy(17.0, 37.0)
pdf.set_font('Times','B',11.0)
pdf.ln(0.5)
th1 = pdf.font_size
#BUILD THE METADATA TABLE IN THE PDF
pdf.cell(100, 2*th1, "Kategori", border=1, align='C')
pdf.cell(177, 2*th1, A2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Region", border=1, align='C')
pdf.cell(177, 2*th1, B2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Frekuensi", border=1, align='C')
pdf.cell(177, 2*th1, C2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Unit", border=1, align='C')
pdf.cell(177, 2*th1, D2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Sumber", border=1, align='C')
pdf.cell(177, 2*th1, E2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Status", border=1, align='C')
pdf.cell(177, 2*th1, F2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "ID Seri", border=1, align='C')
pdf.cell(177, 2*th1, G2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Kode SR", border=1, align='C')
pdf.cell(177, 2*th1, H2, border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Tanggal Obs. Pertama", border=1, align='C')
pdf.cell(177, 2*th1, str(I2.date()), border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Tanggal Obs. Terakhir ", border=1, align='C')
pdf.cell(177, 2*th1, str(J2.date()), border=1, align='C')
pdf.ln(2*th1)
pdf.cell(100, 2*th1, "Waktu pembaruan terakhir", border=1, align='C')
pdf.cell(177, 2*th1, str(K2.date()), border=1, align='C')
pdf.ln(2*th1)
pdf.set_xy(17.0, 125.0)
pdf.set_font('Times','B',11.0)
epw = pdf.w - 2*pdf.l_margin
col_width = epw/(lengthPDF+1)
pdf.ln(0.5)
th = pdf.font_size
#HEADER OF THE F2 DATA TABLE
pdf.cell(col_width, 2*th, str("Wilayah"), border=1, align='C')
#LOOP OVER THE DATE HEADERS
for row in datarow[0]:
pdf.cell(col_width, 2*th, str(row[1]), border=1, align='C')
pdf.ln(2*th)
#BODY OF THE F2 TABLE
for w in range(0, len(table)):
data=list(datarow[w])
pdf.set_font('Times','B',10.0)
pdf.set_font('Arial','',9)
pdf.cell(col_width, 2*th, wilayah[w], border=1, align='C')
#DATA BY DATE
for row in data:
pdf.cell(col_width, 2*th, str(row[2]), border=1, align='C')
pdf.ln(2*th)
#PLACE THE CHART IMAGES
for s in range(0, len(table)):
col = pdf.w - 2*pdf.l_margin
pdf.ln(2*th)
widthcol = col/3
#ADD A PAGE
pdf.add_page()
#IMAGE FILES FROM THE DIRECTORY ABOVE
pdf.image(basePath+'jupyter/CEIC/23. Sektor Petambangan dan Manufaktur/img/'+name+''+table[s]+'-bar.png', link='', type='',x=8, y=80, w=widthcol)
pdf.set_xy(17.0, 144.0)
col = pdf.w - 2*pdf.l_margin
pdf.image(basePath+'jupyter/CEIC/23. Sektor Petambangan dan Manufaktur/img/'+name+''+table[s]+'-line.png', link='', type='',x=103, y=80, w=widthcol)
pdf.set_xy(17.0, 144.0)
col = pdf.w - 2*pdf.l_margin
pdf.image(basePath+'jupyter/CEIC/23. Sektor Petambangan dan Manufaktur/img/'+name+''+table[s]+'-pie.png', link='', type='',x=195, y=80, w=widthcol)
pdf.ln(4*th)
#WRITE THE PDF FILE
pdf.output(basePath+'jupyter/CEIC/23. Sektor Petambangan dan Manufaktur/pdf/'+A2+'.pdf', 'F')
#THE MAKEEXCEL FUNCTION TURNS DATABASE DATA INTO THE F2 EXCEL TABLE FORMAT
#THE LIBRARY USED IS XLSXWRITER
def makeExcel(datarow, A2, B2, C2, D2, E2, F2, G2, H2, I2, J2, K2, name, limit, table, wilayah, basePath):
#CREATE THE EXCEL FILE
workbook = xlsxwriter.Workbook(basePath+'jupyter/CEIC/23. Sektor Petambangan dan Manufaktur/excel/'+A2+'.xlsx')
#CREATE THE EXCEL WORKSHEET
worksheet = workbook.add_worksheet('sheet1')
#FORMATS FOR BORDERS AND BOLD FONT
row1 = workbook.add_format({'border': 2, 'bold': 1})
row2 = workbook.add_format({'border': 2})
#HEADER FOR THE F2 EXCEL TABLE
header = ["Wilayah", "Kategori","Region","Frekuensi","Unit","Sumber","Status","ID Seri","Kode SR","Tanggal Obs. Pertama","Tanggal Obs. Terakhir ","Waktu pembaruan terakhir"]
#COLLECT THE DATA INTO A VARIABLE
for rowhead2 in datarow[0]:
header.append(str(rowhead2[1]))
#WRITE THE HEADER DATA HERE, ROW BY ROW AND COLUMN BY COLUMN
for col_num, data in enumerate(header):
worksheet.write(0, col_num, data, row1)
#WRITE THE F2 TABLE BODY HERE
for w in range(0, len(table)):
data=list(datarow[w])
body = [wilayah[w], A2, B2, C2, D2, E2, F2, G2, H2, str(I2.date()), str(J2.date()), str(K2.date())]
for rowbody2 in data:
body.append(str(rowbody2[2]))
for col_num, data in enumerate(body):
worksheet.write(w+1, col_num, data, row2)
#CLOSE THE EXCEL FILE
workbook.close()
#DEFINE ALL THE VARIABLES HERE BEFORE THEY ARE PASSED TO THE FUNCTIONS
#FIRST CALL UPLOADTOPSQL; ONLY IF IT SUCCEEDS, CALL MAKECHART
#MAKECHART IN TURN CALLS THE MAKEEXCEL AND MAKEPDF FUNCTIONS
#BASE PATH USED LATER TO CREATE OR LOAD FILES
basePath = 'C:/Users/ASUS/Documents/bappenas/'
#REGION-SIMILARITY FILE
filePathwilayah = basePath+'data mentah/CEIC/allwilayah.xlsx';
#READ THE EXCEL FILE WITH PANDAS
readexcelwilayah = pd.read_excel(filePathwilayah)
dfwilayah = list(readexcelwilayah.values)
readexcelwilayah.fillna(0)
allwilayah = []
#CHOOSE THE DATA TYPE: PROVINCE, REGENCY/CITY, DISTRICT, OR SUBDISTRICT
tipewilayah = 'prov'
if tipewilayah == 'prov':
for x in range(0, len(dfwilayah)):
allwilayah.append(dfwilayah[x][1])
elif tipewilayah=='kabkot':
for x in range(0, len(dfwilayah)):
allwilayah.append(dfwilayah[x][3])
elif tipewilayah == 'kec':
for x in range(0, len(dfwilayah)):
allwilayah.append(dfwilayah[x][5])
elif tipewilayah == 'kel':
for x in range(0, len(dfwilayah)):
allwilayah.append(dfwilayah[x][7])
semuawilayah = list(set(allwilayah))
#SET THE DATABASE VARIABLES AND THE DATA TO SEND TO THE FUNCTIONS HERE
name = "01. Produksi Industri (BAA001-BAA008)"
host = "localhost"
username = "postgres"
password = "1234567890"
port = "5432"
database = "ceic"
judul = "Produk Domestik Bruto (AA001-AA007)"
subjudul = "Badan Perencanaan Pembangunan Nasional"
filePath = basePath+'data mentah/CEIC/23. Sektor Petambangan dan Manufaktur/'+name+'.xlsx';
limitdata = int(8)
readexcel = pd.read_excel(filePath)
tabledata = []
wilayah = []
databody = []
#READ THE EXCEL DATA HERE USING PANDAS
df = list(readexcel.values)
head = list(readexcel)
body = list(df[0])
readexcel.fillna(0)
#SELECT THE DATA ROWS TO DISPLAY
rangeawal = 106
rangeakhir = 107
rowrange = range(rangeawal, rangeakhir)
#THIS FILTERS WHETHER THE SELECTED DATA NEEDS REGION-SIMILARITY MATCHING
#SET 'Wilayah' TO ENABLE SIMILARITY MATCHING
#SET ANY OTHER VALUE IF THE DATA IS NOT REGIONAL
jenisdata = "Indonesia"
#LOOP OVER THE DATA ROWS TO FIND THE BEST-MATCHING REGION
#IF THE JENISDATA VARIABLE IS 'Wilayah', THIS BRANCH RUNS
if jenisdata == 'Wilayah':
for x in rowrange:
rethasil = 0
big_w = 0
for w in range(0, len(semuawilayah)):
namawilayah = semuawilayah[w].lower().strip()
nama_wilayah_len = len(namawilayah)
hasil = n0.get_levenshtein_similarity(df[x][0].lower().strip()[nama_wilayah_len*-1:], namawilayah)
if hasil > rethasil:
rethasil = hasil
big_w = w
wilayah.append(semuawilayah[big_w].capitalize())
tabledata.append('produkdomestikbruto_'+semuawilayah[big_w].lower().replace(" ", "") + "" + str(x))
testbody = []
for listbody in df[x][11:]:
if np.isnan(listbody):
testbody.append(str('0'))
else:
testbody.append(str(listbody))
databody.append(testbody)
#IF NOT REGIONAL, THIS BRANCH RUNS
else:
for x in rowrange:
wilayah.append(jenisdata.capitalize())
tabledata.append('produkdomestikbruto_'+jenisdata.lower().replace(" ", "") + "" + str(x))
testbody = []
for listbody in df[x][11:]:
if np.isnan(listbody):
testbody.append(str('0'))
else:
testbody.append(str(listbody))
databody.append(testbody)
#HEADER FOR THE PDF AND EXCEL
A2 = "Data Migas"
B2 = df[rangeawal][1]
C2 = df[rangeawal][2]
D2 = df[rangeawal][3]
E2 = df[rangeawal][4]
F2 = df[rangeawal][5]
G2 = df[rangeawal][6]
H2 = df[rangeawal][7]
I2 = df[rangeawal][8]
J2 = df[rangeawal][9]
K2 = df[rangeawal][10]
#F2 TABLE BODY DATA
dataheader = []
for listhead in head[11:]:
dataheader.append(str(listhead))
#UPLOAD THE DATA TO SQL; IF IT SUCCEEDS, CALL THE CHART FUNCTION
sql = uploadToPSQL(host, username, password, database, port, tabledata, judul, filePath, name, subjudul, dataheader, databody)
if sql == True:
makeChart(host, username, password, database, port, tabledata, judul, filePath, name, subjudul, A2, B2, C2, D2, E2, F2, G2, H2,I2, J2, K2, limitdata, wilayah, tabledata, basePath)
else:
print(sql)
```
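One note on `uploadToPSQL` above: its `INSERT` statement is assembled by splicing `str(rawstr)[1:-1]` into the SQL text, which leans on Python's tuple `repr` for quoting. The safer idiom is to let the driver bind values through `%s` placeholders. A minimal sketch (the table name and rows are illustrative, and the helper name `build_insert` is mine, not part of the original script):

```python
def build_insert(table, rows):
    """Build a parameterized INSERT for a list of (tanggal, total) tuples.

    The %s placeholders let psycopg2 handle quoting and escaping of the
    values; only the (validated) table name is still spliced into the text.
    """
    values = ", ".join(["(%s, %s)"] * len(rows))
    params = [v for row in rows for v in row]
    return "INSERT INTO " + table + " (tanggal, total) VALUES " + values, params

query, params = build_insert("produkdomestikbruto_indonesia106",
                             [("2017-01-01", "10"), ("2017-02-01", "12")])
# query and params can then be passed to cursor.execute(query, params)
```

Table names cannot be bound as parameters, so they should be checked against a whitelist (or composed with `psycopg2.sql.Identifier`) before being spliced in.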
# Translating Story Map from one language to another using Deep Learning
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc">
<ul class="toc-item">
<li><span><a href="#Introduction" data-toc-modified-id="Introduction-1">Introduction</a></span></li>
<li><span><a href="#Prerequisites" data-toc-modified-id="Prerequisites-2">Prerequisites</a></span></li>
<li><span><a href="#Imports" data-toc-modified-id="Imports-3">Imports</a></span></li>
<li><span><a href="#Translate-Story-Map-from-English-to-Spanish" data-toc-modified-id="Translate-Story-Map-from-English-to-Spanish-4">Translate Story Map from English to Spanish</a></span></li>
<ul class="toc-item">
<li><span><a href="#Connect-to-GIS-and-clone-Story-Map" data-toc-modified-id="Connect-to-GIS-and-clone-Story-Map-4.1">Connect to GIS and clone Story Map</a></span></li>
<li><span><a href="#Instantiate-text-translator" data-toc-modified-id="Instantiate-text-translator-4.2">Instantiate text translator</a></span></li>
<li><span><a href="#Translate-Story-Map-content" data-toc-modified-id="Translate-Story-Map-content-4.3">Translate Story Map content</a></span></li>
<li><span><a href="#Update-cloned-Story-Map-item" data-toc-modified-id="Update-cloned-Story-Map-item-4.4">Update cloned Story Map item</a></span></li>
</ul>
<li><span><a href="#Conclusion" data-toc-modified-id="Conclusion-5">Conclusion</a></span></li>
<li><span><a href="#References" data-toc-modified-id="References-6">References</a></span></li>
</ul>
</div>
# Introduction
A [story map](https://www.esri.com/en-us/arcgis/products/arcgis-storymaps/overview) is a web map that is created for a given context with supporting information so that it becomes a stand-alone resource. It integrates maps, legends, text, photos, and video. Story maps can be built using Esri's story map templates and are a great way to quickly build useful information products tailored to an organization's needs. Using the templates, one can publish a story map without writing any code. One can simply create a web map, supply the text and images for the story, and configure the template files to create a story map.
Sometimes, there is a need to convert the text of a story map from one language to another so that it can be understood by nonnative speakers as well. This can be done either by employing a human translator or by using a machine translation system to automatically convert the text from one language to another.
Machine translation is a sub-field of computational linguistics that deals with the problem of translating an input text or speech from one language to another. With the recent advancements in **Natural Language Processing (NLP)** and **Deep Learning**, it is now possible for a machine translation system to reach human-like performance in translating text from one language to another.
In this notebook, we will pick a story map written in English language, and create another story map with the text translated to Spanish language using the `arcgis.learn.text`'s **TextTranslator** class. The **TextTranslator** class is part of the inference-only classes offered by the `arcgis.learn.text` submodule. These inference-only classes offer a simple API dedicated to several **Natural Language Processing (NLP)** tasks including **Masked Language Modeling**, **Text Generation**, **Sentiment Analysis**, **Summarization**, **Machine Translation** and **Question Answering**.
# Prerequisites
- Inferencing workflows for Inference-only Text models of `arcgis.learn.text` submodule is based on [Hugging Face Transformers](https://huggingface.co/transformers/v3.0.2/index.html) library.
- Refer to the section [Install deep learning dependencies of arcgis.learn module](https://developers.arcgis.com/python/guide/install-and-set-up/#Install-deep-learning-dependencies) for detailed explanation about deep learning dependencies.
- [Beautiful Soup](https://anaconda.org/anaconda/beautifulsoup4) python library to pull text out from the HTML content of the story map.
- **Choosing a pretrained model**: Depending on the task and the language of the input text, user might need to choose an appropriate transformer backbone to generate desired inference. This [link](https://huggingface.co/models?search=helsinki) lists out all the pretrained models offered by [Hugging Face Transformers](https://huggingface.co/transformers/v3.0.2/index.html) library that allows translation from a source language to one or more target languages.
# Imports
```
from arcgis import GIS
from bs4 import BeautifulSoup
from arcgis.learn.text import TextTranslator
```
# Translate Story Map from English to Spanish
In this notebook we have picked up a story map written in the **English** language, which talks about the near-term interim improvements (for the new bicycle and pedestrian bridge design over Lady Bird Lake) to South Pleasant Valley Road. Our goal will be to create a clone of this story map with the text translated to the **Spanish** language.
## Connect to GIS and clone Story Map
To achieve our goal, we will first connect to an ArcGIS online account, get the desired content by passing the appropriate item-id and cloning the item into our GIS account. We will then apply the `TextTranslator` model of `arcgis.learn.text` submodule to this cloned item to convert the content of the story map to **Spanish** language.
```
agol = GIS()
gis = GIS('home')
storymapitem = agol.content.get('c8eef2a96c88489c92010a63d0944881')
storymapitem
cloned_items = gis.content.clone_items([storymapitem], search_existing_items=False)
cloned_items
cloned_item = cloned_items[0]
cloned_item
cloned_item.id
```
## Instantiate text translator
Next, we will instantiate the class object for the `TextTranslator` model. We wish to translate text from the **English** language into the **Spanish** language. So we will invoke the object by passing the corresponding ISO language codes [[1]](#References) in the model constructor.
```
translator = TextTranslator(source_language='en', target_language='es')
```
We will also write some helper functions to help translate the content of story map into the desired language. The `replace` function is a recursive [[2]](#References) function that accepts a json (story map item dictionary) object `obj` and applies the function `func` on the values `v` of the `keys` list passed in the `replace` function argument.
```
def replace(obj, keys, func):
return {k: replace(func(v) if k in keys else v, keys, func)
for k,v in obj.items()} if isinstance(obj, dict) else obj
```
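A quick self-contained run of `replace`, substituting `str.upper` for the translator (the nested dictionary here is made up for illustration; note that the function recurses into nested dicts but does not walk lists):

```python
def replace(obj, keys, func):
    # same definition as above: apply func to values of the listed keys,
    # recursing into nested dictionaries
    return {k: replace(func(v) if k in keys else v, keys, func)
            for k, v in obj.items()} if isinstance(obj, dict) else obj

item = {'title': 'hello', 'meta': {'caption': 'world', 'views': 7}}
result = replace(item, ['title', 'caption'], str.upper)
print(result)  # {'title': 'HELLO', 'meta': {'caption': 'WORLD', 'views': 7}}
```

The original dictionary is left untouched; `replace` builds and returns a new one.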
The `translate` function will translate the English `text` (passed in the function argument) into the Spanish language. The story map text sometimes contains text wrapped inside HTML [[3]](#References) tags. We will use the `BeautifulSoup` library to get the non-HTML part of the input text content and use the `translator`'s `translate` method to translate that non-HTML part into the desired language (Spanish in this case).
```
def translate(text):
if text == '':
return text
soup = BeautifulSoup(text, "html.parser")
for txt in soup.find_all(text=True):
translation = translator.translate(txt)[0]['translated_text'] if txt.strip() != '' else txt
txt.string.replace_with(" " + translation + " ")
return str(soup)
```
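To see how `BeautifulSoup` separates the text nodes from the markup (independent of the translator, with a made-up snippet):

```python
from bs4 import BeautifulSoup

snippet = "<p>Hello <b>world</b></p>"
soup = BeautifulSoup(snippet, "html.parser")
# find_all(text=True) yields only the text nodes, not the tags
texts = [t for t in soup.find_all(text=True)]
print(texts)  # ['Hello ', 'world']
```

Each element is a `NavigableString`, which is why the `translate` function above can call `replace_with` on it to splice the translation back into the tree.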
## Translate Story Map content
We will call the story map item's `get_data()` method to retrieve the data associated with the item.
```
smdata = storymapitem.get_data()
```
The call to the above method will return a Python dictionary containing the contents of the story map. We wish to translate not only the `text` content of the story map but also things like the `title`, `summary`, `captions`, etc. To do so, we will call the `replace` function defined above with the desired arguments.
```
result = replace(smdata, ['text', 'alt', 'title', 'summary', 'byline', 'caption', 'storyLogoAlt'], translate)
```
## Update cloned Story Map item
Up to this point, the cloned story map item does not contain the translated version of the story map. We can update it by calling the `update()` method of the cloned item and passing a dictionary of the item attributes we wish to translate.
```
cloned_item.update({'url': cloned_item.url.replace(storymapitem.id, cloned_item.id),
'text': result,
'title': translator.translate(storymapitem['title'])[0]['translated_text'],
'description': translator.translate(storymapitem['description'])[0]['translated_text'],
'snippet': translator.translate(storymapitem['snippet'])[0]['translated_text']})
```
The cloned story map's text is now translated into Spanish and is ready to be shared.
```
cloned_item
cloned_item.share(True)
```
# Conclusion
This sample demonstrates how the inference-only `TextTranslator` class of the `arcgis.learn.text` submodule can be used to perform machine translation, converting text from one language to another. We showed how easy it is to translate a story map written in English into Spanish. A similar workflow can be followed to automate the task of translating story maps or other ArcGIS items into various languages.
# References
[1] [ISO Language Codes](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes)
[2] [Recursion](https://en.wikipedia.org/wiki/Recursion_(computer_science))
[3] [HTML](https://en.wikipedia.org/wiki/HTML)
MNIST classification (drawn from sklearn example)
=====================================================
MWEM is not particularly well suited for image data (where there are tons of features with relatively large ranges) but it is still able to capture some important information about the underlying distributions if tuned correctly.
We use a feature included with MWEM that allows a column to be specified for a custom bin count, if we are capping every other bin count at a small value. In this case, we specify that the numerical column (784) has 10 possible values. We do this with the dict {'784': 10}.
Here we borrow from a scikit-learn example, and insert MWEM synthetic data into their training example/visualization, to understand the tradeoffs.
https://scikit-learn.org/stable/auto_examples/linear_model/plot_sparse_logistic_regression_mnist.html#sphx-glr-download-auto-examples-linear-model-plot-sparse-logistic-regression-mnist-py
```
import time
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.utils import check_random_state
# pip install scikit-image
from skimage import data, color
from skimage.transform import rescale
# Author: Arthur Mensch <arthur.mensch@m4x.org>
# License: BSD 3 clause
# Turn down for faster convergence
t0 = time.time()
train_samples = 5000
# Load data from https://www.openml.org/d/554
data = fetch_openml('mnist_784', version=1, return_X_y=False)
data_np = np.hstack((data.data,np.reshape(data.target.astype(int), (-1, 1))))
from opendp.smartnoise.synthesizers.mwem import MWEMSynthesizer
# Here we set max bin count to be 10, so that we retain the numeric labels
synth = MWEMSynthesizer(10.0, 40, 15, 10, split_factor=1, max_bin_count = 128, custom_bin_count={'784':10})
synth.fit(data_np)
sample_size = 2000
synthetic = synth.sample(sample_size)
from sklearn.linear_model import RidgeClassifier
import utils
real = pd.DataFrame(data_np[:sample_size])
model_real, model_fake = utils.test_real_vs_synthetic_data(real, synthetic, RidgeClassifier, tsne=True)
# Classification
coef = model_real.coef_.copy()
plt.figure(figsize=(10, 5))
scale = np.abs(coef).max()
for i in range(10):
l1_plot = plt.subplot(2, 5, i + 1)
l1_plot.imshow(coef[i].reshape(28, 28), interpolation='nearest',
cmap=plt.cm.RdBu, vmin=-scale, vmax=scale)
l1_plot.set_xticks(())
l1_plot.set_yticks(())
l1_plot.set_xlabel('Class %i' % i)
plt.suptitle('Classification vector for...')
run_time = time.time() - t0
print('Example run in %.3f s' % run_time)
plt.show()
coef = model_fake.coef_.copy()
plt.figure(figsize=(10, 5))
scale = np.abs(coef).max()
for i in range(10):
l1_plot = plt.subplot(2, 5, i + 1)
l1_plot.imshow(coef[i].reshape(28, 28), interpolation='nearest',
cmap=plt.cm.RdBu, vmin=-scale, vmax=scale)
l1_plot.set_xticks(())
l1_plot.set_yticks(())
l1_plot.set_xlabel('Class %i' % i)
plt.suptitle('Classification vector for...')
run_time = time.time() - t0
print('Example run in %.3f s' % run_time)
plt.show()
```
# Dataset Descriptions
This notebook contains most of the datasets used in Pandas Cookbook along with the names, types, descriptions and some summary statistics of each column. This is not an exhaustive list as several datasets used in the book are quite small and are explained with enough detail in the book itself. The datasets presented here are the prominent ones that appear most frequently throughout the book.
## Datasets in order of appearance
* [Movie](#Movie-Dataset)
* [College](#College-Dataset)
* [Employee](#Employee-Dataset)
* [Flights](#Flights-Dataset)
* [Chinook Database](#Chinook-Database)
* [Crime](#Crime-Dataset)
* [Meetup Groups](#Meetup-Groups-Dataset)
* [Diamonds](#Diamonds-Dataset)
```
import pandas as pd
pd.options.display.max_columns = 80
```
# Movie Dataset
### Brief Overview
28 columns from 4,916 movies scraped from the popular website IMDB. Each row contains information on a single movie, dating from 1916 to 2015. Actor and director Facebook likes should be constant for all instances across all movies. For instance, Johnny Depp should have the same number of Facebook likes regardless of which movie he is in. Since each movie was not scraped at the exact same time, there are some inconsistencies in these counts. The dataset **movie_altered.csv** is a much cleaner version of this dataset.
```
movie = pd.read_csv('data/movie.csv')
movie.head()
movie.shape
pd.read_csv('data/descriptions/movie_decsription.csv', index_col='Column Name')
```
# College Dataset
### Brief Overview
US Department of Education data on 7,535 colleges. Only a sample of the available columns was used in this dataset. Visit [the website](https://collegescorecard.ed.gov/data/) for more info. Data was pulled in January, 2017.
```
college = pd.read_csv('data/college.csv')
college.head()
college.shape
pd.read_csv('data/descriptions/college_decsription.csv')
```
# Employee Dataset
### Brief Overview
The city of Houston provides information on all its employees to the public. This is a random sample of 2,000 employees with a few of the more interesting columns. For more, visit the [open Houston data website](http://data.houstontx.gov/). Data was pulled in December, 2016.
```
employee = pd.read_csv('data/employee.csv')
employee.head()
employee.shape
pd.read_csv('data/descriptions/employee_description.csv')
```
# Flights Dataset
### Brief Overview
A random sample of three percent of the US domestic flights originating from the ten busiest airports. Data is from the U.S. Department of Transportation's (DOT) Bureau of Transportation Statistics. [See here for more info](https://www.kaggle.com/usdot/flight-delays).
```
flights = pd.read_csv('data/flights.csv')
flights.head()
flights.shape
pd.read_csv('data/descriptions/flights_description.csv')
```
### Airline Codes
```
pd.read_csv('data/descriptions/airlines.csv')
```
### Airport codes
```
pd.read_csv('data/descriptions/airports.csv').head()
```
# Chinook Database
### Brief Overview
This is a sample database for a music store, distributed with the SQLite tutorial, containing 11 tables. The table description image is an excellent way to get familiar with the database. [Visit the sqlite tutorial website](http://www.sqlitetutorial.net/sqlite-sample-database/) for more detail.

# Crime Dataset
### Brief Overview
All crime and traffic accidents for the city of Denver from January to September of 2017. This dataset is stored in special binary form called *hdf5*. Pandas uses the PyTables library to help read the data into a DataFrame. [Read the documentation](http://pandas.pydata.org/pandas-docs/stable/io.html#io-hdf5) for more info on hdf5 formatted data.
```
crime = pd.read_hdf('data/crime.h5')
crime.head()
crime.shape
pd.read_csv('data/descriptions/crime_description.csv')
```
# Meetup Groups Dataset
### Brief Overview
Data was collected through the [meetup.com API](https://www.meetup.com/meetup_api/) on five Houston-area data science meetup groups. Each row represents a member joining a particular group.
```
meetup = pd.read_csv('data/meetup_groups.csv')
meetup.head()
meetup.shape
pd.read_csv('data/descriptions/meetup_description.csv')
```
# Diamonds Dataset
### Brief Overview
Quality, size and price of nearly 54,000 diamonds scraped from the [Diamond Search Engine](http://www.diamondse.info/) by Hadley Wickham. [Visit Blue Nile](https://www.bluenile.com/ca/education/diamonds?track=SideNav) for a beginner's guide to diamonds.
```
diamonds = pd.read_csv('data/diamonds.csv')
diamonds.head()
diamonds.shape
pd.read_csv('data/descriptions/diamonds_description.csv')
```
___
<a href='https://www.udemy.com/user/joseportilla/'><img src='../Pierian_Data_Logo.png'/></a>
___
<center><em>Content Copyright by Pierian Data</em></center>
# Working with CSV Files
Welcome back! Let's discuss how to work with CSV files in Python. A file with the CSV file extension is a Comma Separated Values file. All CSV files are plain text, contain alphanumeric characters, and structure the data contained within them in a tabular form. Don't confuse Excel files with CSV files: while CSV files are formatted very similarly to Excel files, they don't have data types for their values; everything is a string, with no font or color. They also don't have worksheets the way an Excel file does. Python does have several libraries for working with Excel files; you can check them out [here](http://www.python-excel.org/) and [here](https://www.xlwings.org/).
Files in the CSV format are generally used to exchange data, usually in large amounts, between different applications. Database programs, analytical software, and other applications that store massive amounts of information (like contacts and customer data) will usually support the CSV format.
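Concretely, a CSV file is nothing more than lines of text with commas between the values. A tiny sketch, parsing made-up contact data from an in-memory string rather than a file:

```python
import csv
import io

# A CSV "file" is just plain text; io.StringIO lets csv.reader treat a string as a file
raw = "first_name,last_name,email\nJohn,Doe,jdoe@example.com\nJane,Roe,jroe@example.com"
rows = list(csv.reader(io.StringIO(raw)))
print(rows[0])  # ['first_name', 'last_name', 'email'] -- the header line
print(rows[1])  # ['John', 'Doe', 'jdoe@example.com'] -- the first data row
```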
Let's explore how we can open a csv file with Python's built-in csv library.
____
## Notebook Location.
Run **pwd** inside a notebook cell to find out where your notebook is located
```
pwd
```
____
## Reading CSV Files
```
import csv
```
When passing in the file path, make sure to include the extension if it has one; you should be able to Tab autocomplete the file name. If you can't Tab autocomplete, that is a good indicator your file is not in the same location as your notebook. You can always type in the entire file path (it will look similar in formatting to the output of **pwd**).
```
data = open('example.csv')
data
```
### Encoding
Often CSV files may contain characters that you can't interpret with standard Python; this could be something like an **@** symbol, or even foreign characters. Let's view an example of this sort of error ([it's pretty common, so it's important to go over](https://stackoverflow.com/questions/9233027/unicodedecodeerror-charmap-codec-cant-decode-byte-x-in-position-y-character)).
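A minimal illustration of the underlying issue, separate from `example.csv`: the same bytes decode fine as UTF-8 but fail under a codec that can't represent them.

```python
# 'café' contains a non-ASCII character, so its UTF-8 form uses more than one byte for it
raw_bytes = 'café'.encode('utf-8')
print(raw_bytes)                  # b'caf\xc3\xa9'
print(raw_bytes.decode('utf-8'))  # café

try:
    raw_bytes.decode('ascii')
except UnicodeDecodeError as err:
    print("can't decode:", err)
```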
```
csv_data = csv.reader(data)
```
Casting to a list will give an error. Note the **can't decode** line in the error; this is a giveaway that we have an encoding problem!
```
data_lines = list(csv_data)
```
Let's now try reading it with a "utf-8" encoding.
```
data = open('example.csv',encoding="utf-8")
csv_data = csv.reader(data)
data_lines = list(csv_data)
# Looks like it worked!
data_lines[:3]
```
Note that the first item in the list is the header line; it contains the information about what each column represents. Let's format our printing just a bit:
```
for line in data_lines[:5]:
print(line)
```
Let's imagine we wanted a list of all the emails. For demonstration, since there are 1000 items plus the header, we will only do a few rows.
```
len(data_lines)
all_emails = []
for line in data_lines[1:15]:
all_emails.append(line[3])
print(all_emails)
```
What if we wanted a list of full names?
```
full_names = []
for line in data_lines[1:15]:
full_names.append(line[1]+' '+line[2])
full_names
```
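An aside not covered above: `csv.DictReader` (also in the standard library) spares you from remembering positional indices like `line[1]` by mapping each row to the header names. A sketch with made-up inline data:

```python
import csv
import io

# Made-up rows standing in for example.csv; DictReader takes its keys from the first row
raw = "id,first_name,last_name,email\n1,John,Doe,jdoe@example.com\n2,Jane,Roe,jroe@example.com"
full_names = []
for row in csv.DictReader(io.StringIO(raw)):
    full_names.append(row['first_name'] + ' ' + row['last_name'])
print(full_names)  # ['John Doe', 'Jane Roe']
```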
## Writing to CSV Files
We can also write csv files, either new ones or add on to existing ones.
### New File
**This will also overwrite any existing file with the same name, so be careful with this!**
```
# newline controls how universal newlines works (it only applies to text
# mode). It can be None, '', '\n', '\r', and '\r\n'.
file_to_output = open('to_save_file.csv','w',newline='')
csv_writer = csv.writer(file_to_output,delimiter=',')
csv_writer.writerow(['a','b','c'])
csv_writer.writerows([['1','2','3'],['4','5','6']])
file_to_output.close()
```
____
### Existing File
```
f = open('to_save_file.csv','a',newline='')
csv_writer = csv.writer(f)
csv_writer.writerow(['new','new','new'])
f.close()
```
That is all for the basics! If you believe you will be working with CSV files often, you may want to check out the powerful [pandas library](https://pandas.pydata.org/).
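As a teaser for that library (a sketch, assuming pandas is installed; the file name here is made up): `pd.read_csv` collapses the open/decode/parse steps above into one call and returns a DataFrame with typed columns.

```python
import csv

import pandas as pd

# Write a small file like the one created earlier (hypothetical name)
with open('to_save_pandas_demo.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['a', 'b', 'c'])
    writer.writerows([['1', '2', '3'], ['4', '5', '6']])

df = pd.read_csv('to_save_pandas_demo.csv')
print(df.shape)       # (2, 3): two data rows; the header row is handled for us
print(df['a'].sum())  # 5 -- the column is parsed as integers, not strings
```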
# T-ABSA Logistic Regression Model using word2vec
#### Preprocessing the reviews
Importing the libraries for preprocessing the reviews
```
import os
import pandas as pd
import nltk
from gensim.models import Word2Vec, word2vec
import matplotlib.pyplot as plt
import numpy as np
from nltk.corpus import stopwords
import os
import re
```
Loading the training dataset into python
```
data_dir = 'D:/jupyter/Year2_Research/Generate_Data/data/5_aspects/'
df_train = pd.read_csv(os.path.join(data_dir, "train_NLI.tsv"),sep="\t")
df_dev = pd.read_csv(os.path.join(data_dir, "train_NLI.tsv"),sep="\t")  # NOTE: the dev set is read from the training file here
df_test = pd.read_csv(os.path.join(data_dir, "test_NLI.tsv"),sep="\t")
df_train.tail(2)
frames = [df_train, df_dev, df_test]
combined_dataframe = pd.concat(frames)
combined_dataframe.iloc[3000]
combined_dataframe.tail()
combined_dataframe['concatinated'] = combined_dataframe['sentence1'] + ' ' + combined_dataframe['sentence2']
combined_dataframe['concatinated'][4102]
word2vec_training_dataset = combined_dataframe['concatinated'].values
word2vec_training_dataset[4000]
```
### Preprocessing the data
Convert each review in the training set to a list of sentences, where each sentence is in turn a list of words. Besides splitting reviews into sentences, non-letters and stop words are removed and all words are converted to lower case.
```
def review_to_wordlist(review, remove_stopwords=True):
"""
Convert a review to a list of words.
"""
# remove non-letters
review_text = re.sub("[^a-zA-Z]"," ", review)
# convert to lower case and split at whitespace
words = review_text.lower().split()
# remove stop words (True by default here)
if remove_stopwords:
stops = set(stopwords.words("english"))
words = [w for w in words if not w in stops]
return words
# Load the punkt tokenizer used for splitting reviews into sentences
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
def review_to_sentences(review, tokenizer, remove_stopwords=True):
"""
Split review into list of sentences where each sentence is a list of words.
"""
# use the NLTK tokenizer to split the paragraph into sentences
raw_sentences = tokenizer.tokenize(review.strip())
# each sentence is furthermore split into words
sentences = []
for raw_sentence in raw_sentences:
# If a sentence is empty, skip it
if len(raw_sentence) > 0:
sentences.append(review_to_wordlist(raw_sentence, remove_stopwords))
return sentences
train_sentences = [] # Initialize an empty list of sentences
for review in word2vec_training_dataset:
train_sentences += review_to_sentences(review, tokenizer)
train_sentences[4000]
```
### Training a word2vec model
```
model_name = 'train_model'
# Set values for various word2vec parameters
num_features = 300 # Word vector dimensionality
min_word_count = 40 # Minimum word count
num_workers = 3 # Number of threads to run in parallel
context = 10 # Context window size
downsampling = 1e-3 # Downsample setting for frequent words
if not os.path.exists(model_name):
# Initialize and train the model (this will take some time)
model = word2vec.Word2Vec(train_sentences, workers=num_workers, \
size=num_features, min_count = min_word_count, \
window = context, sample = downsampling)
# If you don't plan to train the model any further, calling
# init_sims will make the model much more memory-efficient.
model.init_sims(replace=True)
# It can be helpful to create a meaningful model name and
# save the model for later use. You can load it later using Word2Vec.load()
model.save(model_name)
else:
model = Word2Vec.load(model_name)
```
```
model.most_similar("internet")
```
### Building a Classifier
```
# shape of the data
df_train.shape
```
Encoding the labels of the dataset
```
y_train = df_train['label'].replace(['None','Positive','Negative'],[1,2,0])
x_cols = [x for x in df_train.columns if x != 'label']
# Split the data into two dataframes (one for the labels and the other for the independent variables)
X_data = df_train[x_cols]
X_data.tail()
X_data['concatinated'] = X_data['sentence1'] + ' ' + X_data['sentence2']
X_data['concatinated'][9]
X_train = X_data['concatinated'].values
X_train[100]
y_train[100]
```
## 3. Build classifier using word embedding
Each review is mapped to a feature vector by averaging the word embeddings of all words in the review. These features are then fed into a logistic regression classifier.
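In miniature, with a toy 3-dimensional embedding dict standing in for the trained 300-dimensional word2vec model, the averaging scheme works like this (unknown words are simply skipped, as in `make_feature_vec` below):

```python
import numpy as np

# Toy embeddings; the real model maps words to num_features-dimensional vectors
embeddings = {
    'good':  np.array([1.0, 0.0, 1.0]),
    'hotel': np.array([0.0, 1.0, 0.0]),
}

def average_vector(words, embeddings, num_features=3):
    vec = np.zeros(num_features, dtype='float32')
    known = [w for w in words if w in embeddings]  # skip out-of-vocabulary words
    for w in known:
        vec += embeddings[w]
    return vec / len(known) if known else vec

print(average_vector(['good', 'hotel', 'unknownword'], embeddings))  # [0.5 0.5 0.5]
```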
```
def make_feature_vec(words, model, num_features):
"""
Average the word vectors for a set of words
"""
feature_vec = np.zeros((num_features,),dtype="float32") # pre-initialize (for speed)
nwords = 0
#index2word_set = set(model.index2word) # words known to the model
index2word_set = set(model.wv.index2word) # words known to the model
for word in words:
if word in index2word_set:
nwords = nwords + 1
feature_vec = np.add(feature_vec, model.wv[word])
feature_vec = np.divide(feature_vec, nwords)
return feature_vec
def get_avg_feature_vecs(reviews, model, num_features):
"""
Calculate average feature vectors for all reviews
"""
counter = 0
review_feature_vecs = np.zeros((len(reviews),num_features), dtype='float32') # pre-initialize (for speed)
for review in reviews:
review_feature_vecs[counter] = make_feature_vec(review, model, num_features)
counter = counter + 1
return review_feature_vecs
# calculate average feature vectors for training and test sets
clean_train_reviews = []
for review in X_train:
clean_train_reviews.append(review_to_wordlist(review, remove_stopwords=True))
trainDataVecs = get_avg_feature_vecs(clean_train_reviews, model, num_features)
```
#### Fit a weighted logistic regression to the training data
```
from sklearn.linear_model import LogisticRegression
print("Fitting a weighted logistic regression to the labeled training data...")
model_lr = LogisticRegression(class_weight='balanced')
model_lr = model_lr.fit(trainDataVecs, y_train)
print("Fitting Completed")
```
## 4. Prediction
### Test set data preparation
```
# Split the data into two dataframes (one for the labels and the other for the independent variables)
x_cols = [x for x in df_test.columns if x != 'label']
X_data_test = df_test[x_cols]
# Combining the review with the generated auxiliary sentence
X_data_test['concatinated'] = X_data_test['sentence1'] + ' ' + X_data_test['sentence2']
# X test data
X_test = X_data_test['concatinated'].values
print(X_test[100:108])
# y test data
y_test = df_test['label'].replace(['None','Positive','Negative'],[1,2,0])
y_test[100:108]
clean_test_reviews = []
for review in X_test:
clean_test_reviews.append(review_to_wordlist(review, remove_stopwords=True))
testDataVecs = get_avg_feature_vecs(clean_test_reviews, model, num_features)
# remove instances in test set that could not be represented as feature vectors
nan_indices = list({x for x, y in np.argwhere(np.isnan(testDataVecs))})
if len(nan_indices) > 0:
    print('Removing {:d} instances from test set.'.format(len(nan_indices)))
    testDataVecs = np.delete(testDataVecs, nan_indices, axis=0)
    # drop the corresponding labels so y_test stays aligned with the feature vectors
    y_test = y_test.drop(y_test.index[nan_indices])
assert testDataVecs.shape[0] == len(y_test)
print("Predicting labels for test data..")
Y_predicted = model_lr.predict(testDataVecs)
```
Evaluating the performance of the model
```
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, Y_predicted))
Y_forest_score = model_lr.predict_proba(testDataVecs)
Y_forest_score
import csv
# Open/Create a file to append data
csvFile_pred = open('prediction_score.csv', 'w')
#Use csv Writer
csvWriter_pred = csv.writer(csvFile_pred)
csvWriter_pred.writerow(['predicted','score_none','score_pos','score_neg'])
# predict_proba columns follow the encoded class order [0: Negative, 1: None, 2: Positive]
for f in range(len(Y_predicted)):
    csvWriter_pred.writerow([Y_predicted[f], Y_forest_score[f][1], Y_forest_score[f][2], Y_forest_score[f][0]])
csvFile_pred.close()
dataframe = pd.read_csv('prediction_score.csv')
dataframe.tail()
```
### Evaluating the model
```
# -*- coding: utf-8 -*-
"""
Created on Sun Oct 20 11:40:28 2019
@author: David
"""
import collections
import numpy as np
import pandas as pd
from sklearn import metrics
def get_y_true():
    """
    Read file to obtain y_true.
    """
true_data_file = "D:/jupyter/Year2_Research/Generate_Data/data/5_aspects/test_NLI.tsv"
df = pd.read_csv(true_data_file,sep='\t')
y_true = []
for i in range(len(df)):
label = df['label'][i]
assert label in ['None', 'Positive', 'Negative'], "error!"
if label == 'None':
n = 1
elif label == 'Positive':
n = 2
else:
n = 0
y_true.append(n)
print(len(y_true))
return y_true
def get_y_pred():
    """
    Read file to obtain y_pred and scores.
    """
dataframe = pd.read_csv('prediction_score.csv')
pred=[]
score=[]
for f in range(len(dataframe)):
pred.append(dataframe.predicted[f])
score.append([float(dataframe.score_pos[f]),float(dataframe.score_none[f]),float(dataframe.score_neg[f])])
return pred, score
def sentitel_strict_acc(y_true, y_pred):
"""
Calculate "strict Acc" of aspect detection task of sentitel.
"""
total_cases=int(len(y_true)/5)
true_cases=0
for i in range(total_cases):
if y_true[i*5]!=y_pred[i*5]:continue
if y_true[i*5+1]!=y_pred[i*5+1]:continue
if y_true[i*5+2]!=y_pred[i*5+2]:continue
if y_true[i*5+3]!=y_pred[i*5+3]:continue
if y_true[i*5+4]!=y_pred[i*5+4]:continue
true_cases+=1
aspect_strict_Acc = true_cases/total_cases
return aspect_strict_Acc
def sentitel_macro_F1(y_true, y_pred):
"""
Calculate "Macro-F1" of aspect detection task of sentitel.
"""
p_all=0
r_all=0
count=0
for i in range(len(y_pred)//5):
a=set()
b=set()
for j in range(5):
if y_pred[i*5+j]!=1:
a.add(j)
if y_true[i*5+j]!=1:
b.add(j)
if len(b)==0:continue
a_b=a.intersection(b)
if len(a_b)>0:
p=len(a_b)/len(a)
r=len(a_b)/len(b)
else:
p=0
r=0
count+=1
p_all+=p
r_all+=r
Ma_p=p_all/count
Ma_r=r_all/count
aspect_Macro_F1 = 2*Ma_p*Ma_r/(Ma_p+Ma_r)
return aspect_Macro_F1
def sentitel_AUC_Acc(y_true, score):
"""
Calculate "Macro-AUC" of both aspect detection and sentiment classification tasks of sentitel.
Calculate "Acc" of sentiment classification task of sentitel.
"""
# aspect-Macro-AUC
aspect_y_true=[]
aspect_y_score=[]
aspect_y_trues=[[],[],[],[],[]]
aspect_y_scores=[[],[],[],[],[]]
    for i in range(len(y_true)):
        # labels are encoded Negative=0, None=1, Positive=2, so "None" is 1
        if y_true[i] == 1:
            aspect_y_true.append(1)  # "None": 1
        else:
            aspect_y_true.append(0)
        tmp_score = score[i][1]  # probability of "None" (score rows are [pos, none, neg])
        aspect_y_score.append(tmp_score)
        aspect_y_trues[i % 5].append(aspect_y_true[-1])
        aspect_y_scores[i % 5].append(aspect_y_score[-1])
aspect_auc=[]
for i in range(5):
aspect_auc.append(metrics.roc_auc_score(aspect_y_trues[i], aspect_y_scores[i]))
aspect_Macro_AUC = np.mean(aspect_auc)
# sentiment-Macro-AUC
sentiment_y_true=[]
sentiment_y_pred=[]
sentiment_y_score=[]
sentiment_y_trues=[[],[],[],[],[]]
sentiment_y_scores=[[],[],[],[],[]]
    for i in range(len(y_true)):
        if y_true[i] != 1:  # skip "None" (encoded as 1)
            sentiment_y_true.append(0 if y_true[i] == 2 else 1)  # "Positive": 0, "Negative": 1
            # probability of "Negative" given the aspect is present (score rows are [pos, none, neg])
            tmp_score = score[i][2] / (score[i][0] + score[i][2])
            sentiment_y_score.append(tmp_score)
            if tmp_score > 0.5:
                sentiment_y_pred.append(1)  # "Negative": 1
            else:
                sentiment_y_pred.append(0)
            sentiment_y_trues[i % 5].append(sentiment_y_true[-1])
            sentiment_y_scores[i % 5].append(sentiment_y_score[-1])
sentiment_auc=[]
for i in range(5):
sentiment_auc.append(metrics.roc_auc_score(sentiment_y_trues[i], sentiment_y_scores[i]))
sentiment_Macro_AUC = np.mean(sentiment_auc)
# sentiment Acc
sentiment_y_true = np.array(sentiment_y_true)
sentiment_y_pred = np.array(sentiment_y_pred)
sentiment_Acc = metrics.accuracy_score(sentiment_y_true,sentiment_y_pred)
return aspect_Macro_AUC, sentiment_Acc, sentiment_Macro_AUC
#####################################################################
y_true = (get_y_true())
y_pred, score = get_y_pred()
result = collections.OrderedDict()
aspect_strict_Acc = sentitel_strict_acc(y_true, y_pred)
aspect_Macro_F1 = sentitel_macro_F1(y_true, y_pred)
aspect_Macro_AUC, sentiment_Acc, sentiment_Macro_AUC = sentitel_AUC_Acc(y_true, score)
result = {'aspect_strict_Acc': aspect_strict_Acc,
'aspect_Macro_F1': aspect_Macro_F1,
'aspect_Macro_AUC': aspect_Macro_AUC,
'sentiment_Acc': sentiment_Acc,
'sentiment_Macro_AUC': sentiment_Macro_AUC}
print(result)
nameHandle = open('LR_word2vec_evaluation_results.txt', 'w')
nameHandle.write('aspect_strict_Acc:\t'+ str(aspect_strict_Acc))
nameHandle.write('\naspect_Macro_F1:\t' + str(aspect_Macro_F1))
nameHandle.write('\naspect_Macro_AUC:\t' + str(aspect_Macro_AUC))
nameHandle.write('\n\nsentiment_Acc:\t' + str(sentiment_Acc))
nameHandle.write('\nsentiment_Macro_AUC:\t' + str(sentiment_Macro_AUC))
nameHandle.close()
```
# k-NN movie recommendation
| User\Film | Movie A | Movie B | Movie C | ... | Movie # |
|-----------|---------|---------|---------|-----|---------|
| **User A**| 3 | 4 | 0 | ... | 5 |
| **User B**| 0 | 3 | 2 | ... | 0 |
| **User C**| 4 | 1 | 3 | ... | 4 |
| **User D**| 5 | 3 | 2 | ... | 3 |
| ... | ... | ... | ... | ... | ... |
| **User #**| 2 | 1 | 1 | ... | 4 |
Task: for a new user, find the k most similar users based on movie ratings and recommend a few new, previously unseen movies. Use the mean rating of the k users to decide which movies to recommend, and use cosine similarity as the distance function. A user hasn't seen a movie if they didn't rate it.
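The similarity step on its own can be sketched in plain NumPy (illustrative only; the actual implementation in this notebook uses TensorFlow below):

```python
import numpy as np

def cosine_similarities(ratings, new_user):
    # Normalise each user's rating vector to unit length; cosine similarity
    # between unit vectors is then just a dot product
    ratings_norm = ratings / np.linalg.norm(ratings, axis=1, keepdims=True)
    user_norm = new_user / np.linalg.norm(new_user)
    return ratings_norm @ user_norm

ratings = np.array([[3., 4., 0.],   # User A
                    [0., 3., 2.],   # User B
                    [4., 1., 3.]])  # User C
new_user = np.array([3., 4., 0.])
sims = cosine_similarities(ratings, new_user)
top_k = np.argsort(sims)[::-1][:2]  # indices of the 2 most similar users
print(top_k)  # [0 1]
```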
```
# Import necessary libraries
import tensorflow as tf
import numpy as np
# Define paramaters
set_size = 1000 # Number of users in dataset
n_features = 300 # Number of movies in dataset
K = 3 # Number of similar users
n_movies = 6 # Number of movies to recommend
# Generate dummy data
data = np.array(np.random.randint(0, 6, size=(set_size, n_features)), dtype=np.float32)
new_user = np.array(np.random.randint(0, 6, size=(1, n_features)), dtype=np.float32)
# Find the number of movies that user did not rate
not_rated = np.count_nonzero(new_user == 0)
# Case in which the new user rated all movies in our dataset
if not_rated == 0:
print('Regenerate new user')
# Case in which we try to recommend more movies than user didn't see
if not_rated < n_movies:
print('Regenerate new user')
# Print few examples
# print(data[:3])
# print(new_user)
# Input train vector
X1 = tf.placeholder(dtype=tf.float32, shape=[None, n_features], name="X1")
# Input test vector
X2 = tf.placeholder(dtype=tf.float32, shape=[1, n_features], name="X2")
# Cosine similarity
norm_X1 = tf.nn.l2_normalize(X1, axis=1)
norm_X2 = tf.nn.l2_normalize(X2, axis=1)
cos_similarity = tf.reduce_sum(tf.matmul(norm_X1, tf.transpose(norm_X2)), axis=1)
with tf.Session() as sess:
# Find all distances
distances = sess.run(cos_similarity, feed_dict={X1: data, X2: new_user})
# print(distances)
# Find indices of k user with highest similarity
_, user_indices = sess.run(tf.nn.top_k(distances, K))
# print(user_indices)
# Get users rating
# print(data[user_indices])
# New user ratings
# print(new_user[0])
# NOTICE:
# There is a possibility that we can incorporate
# user for e.g. movie A which he didn't see.
movie_ratings = sess.run(tf.reduce_mean(data[user_indices], axis=0))
# print(movie_ratings)
# Positions where the new user doesn't have rating
# NOTICE:
# In random generating there is a possibility that
# the new user rated all movies in data set, if that
# happens regenerate the new user.
movie_indices = sess.run(tf.where(tf.equal(new_user[0], 0)))
# print(movie_indices)
# Pick only the average rating of movies that have been rated by
# other users and haven't been rated by the new user, and among
# those movies pick n_movies to recommend to the new user
_, top_rated_indices = sess.run(tf.nn.top_k(movie_ratings[movie_indices].reshape(-1), n_movies))
# print(top_rated_indices)
# Indices of the movies with the highest mean rating, which the new user did
# not see, from the k most similar users based on movie ratings
print('Movie indices to recommend: ', movie_indices[top_rated_indices].T)
```
# Locally weighted regression (LOWESS)
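Before the code: locally weighted regression fits a separate weighted least-squares line around a chosen query point $x_0$, down-weighting samples far from it. The per-sample weight used in the cost here is a Gaussian kernel with bandwidth $\tau$:

$$w^{(i)} = \exp\left(-\frac{(x^{(i)} - x_0)^2}{2\tau^2}\right)$$

This is exactly the `t_w` tensor computed in the cell below, with `point_x` as $x_0$ and `tau` as $\tau$.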
```
# Import necessary libraries
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
tf.reset_default_graph()
# Load data as numpy array
x, y = np.loadtxt('../../data/02_LinearRegression/polynomial.csv', delimiter=',', unpack=True)
m = x.shape[0]
x = (x - np.mean(x, axis=0)) / np.std(x, axis=0)
y = (y - np.mean(y)) / np.std(y)
# Graphical preview
%matplotlib inline
fig, ax = plt.subplots()
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.scatter(x, y, edgecolors='k', label='Data')
ax.grid(True, color='gray', linestyle='dashed')
X = tf.placeholder(tf.float32, name='X')
Y = tf.placeholder(tf.float32, name='Y')
w = tf.Variable(0.0, name='weights')
b = tf.Variable(0.0, name='bias')
point_x = -0.5
tau = 0.15 # 0.22
t_w = tf.exp(tf.div(-tf.pow(tf.subtract(X, point_x), 2), tf.multiply(tf.pow(tau, 2), 2)))
Y_predicted = tf.add(tf.multiply(X, w), b)
cost = tf.reduce_mean(tf.multiply(tf.square(Y - Y_predicted), t_w), name='cost')
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(cost)
with tf.Session() as sess:
# Initialize the necessary variables, in this case, w and b
sess.run(tf.global_variables_initializer())
# Train the model for 500 epochs
for i in range(500):
total_cost = 0
# Session runs train_op and fetch values of loss
for sample in range(m):
# Session looks at all trainable variables that loss depends on and update them
_, l = sess.run([optimizer, cost], feed_dict={X: x[sample], Y:y[sample]})
total_cost += l
# Print epoch and loss
if i % 50 == 0:
print('Epoch {0}: {1}'.format(i, total_cost / m))
# Output the values of w and b
w1, b1 = sess.run([w, b])
print(sess.run(t_w, feed_dict={X: 1.4}))
print('W: %f, b: %f' % (w1, b1))
print('Cost: %f' % sess.run(cost, feed_dict={X: x, Y: y}))
# Append hypothesis that we found on the plot
x1 = np.linspace(-1.0, 0.0, 50)
ax.plot(x1, x1 * w1 + b1, color='r', label='Predicted')
ax.plot(x1, np.exp(-(x1 - point_x) ** 2 / (2 * 0.15 ** 2)), color='g', label='Weight function')
ax.legend()
fig
```
```
import pandas as pd
from pathlib import Path
import numpy as np
import seaborn as sns
from sklearn.pipeline import make_pipeline
import statsmodels.api as sm
from yellowbrick.model_selection import LearningCurve
from yellowbrick.regressor import ResidualsPlot
from yellowbrick.regressor import PredictionError
from sklearn.model_selection import train_test_split,RepeatedKFold
from sklearn.feature_selection import VarianceThreshold
from sklearn.preprocessing import StandardScaler, RobustScaler
import xgboost as xgb
from sklearn.externals import joblib
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor
from sklearn.metrics import mean_squared_error,mean_absolute_error,r2_score
import matplotlib.pyplot as plt
#import xgboost as xgb
%matplotlib inline
#dataframe final
df_final = pd.read_csv("../data/DF_train15_skempiAB_modeller_final.csv",index_col=0)
pdb_names = df_final.index
features_names = df_final.drop('ddG_exp',axis=1).columns
X = df_final.drop('ddG_exp',axis=1).astype(float)
y = df_final['ddG_exp']
# binned split
bins = np.linspace(0, len(X), 200)
y_binned = np.digitize(y, bins)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y_binned,random_state=1)
sns.distplot( y_test , color="red", label="ddG_exp_test")
sns.distplot( y_train , color="skyblue", label="ddG_exp_train")
import numpy as np
from yellowbrick.model_selection import ValidationCurve
#1)
selector = VarianceThreshold()
#2)
lr_model = XGBRegressor(random_state=1212,n_jobs=-1,n_estimators=100)
#3) Create pipeline
pipeline1 = make_pipeline(selector,lr_model)
viz = ValidationCurve(
    pipeline1, n_jobs=-1, param_name="xgbregressor__reg_alpha",
    param_range=[0.1,0.5,0.7,0.9,1.2], cv=10, scoring="r2"
)
#plt.ylim(0,0.6)
# Fit and poof the visualizer
viz.fit(X_train, y_train)
viz.poof()
XGBRegressor?
#1)
selector = VarianceThreshold()
#2)
xg_model = XGBRegressor(random_state=1212)
#3) Create pipeline
pipeline1 = make_pipeline(selector,xg_model)
# grid params
param_grid = {'xgbregressor__colsample_bytree': [0.3],
'xgbregressor__subsample': [0.5],
'xgbregressor__n_estimators':[100],
'xgbregressor__max_depth': [6],
'xgbregressor__gamma': [3],
'xgbregressor__learning_rate': [0.07],
'xgbregressor__min_child_weight':[20],
'xgbregressor__reg_lambda': [2.7],
'xgbregressor__reg_alpha': [1.9],
'variancethreshold__threshold':[0.0]
}
cv = RepeatedKFold(n_splits=5,n_repeats=10,random_state=13)
# Instantiate the grid search model
grid1 = GridSearchCV(pipeline1, param_grid, verbose=5, n_jobs=-1,cv=cv,scoring=['neg_mean_squared_error','r2'],
refit='r2',return_train_score=True)
grid1.fit(X_train,y_train)
# index of best scores
rmse_bestCV_test_index = grid1.cv_results_['mean_test_neg_mean_squared_error'].argmax()
rmse_bestCV_train_index = grid1.cv_results_['mean_train_neg_mean_squared_error'].argmax()
r2_bestCV_test_index = grid1.cv_results_['mean_test_r2'].argmax()
r2_bestCV_train_index = grid1.cv_results_['mean_train_r2'].argmax()
# scores
rmse_bestCV_test_score = grid1.cv_results_['mean_test_neg_mean_squared_error'][rmse_bestCV_test_index]
rmse_bestCV_test_std = grid1.cv_results_['std_test_neg_mean_squared_error'][rmse_bestCV_test_index]
rmse_bestCV_train_score = grid1.cv_results_['mean_train_neg_mean_squared_error'][rmse_bestCV_train_index]
rmse_bestCV_train_std = grid1.cv_results_['std_train_neg_mean_squared_error'][rmse_bestCV_train_index]
r2_bestCV_test_score = grid1.cv_results_['mean_test_r2'][r2_bestCV_test_index]
r2_bestCV_test_std = grid1.cv_results_['std_test_r2'][r2_bestCV_test_index]
r2_bestCV_train_score = grid1.cv_results_['mean_train_r2'][r2_bestCV_train_index]
r2_bestCV_train_std = grid1.cv_results_['std_train_r2'][r2_bestCV_train_index]
print('CV test RMSE {:f} +/- {:f}'.format(np.sqrt(-rmse_bestCV_test_score),np.sqrt(rmse_bestCV_test_std)))
print('CV train RMSE {:f} +/- {:f}'.format(np.sqrt(-rmse_bestCV_train_score),np.sqrt(rmse_bestCV_train_std)))
print('CV test r2 {:f} +/- {:f}'.format(r2_bestCV_test_score,r2_bestCV_test_std))
print('CV train r2 {:f} +/- {:f}'.format(r2_bestCV_train_score,r2_bestCV_train_std))
print(r2_bestCV_train_score-r2_bestCV_test_score)
print("",grid1.best_params_)
y_test_pred = grid1.best_estimator_.predict(X_test)
y_train_pred = grid1.best_estimator_.predict(X_train)
print("\nRMSE for test dataset: {}".format(np.round(np.sqrt(mean_squared_error(y_test, y_test_pred)), 2)))
print("RMSE for train dataset: {}".format(np.round(np.sqrt(mean_squared_error(y_train, y_train_pred)), 2)))
print("pearson corr {:f}".format(np.corrcoef(y_test_pred,y_test)[0][1]))
print('R2 test',grid1.score(X_test,y_test))
print('R2 train',grid1.score(X_train,y_train))
visualizer = ResidualsPlot(grid1.best_estimator_,hist=False)
visualizer.fit(X_train, y_train) # Fit the training data to the model
visualizer.score(X_test, y_test) # Evaluate the model on the test data
visualizer.poof() # Draw/show/poof the data
perror = PredictionError(grid1.best_estimator_)
perror.fit(X_train, y_train) # Fit the training data to the visualizer
perror.score(X_test, y_test) # Evaluate the model on the test data
g = perror.poof()
viz = LearningCurve(grid1.best_estimator_, cv=cv, n_jobs=-1,scoring='r2',train_sizes=np.linspace(0.3, 1.0, 10))
viz.fit(X, y)
plt.ylim(0,0.6)
viz.poof()
final_xgb = grid1.best_estimator_.fit(X,y)
# save final model
joblib.dump(final_xgb, 'XGBmodel_train15skempiAB_FINAL.pkl')
from sklearn.base import BaseEstimator
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor, XGBClassifier
class XGBoostWithEarlyStop(BaseEstimator):
def __init__(self, early_stopping_rounds=5, test_size=0.1,
eval_metric='rmse', **estimator_params):
self.early_stopping_rounds = early_stopping_rounds
self.test_size = test_size
        self.eval_metric = eval_metric
if self.estimator is not None:
self.set_params(**estimator_params)
def set_params(self, **params):
return self.estimator.set_params(**params)
def get_params(self, **params):
return self.estimator.get_params()
def fit(self, X, y):
x_train, x_val, y_train, y_val = train_test_split(X, y, test_size=self.test_size)
self.estimator.fit(x_train, y_train,
early_stopping_rounds=self.early_stopping_rounds,
eval_metric=self.eval_metric, eval_set=[(x_val, y_val)])
return self
def predict(self, X):
return self.estimator.predict(X)
class XGBoostRegressorWithEarlyStop(XGBoostWithEarlyStop):
def __init__(self, *args, **kwargs):
self.estimator = XGBRegressor()
super(XGBoostRegressorWithEarlyStop, self).__init__(*args, **kwargs)
class XGBoostClassifierWithEarlyStop(XGBoostWithEarlyStop):
def __init__(self, *args, **kwargs):
self.estimator = XGBClassifier()
super(XGBoostClassifierWithEarlyStop, self).__init__(*args, **kwargs)
# References:
# https://www.kaggle.com/c/santander-customer-satisfaction/discussion/20662
# https://jakevdp.github.io/PythonDataScienceHandbook/05.03-hyperparameters-and-model-validation.html
print('CV test RMSE',np.sqrt(-grid.best_score_))
print('CV train RMSE',np.sqrt(-grid.cv_results_['mean_train_score'].max()))
#print('Training score (r2): {}'.format(r2_score(X_train, y_train)))
#print('Test score (r2): {}'.format(r2_score(X_test, y_test)))
print(grid.best_params_)
y_test_pred = grid.best_estimator_.predict(X_test)
y_train_pred = grid.best_estimator_.predict(X_train)
print("\nRoot mean square error for test dataset: {}".format(np.round(np.sqrt(mean_squared_error(y_test, y_test_pred)), 2)))
print("Root mean square error for train dataset: {}".format(np.round(np.sqrt(mean_squared_error(y_train, y_train_pred)), 2)))
print("pearson corr: ",np.corrcoef(y_test_pred,y_test)[0][1])
viz = LearningCurve(grid.best_estimator_, cv=5, n_jobs=10, scoring='neg_mean_squared_error', train_sizes=np.linspace(.1, 1.0, 10))
viz.fit(X, y)
viz.poof()
```
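The `fit()` of the early-stopping wrapper above carves an internal hold-out slice off every training set it receives and passes it to XGBoost as the `eval_set`. A minimal sketch of just that split (sizes only, no XGBoost required; the array contents are made up for illustration):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(200).reshape(100, 2)  # 100 illustrative samples
y = np.arange(100)

# test_size=0.1 mirrors the wrapper's default internal hold-out
x_train, x_val, y_train, y_val = train_test_split(X, y, test_size=0.1, random_state=0)
print(len(x_train), len(x_val))  # 90 rows train the booster, 10 drive early stopping
```

One consequence of this design is that the outer GridSearchCV folds never see the internal hold-out rows during fitting, so each candidate trains on slightly less data than the fold provides.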
# Full methodology: grid search with early stopping
```
XGBRegressor?
import numpy as np
from yellowbrick.model_selection import ValidationCurve
#1)
selector = VarianceThreshold()
#2)
lr_model = XGBoostRegressorWithEarlyStop(random_state=1212,n_jobs=-1,n_estimators=30)
#3) Create the pipeline
pipeline1 = make_pipeline(selector,lr_model)
viz = ValidationCurve(
pipeline1, n_jobs=-1, param_name="xgboostregressorwithearlystop__subsample",
param_range=[0.5,0.6,0.7,0.8,0.9,1], cv=10, scoring="r2"
)
#plt.ylim(0,0.6)
# Fit and poof the visualizer
viz.fit(X_train, y_train)
viz.poof()
import os
if __name__ == "__main__":
# NOTE: on posix systems, this *has* to be here and in the
# `__name__ == "__main__"` clause to run XGBoost in parallel processes
# using fork, if XGBoost was built with OpenMP support. Otherwise, if you
# build XGBoost without OpenMP support, you can use fork, which is the
# default backend for joblib, and omit this.
try:
from multiprocessing import set_start_method
except ImportError:
raise ImportError("Unable to import multiprocessing.set_start_method."
" This example only runs on Python 3.4")
#set_start_method("forkserver")
import numpy as np
import pandas as pd
from pathlib import Path
from sklearn.metrics import mean_squared_error,mean_absolute_error,r2_score
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_boston
import xgboost as xgb
from sklearn.base import BaseEstimator
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor, XGBClassifier
class XGBoostWithEarlyStop(BaseEstimator):
def __init__(self, early_stopping_rounds=10, test_size=0.15,
eval_metric='rmse', random_state=1212,**estimator_params):
self.early_stopping_rounds = early_stopping_rounds
self.test_size = test_size
self.eval_metric = eval_metric  # note: a chained assignment here used to force 'rmse' regardless of the argument
if getattr(self, 'estimator', None) is not None:
self.set_params(**estimator_params)
def set_params(self, **params):
return self.estimator.set_params(**params)
def get_params(self, **params):
return self.estimator.get_params()
def fit(self, X, y):
x_train, x_val, y_train, y_val = train_test_split(X, y, test_size=self.test_size)
self.estimator.fit(x_train, y_train,
early_stopping_rounds=self.early_stopping_rounds,
eval_metric=self.eval_metric, eval_set=[(x_val, y_val)])
return self
def predict(self, X):
return self.estimator.predict(X)
class XGBoostRegressorWithEarlyStop(XGBoostWithEarlyStop):
def __init__(self, *args, **kwargs):
self.estimator = XGBRegressor()
super(XGBoostRegressorWithEarlyStop, self).__init__(*args, **kwargs)
class XGBoostClassifierWithEarlyStop(XGBoostWithEarlyStop):
def __init__(self, *args, **kwargs):
self.estimator = XGBClassifier()
super(XGBoostClassifierWithEarlyStop, self).__init__(*args, **kwargs)
# Load data
ABPRED_DIR = Path().cwd().parent
DATA = ABPRED_DIR / "data"
#dataframe final
df_final = pd.read_csv(DATA/"../data/DF_contact400_energy_sasa.FcorrZero.csv",index_col=0)
pdb_names = df_final.index
features_names = df_final.drop('ddG_exp',axis=1).columns
# Data final
X = df_final.drop('ddG_exp',axis=1).astype(float)
y = df_final['ddG_exp']
#Split data
# split for final test
X_train, X_test, y_train, y_test = train_test_split(X, y,train_size=0.75,random_state=1212)
njob = 4
os.environ["OMP_NUM_THREADS"] = str(njob) # or to whatever you want
xgb_model = XGBoostRegressorWithEarlyStop()
param_grid = {'colsample_bytree': [0.7],
'subsample': [1],
'n_estimators':[1000],
'max_depth': [10],
'gamma': [1],
'learning_rate': [0.05],
'min_child_weight':[1],
'reg_lambda': [0],
'reg_alpha': [10],
'colsample_bylevel':[0.9],
'random_state':[1212]}
grid = GridSearchCV(xgb_model, param_grid, verbose=5, n_jobs=njob,cv=10,scoring=['neg_mean_squared_error','r2'],
refit='r2',return_train_score=True)
grid.fit(X_train, y_train)
# index of best scores
rmse_bestCV_test_index = grid.cv_results_['mean_test_neg_mean_squared_error'].argmax()
rmse_bestCV_train_index = grid.cv_results_['mean_train_neg_mean_squared_error'].argmax()
r2_bestCV_test_index = grid.cv_results_['mean_test_r2'].argmax()
r2_bestCV_train_index = grid.cv_results_['mean_train_r2'].argmax()
# scores
rmse_bestCV_test_score = grid.cv_results_['mean_test_neg_mean_squared_error'][rmse_bestCV_test_index]
rmse_bestCV_test_std = grid.cv_results_['std_test_neg_mean_squared_error'][rmse_bestCV_test_index]
rmse_bestCV_train_score = grid.cv_results_['mean_train_neg_mean_squared_error'][rmse_bestCV_train_index]
rmse_bestCV_train_std = grid.cv_results_['std_train_neg_mean_squared_error'][rmse_bestCV_train_index]
r2_bestCV_test_score = grid.cv_results_['mean_test_r2'][r2_bestCV_test_index]
r2_bestCV_test_std = grid.cv_results_['std_test_r2'][r2_bestCV_test_index]
r2_bestCV_train_score = grid.cv_results_['mean_train_r2'][r2_bestCV_train_index]
r2_bestCV_train_std = grid.cv_results_['std_train_r2'][r2_bestCV_train_index]
print('CV test RMSE {:f} +/- {:f}'.format(np.sqrt(-rmse_bestCV_test_score),np.sqrt(rmse_bestCV_test_std)))
print('CV train RMSE {:f} +/- {:f}'.format(np.sqrt(-rmse_bestCV_train_score),np.sqrt(rmse_bestCV_train_std)))
print('CV test r2 {:f} +/- {:f}'.format(r2_bestCV_test_score,r2_bestCV_test_std))
print('CV train r2 {:f} +/- {:f}'.format(r2_bestCV_train_score,r2_bestCV_train_std))
print(r2_bestCV_train_score-r2_bestCV_test_score)
print("",grid.best_params_)
y_test_pred = grid.best_estimator_.predict(X_test)
y_train_pred = grid.best_estimator_.predict(X_train)
print("\nRMSE for test dataset: {}".format(np.round(np.sqrt(mean_squared_error(y_test, y_test_pred)), 2)))
print("RMSE for train dataset: {}".format(np.round(np.sqrt(mean_squared_error(y_train, y_train_pred)), 2)))
print("pearson corr {:f}".format(np.corrcoef(y_test_pred,y_test)[0][1]))
viz = LearningCurve(grid.best_estimator_, n_jobs=-1,cv=10, scoring='neg_mean_squared_error')
viz.fit(X, y)
viz.poof()
perror = PredictionError(grid.best_estimator_)
perror.fit(X_train, y_train) # Fit the training data to the visualizer
perror.score(X_test, y_test) # Evaluate the model on the test data
g = perror.poof()
visualizer = ResidualsPlot(grid.best_estimator_)
visualizer.fit(X_train, y_train) # Fit the training data to the model
visualizer.score(X_test, y_test) # Evaluate the model on the test data
visualizer.poof() # Draw/show/poof the data
perror = PredictionError(model)
perror.fit(X_train_normal, y_train_normal) # Fit the training data to the visualizer
perror.score(X_test_normal.values, y_test_normal.values) # Evaluate the model on the test data
g = perror.poof()
```
# Django2.2
**Python Web Framework**:<https://wiki.python.org/moin/WebFrameworks>
To be honest, on the web side I've always used the .NET stack's `MVC and WebAPI`; with Python I've mostly done data-related work (crawlers, simple analysis, etc.) and only know Flask on the web side. Everyone says Python's `Django` is a "site-building powerhouse" with `auto-generated admin pages`, so I gave `Django2.2` (the latest version at the time) a try
> My take: .NET MVC is best at `quickly generating front-end pages and their validation` (from Model + View), while Python's `Django` is best at `quickly generating admin backend pages` (by registering Models). **Both languages are common choices for rapid site building** (project V1~V2 stage)
Most tutorials online cover Django 1.x and much of it no longer applies under 2.x, so here are my study notes and some takeaways:
> PS: for ASP.NET MVC articles, see what I wrote in 2016: <https://www.cnblogs.com/dunitian/tag/MVC/>
官方文档:<https://docs.djangoproject.com/zh-hans/2.2/releases/2.2/>
## 1. Environment
### 1. Virtual environments
I never paid much attention to this before — I've always managed different package versions with Conda — but the Python ecosystem also offers `virtualenv` and `virtualenvwrapper` for it
---
### 2. Django commands
1.**Create an empty project: `django-admin startproject <project_name>`**
> PS: don't start the project name with a digit~
```shell
# create a project named base_demo
django-admin startproject base_demo
# directory layout
|-base_demo (folder)
|---__init__.py (marks the folder as a Python package)
|---settings.py (project settings: configure here after creating an app)
|---urls.py (URL routing configuration)
|---wsgi.py (WSGI entry point between the web server and Django)
|-manage.py (project management script, used to generate apps)
```
2.**Create an app: `python manage.py startapp <app_name>`**
> Each module of a project is an app, e.g. a products module, an orders module, etc.
```shell
# create a users app
python manage.py startapp users
├─base_demo
│ __init__.py
│ settings.py
│ urls.py
│ wsgi.py
├─manage.py (project management script, used to generate apps)
│
└─users (the newly created app)
│ │ __init__.py
│ │ admin.py (admin-site registration)
│ │ models.py (database models)
│ │ views.py (the C of MVC: request-handling/view functions)
│ │ tests.py (test code)
│ │ apps.py: app metadata configuration (optional)
│ │
│ └─migrations: migration files (generated from the models)
│ __init__.py
```
**PS: remember to register the app in the project's (`base_demo`) settings.py~**
```py
INSTALLED_APPS = [
......
'users', # register your own app
]
```
3.**Run the project: `python manage.py runserver`**
> PS: specify a port: `python manage.py runserver 8080`
## 2. MVT introduction
**Everyone knows MVC (Model-View-Controller); Django's take on MVC is called MVT (Model-View-Template)**
> PS: Django came out very early and picked its own names, but the usage and ideas are the same
### 2.1. M (Model)
#### 2.1.1. Defining model classes
- 1.**Generate migration files: `python manage.py makemigrations`**
- PS: generated from your Model code (models don't need an explicit ID attribute)
- 2.**Apply the migrations to create tables: `python manage.py migrate`**
- PS: executes the generated migration files
PS: similar to EF's `CodeFirst`; Django defaults to `sqlite`, and switching databases is covered later
A quick demo first:
**1. Define the model class** (the DB is generated from the code)
```py
# users > models.py
from django.db import models
# user info table
class UserInfo(models.Model):
# string type, max length 20
name = models.CharField(max_length=20)
# creation time: datetime type
create_time = models.DateTimeField()
# update time
update_time = models.DateTimeField()
```
**2. Generate the database**
```shell
# generate the migration files
> python manage.py makemigrations
Migrations for 'userinfo':
userinfo\migrations\0001_initial.py
- Create model UserInfo
# apply the migrations to create the tables
> python manage.py migrate
Operations to perform:
Apply all migrations: admin, auth, contenttypes, sessions, userinfo
Running migrations:
Applying contenttypes.0001_initial... OK
Applying auth.0001_initial... OK
Applying admin.0001_initial... OK
Applying admin.0002_logentry_remove_auto_add... OK
Applying admin.0003_logentry_add_action_flag_choices... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying auth.0007_alter_validators_add_error_messages... OK
Applying auth.0008_alter_user_username_max_length... OK
Applying auth.0009_alter_user_last_name_max_length... OK
Applying sessions.0001_initial... OK
Applying userinfo.0001_initial... OK
```
Then the corresponding table is generated automatically ==> **`users_userinfo`** (app name + "_" + lowercased model class name)

Further reading on default timestamps: <https://www.cnblogs.com/huchong/p/7895263.html>
#### 2.1.2. Generating the admin backend
##### 1. Localization (display the admin pages in Chinese)
Mainly just change the `language` and `time zone` in `settings.py` (the admin site's language and time)
```py
# use Chinese (mnemonic for zh-hans: zh + hans, 'Chinese characters')
LANGUAGE_CODE = 'zh-hans'
# use China's time zone
TIME_ZONE = 'Asia/Shanghai'
```
##### 2. Creating an admin user
**Create a superuser: `python manage.py createsuperuser`**
```shell
python manage.py createsuperuser
用户名 (leave blank to use 'win10'): dnt # defaults to the computer's user name if left blank
电子邮件地址: # optional
Password:
Password (again):
Superuser created successfully.
```
**Tip: if you forget the password, create a new superuser account and delete the old one**
> PS: or update the old account's password field from the new account's password field
Further reading: <a href="https://blog.csdn.net/dsjakezhou/article/details/84319228">resetting the Django admin password</a>
##### 3. The admin pages
The key step is **registering the model class in admin**
For example, to generate a management page for the UserInfo class created earlier:
```py
# base_demo > users > admin.py
from django.contrib import admin
from users.models import UserInfo
# from .models import UserInfo
# register the model class (the admin pages are generated automatically)
admin.site.register(UserInfo) # don't forget .site
```
Then run Django (`python manage.py runserver`), visit "127.0.0.1:8000/admin", and after logging in you can manage the data
> PS: to serve the admin at the site root instead of /admin, edit the project's `urls.py` (covered later)

##### 4. Customizing the display
Registering the model class is enough, but the default display is a bit unfriendly, e.g.:

The list page shows "UserInfo object" as each row's title, whereas we'd normally display the user name etc.

so ==> let's tweak it ourselves
Recall from earlier: the page displays `str(obj)`, so overriding the magic method `__str__` changes what is shown
```py
# base_demo > users > models.py
# user info table
class UserInfo(models.Model):
# string type, max length 20
name = models.CharField(max_length=20)
# creation time: datetime type
create_time = models.DateTimeField()
# update time
update_time = models.DateTimeField()
def __str__(self):
"""prettify the admin list display"""
return self.name
```
Visiting again now shows the nicer display (**no need to restart Django**):

Doesn't Django provide a built-in way to do this? Of course it does — keep reading:
```py
# base_demo > users > admin.py
from django.contrib import admin
from .models import UserInfo
# custom model admin class
class UserInfoAdmin(admin.ModelAdmin):
# columns shown on the list page (matching the model's attributes)
list_display = ["id", "name", "create_time", "update_time", "datastatus"]
# register the model class with its admin class (pages generated automatically)
admin.site.register(UserInfo, UserInfoAdmin)
```
Without changing anything else, the admin list layout is updated:
> PS: set a field's `verbose_name` to show a Chinese label in the admin, e.g. `name = models.CharField(max_length=25, verbose_name="姓名")`

More customization options are covered later~
### 2.3. V (View)
This corresponds to the C (Controller) in MVC
> PS: slightly more work than .NET MVC or Python's Flask — URL mappings need a bit of configuration (no big deal, it doesn't take long)
This part can be a little confusing at first, so let's follow a diagram:
**Say we want the index page of the users app (`/users/index`)**
#### 2.3.1. Defining the view function
This is no different from defining a controller action:
> PS: the function must take `request` (analogous to the mandatory `self` of instance methods)

```py
from django.http import HttpResponse
# 1. define the view function
# http://127.0.0.1:8000/users/index
def index(request):
print(request)
# respond to the browser (fetch pages from T and data from M as needed)
return HttpResponse('this is the index page of the users app~')
```
#### 2.3.2. Configuring routes
Since the desired address is `/users/index`, the project urls must also route `/users`:
> PS: I split the URL config per app so it stays manageable as modules grow; putting everything in one urls.py also works

```py
# base_demo > urls.py
from django.contrib import admin
from django.urls import path, include
# project URL configuration
urlpatterns = [
path('users/', include("users.urls")), # route entry
]
```
Finally, the users app's own URL matching:

```py
# users > urls.py
from django.urls import path
from . import views
# 2. URL configuration (maps URL addresses to views)
urlpatterns = [
# /users/index ==> the index view function
path('index', views.index),
]
```
#### 2.3.3. Accessing the URL
Now visiting `127.0.0.1:8000/users/index` works:

A quick walkthrough of the process:
1. The project's urls.py is matched first
- `path('users/', include("users.urls")), # route entry`
2. Anything starting with `/users/` is handed to the `users` app's own `urls.py`
- `path('index', views.index),`
3. `/users/index` therefore resolves to the `index` view function
4. The body of `def index(request): ...` runs and its result is returned
### 2.4. T (Template)
This corresponds to the V in MVC; a simple example:
#### 2.4.1. Creating a template
Django 1.x required configuring template paths and the like; now simply creating a `templates` folder inside the app is enough
Let's define a template for the list:

Define the view function (like defining a controller action)

Configure the corresponding route:

And there's the result:

If the previously added data is deleted, the default content shows instead:

#### 2.4.2. Specifying the template path
The template location can also be specified explicitly (personal preference):
Open the project's `settings.py` and set the `DIRS` value of `TEMPLATES` to a default template path:
```py
# BASE_DIR: absolute path of the current project
# BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# https://docs.djangoproject.com/zh-hans/2.2/ref/settings/#templates
TEMPLATES = [
{
...
'DIRS': [os.path.join(BASE_DIR, 'templates')], # absolute path to the templates
...
},
]
```
### Extra: using a MySQL database
The full walkthrough is in an earlier article: <a href="https://www.cnblogs.com/dotnetcrazy/p/10782441.html" target="_blank">notes on the pitfalls of using MariaDB and MySQL with Django 2.2</a>
A quick pass here:
#### 1. Create the database
Django won't create the database for you — do it yourself, e.g.: `create database django charset=utf8;`
#### 2. Configure the database
The relevant docs URLs are included:
```py
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'django', # which database to use
'USER': 'root', # mysql user name
'PASSWORD': 'dntdnt', # that user's password
'HOST': '127.0.0.1', # database server IP address
'PORT': 3306, # corresponding port
# https://docs.djangoproject.com/en/2.2/ref/settings/#std:setting-OPTIONS
'OPTIONS': {
# https://docs.djangoproject.com/zh-hans/2.2/ref/databases/#setting-sql-mode
# for SQL mode see my earlier article: https://www.cnblogs.com/dotnetcrazy/p/10374091.html
'init_command': "SET sql_mode='STRICT_TRANS_TABLES'", # set the SQL mode
},
}
}
```
Minimal configuration:

Configure in the project's init.py file:
```py
import pymysql
# MySQLdb's Python 3 support is lacking, so we use PyMySQL instead
pymysql.install_as_MySQLdb()
```
Illustration:

#### 3. Working around version checks
If your Django is the latest 2.2 and PyMySQL the latest 0.9.3, Django will complain:
> django.core.exceptions.ImproperlyConfigured: mysqlclient 1.3.13 or newer is required; you have 0.9.3.
This is Django's version check for MySQLdb; since we're using PyMySQL, it can be ignored

Running again raises another error: `AttributeError: 'str' object has no attribute 'decode'`
This one shouldn't be patched blindly, so debug and inspect first:

It turns out decode was called on a string (normally strings are encoded and bytes are decoded):

The fix is simple: change it to encode

After that everything works, and newly created projects won't hit the issue either
### Extra: a command reference so the commands aren't forgotten


## 3. MVT fundamentals
### 3.1. M fundamentals
#### 3.1. Field types
Model class naming follows the usual variable-naming rules, plus one more: **no `__`** (double underscore)
> PS: this will make instant sense when we get to queries (`__` separates query keywords there)
A quick list of **common field types**; a model's database fields are defined as **`attribute = models.FieldType(options)`**:
| Field type | Notes |
| --- | --- |
| `AutoField` | **auto-incrementing int** (by default Django automatically creates an auto-increment attribute named id) |
| `BigAutoField` | **auto-incrementing bigint** (same default behavior as above) |
| **`BooleanField`** | **boolean**, True or False |
| `NullBooleanField` | **nullable boolean**: Null, True, or False |
| **`CharField(max_length=...)`** | `varchar` **string**; max_length is the maximum number of characters |
| **`TextField`** | **large text**, typically used beyond 4000 characters |
| **`IntegerField`** | **integer** |
| `BigIntegerField` | **big integer** |
| **`DecimalField(max_digits=None, decimal_places=None)`** | **decimal number**; `max_digits`: total digits, `decimal_places`: digits after the point |
| `FloatField` | **floating-point number** |
| `DateField([auto_now=True] \| [auto_now_add=True])` | **date**; `auto_now_add`: set automatically on creation, `auto_now`: set automatically on every save |
| `TimeField([auto_now=True] \| [auto_now_add=True])` | **time**; same parameters as `DateField` |
| **`DateTimeField([auto_now=True] \| [auto_now_add=True])`** | **datetime**; same parameters as `DateField` |
| **`UUIDField([primary_key=True,] default=uuid.uuid4, editable=False)`** | **UUID field** |
Common back-end fields:
| Field type | Notes |
| --- | --- |
| `EmailField` | `CharField` subclass dedicated to **email** values |
| **`FileField`** | **file field** |
| **`ImageField`** | **image field**; FileField subclass that validates the content is a valid image |
#### 3.2. Field options
Options add constraints to the database column; a few common ones:
| Field option | Description |
| --- | --- |
| **`default`** | `default=<callable or value>` sets the field's **default value** |
| `primary_key` | `primary_key=True` marks the **primary key** (usually used with `AutoField`) |
| `unique` | `unique=True` adds a **unique constraint** |
| **`db_index`** | `db_index=True` creates an **index** |
| `db_column` | `db_column='xx'` sets the database column name (defaults to the attribute name) |
| `null` | whether the column may be `null` (nowadays columns are usually not null) |
Options related to Django's auto-generated admin (admin form validation):
| Admin form option | Description |
| --- | --- |
| **`blank`** | `blank=True` lets form validation accept an **empty value** for the field |
| **`verbose_name`** | `verbose_name='xxx'` sets the field's **display label** (underscores become spaces) |
| **`help_text`** | `help_text='xxx'` sets the field's **help text** (may contain HTML) |
| `editable` | `editable=False` makes the field **non-editable** (non-editable fields are hidden in the admin) |
| `validators` | `validators=xxx` sets field **validators** (<http://mrw.so/4LzsEq>) |
Notes:
1. Unless you need to override the default primary-key behavior, don't set `primary_key=True` on any field (Django creates an `AutoField` primary key automatically)
2. **`auto_now_add` and `auto_now` are mutually exclusive: only one of them may be set on a field**
3. When modifying a model class: if the added options don't affect the table structure, no new migration is needed
- `default` and the admin-form options (`blank`, `verbose_name`, `help_text`, etc.) don't affect the table structure
### ORM basics
Official docs: <https://docs.djangoproject.com/zh-hans/2.2/ref/models/querysets/>
`O` (objects): classes and objects; `R` (relation): relational databases; `M` (mapping)
> PS: table --> class, row of data --> object, column --> object attribute
Enter the shell: `python manage.py shell`
Create (including related objects)
Delete (logical delete, hard delete)
Update (updates across joins)
Read (counts, filtered queries, paginated queries)
The main kinds of relations between tables:
1. One-to-one: each object of one kind corresponds to exactly one of the other
- e.g. a student can only be in one class.
2. One-to-many: an object of one kind can own multiple instances of the other
- e.g. an album contains many songs.
3. Many-to-many: the two kinds are each "one-to-many" toward the other
- e.g. an album contains many songs, and a song can appear on many albums.
#### Executing raw SQL
Official docs: <https://docs.djangoproject.com/zh-hans/2.2/topics/db/sql/>
#### Extra: viewing the generated SQL
Further reading: <https://www.jianshu.com/p/b69a7321a115>
#### 3.3. Model managers
1. Changing the default queryset
- e.g. apps usually soft-delete, but the default `all()` also returns the soft-deleted rows
2. Adding extra methods
- e.g.:
---
### 3.2. V fundamentals
Returning JSON (for Ajax)
JsonResponse
#### 3.2.1.
#### URL routing
#### Template configuration
#### Redirects
Redirecting to a view function
Further reading: <https://www.cnblogs.com/attila/p/10420702.html>
#### Generating URLs dynamically
Generating URL addresses in templates, similar to .NET's `@Url.Action("Edit","Home",new {id=13})`
> <https://docs.djangoproject.com/zh-hans/2.2/intro/tutorial03/#removing-hardcoded-urls-in-templates>
#### 404 and 500 pages
Turn the debug output off
### Extra: HttpRequest
**Common `HttpRequest` attributes**:
1. **`path`**: full path of the requested page (string)
- without the domain or query parameters
2. **`method`**: HTTP method used by the request (string)
- e.g. `GET`, `POST`
3. `encoding`: encoding of the submitted data (string)
- if `None`, the browser's default is used (PS: usually UTF-8)
4. **`GET`**: all parameters of a GET request (QueryDict, dict-like)
5. **`POST`**: all parameters of a POST request (same type)
6. **`FILES`**: all uploaded files (MultiValueDict, dict-like)
7. **`COOKIES`**: all cookies as key-value pairs (dict)
8. **`session`**: session state (dict-like)
#### Getting URL parameters
Checkbox: "on" when checked, None when unchecked
request.POST.get("xxx")
#### Cookies
Cookies are stored per domain; without an explicit expiry they expire when the browser closes
```py
# set cookie
response = HttpResponse(...)  # any response object (e.g. JsonResponse)
# max_age: expire after this many seconds
# response.set_cookie(key, value, max_age=7*24*3600) # expires in 1 week
# expires: absolute expiry time; timedelta: a time interval
response.set_cookie(key, value, expires=datetime.datetime.now() + datetime.timedelta(days=7))
return response
# get cookie
value = request.COOKIES.get(key)
```
**PS: whatever type you store in a cookie, it comes back as a str**
Further reading: <https://blog.csdn.net/cuishizun/article/details/81537316>
#### Sessions
Django stores session data in the `django_session` table; the `session_data` value can be looked up by sessionid (`session_key`)

**PS: sessions depend on cookies because the sessionid (unique key) is stored on the client — without a sessionid, how would the server find the session?**
```py
# set
request.session["key"] = value
# get
request.session.get("key", default)
# delete a specific session key
del request.session["key"] # get(key) ?
# clear all session values
# session_data then keeps only the sessionid, with the payload becoming {}
request.session.clear()
# delete the whole session (the database row is removed)
request.session.flush()
# set the expiry (default is 2 weeks)
request.session.set_expiry(seconds_of_inactivity)
# PS: these are all methods on request
```
**PS: sessions give back exactly the type that was stored** (cookies always come back as str)
---
#### File uploads
---
### 3.3. T fundamentals
Custom 404 pages
## 4. The admin backend
Some simple customization was shown above (<a href="#2.1.2.生成后台">recap</a>); here's a quick roundup of `Django2.2` admin settings:
### 4.1. Changing the admin site titles
It looks roughly like this:

Set `admin.site.site_header` and `admin.site.site_title` in `admin.py`:

### 4.2. Changing an app's display name in the admin
Roughly:

First set the app's display name: **`verbose_name = 'xxx'`**

Make the configuration take effect: **`default_app_config = '<app_name>.apps.<AppName>Config'`**

### 4.3. Localizing the app's model entries
Roughly:

Define a **`Meta`** class in each model and set `verbose_name` and `verbose_name_plural`

### 4.4. Localizing form fields and hints
Roughly:

Localize the form's fields with **`verbose_name`** and show field hints with **`help_text`**

### 4.5.
List display
Status display + font colors
File upload
Text validation
Tag filtering
apt install sqliteman
```
ls -l| tail -10
#G4
from google.colab import drive
drive.mount('/content/gdrive')
cp gdrive/My\ Drive/fingerspelling5.tar.bz2 fingerspelling5.tar.bz2
# rm fingerspelling5.tar.bz2
# cd /media/datastorage/Phong/
!tar xjf fingerspelling5.tar.bz2
cd dataset5
mkdir surrey
mkdir surrey/E
mv dataset5/* surrey/E/
cd ..
#remove depth files
import glob
import os
import shutil
# get parts of image's path
def get_image_parts(image_path):
"""Given a full path to an image, return its parts."""
parts = image_path.split(os.path.sep)
#print(parts)
filename = parts[2]
filename_no_ext = filename.split('.')[0]
classname = parts[1]
train_or_test = parts[0]
return train_or_test, classname, filename_no_ext, filename
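# Added sketch: a self-contained sanity check of the helper above — it assumes
# relative paths with exactly three components, e.g. train/A/color_0002.png.
# (_demo_path is illustrative; the copied logic keeps the check runnable on its own.)
def _get_image_parts_demo(image_path):
    parts = image_path.split(os.path.sep)
    return parts[0], parts[1], parts[2].split('.')[0], parts[2]
_demo_path = os.path.sep.join(['train', 'A', 'color_0002.png'])
print(_get_image_parts_demo(_demo_path))  # ('train', 'A', 'color_0002', 'color_0002.png')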
#del_folders = ['A','B','C','D','E']
move_folders_1 = ['A','B','C','D']
move_folders_2 = ['E']
# look for all images in sub-folders
for folder in move_folders_1:
class_folders = glob.glob(os.path.join(folder, '*'))
for iid_class in class_folders:
#move depth files
class_files = glob.glob(os.path.join(iid_class, 'depth*.png'))
print('copying %d files' %(len(class_files)))
for idx in range(len(class_files)):
src = class_files[idx]
if "0001" not in src:
train_or_test, classname, _, filename = get_image_parts(src)
dst = os.path.join('train_depth', classname, train_or_test+'_'+ filename)
# image directory
img_directory = os.path.join('train_depth', classname)
# create folder if not existed
if not os.path.exists(img_directory):
os.makedirs(img_directory)
#copying
shutil.copy(src, dst)
else:
print('ignore: %s' % src)
#move color files
for iid_class in class_folders:
#move depth files
class_files = glob.glob(os.path.join(iid_class, 'color*.png'))
print('copying %d files' %(len(class_files)))
for idx in range(len(class_files)):
src = class_files[idx]
train_or_test, classname, _, filename = get_image_parts(src)
dst = os.path.join('train_color', classname, train_or_test+'_'+ filename)
# image directory
img_directory = os.path.join('train_color', classname)
# create folder if not existed
if not os.path.exists(img_directory):
os.makedirs(img_directory)
#copying
shutil.copy(src, dst)
# look for all images in sub-folders
for folder in move_folders_2:
class_folders = glob.glob(os.path.join(folder, '*'))
for iid_class in class_folders:
#move depth files
class_files = glob.glob(os.path.join(iid_class, 'depth*.png'))
print('copying %d files' %(len(class_files)))
for idx in range(len(class_files)):
src = class_files[idx]
if "0001" not in src:
train_or_test, classname, _, filename = get_image_parts(src)
dst = os.path.join('test_depth', classname, train_or_test+'_'+ filename)
# image directory
img_directory = os.path.join('test_depth', classname)
# create folder if not existed
if not os.path.exists(img_directory):
os.makedirs(img_directory)
#copying
shutil.copy(src, dst)
else:
print('ignore: %s' % src)
#move color files
for iid_class in class_folders:
#move depth files
class_files = glob.glob(os.path.join(iid_class, 'color*.png'))
print('copying %d files' %(len(class_files)))
for idx in range(len(class_files)):
src = class_files[idx]
train_or_test, classname, _, filename = get_image_parts(src)
dst = os.path.join('test_color', classname, train_or_test+'_'+ filename)
# image directory
img_directory = os.path.join('test_color', classname)
# create folder if not existed
if not os.path.exists(img_directory):
os.makedirs(img_directory)
#copying
shutil.copy(src, dst)
# #/content
%cd ..
ls -l
mkdir surrey/E/checkpoints
cd surrey/
#MUL 1 - Inception - ST
from keras.applications import MobileNet
# from keras.applications import InceptionV3
# from keras.applications import Xception
# from keras.applications.inception_resnet_v2 import InceptionResNetV2
from tensorflow.keras.applications import EfficientNetB0
from keras.models import Model
from keras.layers import concatenate
from keras.layers import Dense, GlobalAveragePooling2D, Input, Embedding, SimpleRNN, LSTM, Flatten, GRU, Reshape
# from keras.applications.inception_v3 import preprocess_input
# from tensorflow.keras.applications.efficientnet import preprocess_input
from keras.applications.mobilenet import preprocess_input
from keras.layers import GaussianNoise
def get_adv_model():
# f1_base = EfficientNetB0(include_top=False, weights='imagenet',
# input_shape=(299, 299, 3),
# pooling='avg')
# f1_x = f1_base.output
f1_base = MobileNet(weights='imagenet', include_top=False, input_shape=(224,224,3))
f1_x = f1_base.output
f1_x = GlobalAveragePooling2D()(f1_x)
# f1_x = f1_base.layers[-151].output #layer 5
# f1_x = GlobalAveragePooling2D()(f1_x)
# f1_x = Flatten()(f1_x)
# f1_x = Reshape([1,1280])(f1_x)
# f1_x = SimpleRNN(2048,
# return_sequences=False,
# # dropout=0.8
# input_shape=[1,1280])(f1_x)
#Regularization with noise
f1_x = GaussianNoise(0.1)(f1_x)
f1_x = Dense(1024, activation='relu')(f1_x)
f1_x = Dense(24, activation='softmax')(f1_x)
model_1 = Model(inputs=[f1_base.input],outputs=[f1_x])
model_1.summary()
return model_1
from keras.callbacks import Callback
import pickle
import sys
import warnings  # used by EarlyStoppingByAccVal below
#Stop training on val_acc
class EarlyStoppingByAccVal(Callback):
def __init__(self, monitor='val_acc', value=0.00001, verbose=0):
super(Callback, self).__init__()
self.monitor = monitor
self.value = value
self.verbose = verbose
def on_epoch_end(self, epoch, logs={}):
current = logs.get(self.monitor)
if current is None:
warnings.warn("Early stopping requires %s available!" % self.monitor, RuntimeWarning)
if current >= self.value:
if self.verbose > 0:
print("Epoch %05d: early stopping" % epoch)
self.model.stop_training = True
#Save large model using pickle formate instead of h5
class SaveCheckPoint(Callback):
def __init__(self, model, dest_folder):
super(Callback, self).__init__()
self.model = model
self.dest_folder = dest_folder
#initiate
self.best_val_acc = 0
self.best_val_loss = sys.maxsize #get max value
def on_epoch_end(self, epoch, logs={}):
val_acc = logs['val_acc']
val_loss = logs['val_loss']
if val_acc > self.best_val_acc:
self.best_val_acc = val_acc
# Save weights in pickle format instead of h5
print('\nSaving val_acc %f at %s' %(self.best_val_acc, self.dest_folder))
weigh= self.model.get_weights()
#now, use pickle to save your model weights, instead of .h5
#for heavy model architectures, .h5 file is unsupported.
fpkl= open(self.dest_folder, 'wb') #Python 3
pickle.dump(weigh, fpkl, protocol= pickle.HIGHEST_PROTOCOL)
fpkl.close()
# model.save('tmp.h5')
elif val_acc == self.best_val_acc:
if val_loss < self.best_val_loss:
self.best_val_loss=val_loss
# Save weights in pickle format instead of h5
print('\nSaving val_acc %f at %s' %(self.best_val_acc, self.dest_folder))
weigh= self.model.get_weights()
#now, use pickle to save your model weights, instead of .h5
#for heavy model architectures, .h5 file is unsupported.
fpkl= open(self.dest_folder, 'wb') #Python 3
pickle.dump(weigh, fpkl, protocol= pickle.HIGHEST_PROTOCOL)
fpkl.close()
# Training
import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import TensorBoard, ModelCheckpoint, EarlyStopping, CSVLogger, ReduceLROnPlateau
from keras.optimizers import Adam
import time, os
from math import ceil
train_datagen = ImageDataGenerator(
# rescale = 1./255,
rotation_range=30,
width_shift_range=0.3,
height_shift_range=0.3,
shear_range=0.3,
zoom_range=0.3,
# horizontal_flip=True,
# vertical_flip=True,##
# brightness_range=[0.5, 1.5],##
channel_shift_range=10,##
fill_mode='nearest',
# preprocessing_function=get_cutout_v2(),
preprocessing_function=preprocess_input,
)
test_datagen = ImageDataGenerator(
# rescale = 1./255
preprocessing_function=preprocess_input
)
NUM_GPU = 1
batch_size = 64
train_set = train_datagen.flow_from_directory('surrey/E/train_color/',
target_size = (224, 224),
batch_size = batch_size,
class_mode = 'categorical',
shuffle=True,
seed=7,
# subset="training"
)
valid_set = test_datagen.flow_from_directory('surrey/E/test_color/',
target_size = (224, 224),
batch_size = batch_size,
class_mode = 'categorical',
shuffle=False,
seed=7,
# subset="validation"
)
model_txt = 'st'
# Helper: Save the model.
savedfilename = os.path.join('surrey', 'E', 'checkpoints', 'Surrey_MobileNet_E_tmp.hdf5')
checkpointer = ModelCheckpoint(savedfilename,
monitor='val_accuracy', verbose=1,
save_best_only=True, mode='max',save_weights_only=True)########
# Helper: TensorBoard
tb = TensorBoard(log_dir=os.path.join('svhn_output', 'logs', model_txt))
# Helper: Save results.
timestamp = time.time()
csv_logger = CSVLogger(os.path.join('svhn_output', 'logs', model_txt + '-' + 'training-' + \
str(timestamp) + '.log'))
earlystopping = EarlyStoppingByAccVal(monitor='val_accuracy', value=0.9900, verbose=1)
epochs = 40##!!!
lr = 1e-3
decay = lr/epochs
optimizer = Adam(lr=lr, decay=decay)
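# Added sketch: with the legacy Keras optimizers, `decay` applies time-based
# decay per update, lr_t = lr0 / (1 + decay * t) where t counts batch updates;
# the decay = lr/epochs choice above simply ties the rate to the epoch budget.
def _decayed_lr(lr0, decay, t):
    return lr0 / (1.0 + decay * t)
print(_decayed_lr(1e-3, 1e-3 / 40, 0))  # the first update uses the full lr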
# train on multiple-gpus
# Create a MirroredStrategy.
strategy = tf.distribute.MirroredStrategy()
print("Number of GPUs: {}".format(strategy.num_replicas_in_sync))
# Open a strategy scope.
with strategy.scope():
# Everything that creates variables should be under the strategy scope.
# In general this is only model construction & `compile()`.
model_mul = get_adv_model()
model_mul.compile(optimizer=optimizer,loss='categorical_crossentropy',metrics=['accuracy'])
step_size_train=ceil(train_set.n/train_set.batch_size)
step_size_valid=ceil(valid_set.n/valid_set.batch_size)
# step_size_test=ceil(testing_set.n//testing_set.batch_size)
# result = model_mul.fit_generator(
# generator = train_set,
# steps_per_epoch = step_size_train,
# validation_data = valid_set,
# validation_steps = step_size_valid,
# shuffle=True,
# epochs=epochs,
# callbacks=[checkpointer],
# # callbacks=[csv_logger, checkpointer, earlystopping],
# # callbacks=[tb, csv_logger, checkpointer, earlystopping],
# verbose=1)
# Training
import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import TensorBoard, ModelCheckpoint, EarlyStopping, CSVLogger, ReduceLROnPlateau
from keras.optimizers import Adam
import time, os
from math import ceil
train_datagen = ImageDataGenerator(
# rescale = 1./255,
rotation_range=30,
width_shift_range=0.3,
height_shift_range=0.3,
shear_range=0.3,
zoom_range=0.3,
# horizontal_flip=True,
# vertical_flip=True,##
# brightness_range=[0.5, 1.5],##
channel_shift_range=10,##
fill_mode='nearest',
# preprocessing_function=get_cutout_v2(),
preprocessing_function=preprocess_input,
)
test_datagen = ImageDataGenerator(
# rescale = 1./255
preprocessing_function=preprocess_input
)
NUM_GPU = 1
batch_size = 64
train_set = train_datagen.flow_from_directory('surrey/E/train_color/',
target_size = (224, 224),
batch_size = batch_size,
class_mode = 'categorical',
shuffle=True,
seed=7,
# subset="training"
)
valid_set = test_datagen.flow_from_directory('surrey/E/test_color/',
target_size = (224, 224),
batch_size = batch_size,
class_mode = 'categorical',
shuffle=False,
seed=7,
# subset="validation"
)
model_txt = 'st'
# Helper: Save the model.
savedfilename = os.path.join('gdrive', 'My Drive', 'Surrey_ASL', '4_Surrey_MobileNet_E.hdf5')
checkpointer = ModelCheckpoint(savedfilename,
monitor='val_accuracy', verbose=1,
save_best_only=True, mode='max',save_weights_only=True)########
# Helper: TensorBoard
tb = TensorBoard(log_dir=os.path.join('svhn_output', 'logs', model_txt))
# Helper: Save results.
timestamp = time.time()
csv_logger = CSVLogger(os.path.join('svhn_output', 'logs', model_txt + '-' + 'training-' + \
str(timestamp) + '.log'))
earlystopping = EarlyStoppingByAccVal(monitor='val_accuracy', value=0.9900, verbose=1)
epochs = 40##!!!
lr = 1e-3
decay = lr/epochs
optimizer = Adam(lr=lr, decay=decay)
# train on multiple-gpus
# Create a MirroredStrategy.
strategy = tf.distribute.MirroredStrategy()
print("Number of GPUs: {}".format(strategy.num_replicas_in_sync))
# Open a strategy scope.
with strategy.scope():
# Everything that creates variables should be under the strategy scope.
# In general this is only model construction & `compile()`.
model_mul = get_adv_model()
model_mul.compile(optimizer=optimizer,loss='categorical_crossentropy',metrics=['accuracy'])
step_size_train=ceil(train_set.n/train_set.batch_size)
step_size_valid=ceil(valid_set.n/valid_set.batch_size)
# step_size_test=ceil(testing_set.n//testing_set.batch_size)
result = model_mul.fit_generator(
generator = train_set,
steps_per_epoch = step_size_train,
validation_data = valid_set,
validation_steps = step_size_valid,
shuffle=True,
epochs=epochs,
callbacks=[checkpointer],
# callbacks=[csv_logger, checkpointer, earlystopping],
# callbacks=[tb, csv_logger, checkpointer, earlystopping],
verbose=1)
print(savedfilename)
!ls -l
# Open a strategy scope.
with strategy.scope():
model_mul.load_weights(os.path.join('gdrive', 'My Drive', 'Surrey_ASL', '4_Surrey_MobileNet_E.hdf5'))
# Helper: Save the model.
savedfilename = os.path.join('gdrive', 'My Drive', 'Surrey_ASL', '4_Surrey_MobileNet_E_L2.hdf5')
checkpointer = ModelCheckpoint(savedfilename,
monitor='val_accuracy', verbose=1,
save_best_only=True, mode='max',save_weights_only=True)########
epochs = 15##!!!
lr = 1e-4
decay = lr/epochs
optimizer = Adam(lr=lr, decay=decay)
# Open a strategy scope.
with strategy.scope():
model_mul.compile(optimizer=optimizer,loss='categorical_crossentropy',metrics=['accuracy'])
result = model_mul.fit_generator(
generator = train_set,
steps_per_epoch = step_size_train,
validation_data = valid_set,
validation_steps = step_size_valid,
shuffle=True,
epochs=epochs,
callbacks=[checkpointer],
# callbacks=[csv_logger, checkpointer, earlystopping],
# callbacks=[tb, csv_logger, checkpointer, earlystopping],
verbose=1)
# Open a strategy scope.
with strategy.scope():
model_mul.load_weights(os.path.join('gdrive', 'My Drive', 'Surrey_ASL', '4_Surrey_MobileNet_E_L2.hdf5'))
# Helper: Save the model.
savedfilename = os.path.join('gdrive', 'My Drive', 'Surrey_ASL', '4_Surrey_MobileNet_E_L3.hdf5')
checkpointer = ModelCheckpoint(savedfilename,
monitor='val_accuracy', verbose=1,
save_best_only=True, mode='max',save_weights_only=True)########
epochs = 15##!!!
lr = 1e-5
decay = lr/epochs
optimizer = Adam(lr=lr, decay=decay)
# Open a strategy scope.
with strategy.scope():
model_mul.compile(optimizer=optimizer,loss='categorical_crossentropy',metrics=['accuracy'])
result = model_mul.fit_generator(
generator = train_set,
steps_per_epoch = step_size_train,
validation_data = valid_set,
validation_steps = step_size_valid,
shuffle=True,
epochs=epochs,
callbacks=[checkpointer],
# callbacks=[csv_logger, checkpointer, earlystopping],
# callbacks=[tb, csv_logger, checkpointer, earlystopping],
verbose=1)
from sklearn.metrics import classification_report, confusion_matrix
import numpy as np
# use the same preprocessing as training (preprocess_input, not 1/255 rescaling)
test_datagen = ImageDataGenerator(
    preprocessing_function=preprocess_input)
testing_set = test_datagen.flow_from_directory('dataset5/test_color/',
target_size = (224, 224),
batch_size = 32,
class_mode = 'categorical',
seed=7,
shuffle=False
# subset="validation"
)
y_pred = model_mul.predict_generator(testing_set, steps=ceil(testing_set.n / testing_set.batch_size))
y_pred = np.argmax(y_pred, axis=1)
y_true = testing_set.classes
print(confusion_matrix(y_true, y_pred))
# print(model.evaluate_generator(testing_set,
# steps = testing_set.n//testing_set.batch_size))
```
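The evaluation cell above imports `classification_report` but only prints the confusion matrix. A minimal sketch of the per-class report, shown with stand-in label arrays since the trained model is not available here (with the real model you would use `y_true = testing_set.classes` and the argmax'd predictions):

```python
# classification_report gives per-class precision/recall/F1 alongside
# the confusion matrix printed above.
import numpy as np
from sklearn.metrics import classification_report

# stand-in labels for illustration only
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
print(classification_report(y_true, y_pred, digits=3))
```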
# Mean Reversion on Futures
by Rob Reider and Maxwell Margenot
Part of the Quantopian Lecture Series:
* [www.quantopian.com/lectures](https://www.quantopian.com/lectures)
* [github.com/quantopian/research_public](https://github.com/quantopian/research_public)
Notebook released under the Creative Commons Attribution 4.0 License.
Introducing futures as an asset opens up trading opportunities that were previously unavailable. In this lecture we will look at strategies involving one future against another. We will also look at strategies involving trades of futures and stocks at the same time. Other strategies, like trading calendar spreads (futures on the same commodity but with different delivery months) will be topics for future lectures.
```
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint, adfuller
import matplotlib.pyplot as plt
from quantopian.research.experimental import continuous_future, history
```
## Pairs of Futures (or Spreads)
Before we look at trading pairs of futures contracts, let's quickly review pairs trading in general and cointegration. For full lectures on these topics individually, see the [lecture on stationarity and cointegration](https://www.quantopian.com/lectures/integration-cointegration-and-stationarity) and the [lecture on pairs trading](https://www.quantopian.com/lectures/introduction-to-pairs-trading).
When markets are efficient, the prices of assets are often modeled as [random walks](https://en.wikipedia.org/wiki/Random_walk_hypothesis):
$$P_t=\mu+P_{t-1}+\epsilon_t$$
We can then take the difference of the prices to get white noise, stationary time series:
$$r_t=P_t-P_{t-1}=\mu+\epsilon_t$$
Of course, if prices follow a random walk, they are completely unforecastable, so the goal is to find returns that are correlated with something in the past. For example, if we can find an asset whose price is a little mean-reverting (and therefore its returns are negatively autocorrelated), we can use that to forecast future returns.
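As a quick synthetic illustration (not part of the lecture's data feed): a price series that mean-reverts around a fixed level produces negatively autocorrelated returns.

```python
import numpy as np

np.random.seed(0)
n = 2000
prices = np.empty(n)
prices[0] = 100.0
# AR(1) price around a mean of 100: deviations decay, so the series mean-reverts
for t in range(1, n):
    prices[t] = 100 + 0.8 * (prices[t - 1] - 100) + np.random.normal(0, 1)

returns = np.diff(prices)
# for a mean-reverting price, the lag-1 autocorrelation of returns is negative
autocorr = np.corrcoef(returns[:-1], returns[1:])[0, 1]
print('lag-1 return autocorrelation:', autocorr)
```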
## Cointegration and Mean Reversion
The idea behind cointegration is that even if the prices of two different assets both follow random walks, it is still possible that a linear combination of them is not a random walk. A common analogy is of a dog owner walking his dog with a retractable leash. If you look at the position of the dog owner, it may follow a random walk, and if you look at the dog separately, it also may follow a random walk, but the distance between them, the difference of their positions, may very well be mean reverting. If the dog is behind the owner, he may run to catch up and if the dog is ahead, the length of the leash may prevent him from getting too far away.
The dog and its owner are linked together and their distance is a mean reverting process. In cointegration, we look for assets that are economically linked, so that if $P_t$ and $Q_t$ are both random walks, the linear combination, $P_t - b Q_t$, may not itself be a random walk and may be forecastable.
## Finding a Feasible Spread
For stocks, a natural starting point for identifying cointegrated pairs is looking at stocks in the same industry. However, competitors are not necessarily economic substitutes. Think of Apple and Blackberry. It's not always the case that when one of those company's stock price jumps up, the other catches up. The economic link is fairly tenuous. Here it is more like the dog broke the leash and ran away from the owner.
However, with pairs of futures, there may be economic forces that link the two prices. Consider heating oil and natural gas. Some power plants have the ability to use either one, depending on which has become cheaper. So when heating oil has dipped below natural gas, increased demand for heating oil will push it back up. Platinum and Palladium are substitutes for some types of catalytic converters used for emission control. Corn and wheat are substitutes for animal feed. Corn and sugar are substitutes as sweeteners. There are many potential links to examine and test.
Let's go through a specific example of futures prices that might be cointegrated.
## Soybean Crush
The difference in price between soybeans and their refined products is referred to as the "crush spread". It represents the processing margin from "crushing" a soybean into its refined products. Note that we scale up the futures price of soybean oil so that it is the same magnitude as the price for soybean meal. It certainly seems from the plots that the prices of the refined products move together.
```
soy_meal_mult = symbols('SMF17').multiplier
soy_oil_mult = symbols('BOF17').multiplier
soybean_mult = symbols('SYF17').multiplier
sm_future = continuous_future('SM', offset=0, roll='calendar', adjustment='mul')
sm_price = history(sm_future, fields='price', start_date='2014-01-01', end_date='2017-01-01')
bo_future = continuous_future('BO', offset=0, roll='calendar', adjustment='mul')
bo_price = history(bo_future, fields='price', start_date='2014-01-01', end_date='2017-01-01')
sm_price.plot()
bo_price.multiply(soy_oil_mult//soy_meal_mult).plot()
plt.ylabel('Price')
plt.legend(['Soybean Meal', 'Soybean Oil']);
```
However, from looking at the p-value for our test, we conclude that soybean meal and soybean oil are not cointegrated.
```
print('p-value:', coint(sm_price, bo_price)[1])
```
We still have this compelling economic link, though. Both soybean oil and soybean meal have a root product in soybeans themselves. Let's see if we can suss out any signal by creating a spread between soybean prices and the refined products together, by implementing the [crush spread](https://en.wikipedia.org/wiki/Crush_spread).
```
sm_future = continuous_future('SM', offset=1, roll='calendar', adjustment='mul')
sm_price = history(sm_future, fields='price', start_date='2014-01-01', end_date='2017-01-01')
bo_future = continuous_future('BO', offset=1, roll='calendar', adjustment='mul')
bo_price = history(bo_future, fields='price', start_date='2014-01-01', end_date='2017-01-01')
sy_future = continuous_future('SY', offset=0, roll='calendar', adjustment='mul')
sy_price = history(sy_future, fields='price', start_date='2014-01-01', end_date='2017-01-01')
crush = sy_price - (sm_price + bo_price)
crush.plot()
plt.ylabel('Crush Spread');
```
In the above plot, we offset the refined products by one month to roughly match the time it takes to crush the soybeans and we set `roll='calendar'` so that all three contracts are rolled at the same time.
To test whether this spread is stationary, we will use the augmented Dickey-Fuller test.
```
print('p-value for stationarity:', adfuller(crush)[1])
```
The test confirms that the spread is stationary. And it makes sense, economically, that the crush spread may exhibit some mean reversion due to simple supply and demand.
Note that there is usually a little more finesse required to obtain a mean reverting spread. We usually find a linear combination that would make the spread between the assets stationary after discovering cointegration. For more details on this, see the [lecture on cointegration](https://www.quantopian.com/lectures/integration-cointegration-and-stationarity). We skipped these steps to test this known spread off the bat.
Here are a few other examples of economically-linked futures:
* **3:2:1 Crack Spread**: Buy three crude oil, sell two gasoline, sell one heating oil (this represents the profitability of oil refining)
* **8:4:3 Cattle Crush**: Buy eight October live cattle, sell four May feeder cattle, sell three July corn (this represents the profitability of fattening feeder cattle, where the three corn contracts are enough to feed the young feeder cattle)
For widely-followed spreads like the crush spread or the crack spread, it would be surprising if the depth of mean reversion became so large that you could easily profit from it. If we consider futures that are linked to stocks, however, the number of potential pairs grows.
## Futures and Stocks
There are many examples of potential relationships between futures and stocks. We already discussed one of them - the relationship between the crush spread and the price of soybean processors. Here are several more, though this is not meant to be a complete list:
* Crude oil futures and oil stocks
* Gold futures and gold mining stocks
* Crude oil futures and airline stocks
* Currency futures and exporters
* Interest rate futures and utilities
* Interest rate futures and Real Estate Investment Trusts (REITs)
* Corn futures and agricultural processing companies (e.g., ADM)
Consider the relationship between ten-year interest rate futures and the price of EQR, a large REIT. Interest rates heavily influence the value of real estate, so there is a strong economic connection between the value of interest rate futures and the value of REITs.
```
ty_future = continuous_future('TY', offset=0, roll='calendar', adjustment='mul')
ty_prices = history(ty_future, fields='price', start_date='2009-01-01', end_date='2017-01-01')
ty_prices.name = ty_future.root_symbol
equities = symbols(['EQR', 'SPY'])
equity_prices = get_pricing(equities, fields='price', start_date='2009-01-01', end_date='2017-01-01')
equity_prices.columns = [x.symbol for x in equity_prices.columns]
data = pd.concat([ty_prices, equity_prices], axis=1)
data = data.dropna()
data.plot()
plt.legend();
```
If we apply a hypothesis test to the two price series we find that they are indeed cointegrated, corroborating our economic hypothesis.
```
print('Cointegration test p-value:', coint(data['TY'], data['EQR'])[1])
```
The next step would be to test if this signal is viable once we include market impact by trading EQR against the futures contract as a pair in a backtest.
Trading strategies based on cointegrated pairs form buy and sell signals based on the *relative prices* of the pair. We can also form trading signals based on *changes in prices*, or returns. Of course we would expect that changes in futures prices to be contemporaneously correlated with stock prices, which is not forecastable. If crude oil prices rise today, oil company stocks are likely to rise today also. But perhaps there are lead/lag effects also between changes in futures and stocks returns. We know there is evidence that the market can systematically underreact or overreact to other news releases, leading to trending and mean reversion. Let's look at a few examples with futures.
The first example looks at crude oil futures and oil company stocks.
```
cl_future = continuous_future('CL', offset=0, roll='calendar', adjustment='mul')
cl_prices = history(cl_future, fields='price', start_date='2007-01-01', end_date='2017-04-06')
cl_prices.name = cl_future.root_symbol
equities = symbols(['XOM', 'SPY'])
equity_prices = get_pricing(equities, fields='price', start_date='2007-01-01', end_date='2017-04-06')
equity_prices.columns = [x.symbol for x in equity_prices.columns]
data = pd.concat([cl_prices, equity_prices],axis=1)
data = data.dropna()
#Take log of prices
data['stock_ret'] = np.log(data['XOM']).diff()
data['spy_ret'] = np.log(data['SPY']).diff()
data['futures_ret'] = np.log(data['CL']).diff()
# Compute excess returns in excess of SPY
data['stock_excess'] = data['stock_ret'] - data['spy_ret']
#Compute lagged futures returns
data['futures_lag_diff'] = data['futures_ret'].shift(1)
data = data[2:].dropna()
data.tail(5)
```
We have a high positive contemporaneous correlation, but a slightly negative lagged correlation.
```
#Compute contemporaneous correlation
contemp_corr = data['stock_excess'].shift(1).corr(data['futures_lag_diff'])
#Compute correlation of excess stock returns with lagged futures returns
lagged_corr = data['stock_excess'].corr(data['futures_lag_diff'])
print('Contemporaneous correlation:', contemp_corr)
print('Lagged correlation:         ', lagged_corr)
```
And when we form a linear regression of the excess returns of XOM on the lagged futures returns, the coefficient is significant and negative. This and the above correlations indicate that there might be a slight overreaction to the shift in oil prices.
```
result = sm.OLS(data['stock_excess'], sm.add_constant(data['futures_lag_diff'])).fit()
result.summary2()
```
A coefficient of around $-0.02$ on the lagged futures return implies that if the oil price increased by 1% yesterday, the pure-play refiner is expected to go down by $2$ bp today. This would require more testing to formulate a functioning model, but it indicates that there might be some signal in drawing out the underreaction or overreaction of equity prices to changes in futures prices.
This conjecture could be total data mining, but perhaps when the connection between the futures and stock is exceedingly obvious, like oil stocks and oil exploration companies or gold stocks and gold miners, the market overreacts to fundamental information, but when the relationship is more subtle, the market underreacts.
Also, there may be other lead/lag effects over longer time scales than one-day, but as always, this could also lead to more data mining.
```
data['futures_lag_diff'].plot(alpha=0.50, legend=True)
data['stock_excess'].plot(alpha=0.50, legend=True);
```
*This presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. ("Quantopian"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances.*
# Dijkstra's Algorithm
In this exercise, you'll implement Dijkstra's algorithm. First, let's build the graph.
## Graph Representation
In order to run Dijkstra's Algorithm, we'll need to add distance to each edge. We'll use the `GraphEdge` class below to represent each edge between two nodes.
```
class GraphEdge(object):
def __init__(self, node, distance):
self.node = node
self.distance = distance
```
The new graph representation should look like this:
```
class GraphNode(object):
def __init__(self, val):
self.value = val
self.edges = []
def add_child(self, node, distance):
self.edges.append(GraphEdge(node, distance))
    def remove_child(self, del_node):
        # edges hold GraphEdge objects, so match on the referenced node
        self.edges = [edge for edge in self.edges if edge.node is not del_node]
class Graph(object):
def __init__(self, node_list):
self.nodes = node_list
def add_edge(self, node1, node2, distance):
if node1 in self.nodes and node2 in self.nodes:
node1.add_child(node2, distance)
node2.add_child(node1, distance)
def remove_edge(self, node1, node2):
if node1 in self.nodes and node2 in self.nodes:
node1.remove_child(node2)
node2.remove_child(node1)
```
Now let's create the graph.
```
node_u = GraphNode('U')
node_d = GraphNode('D')
node_a = GraphNode('A')
node_c = GraphNode('C')
node_i = GraphNode('I')
node_t = GraphNode('T')
node_y = GraphNode('Y')
graph = Graph([node_u, node_d, node_a, node_c, node_i, node_t, node_y])
graph.add_edge(node_u, node_a, 4)
graph.add_edge(node_u, node_c, 6)
graph.add_edge(node_u, node_d, 3)
graph.add_edge(node_d, node_u, 3)
graph.add_edge(node_d, node_c, 4)
graph.add_edge(node_a, node_u, 4)
graph.add_edge(node_a, node_i, 7)
graph.add_edge(node_c, node_d, 4)
graph.add_edge(node_c, node_u, 6)
graph.add_edge(node_c, node_i, 4)
graph.add_edge(node_c, node_t, 5)
graph.add_edge(node_i, node_a, 7)
graph.add_edge(node_i, node_c, 4)
graph.add_edge(node_i, node_y, 4)
graph.add_edge(node_t, node_c, 5)
graph.add_edge(node_t, node_y, 5)
graph.add_edge(node_y, node_i, 4)
graph.add_edge(node_y, node_t, 5)
```
## Implementation
Using what you've learned, implement Dijkstra's Algorithm to find the shortest distance from the "U" node to the "Y" node.
```
import math
def dijkstra(start_node, end_node):
pass
print('Shortest Distance from {} to {} is {}'.format(node_u.value, node_y.value, dijkstra(node_u, node_y)))
```
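The `dijkstra` function above is left as `pass` for the reader. One possible solution is sketched below — a simple O(V²) version that repeatedly picks the unvisited node with the smallest tentative distance. It is restated self-contained, with minimal copies of the classes above, so it runs on its own; it is not necessarily the notebook's official solution.

```python
import math

class GraphEdge(object):
    def __init__(self, node, distance):
        self.node = node
        self.distance = distance

class GraphNode(object):
    def __init__(self, val):
        self.value = val
        self.edges = []
    def add_child(self, node, distance):
        self.edges.append(GraphEdge(node, distance))

def dijkstra(start_node, end_node):
    # tentative shortest distance from start_node to every discovered node
    distances = {start_node: 0}
    visited = set()
    while True:
        # pick the unvisited node with the smallest tentative distance
        current = min((n for n in distances if n not in visited),
                      key=lambda n: distances[n], default=None)
        if current is None:
            return math.inf          # end_node is unreachable
        if current is end_node:
            return distances[current]
        visited.add(current)
        # relax every outgoing edge of the current node
        for edge in current.edges:
            new_dist = distances[current] + edge.distance
            if new_dist < distances.get(edge.node, math.inf):
                distances[edge.node] = new_dist

# rebuild the example graph from above
node_u, node_d, node_a = GraphNode('U'), GraphNode('D'), GraphNode('A')
node_c, node_i, node_t, node_y = (GraphNode(v) for v in 'CITY')
for a, b, d in [(node_u, node_a, 4), (node_u, node_c, 6), (node_u, node_d, 3),
                (node_d, node_c, 4), (node_a, node_i, 7), (node_c, node_i, 4),
                (node_c, node_t, 5), (node_i, node_y, 4), (node_t, node_y, 5)]:
    a.add_child(b, d)
    b.add_child(a, d)

print('Shortest Distance from {} to {} is {}'.format(
    node_u.value, node_y.value, dijkstra(node_u, node_y)))  # 14, via U→C→I→Y
```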
# Pipelining In Machine Learning
```
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
# save filepath to variable for easier access
melbourne_file_path = '../hitchhikersGuideToMachineLearning/home-data-for-ml-course/train.csv'
# read the data and store data in DataFrame titled melbourne_data
train_data = pd.read_csv(melbourne_file_path)
# print a summary of the data in Melbourne data
train_data.head()
```
Instead of directly attacking the dataset, I will first cover the capabilities of the pipelining tools from the scikit-learn package!
- Pipelines and composite estimators
- Pipeline: chaining estimators
- Transforming target in regression
- FeatureUnion: composite feature spaces
- ColumnTransformer for heterogeneous data
Especially FeatureUnion and ColumnTransformer are important.
I will also demonstrate how to incorporate custom techniques and tricks into pipelines.
We will use a few of these tricks on our dataset!
Data cleaning and preprocessing are crucial steps in any machine learning project.
Whenever new data points are added to the existing data, we need to perform the same preprocessing steps again before we can use the machine learning model to make predictions. This becomes a tedious and time-consuming process!
An alternate to this is creating a machine learning pipeline that remembers the complete set of preprocessing steps in the exact same order. So that whenever any new data point is introduced, the machine learning pipeline performs the steps as defined and uses the machine learning model to predict the target variable.
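The point above can be sketched in a few lines: once fitted, a pipeline replays the exact same fitted preprocessing on any new data point before predicting (a minimal sketch with toy data, assuming scikit-learn is installed):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# toy training data: two well-separated groups
X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])
y = np.array([0, 0, 0, 1, 1, 1])

pipe = Pipeline([('scale', StandardScaler()),
                 ('clf', LogisticRegression())])
pipe.fit(X, y)

# a new data point: the scaler fitted above is reapplied automatically,
# using the mean/std learned from the training data
print(pipe.predict(np.array([[11.5]])))  # [1]
```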
Setting up a machine learning algorithm involves more than the algorithm itself. You need to preprocess the data in order for it to fit the algorithm. It's this preprocessing pipeline that often requires a lot of work. Building a flexible pipeline is key. Here's how you can build it in Python.
#### What is a pipeline?
A pipeline in sklearn is a set of chained algorithms to extract features, preprocess them, and then train or use a machine learning algorithm.
```
from sklearn.datasets import load_iris
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
# Load and split the data
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.2, random_state=42)
X_train.shape
```
#### Construction
The Pipeline is built using a list of (key, value) pairs, where the key is a string containing the name you want to give this step and value is an estimator object:
```
estimators = [('minmax', MinMaxScaler()),('lr', LogisticRegression(C=1))]
pipe = Pipeline(estimators)
pipe.fit(X_train, y_train)
score = pipe.score(X_test, y_test)
print('Logistic Regression pipeline test accuracy: %.3f' % score)
```
The utility function make_pipeline is a shorthand for constructing pipelines; it takes a variable number of estimators and returns a pipeline, filling in the names automatically:
```
from sklearn.pipeline import make_pipeline
pipe2= make_pipeline(MinMaxScaler(), LogisticRegression(C=10))
pipe2
```
###### Accessing steps
The estimators of a pipeline are stored as a list in the steps attribute, but can be accessed by index or name by indexing (with [idx]) the Pipeline:
```
pipe.steps[0]
pipe['minmax']
```
Pipeline’s named_steps attribute allows accessing steps by name with tab completion in interactive environments:
```
pipe.named_steps.minmax
```
A sub-pipeline can also be extracted using the slicing notation commonly used for Python Sequences such as lists or strings (although only a step of 1 is permitted). This is convenient for performing only some of the transformations (or their inverse):
```
pipe2[:1]
```
###### Nested parameters
Parameters of the estimators in the pipeline can be accessed using the <estimator>__<parameter> syntax:
```
pipe.set_params(lr__C=2)
```
This is particularly important for doing grid searches!
Individual steps may also be replaced as parameters, and non-final steps may be ignored by setting them to 'passthrough'
```
from sklearn.model_selection import GridSearchCV
import warnings
warnings.filterwarnings('ignore')
param_grid = dict(minmax=['passthrough'],lr__C=[1, 2, 3])
grid_search = GridSearchCV(pipe, param_grid=param_grid)
gd=grid_search.fit(X_train, y_train)
gd.best_estimator_
score = gd.best_estimator_.score(X_test, y_test)
score
```
Let's also explore a regression dataset so that I can drill a few more useful points into your skull!
```
from sklearn.datasets import load_boston
from sklearn.compose import TransformedTargetRegressor
from sklearn.preprocessing import QuantileTransformer
from sklearn.linear_model import LinearRegression
X, y = load_boston(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
estimators = [('minmax', MinMaxScaler()),('lr', LinearRegression())]
raw_Y_regr = Pipeline(estimators)
raw_Y_regr.fit(X_train, y_train)
print('R2 score: {0:.2f}'.format(raw_Y_regr.score(X_test, y_test)))
```
Okay, I can transform the input features and do all kinds of preprocessing, but what if I want to transform Y? In many regression tasks you may have to transform Y while feeding the input to the model, for example taking the log of Y.
But when predicting we will have to scale it back to the original unit; basically, we will have to apply the inverse of the transformation to the predicted Y.
All this can be done in a Pipeline very easily!
```
transformer = QuantileTransformer(output_distribution='normal')
regressor = LinearRegression()
regr = TransformedTargetRegressor(regressor=regressor,transformer=transformer)
regr.fit(X_train, y_train)
print('R2 score: {0:.2f}'.format(regr.score(X_test, y_test)))
```
For simple transformations, instead of a Transformer object, a pair of functions can be passed, defining the transformation and its inverse mapping. It means you can make your custom transformers!
```
def func(x):
return np.log(x)
def inverse_func(x):
return np.exp(x)
regr = TransformedTargetRegressor(regressor=regressor,func=func,inverse_func=inverse_func)
regr.fit(X_train, y_train)
print('R2 score: {0:.2f}'.format(regr.score(X_test, y_test)))
```
###### FeatureUnion
This estimator applies a list of transformer objects in parallel to the input data, then concatenates the results. This is useful to combine several feature extraction mechanisms into a single transformer.
```
from sklearn.pipeline import FeatureUnion
from sklearn.decomposition import PCA
from sklearn.decomposition import KernelPCA
from sklearn.feature_selection import SelectKBest
estimators = [('linear_pca', PCA()), ('select_k_best', SelectKBest(k=10))]
combined_features = FeatureUnion(estimators)
```
In the grid search below we will try 4 or 6 components from PCA and 5 or 8 features from SelectKBest.
```
X.shape[1]
# Use combined features to transform dataset:
X_features = combined_features.fit(X, y).transform(X)
print("Combined space has", X_features.shape[1], "features")
pipeline = Pipeline([("features", combined_features), ("lr", LinearRegression())])
param_grid = dict(features__linear_pca__n_components=[4, 6],
features__select_k_best__k=[5, 8])
grid_search = GridSearchCV(pipeline, param_grid=param_grid, verbose=10)
grid_search.fit(X, y)
print(grid_search.best_estimator_)
```
###### ColumnTransformer for heterogeneous data
Many datasets contain features of different types, say text, floats, and dates, where each type of feature requires separate preprocessing or feature extraction steps.
```
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import LabelEncoder
from sklearn.impute import SimpleImputer
```
Let's use a subset of our data to illustrate a few more points.
```
X=train_data.iloc[:,0:10].drop(['Alley'],axis=1)
X
X.columns
```
For this data, we might want to encode the 'Street' column as a categorical variable using preprocessing.
As we might use multiple feature extraction methods on the same column, we give each transformer a unique name, say 'street_category'.
By default, the remaining columns are ignored (remainder='drop').
We can keep the remaining columns by setting remainder='passthrough'; the remainder parameter can also be set to an estimator to transform the remaining columns.
```
column_trans = ColumnTransformer(
[('street_category', OneHotEncoder(dtype='int'),['Street'])],
remainder='drop')
column_trans.fit(X)
column_trans.get_feature_names()
```
The make_column_selector is used to select columns based on data type or column name. Let's use OneHotEncoder on the categorical data.
```
from sklearn.preprocessing import StandardScaler
from sklearn.compose import make_column_selector
ct = ColumnTransformer([
('scale', StandardScaler(),
make_column_selector(dtype_include=np.number)),
('ohe',OneHotEncoder(),
make_column_selector(pattern='Street', dtype_include=object))])
ct.fit_transform(X)
```
If you use LabelEncoder() in place of OneHotEncoder() you will run into this error:
`fit_transform() takes 2 positional arguments but 3 were given`
Read here for a workaround:
https://stackoverflow.com/questions/46162855/fit-transform-takes-2-positional-arguments-but-3-were-given-with-labelbinarize
Didn't I tell you that we can use custom functionality? Let's change how the label encoder is implemented so that we can use it in our pipeline!
```
from sklearn.base import BaseEstimator, TransformerMixin
```
If you want to add some custom functionality to your pipeline, it will basically be one of two things: either a transformation which is not present in sklearn (or present, but not in a suitable format), or an estimator!
You should have observed by now that we have to make an object of every transformer or estimator before calling functions on it. For example:
>lr = LinearRegression()
>lr.fit(data)
Pipelines also work in the same way, so we will need to implement our functionality as classes! But you don't have to do everything from scratch; scikit-learn has got you covered.
Writing custom functionality in sklearn depends upon inheritance from two classes:
- class sklearn.base.TransformerMixin:
Mixin class for all transformers in scikit-learn. This is the base class for writing all kinds of transformations you want! All custom transformer classes will derive it as a parent class.
- class sklearn.base.BaseEstimator:
Base class for all estimators in scikit-learn; inherit it for writing estimators!
So this is your custom functionality
```
class MultiColumnLabelEncoder(BaseEstimator, TransformerMixin):
    def __init__(self, columns=None):
        self.columns = columns  # list of columns to encode
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        '''
        Transforms columns of X specified in self.columns using
        LabelEncoder(). If no columns specified, transforms all
        columns in X.
        '''
        output = X.copy()
        if self.columns is not None:
            for col in self.columns:
                output[col] = LabelEncoder().fit_transform(output[col])
        else:
            for colname, col in output.items():  # .iteritems() was removed in pandas 2.0
                output[colname] = LabelEncoder().fit_transform(col)
        return output
    def fit_transform(self, X, y=None):
        return self.fit(X, y).transform(X)
sk_pipe = Pipeline([("missing", SimpleImputer(strategy='most_frequent')),
                    ("MLCLE", MultiColumnLabelEncoder()),
                    ("lr", LinearRegression())])
sk_pipe.fit(X, y)
```
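A side note on the class above: because `MultiColumnLabelEncoder` inherits from `TransformerMixin`, defining `fit_transform` explicitly is optional — the mixin derives it from `fit` and `transform`. A minimal sketch of a custom transformer (a hypothetical `ColumnDropper`, not part of this notebook) shows this:

```python
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin

class ColumnDropper(BaseEstimator, TransformerMixin):
    """Toy transformer: drops the given columns from a DataFrame."""
    def __init__(self, columns=None):
        self.columns = columns
    def fit(self, X, y=None):
        return self  # nothing to learn
    def transform(self, X):
        return X.drop(columns=self.columns or [])
    # no fit_transform needed: TransformerMixin composes fit + transform

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
out = ColumnDropper(columns=['b']).fit_transform(df)  # inherited fit_transform
```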
One can also exploit `FeatureUnion` to run several transformers in parallel and concatenate their outputs.
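As an illustrative sketch (not from this notebook), `FeatureUnion` applies each transformer to the same input and stacks the results side by side:

```python
import numpy as np
from sklearn.pipeline import FeatureUnion
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

union = FeatureUnion([
    ('pca', PCA(n_components=1)),  # 1 derived column
    ('scale', StandardScaler()),   # 2 scaled columns
])

Z = union.fit_transform(X)  # 1 + 2 = 3 columns, stacked horizontally
```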
scikit-learn also provides a `FunctionTransformer` in the preprocessing module. It can be used in a similar manner as above but with less flexibility. If the input/output of the function is configured properly, the transformer implements the fit/transform/fit_transform methods for the function and thus allows it to be used in a scikit-learn pipeline.
For example, if the input to a pipeline is a series, the transformer would be as follows:
```
def trans_func(input_series):
return output_series
from sklearn.preprocessing import FunctionTransformer
name_transformer = FunctionTransformer(trans_func)
sk_pipe = Pipeline([("trans", name_transformer), ("lr", LinearRegression())])
sk_pipe
```
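For instance, wrapping a log transform — a common preprocessing step used here purely as a hypothetical example, not part of this notebook's data — looks like this:

```python
import numpy as np
from sklearn.preprocessing import FunctionTransformer

# log1p(x) = log(1 + x); handles zeros gracefully
log_transformer = FunctionTransformer(np.log1p)

X = np.array([[0.0], [1.0], [9.0]])
X_log = log_transformer.fit_transform(X)  # applies np.log1p element-wise
```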
Let's put all these concepts together and create a simple pipeline!
Before jumping to pipelines, a few things still need to be taken care of:
```
train_data = pd.read_csv('../hitchhikersGuideToMachineLearning/home-data-for-ml-course/train.csv' , index_col ='Id')
X_test_full = pd.read_csv('../hitchhikersGuideToMachineLearning/home-data-for-ml-course/test.csv', index_col='Id')
X_test_full['Neighborhood'].unique()
train_data['Neighborhood']
train_data.dropna(axis=0, subset=['SalePrice'], inplace=True)
y = train_data.SalePrice
train_data.drop(['SalePrice'], axis=1, inplace=True)
# Break off validation set from training data
X_train_full, X_valid_full, y_train, y_valid = train_test_split(train_data, y, train_size=0.8, test_size=0.2,
random_state=0)
# "Cardinality" means the number of unique values in a column
# Select categorical columns with relatively low cardinality (convenient but arbitrary) for one hot encoding
categorical_cols_type1 = [cname for cname in X_train_full.columns if
X_train_full[cname].nunique() <= 10 and
X_train_full[cname].dtype == "object"]
# Select categorical columns with high cardinality (convenient but arbitrary) for one hot encoding
categorical_cols_type2 = [cname for cname in X_train_full.columns if
X_train_full[cname].nunique() > 10 and
X_train_full[cname].dtype == "object"]
# Select numerical columns
numerical_cols = [cname for cname in X_train_full.columns if
X_train_full[cname].dtype in ['int64', 'float64']]
# Keep selected columns only
categorical_cols = categorical_cols_type1 + categorical_cols_type2
my_cols = categorical_cols + numerical_cols
X_train = X_train_full[my_cols].copy()
X_valid = X_valid_full[my_cols].copy()
X_test = X_test_full[my_cols].copy()
class LabelOneHotEncoder():
    def __init__(self):
        self.ohe = OneHotEncoder()
        self.le = LabelEncoder()
    def fit_transform(self, x):
        features = self.le.fit_transform(x)
        return self.ohe.fit_transform(features.reshape(-1, 1))
    def transform(self, x):
        return self.ohe.transform(self.le.transform(x).reshape(-1, 1))
    def inverse_transform(self, x):
        return self.le.inverse_transform(self.ohe.inverse_transform(x))
    def inverse_labels(self, x):
        return self.le.inverse_transform(x)
class ModifiedLabelEncoder(LabelEncoder):
    def fit_transform(self, y, *args, **kwargs):
        return super().fit_transform(y).reshape(-1, 1)
    def transform(self, y, *args, **kwargs):
        return super().transform(y).reshape(-1, 1)
pipe = Pipeline([("le", ModifiedLabelEncoder()), ("ohe", OneHotEncoder())])
pipe.fit_transform(['dog', 'cat', 'dog'])
pipe.fit_transform(X_train["Street"])
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
# Preprocessing for numerical data
numerical_transformer = SimpleImputer(strategy='constant')
# Preprocessing for categorical data
categorical_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='most_frequent')),
('onehot', OneHotEncoder(handle_unknown='ignore'))
])
# Bundle preprocessing for numerical and categorical data
preprocessor = ColumnTransformer(
transformers=[
('num', numerical_transformer, numerical_cols),
('cat', categorical_transformer, categorical_cols)
])
# Define model
model = RandomForestRegressor(n_estimators=100, random_state=0)
# Bundle preprocessing and modeling code in a pipeline
clf = Pipeline(steps=[('preprocessor', preprocessor),
('model', model)
])
# Preprocessing of training data, fit model
clf.fit(X_train, y_train)
# Preprocessing of validation data, get predictions
preds = clf.predict(X_valid)
print('MAE:', mean_absolute_error(y_valid, preds))
# Preprocessing for numerical data
numerical_transformer1 = SimpleImputer(strategy='constant') # Your code here
# Preprocessing for categorical data
categorical_transformer1 = Pipeline(steps=[
('imputer', SimpleImputer(strategy='most_frequent')),
('onehot', OneHotEncoder(handle_unknown='ignore'))
]) # Your code here
# Bundle preprocessing for numerical and categorical data
preprocessor1 = ColumnTransformer(
transformers=[
('num', numerical_transformer1, numerical_cols),
('cat', categorical_transformer1, categorical_cols)
])
# Define model
model1 = RandomForestRegressor(n_estimators=150, random_state=0)
# Bundle preprocessing and modeling code in a pipeline
my_pipeline = Pipeline(steps=[('preprocessor', preprocessor1),
('model', model1)
])
# Preprocessing of training data, fit model
my_pipeline.fit(X_train, y_train)
# Preprocessing of validation data, get predictions
preds = my_pipeline.predict(X_valid)
# Evaluate the model
score = mean_absolute_error(y_valid, preds)
print('MAE:', score)
```
https://www.datanami.com/2018/09/05/how-to-build-a-better-machine-learning-pipeline/
https://www.analyticsvidhya.com/blog/2020/01/build-your-first-machine-learning-pipeline-using-scikit-learn/
https://cloud.google.com/ai-platform/prediction/docs/custom-pipeline
https://stackoverflow.com/questions/31259891/put-customized-functions-in-sklearn-pipeline
http://zacstewart.com/2014/08/05/pipelines-of-featureunions-of-pipelines.html
https://g-stat.com/using-custom-transformers-in-your-machine-learning-pipelines-with-scikit-learn/
## A Classifier Model Performance Evaluation
The material for evaluating a classifier presented here applies to all classifiers.
We focus on logistic regression.
Read this to understand what logistic regression is: https://www.datacamp.com/community/tutorials/understanding-logistic-regression-python
## Activity: Obtain confusion matrix, accuracy, precision, recall for pima Diabetes dataset
Steps:
1- Load the dataset: `pd.read_csv('diabetes.csv')`
2- Use these features: `feature_cols = ['Pregnancies', 'Insulin', 'BMI', 'Age']`
3- split the data to train and test: `X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)`
4- Instantiate a logistic regression model
5- Obtain the statistics of `y_test`
6- Obtain the confusion matrix
https://www.ritchieng.com/machine-learning-evaluate-classification-model/
### Basic terminology
True Positives (TP): we correctly predicted that they do have diabetes: 15
True Negatives (TN): we correctly predicted that they don't have diabetes: 118
False Positives (FP): we incorrectly predicted that they do have diabetes (a "Type I error"): 12
False Negatives (FN): we incorrectly predicted that they don't have diabetes (a "Type II error"): 47
<img src="Images/confusion_matrix.png" width="500" height="500">
## What is accuracy, recall and precision?
Accuracy: overall, how often is the classifier correct? -> $accuracy = \frac {TP + TN}{TP+TN+FP+FN}$
Classification error: overall, how often is the classifier incorrect? -> $error = 1- accuracy = \frac {FP + FN}{TP + TN + FP + FN}$
Recall: when the actual value is positive, how often is the prediction correct? -> $recall = \frac {TP}{TP + FN}$
Precision: When a positive value is predicted, how often is the prediction correct? -> $precision = \frac {TP}{TP + FP}$
Specificity: When the actual value is negative, how often is the prediction correct? -> $Specificity = \frac {TN}{TN + FP}$
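These formulas can be verified numerically with scikit-learn; the labels below are a made-up toy example, not the Pima data:

```python
from sklearn.metrics import (confusion_matrix, accuracy_score,
                             precision_score, recall_score)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# sklearn lays out the binary confusion matrix as [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)

# The manual formulas agree with sklearn's built-in scorers
assert accuracy == accuracy_score(y_true, y_pred)
assert precision == precision_score(y_true, y_pred)
assert recall == recall_score(y_true, y_pred)
```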
```
import pandas as pd
from sklearn.model_selection import train_test_split
pima = pd.read_csv('diabetes.csv')
print(pima.columns)
print(pima.head())
feature_cols = ['Pregnancies', 'Insulin', 'BMI', 'Age']
X = pima[feature_cols]
# print(X)
# y is a vector, hence we use dot to access 'label'
y = pima['Outcome']
# split X and y into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
y_test.value_counts()
```
## The difference between `.predict()` and `.predict_proba()` for a classifier
Apply these two methods to Pima Indian Diabetes dataset
https://www.ritchieng.com/machine-learning-evaluate-classification-model/
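As a self-contained sketch on a tiny toy dataset (not the Pima data), the difference between the two methods looks like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

clf = LogisticRegression().fit(X, y)

labels = clf.predict(X)       # hard class labels (0 or 1)
probs = clf.predict_proba(X)  # one column per class; each row sums to 1
```

`predict` applies a 0.5 threshold to the positive-class column of `predict_proba`, so the probabilities carry strictly more information about the classifier's confidence.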
# Example: Reducing words to their root (stemming) in texts
**Author:** Unidad de Científicos de Datos (UCD)
---
This example shows the main functionality of the `stemming` module of the **ConTexto** library. This module applies *stemming* to texts. *Stemming* is a method for reducing all the inflected forms of words that share the same root to that root, or "stem". For example, the words "niños", "niña", and "niñez" all share the same root: "niñ". Unlike lemmatization, where each lemma is a word that exists in the vocabulary of the corresponding language, the root words obtained through *stemming* do not necessarily exist as words on their own. Applying *stemming* to texts can simplify them by unifying words that share the same root, thereby avoiding a larger vocabulary than necessary.
For more information about this module and its functions, see <a href="https://ucd-dnp.github.io/ConTexto/funciones/stemming.html" target="_blank">its documentation</a>.
---
## 1. Import required functions and define sample texts
Here we import the function `stem_texto`, which applies *stemming* to an input text, and the class `Stemmer`, which can be used directly, among other things, to speed up the process of stemming a list of several texts. We also define some texts to work through the examples.
```
import time
from contexto.stemming import Stemmer, stem_texto
# sample texts (kept in Spanish, since the stemmer below targets Spanish)
texto = 'Esta es una prueba para ver si las funciones son correctas y funcionan bien. Perritos y gatos van a la casita'
texto_limpiar = "Este texto, con signos de puntuación y mayúsculas, ¡será limpiado antes de pasar por la función!"
texto_ingles = 'This is a test writing to study if these functions are performing well.'
textos = [
"Esta es una primera entrada en el grupo de textos",
"El Pibe Valderrama empezó a destacar jugando fútbol desde chiquitin",
"De los pájaros del monte yo quisiera ser canario",
"Finalizando esta listica, se incluye una última frase un poquito más larga que las anteriores."
]
```
---
## 2. *Stemming* texts
The `stem_texto` function applies *stemming* to an input text. It has optional parameters to set the language of the input text (if set to "auto", the language is detected automatically). Additionally, the *limpiar* parameter applies a basic cleanup to the text before stemming.
```
# Detect the language of the text automatically
texto_stem = stem_texto(texto, 'auto')
print(texto_stem)
# Test in another language
stem_english = stem_texto(texto_ingles, 'inglés')
print('-------')
print(stem_english)
# Test cleaning a text first
print('-------')
print(stem_texto(texto_limpiar, limpiar=True))
```
---
## 3. *Stemming* several texts using a single object of the `Stemmer` class
To apply *stemming* to a collection of texts, it can be faster to define a single `Stemmer` object and pass it through the *stemmer* parameter of the `stem_texto` function. This can save time, since it avoids initializing a new `Stemmer` object for each text. The savings grow with the number of texts to process.
Below is a timing comparison of two options:
1. Applying *stemming* to a list of texts by calling `stem_texto` on each one, with no further consideration.
2. Defining a single `Stemmer` object and using it to process the same list of texts.
```
# Option 1: a new stemmer is initialized for each text
tic = time.time()
for t in textos:
print(stem_texto(t))
tiempo_1 = time.time() - tic
# Option 2: a single stemmer is reused for all texts
print('----------')
tic = time.time()
stemmer = Stemmer(lenguaje='español')
for t in textos:
print(stem_texto(t, stemmer=stemmer))
tiempo_2 = time.time() - tic
print('\n***************\n')
print(f'Time with option 1: {tiempo_1} seconds\n')
print(f'Time with option 2: {tiempo_2} seconds\n')
```
# 1.0 Visualizing Frequency Distributions
## 1.1 Visualizing Distributions
To find patterns in a frequency table we have to look up the frequency of each unique value or class interval and, at the same time, compare the frequencies. This process can get time-consuming for tables with many unique values or class intervals, or when the frequency values are large and hard to compare against each other.
We can solve this problem by **visualizing** the data in the tables with the help of graphs. Graphs make it much easier to scan and compare frequencies, providing us with a single picture of the entire distribution of a variable.
Because they are easy to grasp and also eye-catching, graphs are a better choice over frequency tables if we need to present our findings to a non-technical audience.
In this lesson, we'll learn about three kinds of graphs:
- Bar plots.
- Pie charts.
- Histograms.
By the end of the mission, we'll know how to generate the graphs below ourselves, and we'll know when it makes sense to use each:
<center><img width="1000" src="https://drive.google.com/uc?export=view&id=1Rxdp-_t01VXmbJayEqTs4WOn4_-SAL6t"></center>
We've already learned about bar plots and histograms in the EDA lessons. In this mission we build upon that knowledge and discuss the graphs in the context of statistics by learning what kind of variables each graph is most suitable for.
## 1.2 Bar Plots
For variables measured on a **nominal** or an **ordinal** scale it's common to use a **bar plot** to visualize their distribution. To generate a **bar plot** for the distribution of a variable we need two sets of values:
- One set containing the unique values.
- Another set containing the frequency for each unique value.
We can get this data easily from a frequency table. We can use **Series.value_counts()** to generate the table, and then use the [Series.plot.bar()](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.plot.bar.html) method on the resulting table to generate a **bar plot**. Using the same WNBA dataset we've been working with for the past missions, this is how we'd do that for the Pos (player position) variable:
```python
>> wnba['Pos'].value_counts().plot.bar()
```
<center><img width="400" src="https://drive.google.com/uc?export=view&id=1xPuz8XKNPPmGVdpMbqLcVNniySRGGuDc"></center>
The **Series.plot.bar()** method generates a vertical bar plot with the frequencies on the y-axis, and the unique values on the x-axis. To generate a horizontal bar plot, we can use the [Series.plot.barh() method](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.plot.barh.html):
```python
>> wnba['Pos'].value_counts().plot.barh()
```
<center><img width="400" src="https://drive.google.com/uc?export=view&id=1jQCBSxV20rDElban00Shk08R6ykT0oXk"></center>
As we'll see in the next screen, horizontal bar plots are ideal to use when the labels of the unique values are long.
**Exercise**
<img width="100" src="https://drive.google.com/uc?export=view&id=1E8tR7B9YYUXsU_rddJAyq0FrM0MSelxZ">
- We've taken information from the **Experience** column, and created a new column named **Exp_ordinal**, which is measured on an ordinal scale. The new column has five unique labels, and each one corresponds to a number of years a player has played in WNBA:
<img width="300" src="https://drive.google.com/uc?export=view&id=1tqqE0d76Xk1baGCTWNEkYbJ9Muevfuw3">
- Create a **bar plot** to display the distribution of the **Exp_ordinal** variable:
- Generate a frequency table for the **Exp_ordinal** variable.
    - Sort the table by unique labels in an ascending order using the techniques we learned in the previous mission.
- Generate a bar plot using the **Series.plot.bar()** method.
```
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
# put your code here
import pandas as pd
import matplotlib.pyplot as plt
pd.options.display.max_rows = 200
pd.options.display.max_columns = 50
wnba = pd.read_csv("wnba.csv")
def make_exp_ordinal(row):
    if row['Experience'] == 'R':
        return 'Rookie'
    if 1 <= int(row['Experience']) <= 3:
        return 'Little experience'
    if 4 <= int(row['Experience']) <= 5:
        return 'Experienced'
    if 6 <= int(row['Experience']) <= 10:
        return 'Very experienced'
    return 'Veteran'
wnba['Exp_ordinal'] = wnba.apply(make_exp_ordinal, axis = 1)
wnba['Exp_ordinal'].value_counts().iloc[[3, 0, 2, 1, 4]].plot.bar()
```
## 1.3 Horizontal Bar Plots
One of the problems with the bar plot we built in the last exercise is that the tick labels of the x-axis are hard to read:
<img width="400" src="https://drive.google.com/uc?export=view&id=1gKTo1l94020_BBnk7ilS8WzllVFzYfFE">
To fix this we can rotate the labels, or we can switch to a horizontal bar plot. We can rotate the labels using the rot parameter of the **Series.plot.bar()** method. The labels are rotated at 90° by default, and we can tilt them to 45° instead:
```python
>> wnba['Exp_ordinal'].value_counts().iloc[[3,0,2,1,4]].plot.bar(rot = 45)
```
<img width="400" src="https://drive.google.com/uc?export=view&id=1wD2TvUAm0fyBrVwWeE5KnTxuz2IS6RKf">
Slightly better, but we can do a better job with a horizontal bar plot. If we wanted to publish this bar plot, we'd also have to make it more informative by adding a title. This is what we'll do in the next exercise, but for now this is how we could do that for the **Pos** variable (note that we use the [Series.plot.barh()](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.plot.barh.html) method, not **Series.plot.bar()**):
```python
>> wnba['Pos'].value_counts().plot.barh(title = 'Number of players in WNBA by position')
```
<img width="400" src="https://drive.google.com/uc?export=view&id=1cNU5qtgeawYB-9ec-pC16XtN_6Rtb1Sd">
**Exercise**
<img width="100" src="https://drive.google.com/uc?export=view&id=1E8tR7B9YYUXsU_rddJAyq0FrM0MSelxZ">
- Create a horizontal bar plot to visualize the distribution of the **Exp_ordinal** variable.
- Generate a frequency table for the **Exp_ordinal** variable.
- Sort the table by unique labels in an ascending order.
- Use the **Series.plot.barh()** method to generate the horizontal bar plot.
- Add the following title to the plot: **Number of players in WNBA by level of experience.**
```
# put your code here
wnba['Exp_ordinal'].value_counts().iloc[[3, 0, 2, 1, 4]].plot.barh(title = 'Number of players in WNBA by level of experience.')
```
## 1.4 Pie Charts
Another kind of graph we can use to visualize the distribution of **nominal** and **ordinal** variables is a **pie chart**.
Just as the name suggests, a pie chart is structured pretty much like a regular pie: it takes the form of a circle and is divided in wedges. Each wedge in a pie chart represents a category (one of the unique labels), and the size of each wedge is given by the proportion (or percentage) of that category in the distribution.
<img width="600" src="https://drive.google.com/uc?export=view&id=1KKprkhfZaGe0CkLO0p3FzoJ6i71QrSv-">
We can generate pie charts using the [Series.plot.pie() method](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.plot.pie.html). This is how we'd do that for the **Pos** variable:
```python
>> wnba['Pos'].value_counts().plot.pie()
```
<img width="400" src="https://drive.google.com/uc?export=view&id=1MvVv73TYBDN5VNZ2gELxp5I4O3bn6UJC">
The main advantage of pie charts over bar plots is that they provide a much better sense for the relative frequencies (proportions and percentages) in the distribution. Looking at a bar plot, we can see that some categories are more or less numerous than others, but it's really hard to tell what proportion of the distribution each category takes.
With pie charts, we can immediately get a visual sense for the proportion each category takes in a distribution. Just by eyeballing the pie chart above we can make a series of observations in terms of proportions:
- Guards ("G") take about two fifths (2/5) of the distribution.
- Forwards ("F") make up roughly a quarter (1/4) of the distribution.
- Close to one fifth (1/5) of the distribution is made of centers ("C").
- Combined positions ("G/F" and "F/C") together make up roughly one fifth (1/5) of the distribution.
**Exercise**
<img width="100" src="https://drive.google.com/uc?export=view&id=1E8tR7B9YYUXsU_rddJAyq0FrM0MSelxZ">
- Generate a pie chart to visualize the distribution of the **Exp_ordinal** variable.
- Generate a frequency table for the **Exp_ordinal** variable. Don't sort the table this time.
- Use the **Series.plot.pie()** method to generate the pie plot.
```
# put your code here
wnba['Exp_ordinal'].value_counts().plot.pie(title = 'Number of players in WNBA by level of experience.')
```
## 1.5 Customizing a Pie Chart
The pie chart we generated in the previous exercise is more an ellipse than a circle, and the **Exp_ordinal** label is unaesthetic and hard to read:
<img width="400" src="https://drive.google.com/uc?export=view&id=1tVBOULsftVpM-DH75lZvNGY09Jdavsik">
To give a pie chart the right shape, we need to specify equal values for height and width in the **figsize** parameter of **Series.plot.pie()**. The **Exp_ordinal** is the label of a hidden y-axis, which means we can use the **plt.ylabel()** function to remove it. This is how we can do this for the **Pos** variable:
```python
>> import matplotlib.pyplot as plt
>> wnba['Pos'].value_counts().plot.pie(figsize = (6,6))
>> plt.ylabel('')
```
<img width="250" src="https://drive.google.com/uc?export=view&id=1-ER6QNonL-CTu6tBMnvdL4PPER17igTt">
Ideally, we'd have proportions or percentages displayed on each wedge of the pie chart. Fortunately, this is easy to get using the **autopct** parameter. This parameter accepts Python string formatting, and we'll use the string **'%.1f%%'** to have percentages displayed with a precision of one decimal place. Let's break down this string formatting:
<center><img width="400" src="https://drive.google.com/uc?export=view&id=1Q6E-FXJCl4qVDplM3tM2n71i21eDk7vi"></center>
This is how the process looks for the Pos variable:
```python
>> wnba['Pos'].value_counts().plot.pie(figsize = (6,6), autopct = '%.1f%%')
```
<img width="300" src="https://drive.google.com/uc?export=view&id=1KnC3jlcoXgEXKcWMzQvS2hS6kxewrfLY">
Notice that the percentages were automatically determined under the hood, which means we don't have to transform to percentages ourselves using **Series.value_counts(normalize = True) * 100.**
Other display formats are possible, and more documentation on the the syntax of string formatting in Python can be found [here](https://docs.python.org/3/library/string.html#format-specification-mini-language). Documentation on **autopct** and other nice customization parameters can be found [here](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.pie.html).
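The manual normalization mentioned above can be sketched on a small hypothetical series (made-up position labels, not the WNBA data):

```python
import pandas as pd

positions = pd.Series(['G', 'G', 'F', 'C', 'G', 'F'])

# Relative frequencies as percentages, computed explicitly
pct = positions.value_counts(normalize=True) * 100
# 'G' accounts for 3 of 6 values -> 50.0
```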
**Exercise**
<img width="100" src="https://drive.google.com/uc?export=view&id=1E8tR7B9YYUXsU_rddJAyq0FrM0MSelxZ">
- Generate and customize a pie chart to visualize the distribution of the **Exp_ordinal** variable.
- Generate a frequency table for the **Exp_ordinal** variable. Don't sort the table this time.
- Use the **Series.plot.pie()** method to generate the **pie plot.**
- Use the **figsize** parameter to specify a **width** and a **height** of 6 inches each.
- Use the **autopct** parameter to have percentages displayed with a precision of 2 decimal places.
- Add the following title to the plot: **Percentage of players in WNBA by level of experience.**
- Remove the **Exp_ordinal** label.
```
# put your code here
wnba['Exp_ordinal'].value_counts().plot.pie(
    figsize = (6,6),
    autopct = '%.2f%%',
    title = 'Percentage of players in WNBA by level of experience.'
)
plt.ylabel('')
```
## 1.6 Histograms
Because of the special properties of variables measured on interval and ratio scales, we can describe distributions in more elaborate ways. Let's examine the **PTS** (total points) variable, which is discrete and measured on a ratio scale:
```python
>> wnba['PTS'].describe()
count 143.000000
mean 201.790210
std 153.381548
min 2.000000
25% 75.000000
50% 177.000000
75% 277.500000
max 584.000000
Name: PTS, dtype: float64
```
We can see that 75% of the values are distributed within a relatively narrow interval (between 2 and 277), while the remaining 25% are distributed in an interval that's slightly larger.
<img width="500" src="https://drive.google.com/uc?export=view&id=1eKJa7moOQBWYg5iswrL5KZv6ZHJXZeNJ">
To visualize the distribution of the **PTS** variable, we need to use a graph that allows us to see immediately the patterns outlined above. The most commonly used graph for this scenario is the **histogram**.
To generate a histogram for the **PTS** variable, we can use the **Series.plot.hist()** method directly on the **wnba['PTS']** column (we don't have to generate a frequency table in this case):
```python
>> wnba['PTS'].plot.hist()
```
<img width="500" src="https://drive.google.com/uc?export=view&id=1Qa6ZdGR-918onl80zHS6ckoF5rMRuYOx">
In the next screen, we'll explain the statistics happening under the hood when we run **wnba['PTS'].plot.hist()** and discuss the histogram above in more detail. Until then, let's practice generating the histogram above ourselves.
**Exercise**
<img width="100" src="https://drive.google.com/uc?export=view&id=1E8tR7B9YYUXsU_rddJAyq0FrM0MSelxZ">
- Using **Series.plot.hist()**, generate a histogram to visualize the distribution of the **PTS** variable.
```
# put your code here
wnba['PTS'].plot.hist()
```
## 1.7 The Statistics Behind Histograms
Under the hood, the **wnba['PTS'].plot.hist()** method:
- Generated a grouped frequency distribution table for the **PTS** variable with ten class intervals.
- For each class interval it plotted a bar with a height corresponding to the frequency of the interval.
Let's examine the grouped frequency distribution table of the **PTS** variable:
```python
>> wnba['PTS'].value_counts(bins = 10).sort_index()
```
Each bar in the histogram corresponds to one class interval. To show this is true, we'll generate below the same histogram as in the previous screen, but this time:
- We'll add the values of the x-ticks manually using the xticks parameter.
- The values will be the limits of each class interval.
- We use the [arange()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.arange.html?highlight=arange#numpy.arange) function from numpy to generate the values and avoid spending time with typing all the values ourselves.
- We start at 2, not at 1.417, because this is the actual minimum value of the first class interval (we discussed about this in more detail in the previous mission).
- We'll add a **grid** line using the grid parameter to demarcate clearly each bar.
- We'll rotate the tick labels of the x-axis using the rot parameter for better readability.
```python
>> from numpy import arange
>> wnba['PTS'].plot.hist(grid = True, xticks = arange(2,585,58.2), rot = 30)
```
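As a quick self-contained check (independent of the dataset), `arange(2, 585, 58.2)` produces the eleven edges bounding ten equal-width class intervals:

```python
import numpy as np

# 2.0, 60.2, 118.4, 176.6, 234.8, ..., 584.0: eleven edges, ten bins
edges = np.arange(2, 585, 58.2)
```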
Looking at the histogram above, we can extract the same information as from the grouped frequency table. We can see that there are 20 players in the interval (176.6, 234.8], 10 players in the interval (351.2, 409.4], etc.
More importantly, we can see the patterns we wanted to see in the last screen when we examined the output of **wnba['PTS'].describe()**.
<img width="700" src="https://drive.google.com/uc?export=view&id=1qQtYC5R9oylmOkpx_YjzpCZLv7cUo_5b">
From the output of **wnba['PTS'].describe()** we can see that most of the values (75%) are distributed within a relatively narrow interval (between 2 and 277). This tells us that:
- The values are distributed unevenly across the 2 - 584 range (2 is the minimum value in the **PTS** variable, and 584 is the maximum).
- Most values are clustered in the first (left) part of the distribution's range.
<img width="500" src="https://drive.google.com/uc?export=view&id=178mhCdacbAzjXqwfDSbVZJtGQ0ucuqzW">
We can immediately see the same two patterns on the histogram above:
- The distribution of values is uneven, with each class interval having a different frequency. If the distribution was even, all the class intervals would have the same frequency.
- Most values (roughly three quarters) are clustered in the left half of the histogram.
While it's easy and fast to make good estimates simply by looking at a histogram, it's always a good idea to add precision to our estimates using the percentile values we get from **Series.describe().**
**Exercise**
<img width="100" src="https://drive.google.com/uc?export=view&id=1E8tR7B9YYUXsU_rddJAyq0FrM0MSelxZ">
- Examine the distribution of the **Games Played** variable using the **Series.describe()** method. Just from the output of this method, predict what the histogram of the **Games Played** variable will look like.
- Once you have a good idea of what histogram shape to expect, plot a histogram for the **Games Played** variable using **Series.plot.hist()**.
```
# put your code here
wnba['Games Played'].describe()
wnba['Games Played'].plot.hist()
```
## 1.8 Histograms as Modified Bar Plots
It should now be clear that a histogram is basically the visual form of a grouped frequency table. Structurally, a histogram can also be understood as a modified version of a bar plot. The main difference is that in the case of a histogram there are no gaps between bars, and each bar represents an interval, not a single value.
The main reason we remove the gaps between bars in case of a histogram is that we want to show that the class intervals we plot are adjacent to one another. With the exception of the last interval, the ending point of an interval is the starting point of the next interval, and we want that to be seen on the graph.
<img width="300" src="https://drive.google.com/uc?export=view&id=15v9hcTmgnSArcOD5jwZSlKJQTfwoVtHt">
For bar plots we add gaps because in most cases we don't know whether the unique values of ordinal variables are adjacent to one another in the same way as two class intervals are. It's safer to assume that the values are not adjacent, and add gaps.
<img width="600" src="https://drive.google.com/uc?export=view&id=1Bdl2qABVrU1nGjT1jIha-qwK7B62Caga">
For nominal variables, values can't be numerically adjacent in principle, and we add gaps to emphasize that the values are fundamentally distinct.
Below we summarize what we've learned so far:
<img width="400" src="https://drive.google.com/uc?export=view&id=19NxFZUcKvnQFnXvvAQl5xjOpnF_reJD-">
## 1.9 Binning for Histograms
You might have noticed that **Series.plot.hist()** splits a distribution by default into 10 class intervals. In the previous mission, we learned that 10 is a good number of class intervals to choose because it offers a good balance between information and comprehensibility.
<img width="400" src="https://drive.google.com/uc?export=view&id=1C8q5Zxpdkd_rILrIr4hqO7jCmoUgT10c">
With histograms, the breakdown point is generally larger than 10 because interpreting a picture is much easier than reading a grouped frequency table. However, once the number of class intervals goes over 30 or so, the granularity increases so much that for some intervals the frequency will be zero. This results in a discontinuous histogram from which it is hard to discern patterns.
Below, we can see how the histogram of the **PTS** variable changes as we vary the number of class intervals.
<img width="600" src="https://drive.google.com/uc?export=view&id=1yLJH1J-aXojOhzPg0FICmv6IQ9aXcpn8">
To modify the number of class intervals used for a histogram, we can use the **bins** parameter of **Series.plot.hist()**. A bin is the same thing as a class interval, and, when it comes to histograms, the term "bin" is used much more often.
Also, we'll often want to avoid letting pandas work out the intervals, and use instead intervals that we think make more sense. We can do this in two steps:
- We start with specifying the range of the entire distribution using the **range** parameter of **Series.plot.hist().**
- Then we combine that with the number of bins to get the intervals we want.
Let's say we want to get these three intervals for the distribution of the PTS variable:
- [1, 200)
- [200, 400)
- [400, 600]
If the histogram ranges from 1 to 600, and we specify that we want three bins, then the bins will automatically take the intervals above. This is because the bins must have equal interval lengths, and, at the same time, cover together the entire range between 1 and 600. To cover a range of 600 with three bins, we need each bin to cover 200 points, with the first bin starting at 1, and the last bin ending at 600.
<img width="600" src="https://drive.google.com/uc?export=view&id=1W8a-hbTW_ex0BI53go4xWvjKfMMP-WoO">
This is how we can generate a histogram with three bins and a 1 - 600 range for the **PTS** variable:
```python
>> wnba['PTS'].plot.hist(range = (1,600), bins = 3)
```
<img width="400" src="https://drive.google.com/uc?export=view&id=1O-MfgUUOn1oEfTn1elEkpru5E49XP04c">
If we keep the same range, but change to six bins, then we'll get these six intervals: [1, 100), [100, 200), [200, 300), [300, 400), [400, 500), [500, 600].
```python
>> wnba['PTS'].plot.hist(range = (1,600), bins = 6)
```
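To see exactly where the bin edges fall, we can compute them ourselves with numpy (a quick side check, not part of the lesson): splitting the range evenly means that with a range starting at 1, each interior edge lands slightly off a round number, because the full range spans 599 units.

```python
import numpy as np

# Evenly split the (1, 600) range into 6 bins -> 7 bin edges.
edges = np.linspace(1, 600, 6 + 1)
print(edges)  # first edge is 1, last is 600, interior edges near 100, 200, ...
```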
**Exercise**
<img width="100" src="https://drive.google.com/uc?export=view&id=1E8tR7B9YYUXsU_rddJAyq0FrM0MSelxZ">
- Generate a histogram for the **Games Played** variable, and customize it in the following way:
- Each bin must cover an interval of 4 games. The first bin must start at 1, the last bin must end at 32.
- Add the title "The distribution of players by games played".
- Add a label to the x-axis named "Games played".
```
# put your code here
import matplotlib.pyplot as plt

wnba['Games Played'].plot.hist(
    range=(1, 32),
    bins=8,
    title='The distribution of players by games played'
)
plt.xlabel("Games played")
```
## 1.10 Skewed Distributions
There are a couple of histogram shapes that appear often in practice. So far, we've met two of these shapes:
<img width="600" src="https://drive.google.com/uc?export=view&id=1kosNk32RFOru1alq7taeD9u4DWMWanoi">
In the histogram on the left, we can see that:
- Most values pile up toward the endpoint of the range (32 games played).
- There are fewer and fewer values toward the opposite end (0 games played).
On the right histogram, we can see that:
- Most values pile up toward the starting point of the range (0 points).
- There are fewer and fewer values toward the opposite end.
Both these histograms show **skewed distributions**. In a skewed distribution:
- The values pile up toward the end or the starting point of the range, making up the body of the distribution.
- Then the values decrease in frequency toward the opposite end, forming the tail of the distribution.
<img width="400" src="https://drive.google.com/uc?export=view&id=1KR4lZFs4Z3D9GrEqpma9wRxRJbBvUzuC">
If the tail points to the left, then the distribution is said to be **left skewed**. When it points to the left, the tail points at the same time in the direction of negative numbers, and for this reason the distribution is sometimes also called **negatively skewed.**
If the tail points to the right, then the distribution is **right skewed**. The distribution is sometimes also said to be **positively skewed** because the tail points in the direction of positive numbers.
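A quick numeric companion to the visual definitions above (an aside, using hypothetical data): pandas can quantify skewness directly with `Series.skew()`, which is positive for right-skewed data and negative for left-skewed data.

```python
import pandas as pd

# Hypothetical samples: one with a long right tail, one with a long left tail.
right_skewed = pd.Series([1, 1, 1, 2, 2, 3, 10])
left_skewed = pd.Series([10, 10, 10, 9, 9, 8, 1])

print(right_skewed.skew())  # positive: the tail points toward positive numbers
print(left_skewed.skew())   # negative: the tail points toward negative numbers
```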
<img width="600" src="https://drive.google.com/uc?export=view&id=1FIUz6XuJTcU74IHfJvJP6Il_JkUUXD3g">
**Exercise**
<img width="100" src="https://drive.google.com/uc?export=view&id=1E8tR7B9YYUXsU_rddJAyq0FrM0MSelxZ">
- Examine the distribution of the following two variables:
- **AST** (number of assists).
- **FT%** (percentage of free throws made out of all attempts).
- Depending on the shape of the distribution, assign the string **'left skewed'** or **'right skewed'** to the following variables:
- **assists_distro** for the **AST** column.
- **ft_percent_distro** for the **FT%** column.
For instance, if you think the **AST** variable has a right skewed distribution, your answer should be **assists_distro = 'right skewed'.**
```
# put your code here
assists_distro = 'right skewed'
ft_percent_distro = 'left skewed'
```
## 1.11 Symmetrical Distributions
Besides skewed distributions, we often see histograms with a shape that is more or less symmetrical. If we draw a vertical line exactly in the middle of a symmetrical histogram, then we'll divide the histogram in two halves that are mirror images of one another.
<img width="500" src="https://drive.google.com/uc?export=view&id=1FAOJOZTgCGv6FAQMfFH7ap2HjQok2ZEs">
If the shape of the histogram is **symmetrical**, then we say that we have a **symmetrical distribution.**
A very common symmetrical distribution is one where the values pile up in the middle and gradually decrease in frequency toward both ends of the histogram. This pattern is specific to what we call a **normal distribution** (also called **Gaussian distribution**).
<img width="500" src="https://drive.google.com/uc?export=view&id=16AQYkLid4MnTqFLK8CID3wDWybbKvHgJ">
Another common symmetrical distribution is one where the values are distributed uniformly across the entire range. This pattern is specific to a **uniform distribution.**
<img width="500" src="https://drive.google.com/uc?export=view&id=1WvoDjqD-S-W9qOkqaPt9o3JZs438T9TJ">
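To make these two shapes concrete, we can simulate both patterns with numpy (hypothetical data, for illustration only): a normal sample piles up in the middle bins, while a uniform sample spreads out roughly evenly across all bins.

```python
import numpy as np

rng = np.random.default_rng(0)
normal_sample = rng.normal(loc=0, scale=1, size=10_000)
uniform_sample = rng.uniform(low=0, high=1, size=10_000)

# Bin counts: the normal sample peaks in the middle bins;
# the uniform counts stay close to 1,000 per bin.
normal_counts, _ = np.histogram(normal_sample, bins=10)
uniform_counts, _ = np.histogram(uniform_sample, bins=10)
```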
In practice, we rarely see perfectly **symmetrical distributions**. However, it's common to use perfectly symmetrical distributions as baselines for describing the distributions we see in practice. For instance, we'd describe the distribution of the **Weight** variable as resembling closely a normal distribution:
```python
>> wnba['Weight'].plot.hist()
```
<img width="400" src="https://drive.google.com/uc?export=view&id=1eoNnxKtW_V5PrdZ5V1P83_YKWKNh61bZ">
When we say that the distribution above resembles closely a normal distribution, we mean that most values pile up somewhere close to the middle and decrease in frequency more or less gradually toward both ends of the histogram.
A similar reasoning applies to skewed distributions. We don't see very often clear-cut skewed distributions, and we use the left and right skewed distributions as baselines for comparison. For instance, we'd say that the distribution of the **BMI** variable is slightly **right skewed**:
```python
>> wnba['BMI'].plot.hist()
```
There's more to say about distribution shapes, and we'll continue this discussion in the next course when we'll learn new concepts. Until then, let's practice what we've learned.
**Exercise**
<img width="100" src="https://drive.google.com/uc?export=view&id=1E8tR7B9YYUXsU_rddJAyq0FrM0MSelxZ">
- Examine the distribution of the following variables, trying to determine which one resembles the most a normal distribution:
- Age
- Height
- MIN
- Assign to the variable **normal_distribution** the name of the variable (as a string) whose distribution resembles the most a normal one.
For instance, if you think the **MIN** variable is the correct answer, then your answer should be **normal_distribution = 'MIN'.**
```
# put your code here
normal_distribution = 'Height'
```
## 1.12 Next Steps
In this mission, we learned about the graphs we can use to visualize the distributions of various kinds of variables. If a variable is measured on a nominal or ordinal scale, we can use a bar plot or a pie chart. If the variable is measured on an interval or ratio scale, then a histogram is a good choice.
Here's the summary table once again to help you recollect what we did in this mission:
<img width="400" src="https://drive.google.com/uc?export=view&id=19NxFZUcKvnQFnXvvAQl5xjOpnF_reJD-">
We're one mission away from finishing the workflow we set out to complete in the first mission. Next, we'll continue the discussion about data visualization by learning how to compare frequency distributions using graphs.
<img width="600" src="https://drive.google.com/uc?export=view&id=1U88LilHa2asEN9vnC_PQFXz9UtjmRIzh">
# NTDS'18 milestone 1: network collection and properties
[Effrosyni Simou](https://lts4.epfl.ch/simou), [EPFL LTS4](https://lts4.epfl.ch)
## Students
* Team: `42`
* Students: `Alexandre Poussard, Robin Leurent, Vincent Coriou, Pierre Fouché`
* Dataset: [`Flight routes`](https://openflights.org/data.html)
## Rules
* Milestones have to be completed by teams. No collaboration between teams is allowed.
* Textual answers shall be short. Typically one to three sentences.
* Code has to be clean.
* You cannot import any other library than we imported.
* When submitting, the notebook is executed and the results are stored. I.e., if you open the notebook again it should show numerical results and plots. We won't be able to execute your notebooks.
* The notebook is re-executed from a blank state before submission. That is to be sure it is reproducible. You can click "Kernel" then "Restart & Run All" in Jupyter.
## Objective
The purpose of this milestone is to start getting acquainted with the network that you will use for this class. In the first part of the milestone you will import your data using [Pandas](http://pandas.pydata.org) and you will create the adjacency matrix using [Numpy](http://www.numpy.org). This part is project specific. In the second part you will have to compute some basic properties of your network. **For the computation of the properties you are only allowed to use the packages that have been imported in the cell below.** You are not allowed to use any graph-specific toolboxes for this milestone (such as networkx and PyGSP). Furthermore, the aim is not to blindly compute the network properties, but to also start to think about what kind of network you will be working with this semester.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
## Part 1 - Import your data and manipulate them.
### A. Load your data in a Panda dataframe.
First, you should define and understand what your nodes are, what features you have and what your labels are. Please provide below a Pandas dataframe where each row corresponds to a node with its features and labels. For example, in the case of the Free Music Archive (FMA) Project, each row of the dataframe would be of the following form:
| Track | Feature 1 | Feature 2 | . . . | Feature 518| Label 1 | Label 2 |. . .|Label 16|
|:-------:|:-----------:|:---------:|:-----:|:----------:|:--------:|:--------:|:---:|:------:|
| | | | | | | | | |
It is possible that in some of the projects either the features or the labels are not available. This is OK, in that case just make sure that you create a dataframe where each of the rows corresponds to a node and its associated features or labels.
```
routesCol = ['Airline','AirlineID','SourceAirport','SourceAirportID',
'DestAirport','DestAirportID','CodeShare','Stops','Equipment']
airportsCol = ['AirportID', 'Name', 'City', 'Country', 'IATA', 'ICAO', 'Latitude', 'Longitude',
'Altitude', 'Timezone', 'DST', 'TzDatabaseTimezone', 'Type', 'Source']
routes = pd.read_csv("data/routes.dat",header = None,names = routesCol,encoding = 'utf-8', na_values='\\N')
airports= pd.read_csv("data/airports.dat",header = None, names = airportsCol, encoding = 'utf-8')
# We drop nan value for source and destination airport ID because they are not in the airports file
# and it does not make sense to keep routes that go nowhere.
routes.dropna(subset=["SourceAirportID", "DestAirportID"], inplace=True)
# Get unique airlines for a source airport.
routesAirline = routes[['SourceAirportID','Airline']].groupby('SourceAirportID').nunique().drop(['SourceAirportID'], axis=1)
# Get average number of stops of outbound flights (from Source)
routesStop = routes[['SourceAirportID', 'Stops']].groupby('SourceAirportID').mean()
# Get number of routes that leave the airport
routesSource = routes[['SourceAirportID', 'DestAirportID']].groupby('SourceAirportID').count()
# Get number of routes that arrive to the airport
routesDest = routes[['SourceAirportID', 'DestAirportID']].groupby('DestAirportID').count()
# Concatenate everything and fill nan with 0 (if no departure = 0 airline, 0 stop and ratio of 0)
features = pd.concat([routesAirline, routesStop, routesSource, routesDest], axis=1, sort=True).fillna(0)
features.index = features.index.astype('int64')
# Create Ratio
features['DestSourceRatio'] = features['DestAirportID']/features['SourceAirportID']
# Add countries (as labels)
airports_countries = airports[['AirportID', 'Country']].set_index(['AirportID'])
features = features.join(airports_countries).sort_index()
features.reset_index(level=0, inplace=True)
features = features.rename(columns={'index':'AirportID'})
features.reset_index(level=0, inplace=True)
features = features.rename(columns={'index':'node_idx'})
features.head()
```
For now our features are:
* Airline: the number of unique airlines with flights leaving an airport.
* Stops: the average number of stops of an airport's outgoing flights.
* DestAirportID: the number of flights arriving at the airport.
* SourceAirportID: the number of flights leaving the airport.
* DestSourceRatio: the ratio of incoming flights to outgoing flights (can be infinite).
The only label we have for now is the country.
### B. Create the adjacency matrix of your network.
Remember that there are edges connecting the attributed nodes that you organized in the dataframe above. The connectivity of the network is captured by the adjacency matrix $W$. If $N$ is the number of nodes, the adjacency matrix is an $N \times N$ matrix where the value of $W(i,j)$ is the weight of the edge connecting node $i$ to node $j$.
There are two possible scenarios for your adjacency matrix construction, as you already learned in the tutorial by Benjamin:
1) The edges are given to you explicitly. In this case you should simply load the file containing the edge information and parse it in order to create your adjacency matrix. See how to do that in the [graph from edge list]() demo.
2) The edges are not given to you. In that case you will have to create a feature graph. In order to do that you will have to chose a distance that will quantify how similar two nodes are based on the values in their corresponding feature vectors. In the [graph from features]() demo Benjamin showed you how to build feature graphs when using Euclidean distances between feature vectors. Be curious and explore other distances as well! For instance, in the case of high-dimensional feature vectors, you might want to consider using the cosine distance. Once you compute the distances between your nodes you will have a fully connected network. Do not forget to sparsify by keeping the most important edges in your network.
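As an illustration of scenario 2), here is a minimal sketch of a feature graph built from cosine distances and sparsified by keeping the k strongest edges per node (hypothetical feature vectors; our actual graph below uses the explicit edge list from scenario 1).

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.random((6, 4))  # 6 hypothetical nodes with 4 features each

# Cosine distance: 1 - cos(angle) between feature vectors.
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
dist = 1.0 - Xn @ Xn.T

# Turn distances into weights with a Gaussian kernel; drop self-loops.
sigma = dist[dist > 0].mean()
W = np.exp(-dist**2 / (2 * sigma**2))
np.fill_diagonal(W, 0)

# Sparsify: keep only the k largest weights per row, then symmetrize.
k = 2
for i in range(len(W)):
    W[i, np.argsort(W[i])[:-k]] = 0
W = np.maximum(W, W.T)
```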
Follow the appropriate steps for the construction of the adjacency matrix of your network and provide it in the Numpy array ``adjacency`` below:
```
edges = routes[['SourceAirportID', 'DestAirportID']]
edges = edges.astype('int64')
airportID2idx = features[['node_idx', 'AirportID']]
airportID2idx = airportID2idx.set_index('AirportID')
edges = edges.join(airportID2idx, on='SourceAirportID')
edges = edges.join(airportID2idx, on='DestAirportID', rsuffix='_dest')
edges = edges.drop(columns=['SourceAirportID','DestAirportID'])
n_nodes = len(features)
adjacency = np.zeros((n_nodes, n_nodes), dtype=int)
# The weights of the adjacency matrix are the sum of the outgoing flights
for idx, row in edges.iterrows():
    i, j = int(row.node_idx), int(row.node_idx_dest)
    adjacency[i, j] += 1
print("The number of nodes in the network is {}".format(n_nodes))
adjacency
```
## Part 2
Execute the cell below to plot the (weighted) adjacency matrix of your network.
```
plt.spy(adjacency, markersize=1)
plt.title('adjacency matrix')
```
### Question 1
What is the maximum number of links $L_{max}$ in a network with $N$ nodes (where $N$ is the number of nodes in your network)? How many links $L$ are there in your collected network? Comment on the sparsity of your network.
```
# Our graph is directed therefore :
Lmax = n_nodes * (n_nodes - 1)
print("Maximum number of links in our network : {}".format(Lmax))
links = np.count_nonzero(adjacency)
print("Number of links in our network : {}".format(links))
sparsity = links * 100 / Lmax
print("The sparsity of our network is : {:.4f}%".format(sparsity))
```
The maximum number of links $L_{max}$ in a network with $N$ nodes is $L_{maxUndirected}=\binom{N}{2}=\frac{N(N-1)}{2}$ and $L_{maxDirected}=N(N-1)$.
We can see that our network is very sparse, which makes sense for a flight-routes dataset: small airports have few connections compared to the big ones.
### Question 2
Is your graph directed or undirected? If it is directed, convert it to an undirected graph by symmetrizing the adjacency matrix.
Our graph is directed. (Some airports don't have the same number of incoming and outgoing flights.)
```
# We symmetrize our network by summing our weights (the number of incoming and outgoing flights)
# since it allows us to keep the maximum number of information.
adjacency_sym = adjacency + adjacency.T
```
### Question 3
In the cell below save the features dataframe and the **symmetrized** adjacency matrix. You can use the Pandas ``to_csv`` to save the ``features`` and Numpy's ``save`` to save the ``adjacency``. We will reuse those in the following milestones.
```
features.to_csv('features.csv')
np.save('adjacency_sym', adjacency_sym)
```
### Question 4
Are the edges of your graph weighted?
Yes, the weights of the symmetrized adjacency matrix are the total number of outgoing and incoming flights from each node.
### Question 5
What is the degree distribution of your network?
```
# The degree of a node is the sum of its corresponding row or column in the adjacency matrix.
degree = [(line > 0).sum() for line in adjacency_sym]
assert len(degree) == n_nodes
```
Execute the cell below to see the histogram of the degree distribution.
```
weights = np.ones_like(degree) / float(n_nodes)
plt.hist(degree, weights=weights);
```
What is the average degree?
```
avg_deg = np.mean(degree)
print("The average degree is {:.4f}".format(avg_deg))
```
### Question 6
Comment on the degree distribution of your network.
```
test = np.unique(degree,return_counts=True)
ax = plt.gca()
ax.scatter(test[0],test[1])
ax.set_yscale("log")
ax.set_xscale("log")
```
Our degree distribution follows a power law distribution, hence our network seems to be scale free as we saw in the lecture.
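To push the visual log-log check a bit further, one could estimate the power-law exponent with a simple least-squares fit in log-log space (a rough sketch on synthetic Zipf-distributed degrees, not our actual data; a more rigorous fit would use maximum likelihood).

```python
import numpy as np

rng = np.random.default_rng(0)
degree = rng.zipf(2.0, size=2000)  # synthetic power-law-like degree sequence

vals, counts = np.unique(degree, return_counts=True)
mask = counts > 1  # drop noisy single-occurrence tail values
slope, intercept = np.polyfit(np.log(vals[mask]), np.log(counts[mask]), 1)
gamma = -slope  # estimated exponent of p(k) ~ k^(-gamma)
```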
### Question 7
Write a function that takes as input the adjacency matrix of a graph and determines whether the graph is connected or not.
```
def BFS(adjacency, labels, state):
    """
    return a component with an array of updated labels (the visited nodes during the BFS)
    for a given adjacency matrix
    :param adjacency: The adjacency matrix where to find the component
    :param labels: An array of labels (0 : the node is not yet explored, 1: it is explored)
    :param state: The # of the component we are looking for
    :return: updated labels array and the component found
    """
    queue = []
    # current node is the first one with a label to 0
    current_node = np.argwhere(labels == 0).flatten()[0]
    labels[current_node] = state
    queue.append(current_node)
    current_component = []
    current_component.append(current_node)
    while len(queue) != 0:
        # all the weights between the current node and the other nodes
        all_nodes = adjacency[current_node]
        # all the nodes reachable from the current one that are not yet labeled
        neighbours = np.argwhere((all_nodes > 0) & (labels == 0)).flatten()
        # add those nodes to the queue and to the component
        queue += list(neighbours)
        current_component += list(neighbours)
        for i in neighbours:
            # we update the labels array
            labels[i] = state
        if len(queue) > 1:
            # and we update the queue and the current node
            queue = queue[1:]
            current_node = queue[0]
        else:
            queue = []
    return np.array(labels), current_component

def connected_graph(adjacency):
    """Determines whether a graph is connected.
    Parameters
    ----------
    adjacency: numpy array
        The (weighted) adjacency matrix of a graph.
    Returns
    -------
    bool
        True if the graph is connected, False otherwise.
    """
    # Run the BFS, find a component and see if all the nodes are in it. If so, the graph is connected.
    first_labels = np.zeros(n_nodes, dtype=int)
    labels, component = BFS(adjacency, first_labels, 1)
    return labels.sum() == n_nodes
```
Is your graph connected? Run the ``connected_graph`` function to determine your answer.
```
connected = connected_graph(adjacency_sym)
print("Is our graph connected ? {}".format(connected))
```
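For comparison, connectivity can also be checked without an explicit queue, by expanding a boolean frontier with vectorized row reductions (an alternative sketch, not the submitted solution):

```python
import numpy as np

def is_connected(adjacency):
    """True iff every node is reachable from node 0 (undirected graph assumed)."""
    A = adjacency > 0
    n = A.shape[0]
    reached = np.zeros(n, dtype=bool)
    reached[0] = True
    frontier = reached.copy()
    while frontier.any():
        # nodes adjacent to the current frontier that we haven't seen yet
        frontier = A[frontier].any(axis=0) & ~reached
        reached |= frontier
    return bool(reached.all())
```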
### Question 8
Write a function that extracts the connected components of a graph.
```
def find_components(adjacency):
    """Find the connected components of a graph.
    Parameters
    ----------
    adjacency: numpy array
        The (weighted) adjacency matrix of a graph.
    Returns
    -------
    list of lists
        A list of connected components, each given as the list of its node indices.
    """
    # Find the first component
    components = []
    first_labels = np.zeros(n_nodes, dtype=int)
    labels, component = BFS(adjacency, first_labels, 1)
    components.append(component)
    current_state = 2
    # Redo BFS while we haven't found all the components
    while (labels > 0).sum() != n_nodes:
        labels, component = BFS(adjacency, labels, current_state)
        components.append(component)
        current_state += 1
    # Return a plain list: the components have different sizes, so stacking
    # them into a single numpy array would not work.
    return components
```
How many connected components is your network composed of? What is the size of the largest connected component? Run the ``find_components`` function to determine your answer.
```
components = find_components(adjacency_sym)
print("Number of connected components : {}".format(len(components)))
size_compo = [len(compo) for compo in components]
print("Size of largest connected component : {}".format(np.max(size_compo)))
```
### Question 9
Write a function that takes as input the adjacency matrix and a node (`source`) and returns the length of the shortest path between that node and all nodes in the graph using Dijkstra's algorithm. **For the purposes of this assignment we are interested in the hop distance between nodes, not in the sum of weights.**
Hint: You might want to mask the adjacency matrix in the function ``compute_shortest_path_lengths`` in order to make sure you obtain a binary adjacency matrix.
```
# Find all the neighbours of a given node
def neighbours(node, adjacency_sym):
    n = adjacency_sym[node]
    neighbours = np.argwhere(n != 0).flatten()
    return neighbours

def compute_shortest_path_lengths(adjacency, source):
    """Compute the shortest path length between a source node and all nodes.
    Parameters
    ----------
    adjacency: numpy array
        The (weighted) adjacency matrix of a graph.
    source: int
        The source node. A number between 0 and n_nodes-1.
    Returns
    -------
    list of ints
        The length of the shortest path from source to all nodes. Returned list should be of length n_nodes.
    """
    adjacency[adjacency != 0] = 1
    shortest_path_lengths = np.ones(adjacency.shape[0]) * np.inf
    shortest_path_lengths[source] = 0
    visited = [source]
    queue = [source]
    while queue:
        node = queue[0]
        queue = queue[1:]
        neighbors = neighbours(node, adjacency)
        neighbors = np.setdiff1d(neighbors, visited).tolist()
        neighbors = np.setdiff1d(neighbors, queue).tolist()
        queue += neighbors
        visited += neighbors
        shortest_path_lengths[neighbors] = shortest_path_lengths[node] + 1
    return shortest_path_lengths
```
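The same hop-distance computation can also be expressed as a level-by-level frontier expansion (an equivalent sketch that leaves the input matrix untouched):

```python
import numpy as np

def hop_distances(adjacency, source):
    """Hop distance from `source` to every node; unreachable nodes stay at inf."""
    A = adjacency > 0  # binary view; the input matrix is not modified
    n = A.shape[0]
    dist = np.full(n, np.inf)
    dist[source] = 0
    frontier = np.zeros(n, dtype=bool)
    frontier[source] = True
    d = 0
    while frontier.any():
        d += 1
        # nodes adjacent to the frontier whose distance is still unknown
        frontier = A[frontier].any(axis=0) & np.isinf(dist)
        dist[frontier] = d
    return dist
```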
### Question 10
The diameter of the graph is the length of the longest shortest path between any pair of nodes. Use the above developed function to compute the diameter of the graph (or the diameter of the largest connected component of the graph if the graph is not connected). If your graph (or largest connected component) is very large, computing the diameter will take very long. In that case downsample your graph so that it has 1,000 nodes. There are many ways to reduce the size of a graph. For the purposes of this milestone you can choose to randomly select 1,000 nodes.
```
max_component = components[np.argmax(size_compo)]
adjacency_max = adjacency_sym[max_component, :]
adjacency_max = adjacency_max[:, max_component]
longest = []
a = adjacency_max[:1000,:1000]
for node in range(len(a)):
    short = compute_shortest_path_lengths(a, node)
    longest.append(max(short[short < np.inf]))
print("The diameter of the graph is {}".format(max(longest)))
```
### Question 11
Write a function that takes as input the adjacency matrix, a path length, and two nodes (`source` and `target`), and returns the number of paths of the given length between them.
```
def compute_paths(adjacency, source, target, length):
    """Compute the number of paths of a given length between a source and target node.
    Parameters
    ----------
    adjacency: numpy array
        The (weighted) adjacency matrix of a graph.
    source: int
        The source node. A number between 0 and n_nodes-1.
    target: int
        The target node. A number between 0 and n_nodes-1.
    length: int
        The path length to be considered.
    Returns
    -------
    int
        The number of paths.
    """
    # Work on a binary copy so the caller's weighted matrix is left untouched.
    adjacency = adjacency.copy()
    adjacency[adjacency != 0] = 1
    adjacency = np.linalg.matrix_power(adjacency, length)
    return adjacency[source, target]
```
Test your function on 5 pairs of nodes, with different lengths.
```
print(compute_paths(adjacency_sym, 0, 10, 1))
print(compute_paths(adjacency_sym, 0, 10, 2))
print(compute_paths(adjacency_sym, 0, 10, 3))
print(compute_paths(adjacency_sym, 23, 67, 2))
print(compute_paths(adjacency_sym, 15, 93, 4))
```
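The identity behind `compute_paths` is that the (i, j) entry of $A^k$ counts the walks of length k from node i to node j. A quick sanity check on a triangle graph (a self-contained aside):

```python
import numpy as np

# Triangle graph: every pair of the 3 nodes is connected.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

A2 = np.linalg.matrix_power(A, 2)
A3 = np.linalg.matrix_power(A, 3)

# Two walks of length 2 from a node back to itself (via either neighbour),
# and three walks of length 3 between two distinct nodes.
print(A2[0, 0], A3[0, 1])  # -> 2 3
```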
### Question 12
How many paths of length 3 are there in your graph? Hint: calling the `compute_paths` function on every pair of node is not an efficient way to do it.
```
# We could have used np.linalg.matrix_power(a, 3), but for performance reasons we preferred to make
# the multiplication explicitly.
a = adjacency_sym.copy()
a[a != 0] = 1
a = a @ a @ a
print("Number of path of length 3: {}".format(a.sum()))
```
### Question 13
Write a function that takes as input the adjacency matrix of your graph (or of the largest connected component of your graph) and a node and returns the clustering coefficient of that node.
```
def compute_clustering_coefficient(adjacency, node):
    """Compute the clustering coefficient of a node.
    Parameters
    ----------
    adjacency: numpy array
        The (weighted) adjacency matrix of a graph.
    node: int
        The node whose clustering coefficient will be computed. A number between 0 and n_nodes-1.
    Returns
    -------
    float
        The clustering coefficient of the node. A number between 0 and 1.
    """
    adjacency[adjacency > 1] = 1
    neighbors = adjacency[node]
    indices = np.argwhere(neighbors > 0).flatten()
    if len(indices) < 2:
        return 0
    # Take the neighbors of the neighbors, and find which ones are linked together
    neiofnei = adjacency[indices]
    test = neiofnei * neighbors
    count = sum(test.flatten())
    # Compute the clustering coefficient for the node. Since each link is counted twice,
    # we don't multiply the formula by 2.
    clustering_coefficient = count / (len(indices) * (len(indices) - 1))
    return clustering_coefficient
```
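To convince ourselves the formula behaves sensibly, we can check it on two extreme cases (a self-contained sketch mirroring the function above): in a triangle every neighbour pair is linked (coefficient 1), while a star hub's neighbours share no edges (coefficient 0).

```python
import numpy as np

def clustering(A, node):
    """Fraction of the node's neighbour pairs that are themselves connected."""
    A = (A > 0).astype(int)
    nbrs = np.flatnonzero(A[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    # Each edge among the neighbours appears twice in the symmetric submatrix.
    links = A[np.ix_(nbrs, nbrs)].sum() / 2
    return 2 * links / (k * (k - 1))

triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
star = np.array([[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]])
```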
### Question 14
What is the average clustering coefficient of your graph (or of the largest connected component of your graph if your graph is disconnected)? Use the function ``compute_clustering_coefficient`` to determine your answer.
```
adjacency_component = adjacency_sym[components[0]]
adjacency_component = adjacency_component.T[components[0]].T
N = len(components[0])
count = 0
for i in range(N):
    count += compute_clustering_coefficient(adjacency_component, i)
avg_coeff = count/N
print("The average coefficient of our largest connected component is : {}".format(str(avg_coeff)))
```
# Start-to-Finish Example: Evolving Maxwell's Equations with Toroidal Dipole Field Initial Data, in Flat Spacetime and Curvilinear Coordinates
## Following the work of [Knapp, Walker & Baumgarte (2002)](https://arxiv.org/abs/gr-qc/0201051), we numerically implement the second version of Maxwell's equations - System II (BSSN-like) - in curvilinear coordinates.
## Author: Terrence Pierre Jacques and Zachariah Etienne
### Formatting improvements courtesy Brandon Clark
**Notebook Status:** <font color = green><b> Validated </b></font>
**Validation Notes:** This module has been validated to exhibit convergence to the exact solution for the electric field $E^i$ and vector potential $A^i$ at the expected order, *after a short numerical evolution of the initial data* (see [plots at bottom](#convergence)).
### NRPy+ Source Code for this module:
* [Maxwell/InitialData.py](../edit/Maxwell/InitialData.py); [\[**tutorial**\]](Tutorial-VacuumMaxwell_InitialData.ipynb): Purely toroidal dipole field initial data; sets all electromagnetic variables in the Cartesian basis
* [Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear.py](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Cartesian.py); [\[**tutorial**\]](Tutorial-Tutorial-VacuumMaxwell_Curvilinear_RHS-Rescaling.ipynb): Generates the right-hand sides for Maxwell's equations in curvilinear coordinates
## Introduction:
Here we use NRPy+ to generate the C source code necessary to set up initial data for a purely toroidal dipole field, as defined in [Knapp, Walker & Baumgarte (2002)](https://arxiv.org/abs/gr-qc/0201051). We then evolve the RHSs of Maxwell's equations using the [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html) time integration based on an [explicit Runge-Kutta fourth-order scheme](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) (RK4 is chosen below, but multiple options exist).
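The RK4 scheme referenced above can be sketched in a few lines of Python (a generic illustration of one Method-of-Lines time step, not the NRPy+-generated C code):

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """Advance dy/dt = f(t, y) by one explicit fourth-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: dy/dt = -y, whose exact solution is y(t) = exp(-t).
y, t, dt = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk4_step(lambda t, y: -y, t, y, dt)
    t += dt
```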
The entire algorithm is outlined as follows, with links to the relevant NRPy+ tutorial notebooks listed at each step:
1. Allocate memory for gridfunctions, including temporary storage for the Method of Lines time integration
* [**NRPy+ tutorial on Method of Lines algorithm**](Tutorial-Method_of_Lines-C_Code_Generation.ipynb).
1. Set gridfunction values to initial data
* [**NRPy+ tutorial on purely toroidal dipole field initial data**](Tutorial-VacuumMaxwell_InitialData.ipynb)
1. Next, integrate the initial data forward in time using the Method of Lines coupled to a Runge-Kutta explicit timestepping algorithm:
1. At the start of each iteration in time, output the divergence constraint violation
* [**NRPy+ tutorial on Maxwell's equations constraints**](Tutorial-VacuumMaxwell_Curvilinear_RHSs.ipynb)
1. At each RK time substep, do the following:
1. Evaluate RHS expressions
* [**NRPy+ tutorial on Maxwell's equations right-hand sides**](Tutorial-VacuumMaxwell_Curvilinear_RHSs.ipynb)
1. Apply Sommerfeld boundary conditions in curvilinear coordinates
* [**NRPy+ tutorial on setting up Sommerfeld boundary conditions**](Tutorial-SommerfeldBoundaryCondition.ipynb)
1. Repeat above steps at two numerical resolutions to confirm convergence to zero.
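The Method-of-Lines core of the algorithm above can be sketched in a few lines of NumPy. This is an illustrative toy (1D periodic advection with fourth-order centered finite differences and the classical RK4 scheme), not the generated C code:

```python
import numpy as np

def rk4_step(f, rhs, dt):
    """Advance state f by one classical RK4 step of df/dt = rhs(f)."""
    k1 = rhs(f)
    k2 = rhs(f + 0.5*dt*k1)
    k3 = rhs(f + 0.5*dt*k2)
    k4 = rhs(f + dt*k3)
    return f + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

# Toy RHS: periodic 1D advection u_t = -u_x, 4th-order centered differences
N = 200
dx = 2*np.pi/N
x = np.arange(N)*dx
def rhs(u):
    up1, up2 = np.roll(u, -1), np.roll(u, -2)
    um1, um2 = np.roll(u,  1), np.roll(u,  2)
    return -(-up2 + 8*up1 - 8*um1 + um2)/(12*dx)

u = np.sin(x)
dt = 0.5*dx                  # CFL-limited timestep
n_steps = int(1.0/dt)
for n in range(n_steps):     # main loop to progress forward in time
    u = rk4_step(u, rhs, dt)
# u now approximates the exact advected solution sin(x - n_steps*dt)
```

The generated C code below follows the same pattern, with the RHS evaluation and boundary-condition application supplied as C functions to the MoL timestepping module.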
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows:
1. [Step 1](#initializenrpy): Set core NRPy+ parameters for numerical grids and reference metric
    1. [Step 1.a](#cfl): Output needed C code for finding the minimum proper distance between grid points, needed for [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673)-limited timestep
1. [Step 2](#mw): Generate symbolic expressions and output C code for evolving Maxwell's equations
1. [Step 2.a](#mwid): Generate symbolic expressions for toroidal dipole field initial data
1. [Step 2.b](#mwevol): Generate symbolic expressions for evolution equations
1. [Step 2.c](#mwcon): Generate symbolic expressions for constraint equations
1. [Step 2.d](#mwcart): Generate symbolic expressions for converting $A^i$ and $E^i$ to the Cartesian basis
1. [Step 2.e](#mwccode): Output C codes for initial data and evolution equations
1. [Step 2.f](#mwccode_con): Output C code for constraint equations
1. [Step 2.g](#mwccode_cart): Output C code for converting $A^i$ and $E^i$ to Cartesian coordinates
1. [Step 2.h](#mwccode_xzloop): Output C code for printing 2D data
1. [Step 2.i](#cparams_rfm_and_domainsize): Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h`
1. [Step 3](#bc_functs): Set up Sommerfeld boundary condition functions
1. [Step 4](#mainc): `Maxwell_Playground.c`: The Main C Code
1. [Step 5](#compileexec): Compile generated C codes & perform simulation of the propagating toroidal electromagnetic field
1. [Step 6](#visualize): Visualize the output!
1. [Step 6.a](#installdownload): Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded
1. [Step 6.b](#genimages): Generate images for visualization animation
1. [Step 6.c](#genvideo): Generate visualization animation
1. [Step 7](#convergence): Plot the numerical error, and confirm that it converges to zero with increasing numerical resolution (sampling)
1. [Step 8](#div_e): Comparison of Divergence Constraint Violation
1. [Step 9](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='initializenrpy'></a>
# Step 1: Set core NRPy+ parameters for numerical grids and reference metric \[Back to [top](#toc)\]
$$\label{initializenrpy}$$
```
# Step P1: Import needed NRPy+ core modules:
from outputC import outputC,lhrh,outCfunction # NRPy+: Core C code output module
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import reference_metric as rfm # NRPy+: Reference metric support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions, benchmarking
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("MaxwellEvolCurvi_Playground_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Then create a fresh directory
cmd.mkdir(Ccodesdir)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(outdir)
# Step 1: Set the spatial dimension parameter
# to three (BSSN is a 3+1 decomposition
# of Einstein's equations), and then read
# the parameter as DIM.
par.set_parval_from_str("grid::DIM",3)
DIM = par.parval_from_str("grid::DIM")
# Step 2: Set some core parameters, including CoordSystem, boundary condition,
#         MoL timestepping algorithm, FD order, floating point precision, and CFL factor:
# Choices are: Spherical, SinhSpherical, SinhSphericalv2, Cylindrical, SinhCylindrical,
# SymTP, SinhSymTP
CoordSystem = "SinhCylindrical"
# Step 2.a: Set boundary conditions
# Current choices are QuadraticExtrapolation (quadratic polynomial extrapolation) or Sommerfeld
BoundaryCondition = "Sommerfeld"
# Step 2.b: Set defaults for Coordinate system parameters.
# These are perhaps the most commonly adjusted parameters,
# so we enable modifications at this high level.
# domain_size sets the default value for:
# * Spherical's params.RMAX
# * SinhSpherical*'s params.AMAX
#  * Cartesian*'s -params.{x,y,z}min & .{x,y,z}max
# * Cylindrical's -params.ZMIN & .{Z,RHO}MAX
# * SinhCylindrical's params.AMPL{RHO,Z}
# * *SymTP's params.AMAX
domain_size = 10.0 # Needed for all coordinate systems.
# sinh_width sets the default value for:
# * SinhSpherical's params.SINHW
# * SinhCylindrical's params.SINHW{RHO,Z}
# * SinhSymTP's params.SINHWAA
sinh_width = 0.4 # If Sinh* coordinates chosen
# sinhv2_const_dr sets the default value for:
# * SinhSphericalv2's params.const_dr
# * SinhCylindricalv2's params.const_d{rho,z}
sinhv2_const_dr = 0.05 # If Sinh*v2 coordinates chosen
# SymTP_bScale sets the default value for:
# * SinhSymTP's params.bScale
SymTP_bScale = 0.5 # If SymTP chosen
# Step 2.c: Set the order of spatial and temporal derivatives;
# the core data type, and the CFL factor.
# RK_method choices include: Euler, "RK2 Heun", "RK2 MP", "RK2 Ralston", RK3, "RK3 Heun", "RK3 Ralston",
# SSPRK3, RK4, DP5, DP5alt, CK5, DP6, L6, DP8
RK_method = "RK4"
FD_order = 4 # Finite difference order: even numbers only, starting with 2. 12 is generally unstable
REAL = "double" # Best to use double here.
default_CFL_FACTOR= 0.5 # (GETS OVERWRITTEN WHEN EXECUTED.) In pure axisymmetry (symmetry_axes = 2 below) 1.0 works fine. Otherwise 0.5 or lower.
# Step 3: Generate Runge-Kutta-based (RK-based) timestepping code.
# 3.A: Evaluate RHSs (RHS_string)
# 3.B: Apply boundary conditions (RHS_string)
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
RHS_string = "rhs_eval(&rfmstruct, &params, RK_INPUT_GFS, RK_OUTPUT_GFS);"
if BoundaryCondition == "QuadraticExtrapolation":
    # Extrapolation BCs are applied to the evolved gridfunctions themselves after the MoL update
    post_RHS_string = "apply_bcs_curvilinear(&params, &bcstruct, NUM_EVOL_GFS, evol_gf_parity, RK_OUTPUT_GFS);"
elif BoundaryCondition == "Sommerfeld":
    # Sommerfeld BCs are applied to the gridfunction RHSs directly
    RHS_string += "\n    apply_bcs_sommerfeld(&params, xx, &bcstruct, NUM_EVOL_GFS, evol_gf_parity, RK_INPUT_GFS, RK_OUTPUT_GFS);"
    post_RHS_string = ""
else:
print("Invalid choice of boundary condition")
sys.exit(1)
MoL.MoL_C_Code_Generation(RK_method, RHS_string = RHS_string, post_RHS_string = post_RHS_string,
outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
# Step 4: Set the coordinate system for the numerical grid
par.set_parval_from_str("reference_metric::CoordSystem",CoordSystem)
rfm.reference_metric()
# Step 5: Set the finite differencing order to 4.
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER",FD_order)
# Step 6: Copy SIMD/SIMD_intrinsics.h to $Ccodesdir/SIMD/SIMD_intrinsics.h
cmd.mkdir(os.path.join(Ccodesdir,"SIMD"))
shutil.copy(os.path.join("SIMD/")+"SIMD_intrinsics.h",os.path.join(Ccodesdir,"SIMD/"))
# par.set_parval_from_str("indexedexp::symmetry_axes","2")
```
<a id='cfl'></a>
## Step 1.a: Output needed C code for finding the minimum proper distance between grid points, needed for [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673)-limited timestep \[Back to [top](#toc)\]
$$\label{cfl}$$
In order for our explicit-timestepping numerical solution to Maxwell's equations to be stable, it must satisfy the [CFL](https://en.wikipedia.org/w/index.php?title=Courant%E2%80%93Friedrichs%E2%80%93Lewy_condition&oldid=806430673) condition:
$$
\Delta t \le \frac{\min(ds_i)}{c},
$$
where $c$ is the wavespeed, and
$$ds_i = h_i \Delta x^i$$
is the proper distance between neighboring gridpoints in the $i$th direction (in 3D, there are 3 directions), $h_i$ is the $i$th reference metric scale factor, and $\Delta x^i$ is the uniform grid spacing in the $i$th direction:
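The actual `find_timestep()` is generated C code, but the computation it performs reduces to a minimum over all grid points and directions. A toy Python version (the cylindrical scale factors below are an illustrative assumption, not output of the reference-metric module):

```python
import numpy as np

def find_timestep_toy(dxx, scale_factors, wavespeed=1.0, cfl_factor=0.5):
    """Toy analogue of find_timestep(): dt = CFL_FACTOR * min(h_i dx^i) / c,
    minimized over all grid points and all three directions.
    scale_factors[i] holds the scale factor h_i sampled at the grid points."""
    ds_min = min(np.min(h)*dx for h, dx in zip(scale_factors, dxx))
    return cfl_factor * ds_min / wavespeed

# Example: cylindrical coordinates (rho, phi, z) with scale factors (1, rho, 1).
rho = np.linspace(0.05, 10.0, 100)   # innermost cell center; ds_phi -> 0 as rho -> 0
dxx = (0.1, 2*np.pi/64, 0.1)         # (drho, dphi, dz)
scale = (np.ones_like(rho), rho, np.ones_like(rho))
dt = find_timestep_toy(dxx, scale)   # limited by rho_min * dphi here
```

Note how the angular direction dominates near the axis: the proper distance $\rho\,\Delta\phi$ shrinks with $\rho$, which is why sinh-type radial coordinates and the CFL factor both matter for efficiency.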
```
# Output the find_timestep() function to a C file.
rfm.out_timestep_func_to_file(os.path.join(Ccodesdir,"find_timestep.h"))
```
<a id='mw'></a>
# Step 2: Generate symbolic expressions and output C code for evolving Maxwell's equations \[Back to [top](#toc)\]
$$\label{mw}$$
Here we read in the symbolic expressions from the NRPy+ [InitialData](../edit/Maxwell/InitialData.py) and [Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled.py) modules to define the initial data, evolution equations, and constraint equations.
<a id='mwid'></a>
## Step 2.a: Generate symbolic expressions for toroidal dipole field initial data \[Back to [top](#toc)\]
$$\label{mwid}$$
Here we use the NRPy+ [InitialData](../edit/Maxwell/InitialData.py) module, described in [this tutorial](Tutorial-VacuumMaxwell_InitialData.ipynb), to write initial data to the grid functions for both systems.
We define the rescaled quantities $a^i$ and $e^i$ in terms of $A^i$ and $E^i$ in curvilinear coordinates within the NRPy+ [InitialData](../edit/Maxwell/InitialData.py) module (see [this tutorial](Tutorial-VacuumMaxwell_formulation_Curvilinear.ipynb) for more detail):
\begin{align}
a^i &= \frac{A^i}{\text{ReU}[i]},\\ \\
e^i &= \frac{E^i}{\text{ReU}[i]}.
\end{align}
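In other words, rescaling is a componentwise division by the reference-metric rescaling factors. A minimal SymPy sketch, using hypothetical cylindrical-like factors $(1, \rho, 1)$ rather than the actual `rfm.ReU`:

```python
import sympy as sp

rho, z = sp.symbols('rho z', positive=True)
ReU = [sp.Integer(1), rho, sp.Integer(1)]      # hypothetical cylindrical-like factors
AU  = [sp.Symbol(f'AU{i}') for i in range(3)]  # unrescaled components A^i
aU  = [AU[i]/ReU[i] for i in range(3)]         # rescaled components a^i = A^i/ReU[i]
```

The code cell below performs exactly this division, but with `mwid.AidU[i]` and the true `rfm.ReU[i]` for the chosen coordinate system.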
```
import Maxwell.InitialData as mwid
# Set which system to use; the available systems are defined in Maxwell/InitialData.py
par.set_parval_from_str("Maxwell.InitialData::System_to_use","System_II")
mwid.InitialData()
aidU = ixp.zerorank1()
eidU = ixp.zerorank1()
for i in range(DIM):
aidU[i] = mwid.AidU[i]/rfm.ReU[i]
eidU[i] = mwid.EidU[i]/rfm.ReU[i]
Maxwell_ID_SymbExpressions = [\
lhrh(lhs="*AU0_exact",rhs=aidU[0]),\
lhrh(lhs="*AU1_exact",rhs=aidU[1]),\
lhrh(lhs="*AU2_exact",rhs=aidU[2]),\
lhrh(lhs="*EU0_exact",rhs=eidU[0]),\
lhrh(lhs="*EU1_exact",rhs=eidU[1]),\
lhrh(lhs="*EU2_exact",rhs=eidU[2]),\
lhrh(lhs="*PSI_exact",rhs=mwid.psi_ID),\
lhrh(lhs="*GAMMA_exact",rhs=mwid.Gamma_ID)]
Maxwell_ID_CcodeKernel = fin.FD_outputC("returnstring", Maxwell_ID_SymbExpressions)
```
<a id='mwevol'></a>
## Step 2.b: Generate symbolic expressions for evolution equations \[Back to [top](#toc)\]
$$\label{mwevol}$$
Here we use the NRPy+ [Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled.py) module, described in [this tutorial](Tutorial-VacuumMaxwell_Curvilinear_RHSs.ipynb), to generate the evolution equations for the grid functions.
```
import Maxwell.VacuumMaxwell_Flat_Evol_Curvilinear_rescaled as rhs
# Set which system to use; the available systems are defined in Maxwell/InitialData.py
par.set_parval_from_str("Maxwell.InitialData::System_to_use","System_II")
cmd.mkdir(os.path.join(Ccodesdir,"rfm_files/"))
par.set_parval_from_str("reference_metric::enable_rfm_precompute","True")
par.set_parval_from_str("reference_metric::rfm_precompute_Ccode_outdir",os.path.join(Ccodesdir,"rfm_files/"))
rhs.VacuumMaxwellRHSs_rescaled()
Maxwell_RHSs_SymbExpressions = [\
lhrh(lhs=gri.gfaccess("rhs_gfs","aU0"),rhs=rhs.arhsU[0]),\
lhrh(lhs=gri.gfaccess("rhs_gfs","aU1"),rhs=rhs.arhsU[1]),\
lhrh(lhs=gri.gfaccess("rhs_gfs","aU2"),rhs=rhs.arhsU[2]),\
lhrh(lhs=gri.gfaccess("rhs_gfs","eU0"),rhs=rhs.erhsU[0]),\
lhrh(lhs=gri.gfaccess("rhs_gfs","eU1"),rhs=rhs.erhsU[1]),\
lhrh(lhs=gri.gfaccess("rhs_gfs","eU2"),rhs=rhs.erhsU[2]),\
lhrh(lhs=gri.gfaccess("rhs_gfs","psi"),rhs=rhs.psi_rhs),\
lhrh(lhs=gri.gfaccess("rhs_gfs","Gamma"),rhs=rhs.Gamma_rhs)]
par.set_parval_from_str("reference_metric::enable_rfm_precompute","False") # Reset to False to disable rfm_precompute.
rfm.ref_metric__hatted_quantities()
Maxwell_RHSs_string = fin.FD_outputC("returnstring",
Maxwell_RHSs_SymbExpressions,
params="SIMD_enable=True").replace("IDX4","IDX4S")
```
<a id='mwcon'></a>
## Step 2.c: Generate symbolic expressions for constraint equations \[Back to [top](#toc)\]
$$\label{mwcon}$$
We now use the NRPy+ [Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled.py) module, described in [this tutorial](Tutorial-VacuumMaxwell_Curvilinear_RHSs.ipynb), to generate the constraint equations.
```
C = gri.register_gridfunctions("AUX", "C")
G = gri.register_gridfunctions("AUX", "G")
Constraints_string = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("aux_gfs", "C"), rhs=rhs.C),
lhrh(lhs=gri.gfaccess("aux_gfs", "G"), rhs=rhs.G)],
params="outCverbose=False").replace("IDX4","IDX4S")
```
<a id='mwcart'></a>
## Step 2.d: Generate symbolic expressions for converting $A^i$ and $E^i$ to the Cartesian basis \[Back to [top](#toc)\]
$$\label{mwcart}$$
We now use the NRPy+ [Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled](../edit/Maxwell/VacuumMaxwell_Flat_Evol_Curvilinear_rescaled.py) module, described in [this tutorial](Tutorial-VacuumMaxwell_Curvilinear_RHSs.ipynb), to generate the conversion of $A^i$ and $E^i$ to the Cartesian basis, which makes our convergence tests slightly easier.
```
AUCart = ixp.register_gridfunctions_for_single_rank1("AUX", "AUCart")
EUCart = ixp.register_gridfunctions_for_single_rank1("AUX", "EUCart")
Cartesian_Vectors_string = fin.FD_outputC("returnstring",
[lhrh(lhs=gri.gfaccess("aux_gfs", "AUCart0"), rhs=rhs.AU_Cart[0]),
lhrh(lhs=gri.gfaccess("aux_gfs", "AUCart1"), rhs=rhs.AU_Cart[1]),
lhrh(lhs=gri.gfaccess("aux_gfs", "AUCart2"), rhs=rhs.AU_Cart[2]),
lhrh(lhs=gri.gfaccess("aux_gfs", "EUCart0"), rhs=rhs.EU_Cart[0]),
lhrh(lhs=gri.gfaccess("aux_gfs", "EUCart1"), rhs=rhs.EU_Cart[1]),
lhrh(lhs=gri.gfaccess("aux_gfs", "EUCart2"), rhs=rhs.EU_Cart[2])],
params="outCverbose=False").replace("IDX4","IDX4S")
```
<a id='mwccode'></a>
## Step 2.e: Output C codes for initial data and evolution equations \[Back to [top](#toc)\]
$$\label{mwccode}$$
Next we write the C codes for the initial data and evolution equations to files, to be used later by our main C code.
```
# Step 11: Generate all needed C functions
Part_P1_body = Maxwell_ID_CcodeKernel
desc="Part P1: Declare the function for the exact solution at a single point. time==0 corresponds to the initial data."
name="exact_solution_single_point"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params ="const REAL xx0,const REAL xx1,const REAL xx2,const paramstruct *restrict params,\
REAL *EU0_exact, \
REAL *EU1_exact, \
REAL *EU2_exact, \
REAL *AU0_exact, \
REAL *AU1_exact, \
REAL *AU2_exact, \
REAL *PSI_exact,\
REAL *GAMMA_exact",
body = Part_P1_body,
loopopts = "")
desc="Part P2: Declare the function for the exact solution at all points. time==0 corresponds to the initial data."
name="exact_solution_all_points"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params ="const paramstruct *restrict params,REAL *restrict xx[3], REAL *restrict in_gfs",
body ="""
REAL xx0 = xx[0][i0]; REAL xx1 = xx[1][i1]; REAL xx2 = xx[2][i2];
exact_solution_single_point(xx0,xx1,xx2,params,&in_gfs[IDX4S(EU0GF,i0,i1,i2)],
&in_gfs[IDX4S(EU1GF,i0,i1,i2)],
&in_gfs[IDX4S(EU2GF,i0,i1,i2)],
&in_gfs[IDX4S(AU0GF,i0,i1,i2)],
&in_gfs[IDX4S(AU1GF,i0,i1,i2)],
&in_gfs[IDX4S(AU2GF,i0,i1,i2)],
&in_gfs[IDX4S(PSIGF,i0,i1,i2)],
&in_gfs[IDX4S(GAMMAGF,i0,i1,i2)]);""",
loopopts = "AllPoints")
Part_P3_body = Maxwell_RHSs_string
desc="Part P3: Declare the function to evaluate the RHSs of Maxwell's equations"
name="rhs_eval"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params ="""rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
const REAL *restrict in_gfs, REAL *restrict rhs_gfs""",
body =Part_P3_body,
loopopts = "InteriorPoints,EnableSIMD,Enable_rfm_precompute")
```
<a id='mwccode_con'></a>
## Step 2.f: Output C code for constraint equations \[Back to [top](#toc)\]
$$\label{mwccode_con}$$
Finally, we output the C code for evaluating the divergence constraint, described in [this tutorial](Tutorial-VacuumMaxwell_Curvilinear_RHSs.ipynb). In the absence of numerical error, this constraint should evaluate to zero, but due to numerical (typically truncation and roundoff) error it does not. We will therefore measure the divergence constraint violation to gauge the accuracy of our simulation, and ultimately determine whether errors are dominated by numerical finite differencing (truncation) error as expected. Specifically, we take the L2 norm of the constraint violation, via
\begin{align}
\lVert C \rVert^2 &= \frac{\int C^2 d\mathcal{V}}{\int d\mathcal{V}}.
\end{align}
Numerically approximating this integral, in spherical coordinates for example, then gives us
\begin{align}
\lVert C \rVert^2 &= \frac{\sum C^2 r^2 \sin (\theta) \, dr \, d\theta \, d\phi}{\sum r^2 \sin (\theta) \, dr \, d\theta \, d\phi}, \\ \\
&= \frac{\sum C^2 r^2 \sin (\theta)}{\sum r^2 \sin (\theta) } , \\ \\
&= \frac{\sum C^2 \ \sqrt{\text{det} \ \hat{\gamma}}}{\sum \sqrt{\text{det} \ \hat{\gamma}}},
\end{align}
where $\hat{\gamma}$ is the reference metric. Thus, along with the C code to calculate the constraints, we also print out the code required to evaluate $\sqrt{\text{det} \ \hat{\gamma}}$ at any given point.
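Numerically, the final expression is just a weighted average over interior grid points. A minimal NumPy sketch (with a made-up constant constraint array; in spherical coordinates $\sqrt{\det\hat{\gamma}} = r^2\sin\theta$):

```python
import numpy as np

def constraint_L2_norm(C, sqrt_detgammahat):
    """||C|| = sqrt( sum(C^2 sqrt(det gammahat)) / sum(sqrt(det gammahat)) )."""
    w = sqrt_detgammahat
    return np.sqrt(np.sum(C**2 * w) / np.sum(w))

# Example on a small spherical grid, where sqrt(det gammahat) = r^2 sin(theta):
r, th = np.meshgrid(np.linspace(0.1, 1.0, 16),
                    np.linspace(0.1, np.pi - 0.1, 16), indexing='ij')
w = r**2 * np.sin(th)
C = np.full_like(r, 1e-8)        # uniform constraint violation
norm = constraint_L2_norm(C, w)  # a constant field returns its own magnitude
```

In the C code this weighted sum is accumulated point by point, with `diagnostic_integrand()` supplying $\sqrt{\det\hat{\gamma}}$ at each grid point.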
```
# Set up the C function for calculating the constraints
Part_P4_body = Constraints_string
desc="Evaluate the constraints"
name="Constraints"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params,
REAL *restrict in_gfs, REAL *restrict aux_gfs""",
body = Part_P4_body,
loopopts = "InteriorPoints,Enable_rfm_precompute")
# integrand to be used to calculate the L2 norm of the constraint
diagnostic_integrand_body = outputC(rfm.detgammahat,"*detg",filename='returnstring',
params="includebraces=False")
desc="Evaluate the volume element at a specific point"
name="diagnostic_integrand"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params ="const REAL xx0,const REAL xx1,const REAL xx2,const paramstruct *restrict params, REAL *restrict detg",
body = diagnostic_integrand_body,
loopopts = "")
```
<a id='mwccode_cart'></a>
## Step 2.g: Output C code for converting $A^i$ and $E^i$ to Cartesian coordinates \[Back to [top](#toc)\]
$$\label{mwccode_cart}$$
Here we write the C code for the coordinate transformation to Cartesian coordinates.
```
desc="Convert EU and AU to Cartesian basis"
name="Cartesian_basis"
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params = """rfm_struct *restrict rfmstruct,const paramstruct *restrict params, REAL *restrict xx[3],
REAL *restrict in_gfs, REAL *restrict aux_gfs""",
body = "REAL xx0 = xx[0][i0]; REAL xx1 = xx[1][i1]; REAL xx2 = xx[2][i2];\n"+Cartesian_Vectors_string,
loopopts = "AllPoints, Enable_rfm_precompute")
```
<a id='mwccode_xzloop'></a>
## Step 2.h: Output C code for printing 2D data \[Back to [top](#toc)\]
$$\label{mwccode_xzloop}$$
Here we output the necessary C code to print out a 2D slice of the data in the xz-plane, using the `xz_loop` macro.
```
def xz_loop(CoordSystem):
ret = """// xz-plane output for """ + CoordSystem + r""" coordinates:
#define LOOP_XZ_PLANE(ii, jj, kk) \
"""
if "Spherical" in CoordSystem or "SymTP" in CoordSystem:
ret += r"""for (int i2 = 0; i2 < Nxx_plus_2NGHOSTS2; i2++) \
if(i2 == NGHOSTS || i2 == Nxx_plus_2NGHOSTS2/2) \
for (int i1 = 0; i1 < Nxx_plus_2NGHOSTS1; i1++) \
for (int i0 = 0; i0 < Nxx_plus_2NGHOSTS0; i0++)
"""
elif "Cylindrical" in CoordSystem:
ret += r"""for (int i2 = 0; i2 < Nxx_plus_2NGHOSTS2; i2++) \
for (int i1 = 0; i1 < Nxx_plus_2NGHOSTS1; i1++) \
if(i1 == NGHOSTS || i1 == Nxx_plus_2NGHOSTS1/2) \
for (int i0 = 0; i0 < Nxx_plus_2NGHOSTS0; i0++)
"""
elif "Cartesian" in CoordSystem:
ret += r"""for (int i2 = 0; i2 < Nxx_plus_2NGHOSTS2; i2++) \
for (int i1 = Nxx_plus_2NGHOSTS1/2; i1 < Nxx_plus_2NGHOSTS1/2+1; i1++) \
for (int i0 = 0; i0 < Nxx_plus_2NGHOSTS0; i0++)
"""
return ret
with open(os.path.join(Ccodesdir,"xz_loop.h"),"w") as file:
file.write(xz_loop(CoordSystem))
```
<a id='cparams_rfm_and_domainsize'></a>
## Step 2.i: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](#toc)\]
$$\label{cparams_rfm_and_domainsize}$$
Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.
Then we output `free_parameters.h`, which sets initial data parameters, as well as grid domain & reference metric parameters, applying `domain_size` and `sinh_width`/`SymTP_bScale` (if applicable) as set above.
```
# Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Set free_parameters.h
with open(os.path.join(Ccodesdir,"free_parameters.h"),"w") as file:
file.write("""
// Set free-parameter values.
params.time = 0.0; // Initial simulation time; corresponds to the exact solution at time=0.
params.amp = 1.0;
params.lam = 1.0;
params.wavespeed = 1.0;\n""")
# Append to $Ccodesdir/free_parameters.h reference metric parameters based on generic
# domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale,
# parameters set above.
rfm.out_default_free_parameters_for_rfm(os.path.join(Ccodesdir,"free_parameters.h"),
domain_size,sinh_width,sinhv2_const_dr,SymTP_bScale)
# Generate set_Nxx_dxx_invdx_params__and__xx.h:
rfm.set_Nxx_dxx_invdx_params__and__xx_h(Ccodesdir, grid_centering="cell")
# Generate xx_to_Cart.h, which contains xx_to_Cart() for
# (the mapping from xx->Cartesian) for the chosen CoordSystem:
rfm.xx_to_Cart_h("xx_to_Cart","./set_Cparameters.h",os.path.join(Ccodesdir,"xx_to_Cart.h"))
# Step 3.d.v: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
```
<a id='bc_functs'></a>
# Step 3: Set up Sommerfeld boundary condition functions \[Back to [top](#toc)\]
$$\label{bc_functs}$$
Next we output the C code necessary to implement the Sommerfeld boundary condition in curvilinear coordinates, [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-SommerfeldBoundaryCondition.ipynb).
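The idea behind a Sommerfeld (radiation) boundary condition is to overwrite the RHS at the outer boundary with an outgoing-wave condition, e.g. $\partial_t f = -c\,\partial_r f - c\,(f - f_\infty)/r$, so waves leave the grid with minimal reflection. A 1D toy sketch, assuming the simplest radial falloff (the NRPy+ implementation is curvilinear and includes higher-order falloff corrections; see the linked tutorial):

```python
import numpy as np

c, f_inf = 1.0, 0.0                  # wavespeed and asymptotic value
r = np.linspace(1.0, 10.0, 91)       # toy 1D radial grid
dr = r[1] - r[0]

def apply_sommerfeld_toy(f, rhs):
    """Overwrite the RHS at the outermost point with the outgoing-wave condition
    partial_t f = -c partial_r f - c (f - f_inf)/r."""
    rhs = rhs.copy()
    # one-sided 2nd-order derivative at the outer boundary
    dfdr = (3.0*f[-1] - 4.0*f[-2] + f[-3])/(2.0*dr)
    rhs[-1] = -c*dfdr - c*(f[-1] - f_inf)/r[-1]
    return rhs

f = np.exp(-(r - 5.0)**2)            # Gaussian pulse profile
rhs = np.zeros_like(f)               # interior RHS would come from rhs_eval()
rhs = apply_sommerfeld_toy(f, rhs)
```

This mirrors how `apply_bcs_sommerfeld()` is appended to `RHS_string` in Step 1: the boundary condition acts on the gridfunction RHSs, not on the evolved fields themselves.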
```
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(os.path.join(Ccodesdir,"boundary_conditions/"),
Cparamspath=os.path.join("../"),
BoundaryCondition=BoundaryCondition)
if BoundaryCondition == "Sommerfeld":
bcs = cbcs.sommerfeld_boundary_condition_class(fd_order=4,
vars_radial_falloff_power_default=3,
vars_speed_default=1.,
vars_at_inf_default=0.)
bcs.write_sommerfeld_file(Ccodesdir)
```
<a id='mainc'></a>
# Step 4: `Maxwell_Playground.c`: The Main C Code \[Back to [top](#toc)\]
$$\label{mainc}$$
```
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER),
# and set the CFL_FACTOR (which can be overwritten at the command line)
with open(os.path.join(Ccodesdir,"Maxwell_Playground_REAL__NGHOSTS__CFL_FACTOR.h"), "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(int(FD_order/2)+1)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set the CFL Factor. Can be overwritten at command line.
REAL CFL_FACTOR = """+str(default_CFL_FACTOR)+";")
%%writefile $Ccodesdir/Maxwell_Playground.c
// Step P0: Define REAL and NGHOSTS; and declare CFL_FACTOR. This header is generated in NRPy+.
#include "Maxwell_Playground_REAL__NGHOSTS__CFL_FACTOR.h"
#include "rfm_files/rfm_struct__declare.h"
#include "declare_Cparameters_struct.h"
// All SIMD intrinsics used in SIMD-enabled C code loops are defined here:
#include "SIMD/SIMD_intrinsics.h"
// Step P1: Import needed header files
#include "stdio.h"
#include "stdlib.h"
#include "math.h"
#include "time.h"
#include "stdint.h" // Needed for Windows GCC 6.x compatibility
#ifndef M_PI
#define M_PI 3.141592653589793238462643383279502884L
#endif
#ifndef M_SQRT1_2
#define M_SQRT1_2 0.707106781186547524400844362104849039L
#endif
// Step P2: Declare the IDX4S(gf,i,j,k) macro, which enables us to store 4-dimensions of
// data in a 1D array. In this case, consecutive values of "i"
// (all other indices held to a fixed value) are consecutive in memory, where
// consecutive values of "j" (fixing all other indices) are separated by
// Nxx_plus_2NGHOSTS0 elements in memory. Similarly, consecutive values of
// "k" are separated by Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1 in memory, etc.
#define IDX4S(g,i,j,k) \
( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) + Nxx_plus_2NGHOSTS2 * (g) ) ) )
#define IDX4ptS(g,idx) ( (idx) + (Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2) * (g) )
#define IDX3S(i,j,k) ( (i) + Nxx_plus_2NGHOSTS0 * ( (j) + Nxx_plus_2NGHOSTS1 * ( (k) ) ) )
#define LOOP_REGION(i0min,i0max, i1min,i1max, i2min,i2max) \
for(int i2=i2min;i2<i2max;i2++) for(int i1=i1min;i1<i1max;i1++) for(int i0=i0min;i0<i0max;i0++)
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
// Step P3: Set UUGF and VVGF macros, as well as xx_to_Cart()
#include "boundary_conditions/gridfunction_defines.h"
// Step P4: Defines set_Nxx_dxx_invdx_params__and__xx(const int EigenCoord, const int Nxx[3],
// paramstruct *restrict params, REAL *restrict xx[3]),
// which sets params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for
// the chosen Eigen-CoordSystem if EigenCoord==1, or
// CoordSystem if EigenCoord==0.
#include "set_Nxx_dxx_invdx_params__and__xx.h"
// Step P5: Include basic functions needed to impose curvilinear
// parity and boundary conditions.
#include "boundary_conditions/CurviBC_include_Cfunctions.h"
// Step P6: Find the CFL-constrained timestep
#include "find_timestep.h"
// Part P7: Declare the function for the exact solution at a single point. time==0 corresponds to the initial data.
#include "exact_solution_single_point.h"
// Part P8: Declare the function for the exact solution at all points. time==0 corresponds to the initial data.
#include "exact_solution_all_points.h"
// Step P9: Declare function for evaluating constraints (diagnostic)
#include "Constraints.h"
// Step P10: Declare rhs_eval function, which evaluates the RHSs of Maxwells equations
#include "rhs_eval.h"
// Step P11: Declare function to calculate det gamma_hat, used in calculating the L2 norm
#include "diagnostic_integrand.h"
// Step P12: Declare function to transform to the Cartesian basis
#include "Cartesian_basis.h"
// Step P13: Declare macro to print out data along the xz-plane
#include "xz_loop.h"
// Step P14: Define xx_to_Cart(const paramstruct *restrict params,
// REAL *restrict xx[3],
// const int i0,const int i1,const int i2,
// REAL xCart[3]),
// which maps xx->Cartesian via
// {xx[0][i0],xx[1][i1],xx[2][i2]}->{xCart[0],xCart[1],xCart[2]}
#include "xx_to_Cart.h"
// main() function:
// Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
// Step 1: Set up initial data to an exact solution
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
// Step 3.a: Output 2D data file periodically, for visualization
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
// Step 3.c: If t=t_final, output x and y components of evolution variables and the
// constraint violation to 2D data file
// Step 3.d: Progress indicator printing to stderr
// Step 4: Free all allocated memory
int main(int argc, const char *argv[]) {
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
  if((argc != 5 && argc != 6) || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < 2 /* FIXME; allow for axisymmetric sims */) {
    fprintf(stderr,"Error: Expected four command-line arguments: ./Maxwell_Playground Nx0 Nx1 Nx2 t_final,\n");
    fprintf(stderr,"where Nx[0,1,2] is the number of grid points in the 0, 1, and 2 directions,\n");
    fprintf(stderr,"and t_final is the final simulation time. An optional fifth argument overrides CFL_FACTOR.\n");
    fprintf(stderr,"Nx[] MUST BE larger than NGHOSTS (= %d)\n",NGHOSTS);
    exit(1);
  }
  if(argc == 6) {
    CFL_FACTOR = strtod(argv[5],NULL);
if(CFL_FACTOR > 0.5 && atoi(argv[3])!=2) {
fprintf(stderr,"WARNING: CFL_FACTOR was set to %e, which is > 0.5.\n",CFL_FACTOR);
fprintf(stderr," This will generally only be stable if the simulation is purely axisymmetric\n");
fprintf(stderr," However, Nx2 was set to %d>2, which implies a non-axisymmetric simulation\n",atoi(argv[3]));
}
}
// Step 0b: Set up numerical grid structure, first in space...
const int Nxx[3] = { atoi(argv[1]), atoi(argv[2]), atoi(argv[3]) };
if(Nxx[0]%2 != 0 || Nxx[1]%2 != 0 || Nxx[2]%2 != 0) {
fprintf(stderr,"Error: Cannot guarantee a proper cell-centered grid if number of grid cells not set to even number.\n");
fprintf(stderr," For example, in case of angular directions, proper symmetry zones will not exist.\n");
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
// Step 0d: Uniform coordinate grids are stored to *xx[3]
REAL *xx[3];
// Step 0d.i: Set bcstruct
bc_struct bcstruct;
{
int EigenCoord = 1;
// Step 0d.ii: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen Eigen-CoordSystem.
    set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx);
// Step 0d.iii: Set Nxx_plus_2NGHOSTS_tot
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0e: Find ghostzone mappings; set up bcstruct
#include "boundary_conditions/driver_bcstruct.h"
// Step 0e.i: Free allocated space for xx[][] array
for(int i=0;i<3;i++) free(xx[i]);
}
// Step 0f: Call set_Nxx_dxx_invdx_params__and__xx(), which sets
// params Nxx,Nxx_plus_2NGHOSTS,dxx,invdx, and xx[] for the
// chosen (non-Eigen) CoordSystem.
int EigenCoord = 0;
  set_Nxx_dxx_invdx_params__and__xx(EigenCoord, Nxx, &params, xx);
// Step 0g: Set all C parameters "blah" for params.blah, including
// Nxx_plus_2NGHOSTS0 = params.Nxx_plus_2NGHOSTS0, etc.
#include "set_Cparameters-nopointer.h"
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0h: Time coordinate parameters
const REAL t_final = strtod(argv[4],NULL);
// Step 0i: Set timestep based on smallest proper distance between gridpoints and CFL factor
  REAL dt = find_timestep(&params, xx);
//fprintf(stderr,"# Timestep set to = %e\n",(double)dt);
int N_final = (int)(t_final / dt + 0.5); // The number of points in time.
// Add 0.5 to account for C rounding down
// typecasts to integers.
int output_every_N = (int)((REAL)N_final/800.0);
if(output_every_N == 0) output_every_N = 1;
// Step 0j: Error out if the number of auxiliary gridfunctions outnumber evolved gridfunctions.
// This is a limitation of the RK method. You are always welcome to declare & allocate
// additional gridfunctions by hand.
if(NUM_AUX_GFS > NUM_EVOL_GFS) {
fprintf(stderr,"Error: NUM_AUX_GFS > NUM_EVOL_GFS. Either reduce the number of auxiliary gridfunctions,\n");
fprintf(stderr," or allocate (malloc) by hand storage for *diagnostic_output_gfs. \n");
exit(1);
}
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict y_0_gfs = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);
REAL *restrict diagnostic_output_gfs_0 = (REAL *)malloc(sizeof(REAL) * NUM_AUX_GFS * Nxx_plus_2NGHOSTS_tot);
// Step 0l: Set up precomputed reference metric arrays
// Step 0l.i: Allocate space for precomputed reference metric arrays.
#include "rfm_files/rfm_struct__malloc.h"
// Step 0l.ii: Define precomputed reference metric arrays.
{
#include "set_Cparameters-nopointer.h"
#include "rfm_files/rfm_struct__define.h"
}
// Initialize all gridfunction data to NaN, to help catch uninitialized points.
LOOP_ALL_GFS_GPS(i) {
  y_n_gfs[i] = 0.0/0.0; // 0.0/0.0 evaluates to NaN
}
// Step 1: Set up initial data to be exact solution at time=0:
params.time = 0.0;
exact_solution_all_points(&params, xx, y_n_gfs);
// Step 2: Start the timer, for keeping track of how fast the simulation is progressing.
#ifdef __linux__ // Use high-precision timer in Linux.
struct timespec start, end;
clock_gettime(CLOCK_REALTIME, &start);
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
// http://www.cplusplus.com/reference/ctime/time/
time_t start_timer,end_timer;
time(&start_timer); // Resolution of one second...
#endif
// Step 3: Integrate the initial data forward in time using the chosen RK-like Method of
// Lines timestepping algorithm, and output periodic simulation diagnostics
for(int n=0;n<=N_final+1;n++) { // Main loop to progress forward in time.
// At each timestep, set Constraints to NaN in the grid interior
LOOP_REGION(NGHOSTS,NGHOSTS+Nxx0,
NGHOSTS,NGHOSTS+Nxx1,
NGHOSTS,NGHOSTS+Nxx2) {
const int idx = IDX3S(i0,i1,i2);
diagnostic_output_gfs[IDX4ptS(CGF,idx)] = 0.0/0.0;
}
// Step 3.a: Evaluate the divergence constraint violation and output its L2 norm
params.time = ((REAL)n)*dt;
// Evaluate Divergence constraint violation
Constraints(&rfmstruct, &params, y_n_gfs, diagnostic_output_gfs);
// L2Norm_C = sqrt( Integral(C^2 dV) / Integral(dV) ): normalized L2 norm of the constraint violation C
REAL L2Norm_sum_C = 0.;
REAL sum = 0.; // volume-weight accumulator; REAL so that sqrt(detghat) is not truncated to an integer
LOOP_REGION(NGHOSTS,NGHOSTS+Nxx0,
NGHOSTS,NGHOSTS+Nxx1,
NGHOSTS,NGHOSTS+Nxx2) {
const int idx = IDX3S(i0,i1,i2);
double C = (double)diagnostic_output_gfs[IDX4ptS(CGF,idx)];
REAL xx0 = xx[0][i0];
REAL xx1 = xx[1][i1];
REAL xx2 = xx[2][i2];
REAL detghat; diagnostic_integrand(xx0, xx1, xx2, &params, &detghat);
L2Norm_sum_C += C*C*sqrt(detghat);
sum = sum + sqrt(detghat);
}
REAL L2Norm_C = sqrt(L2Norm_sum_C/(sum));
printf("%e %.15e\n",params.time, L2Norm_C);
// Step 3.a: Output 2D data file periodically, for visualization
if(n%20 == 0) {
exact_solution_all_points(&params, xx, y_0_gfs);
Cartesian_basis(&rfmstruct, &params, xx, y_n_gfs, diagnostic_output_gfs);
Cartesian_basis(&rfmstruct, &params, xx, y_0_gfs, diagnostic_output_gfs_0);
char filename[100];
sprintf(filename,"out%d-%08d.txt",Nxx[0],n);
FILE *out2D = fopen(filename, "w");
LOOP_XZ_PLANE(ii, jj, kk){
REAL xCart[3];
xx_to_Cart(&params,xx,i0,i1,i2,xCart);
int idx = IDX3S(i0,i1,i2);
REAL Ex_num = (double)diagnostic_output_gfs[IDX4ptS(EUCART0GF,idx)];
REAL Ey_num = (double)diagnostic_output_gfs[IDX4ptS(EUCART1GF,idx)];
REAL Ax_num = (double)diagnostic_output_gfs[IDX4ptS(AUCART0GF,idx)];
REAL Ay_num = (double)diagnostic_output_gfs[IDX4ptS(AUCART1GF,idx)];
double C = (double)diagnostic_output_gfs[IDX4ptS(CGF,idx)];
fprintf(out2D,"%e %e %e %.15e %.15e %.15e %.15e %.15e\n", params.time,
xCart[0],xCart[2], Ex_num, Ey_num, Ax_num, Ay_num, C);
}
fclose(out2D);
}
if(n==N_final-1) {
exact_solution_all_points(&params, xx, y_0_gfs);
Cartesian_basis(&rfmstruct, &params, xx, y_n_gfs, diagnostic_output_gfs);
Cartesian_basis(&rfmstruct, &params, xx, y_0_gfs, diagnostic_output_gfs_0);
char filename[100];
sprintf(filename,"out%d.txt",Nxx[0]);
FILE *out2D = fopen(filename, "w");
LOOP_XZ_PLANE(ii, jj, kk){
REAL xCart[3];
xx_to_Cart(&params,xx,i0,i1,i2,xCart);
int idx = IDX3S(i0,i1,i2);
REAL Ex_num = (double)diagnostic_output_gfs[IDX4ptS(EUCART0GF,idx)];
REAL Ey_num = (double)diagnostic_output_gfs[IDX4ptS(EUCART1GF,idx)];
REAL Ax_num = (double)diagnostic_output_gfs[IDX4ptS(AUCART0GF,idx)];
REAL Ay_num = (double)diagnostic_output_gfs[IDX4ptS(AUCART1GF,idx)];
REAL Ex_exact = (double)diagnostic_output_gfs_0[IDX4ptS(EUCART0GF,idx)];
REAL Ey_exact = (double)diagnostic_output_gfs_0[IDX4ptS(EUCART1GF,idx)];
REAL Ax_exact = (double)diagnostic_output_gfs_0[IDX4ptS(AUCART0GF,idx)];
REAL Ay_exact = (double)diagnostic_output_gfs_0[IDX4ptS(AUCART1GF,idx)];
REAL Ex__E_rel = log10(fabs(Ex_num - Ex_exact));
REAL Ey__E_rel = log10(fabs(Ey_num - Ey_exact));
REAL Ax__E_rel = log10(fabs(Ax_num - Ax_exact));
REAL Ay__E_rel = log10(fabs(Ay_num - Ay_exact));
fprintf(out2D,"%e %e %.15e %.15e %.15e %.15e\n",xCart[0],xCart[2],
Ex__E_rel, Ey__E_rel, Ax__E_rel, Ay__E_rel);
}
fclose(out2D);
}
// Step 3.c: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
#include "MoLtimestepping/RK_MoL.h"
// Step 3.d: Progress indicator printing to stderr
// Step 3.d.i: Measure average time per iteration
#ifdef __linux__ // Use high-precision timer in Linux.
clock_gettime(CLOCK_REALTIME, &end);
const long long unsigned int time_in_ns = 1000000000L * (end.tv_sec - start.tv_sec) + end.tv_nsec - start.tv_nsec;
#else // Resort to low-resolution, standards-compliant timer in non-Linux OSs
time(&end_timer); // Resolution of one second...
REAL time_in_ns = difftime(end_timer,start_timer)*1.0e9+0.5; // Round up to avoid divide-by-zero.
#endif
const REAL s_per_iteration_avg = ((REAL)time_in_ns / (REAL)n) / 1.0e9;
const int iterations_remaining = N_final - n;
const REAL time_remaining_in_mins = s_per_iteration_avg * (REAL)iterations_remaining / 60.0;
const REAL num_RHS_pt_evals = (REAL)(Nxx[0]*Nxx[1]*Nxx[2]) * 4.0 * (REAL)n; // 4 RHS evals per gridpoint for RK4
const REAL RHS_pt_evals_per_sec = num_RHS_pt_evals / ((REAL)time_in_ns / 1.0e9);
// Step 3.d.ii: Output simulation progress to stderr
if(n % 10 == 0) {
fprintf(stderr,"%c[2K", 27); // Clear the line
fprintf(stderr,"It: %d t=%.2f dt=%.2e | %.1f%%; ETA %.0f s | t/h %.2f | gp/s %.2e\r", // \r is carriage return, move cursor to the beginning of the line
n, n * (double)dt, (double)dt, (double)(100.0 * (REAL)n / (REAL)N_final),
(double)time_remaining_in_mins*60, (double)(dt * 3600.0 / s_per_iteration_avg), (double)RHS_pt_evals_per_sec);
fflush(stderr); // Flush the stderr buffer
} // End progress indicator if(n % 10 == 0)
} // End main loop to progress forward in time.
fprintf(stderr,"\n"); // Clear the final line of output from progress indicator.
// Step 4: Free all allocated memory
#include "rfm_files/rfm_struct__freemem.h"
#include "boundary_conditions/bcstruct_freemem.h"
#include "MoLtimestepping/RK_Free_Memory.h"
free(y_0_gfs);
free(diagnostic_output_gfs_0);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
}
```
<a id='compileexec'></a>
# Step 5: Compile generated C codes & perform simulation of the propagating toroidal electromagnetic field \[Back to [top](#toc)\]
$$\label{compileexec}$$
To enable cross-platform (Windows, macOS, and Linux) compilation and execution, we make use of `cmdline_helper` [(**Tutorial**)](Tutorial-cmdline_helper.ipynb).
```
import cmdline_helper as cmd
CFL_FACTOR=0.5
cmd.C_compile(os.path.join(Ccodesdir,"Maxwell_Playground.c"),
os.path.join(outdir,"Maxwell_Playground"),compile_mode="optimized")
# Change to output directory
os.chdir(outdir)
# Clean up existing output files
cmd.delete_existing_files("out*.txt")
cmd.delete_existing_files("out*.png")
cmd.delete_existing_files("out-*resolution.txt")
# Set time at which to end the simulation
t_final = str(0.5*domain_size) + ' '
# Run executables
if 'Spherical' in CoordSystem or 'SymTP' in CoordSystem:
cmd.Execute("Maxwell_Playground", "64 48 4 "+t_final+str(CFL_FACTOR), "out-lowresolution.txt")
cmd.Execute("Maxwell_Playground", "96 72 4 "+t_final+str(CFL_FACTOR), "out-medresolution.txt")
Nxx0_low = '64'
Nxx0_med = '96'
elif 'Cylindrical' in CoordSystem:
cmd.Execute("Maxwell_Playground", "50 4 100 "+t_final+str(CFL_FACTOR), "out-lowresolution.txt")
cmd.Execute("Maxwell_Playground", "80 4 160 "+t_final+str(CFL_FACTOR), "out-medresolution.txt")
Nxx0_low = '50'
Nxx0_med = '80'
else:
# Cartesian
cmd.Execute("Maxwell_Playground", "64 64 64 "+t_final+str(CFL_FACTOR), "out-lowresolution.txt")
cmd.Execute("Maxwell_Playground", "128 128 128 "+t_final+str(CFL_FACTOR), "out-medresolution.txt")
Nxx0_low = '64'
Nxx0_med = '128'
# Return to root directory
os.chdir(os.path.join("../../"))
print("Finished this code cell.")
```
<a id='visualize'></a>
# Step 6: Visualize the output! \[Back to [top](#toc)\]
$$\label{visualize}$$
In this section we will generate a movie, plotting the x component of the electric field on a 2D grid.
<a id='installdownload'></a>
## Step 6.a: Install `scipy` and download `ffmpeg` if they are not yet installed/downloaded \[Back to [top](#toc)\]
$$\label{installdownload}$$
Note that if you are not running this within `mybinder`, but on a Windows system, `ffmpeg` must be installed using a separate package (on [this site](http://ffmpeg.org/)), or if running Jupyter within Anaconda, use the command: `conda install -c conda-forge ffmpeg`.
```
!pip install scipy > /dev/null
check_for_ffmpeg = !which ffmpeg >/dev/null && echo $?
if check_for_ffmpeg != ['0']:
print("Couldn't find ffmpeg, so I'll download it.")
# Courtesy https://johnvansickle.com/ffmpeg/
!wget http://astro.phys.wvu.edu/zetienne/ffmpeg-static-amd64-johnvansickle.tar.xz
!tar Jxf ffmpeg-static-amd64-johnvansickle.tar.xz
print("Copying ffmpeg to ~/.local/bin/. Assumes ~/.local/bin is in the PATH.")
!mkdir -p ~/.local/bin/
!cp ffmpeg-static-amd64-johnvansickle/ffmpeg ~/.local/bin/
print("If this doesn't work, then install ffmpeg yourself. It should work fine on mybinder.")
```
<a id='genimages'></a>
## Step 6.b: Generate images for visualization animation \[Back to [top](#toc)\]
$$\label{genimages}$$
Here we loop through the data files output by the executable compiled and run in [the previous step](#mainc), generating a [png](https://en.wikipedia.org/wiki/Portable_Network_Graphics) image for each data file.
**Special thanks to Terrence Pierre Jacques. His work with the first versions of these scripts greatly contributed to the scripts as they exist below.**
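The core of the plotting loop below is `scipy.interpolate.griddata`, which resamples scattered (x, z) samples onto a regular grid. A minimal self-contained sketch of just that resampling step (all names and values here are illustrative, not taken from the simulation output):

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
points = rng.random((200, 2))            # scattered (x, z) sample locations
values = np.sin(4.0 * points[:, 0])      # quantity sampled at those points
grid_x, grid_z = np.mgrid[0:1:50j, 0:1:50j]
# 'nearest' always fills the grid; 'cubic' (used below for plotting) is
# smoother but returns NaN outside the convex hull of the input points.
resampled = griddata(points, values, (grid_x, grid_z), method='nearest')
assert resampled.shape == (50, 50)
```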
```
## VISUALIZATION ANIMATION, PART 1: Generate PNGs, one per frame of movie ##
import numpy as np
from scipy.interpolate import griddata
import matplotlib.pyplot as plt
from matplotlib.pyplot import savefig
from IPython.display import HTML
import matplotlib.image as mgimg
import glob
import sys
from matplotlib import animation
globby = glob.glob(os.path.join(outdir,'out'+Nxx0_med+'-00*.txt'))
file_list = []
for x in sorted(globby):
file_list.append(x)
bound = domain_size/2.0
pl_xmin = -bound
pl_xmax = +bound
pl_zmin = -bound
pl_zmax = +bound
for filename in file_list:
fig = plt.figure()
t, x, z, Ex, Ey, Ax, Ay, C = np.loadtxt(filename).T #Transposed for easier unpacking
plotquantity = Ex
time = np.round(t[0], decimals=3)
plotdescription = "Numerical Soln."
plt.title(r"$E_x$ at $t$ = "+str(time))
plt.xlabel("x")
plt.ylabel("z")
grid_x, grid_z = np.mgrid[pl_xmin:pl_xmax:200j, pl_zmin:pl_zmax:200j]
points = np.zeros((len(x), 2))
for i in range(len(x)):
# Zach says: No idea why x and y get flipped...
points[i][0] = x[i]
points[i][1] = z[i]
grid = griddata(points, plotquantity, (grid_x, grid_z), method='nearest')
gridcub = griddata(points, plotquantity, (grid_x, grid_z), method='cubic')
im = plt.imshow(gridcub, extent=(pl_xmin,pl_xmax, pl_zmin,pl_zmax))
ax = plt.colorbar()
plt.clim(-3,3)
ax.set_label(plotdescription)
savefig(os.path.join(filename+".png"),dpi=150)
plt.close(fig)
sys.stdout.write("%c[2K" % 27)
sys.stdout.write("Processing file "+filename+"\r")
sys.stdout.flush()
```
<a id='genvideo'></a>
## Step 6.c: Generate visualization animation \[Back to [top](#toc)\]
$$\label{genvideo}$$
In the following step, [ffmpeg](http://ffmpeg.org) is used to generate an [mp4](https://en.wikipedia.org/wiki/MPEG-4) video file, which can be played directly from this Jupyter notebook.
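Before combining the real frames below, the `ArtistAnimation` pattern can be sketched in isolation (synthetic frames; saving to mp4 is omitted here since that step requires `ffmpeg`):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import animation

fig = plt.figure(frameon=False)
# Each animation frame is a list of artists; here, one image per frame.
frames = [[plt.imshow(np.random.rand(8, 8), animated=True)] for _ in range(5)]
ani = animation.ArtistAnimation(fig, frames, interval=100, repeat_delay=1000)
plt.close(fig)
assert len(frames) == 5
```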
```
## VISUALIZATION ANIMATION, PART 2: Combine PNGs to generate movie ##
# https://stackoverflow.com/questions/14908576/how-to-remove-frame-from-matplotlib-pyplot-figure-vs-matplotlib-figure-frame
# https://stackoverflow.com/questions/23176161/animating-pngs-in-matplotlib-using-artistanimation
fig = plt.figure(frameon=False)
ax = fig.add_axes([0, 0, 1, 1])
ax.axis('off')
myimages = []
for i in range(len(file_list)):
img = mgimg.imread(file_list[i]+".png")
imgplot = plt.imshow(img)
myimages.append([imgplot])
ani = animation.ArtistAnimation(fig, myimages, interval=100, repeat_delay=1000)
plt.close()
ani.save(os.path.join(outdir,'Maxwell_ToroidalDipole.mp4'), fps=5,dpi=150)
## VISUALIZATION ANIMATION, PART 3: Display movie as embedded HTML5 (see next cell) ##
# https://stackoverflow.com/questions/18019477/how-can-i-play-a-local-video-in-my-ipython-notebook
# Embed video based on suggestion:
# https://stackoverflow.com/questions/39900173/jupyter-notebook-html-cell-magic-with-python-variable
HTML("""
<video width="480" height="360" controls>
<source src=\""""+os.path.join(outdir,"Maxwell_ToroidalDipole.mp4")+"""\" type="video/mp4">
</video>
""")
```
<a id='convergence'></a>
# Step 7: Plot the numerical error, and confirm that it converges to zero with increasing numerical resolution (sampling) \[Back to [top](#toc)\]
$$\label{convergence}$$
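The code below overlays the medium-resolution error with the low-resolution error shifted by $\log_{10}\left[(N_{\rm low}/N_{\rm med})^4\right]$; if the shifted curve coincides with the medium-resolution curve, the scheme is converging at 4th order. The shift itself is just (resolutions here are illustrative, matching the Cartesian case):

```python
import numpy as np

# For a 4th-order scheme, error ~ dx^4 ~ N^-4, so going from N_low to N_med
# gridpoints should lower log10|error| by 4*log10(N_med/N_low).
N_low, N_med = 64.0, 128.0
shift = np.log10((N_low / N_med)**4)
assert abs(shift + 4.0 * np.log10(2.0)) < 1e-12
```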
```
x_med, z_med, Ex__E_rel_med, Ey__E_rel_med, Ax__E_rel_med, Ay__E_rel_med = np.loadtxt(os.path.join(outdir,'out'+Nxx0_med+'.txt')).T #Transposed for easier unpacking
pl_xmin = -domain_size/2.
pl_xmax = +domain_size/2.
pl_ymin = -domain_size/2.
pl_ymax = +domain_size/2.
grid_x, grid_z = np.mgrid[pl_xmin:pl_xmax:100j, pl_ymin:pl_ymax:100j]
points_med = np.zeros((len(x_med), 2))
for i in range(len(x_med)):
points_med[i][0] = x_med[i]
points_med[i][1] = z_med[i]
grid_med = griddata(points_med, Ex__E_rel_med, (grid_x, grid_z), method='nearest')
grid_medcub = griddata(points_med, Ex__E_rel_med, (grid_x, grid_z), method='cubic')
plt.clf()
plt.title(r"Nxx0="+Nxx0_med+" Num. Err.: $\log_{10}|Ex|$ at $t$ = "+t_final)
plt.xlabel("x")
plt.ylabel("z")
fig_medcub = plt.imshow(grid_medcub.T, extent=(pl_xmin,pl_xmax, pl_ymin,pl_ymax))
cb = plt.colorbar(fig_medcub)
x_low,z_low, EU0__E_rel_low, EU1__E_rel_low, AU0__E_rel_low, AU1__E_rel_low = np.loadtxt(os.path.join(outdir,'out'+Nxx0_low+'.txt')).T #Transposed for easier unpacking
points_low = np.zeros((len(x_low), 2))
for i in range(len(x_low)):
points_low[i][0] = x_low[i]
points_low[i][1] = z_low[i]
grid_low = griddata(points_low, EU0__E_rel_low, (grid_x, grid_z), method='nearest')
griddiff__low_minus__med = np.zeros((100,100))
griddiff__low_minus__med_1darray = np.zeros(100*100)
gridx_1darray_yeq0 = np.zeros(100)
grid_low_1darray_yeq0 = np.zeros(100)
grid_med_1darray_yeq0 = np.zeros(100)
count = 0
for i in range(100):
for j in range(100):
griddiff__low_minus__med[i][j] = grid_low[i][j] - grid_med[i][j]
griddiff__low_minus__med_1darray[count] = griddiff__low_minus__med[i][j]
if j==49:
gridx_1darray_yeq0[i] = grid_x[i][j]
grid_low_1darray_yeq0[i] = grid_low[i][j] + np.log10((float(Nxx0_low)/float(Nxx0_med))**4)
grid_med_1darray_yeq0[i] = grid_med[i][j]
count = count + 1
fig, ax = plt.subplots()
plt.title(r"4th-order Convergence of $E_x$ at t = "+t_final)
plt.xlabel("x")
plt.ylabel("log10(Absolute error)")
ax.plot(gridx_1darray_yeq0, grid_med_1darray_yeq0, 'k-', label='Nr = '+Nxx0_med)
ax.plot(gridx_1darray_yeq0, grid_low_1darray_yeq0, 'k--', label='Nr = '+Nxx0_low+', mult. by ('+Nxx0_low+'/'+Nxx0_med+')^4')
# ax.set_ylim([-8.5,0.5])
legend = ax.legend(loc='lower right', shadow=True, fontsize='x-large')
legend.get_frame().set_facecolor('C1')
plt.show()
```
<a id='div_e'></a>
# Step 8: Comparing the divergence constraint violation \[Back to [top](#toc)\]
$$\label{div_e}$$
Here we calculate the violation quantity
$$
\mathcal{C} \equiv \nabla^i E_i,
$$
at each point on our grid (excluding the ghost zones), and then compute the normalized L2 norm over the entire volume via
\begin{align}
\lVert C \rVert &= \left( \frac{\int C^2 d\mathcal{V}}{\int d\mathcal{V}} \right)^{1/2}.
\end{align}
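A discrete version of this norm, with `w` standing in for the proper-volume weight $\sqrt{\hat g}\,\Delta x^3$ at each gridpoint (the values here are illustrative, not simulation output):

```python
import numpy as np

C = np.array([0.0, 1e-3, -2e-3, 5e-4])  # constraint-violation samples
w = np.ones_like(C)                     # volume weights (uniform for this sketch)
L2 = np.sqrt(np.sum(C**2 * w) / np.sum(w))
assert abs(L2 - np.sqrt(5.25e-6 / 4.0)) < 1e-12
```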
```
# Plotting the constraint violation
nrpy_div_low = np.loadtxt(os.path.join(outdir,'out-lowresolution.txt')).T
nrpy_div_med = np.loadtxt(os.path.join(outdir,'out-medresolution.txt')).T
plt.plot(nrpy_div_low[0]/domain_size, (nrpy_div_low[1]), 'k-',label='Nxx0 = '+Nxx0_low)
plt.plot(nrpy_div_med[0]/domain_size, (nrpy_div_med[1]), 'k--',label='Nxx0 = '+Nxx0_med)
plt.yscale('log')
plt.xlabel('Light Crossing Times')
plt.ylabel('||C||')
# plt.xlim(0,2.2)
# plt.ylim(1e-4, 1e-5)
plt.title('L2 Norm of the Divergence of E - '+par.parval_from_str("reference_metric::CoordSystem")+' Coordinates')
plt.legend(loc='center right')
plt.show()
```
<a id='latex_pdf_output'></a>
# Step 9: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-Start_to_Finish-Solving_Maxwells_Equations_in_Vacuum-Curvilinear.pdf](Tutorial-Start_to_Finish-Solving_Maxwells_Equations_in_Vacuum-Curvilinear.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Start_to_Finish-Solving_Maxwells_Equations_in_Vacuum-Curvilinear")
```
```
import matplotlib
# matplotlib.use('Agg')  # non-interactive back-end, useful for headless runs
import matplotlib.pyplot as plt
import numpy as np
import os
import time
import h5py
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.init as init
from torch.autograd import Variable
from torch.utils.data import Dataset
from scipy import signal
import wfdb
%matplotlib inline
class SleepDatasetValid(Dataset):
"""Physionet 2018 dataset."""
def __init__(self, records_file, root_dir, s, f, window_size, hanning_window):
"""
Args:
records_file (string): Path to the records file.
root_dir (string): Directory with all the signals.
"""
self.landmarks_frame = pd.read_csv(records_file)[s:f]
self.root_dir = root_dir
self.window_size = window_size
self.hw = hanning_window
self.num_bins = window_size//hanning_window
def __len__(self):
return len(self.landmarks_frame)
def __getitem__(self, idx):
folder_name = os.path.join(self.root_dir,
self.landmarks_frame.iloc[idx, 0])
file_name = self.landmarks_frame.iloc[idx, 0]
# print(file_name)
# print(folder_name)
# file_name='tr03-0005/'
# folder_name='../data/training/tr03-0005/'
signals = wfdb.rdrecord(os.path.join(folder_name, file_name[:-1]))
arousals = h5py.File(os.path.join(folder_name, file_name[:-1] + '-arousal.mat'), 'r')
tst_ann = wfdb.rdann(os.path.join(folder_name, file_name[:-1]), 'arousal')
positive_indexes = []
negative_indexes = []
arous_data = arousals['data']['arousals'].value.ravel()
for w in range(len(arous_data)//self.window_size):
if arous_data[w*self.window_size:(w+1)*self.window_size].max() > 0:
positive_indexes.append(w)
else:
negative_indexes.append(w)
# max_in_window = arous_data[].max()
if len(positive_indexes) < len(negative_indexes):
windexes = np.append(positive_indexes, np.random.choice(negative_indexes, len(positive_indexes)//10, replace=False))
else:
windexes = np.append(negative_indexes, np.random.choice(positive_indexes, len(negative_indexes), replace=False))
windexes = np.sort(windexes)
labels = []
total = 0
positive = 0
for i in windexes:
tmp = []
window_s = i*self.window_size
window_e = (i+1)*self.window_size
for j in range(self.num_bins):
total += 1
bin_s = j*self.hw + window_s
bin_e = (j+1)*self.hw + window_s
if arous_data[bin_s:bin_e].max() > 0:
tmp.append(1)
positive += 1
else:
tmp.append(0)
labels.append(tmp)
# print('sample percent ratio: {:.2f}'.format(positive/total))
interested = [0]
# for i in range(13):
# if signals.sig_name[i] in ['SaO2', 'ABD', 'F4-M1', 'C4-M1', 'O2-M1', 'AIRFLOW']:
# interested.append(i)
# POI = arousal_centers
sample = ((signals.p_signal[:,interested], windexes), arous_data)
return sample
class SleepDataset(Dataset):
"""Physionet 2018 dataset."""
def __init__(self, records_file, root_dir, s, f, window_size, hanning_window, validation=False):
"""
Args:
records_file (string): Path to the records file.
root_dir (string): Directory with all the signals.
"""
self.landmarks_frame = pd.read_csv(records_file)[s:f]
self.root_dir = root_dir
self.window_size = window_size
self.hw = hanning_window
self.num_bins = window_size//hanning_window
self.validation=validation
def __len__(self):
return len(self.landmarks_frame)
def __getitem__(self, idx):
np.random.seed(12345)
folder_name = os.path.join(self.root_dir,
self.landmarks_frame.iloc[idx, 0])
file_name = self.landmarks_frame.iloc[idx, 0]
# print(file_name)
# print(folder_name)
# file_name='tr03-0005/'
# folder_name='../data/training/tr03-0005/'
signals = wfdb.rdrecord(os.path.join(folder_name, file_name[:-1]))
arousals = h5py.File(os.path.join(folder_name, file_name[:-1] + '-arousal.mat'), 'r')
tst_ann = wfdb.rdann(os.path.join(folder_name, file_name[:-1]), 'arousal')
positive_indexes = []
negative_indexes = []
arous_data = arousals['data']['arousals'].value.ravel()
for w in range(len(arous_data)//self.window_size):
if arous_data[w*self.window_size:(w+1)*self.window_size].max() > 0:
positive_indexes.append(w)
else:
negative_indexes.append(w)
# max_in_window = arous_data[].max()
if self.validation:
windexes = np.append(positive_indexes, negative_indexes)
else:
if len(positive_indexes) < len(negative_indexes):
windexes = np.append(positive_indexes, np.random.choice(negative_indexes, len(positive_indexes)//10, replace=False))
else:
windexes = np.append(negative_indexes, np.random.choice(positive_indexes, len(negative_indexes), replace=False))
windexes = np.sort(windexes)
# windexes = np.array(positive_indexes)
labels = []
total = 0
positive = 0
for i in windexes:
tmp = []
window_s = i*self.window_size
window_e = (i+1)*self.window_size
for j in range(self.num_bins):
total += 1
bin_s = j*self.hw + window_s
bin_e = (j+1)*self.hw + window_s
if arous_data[bin_s:bin_e].max() > 0:
positive += 1
tmp.append(1.)
else:
tmp.append(0.)
labels.append(tmp)
interested = []
# print('# sample positive: {:.2f} #'.format(positive/total))
for i in range(13):
# if signals.sig_name[i] in ['SaO2', 'ABD', 'F4-M1', 'C4-M1', 'O2-M1', 'AIRFLOW']:
interested.append(i)
# POI = arousal_centers
# tst_sig = np.random.rand(len(signals.p_signal[:,interested]),1)
sample = ((signals.p_signal[:,interested], windexes), np.array(labels))
# sample = ((tst_sig, windexes), np.array(labels))
return sample
class Model_V3(nn.Module):
def __init__(self, window_size, han_size):
super(Model_V3, self).__init__()
num_bins = window_size//han_size
self.cnn1 = nn.Conv2d(13, num_bins, 3, padding=1)
init.xavier_uniform(self.cnn1.weight, gain=nn.init.calculate_gain('relu'))
init.constant(self.cnn1.bias, 0.1)
self.cnn2 = nn.Conv2d(4, 8, 3, padding=1)
init.xavier_uniform(self.cnn2.weight, gain=nn.init.calculate_gain('relu'))
init.constant(self.cnn2.bias, 0.1)
# self.cnn3 = nn.Conv2d(32, num_bins, 3, padding=1)
self.cnn3 = nn.Conv2d(8, num_bins, 3, padding=1)
init.xavier_uniform(self.cnn3.weight, gain=nn.init.calculate_gain('relu'))
init.constant(self.cnn3.bias, 0.1)
self.pool = nn.MaxPool2d(2, 2)
self.relu = nn.ReLU()
# out_dim = ((han_size//2+1)//8)*((window_size//han_size)//8)*16
self.output = nn.AdaptiveMaxPool2d(1)
# self.fc = nn.Linear(out_dim, num_bins)
self.fc = nn.Linear(num_bins, num_bins)
self.sigmoid = nn.Sigmoid()
self.do = nn.Dropout()
def forward(self, x):
x = self.relu(self.pool(self.cnn1(x)))
# x = self.relu(self.pool(self.cnn2(x)))
# x = self.relu(self.pool(self.cnn3(x)))
x = self.output(x)
x = self.fc(x.view(-1))
x = self.sigmoid(x)
return x.view(-1)
minutes = 2
raw_window_size = minutes*60*200
hanning_window = 2048
window_size = raw_window_size + (hanning_window - (raw_window_size + hanning_window) % hanning_window)
print('adjusted window size: {}, num bins: {}'.format(window_size, window_size//hanning_window))
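# A worked example (hypothetical values, matching minutes=2 above) of the
# window-size adjustment: raw_window_size is padded up to the next multiple of
# hanning_window, so labels and spectrogram bins tile each window exactly.
_raw_ws, _hw = 2*60*200, 2048
_adjusted = _raw_ws + (_hw - (_raw_ws + _hw) % _hw)
assert _adjusted % _hw == 0 and _adjusted == 24576  # 12 bins of 2048 samples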
output_pixels = ((window_size//hanning_window * (hanning_window//2+1))//64)*16
print('FC # params: {}'.format(output_pixels*window_size//hanning_window))
learning_rate = 1e-3
def to_spectogram(matrix):
    spectograms = []
    for i in range(matrix.size()[2]):  # loop over signal channels
f, t, Sxx = signal.spectrogram(matrix[0,:,i].numpy(),
window=signal.get_window('hann',hanning_window, False),
fs=200,
scaling='density',
mode='magnitude',
noverlap=0
)
if (Sxx.min() != 0 or Sxx.max() != 0):
spectograms.append((Sxx - Sxx.mean()) / Sxx.std())
else:
spectograms.append(Sxx)
return torch.FloatTensor(spectograms).unsqueeze(0).cuda()
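# Standalone shape check (an added sanity check, not part of the pipeline):
# with a Hann window of length hw and noverlap=0, scipy splits n samples into
# n//hw time bins of hw//2+1 frequencies, matching num_bins used above.
import numpy as _np
from scipy import signal as _signal
_f, _t, _S = _signal.spectrogram(_np.random.randn(1024),
                                 window=_signal.get_window('hann', 256, False),
                                 fs=200, scaling='density', mode='magnitude',
                                 noverlap=0)
assert _S.shape == (256//2 + 1, 1024//256)  # (129, 4)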
#TODO add torch.save(the_model.state_dict(), PATH) this to save the best models weights
train_dataset = SleepDataset('/beegfs/ga4493/projects/groupb/data/training/RECORDS',
'/beegfs/ga4493/projects/groupb/data/training/', 20, 21, window_size, hanning_window)
train_loaders = torch.utils.data.DataLoader(dataset=train_dataset,
batch_size=1,
shuffle=True)
test_dataset = SleepDataset('/beegfs/ga4493/projects/groupb/data/training/RECORDS',
'/beegfs/ga4493/projects/groupb/data/training/', 0, 1, window_size, hanning_window, validation=False)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
batch_size=1,
shuffle=False)
model_v1 = Model_V3(window_size, hanning_window)
if torch.cuda.is_available():
print('using cuda')
model_v1.cuda()
criterion = nn.BCELoss(size_average=False)
optimizer = torch.optim.Adam(model_v1.parameters(), lr=learning_rate)
sig = nn.Sigmoid()
# i, ((data, cent), v_l) = next(enumerate(test_loader))
losses = []
v_losses = []
accuracy = []
v_accuracy = []
l = None
for epoch in range(20):
loss_t = 0.0
acc_t = 0.0
count_t = 0
start_time = time.time()
val_l = None
v_out = None
v_all = []
for c, ((all_data, windexes), labels) in enumerate(train_loaders):
for i, win in enumerate(windexes.numpy()[0]):
inp_subs = Variable(to_spectogram(all_data[:,win*window_size:(win+1)*window_size,]))
l = None
l = labels[0, i].type(torch.FloatTensor)
if torch.cuda.is_available():
l = l.cuda()
l = Variable(l)
output = model_v1(inp_subs)
# print(output)
loss = criterion(output, l)
optimizer.zero_grad()
loss.backward()
optimizer.step()
loss_t += loss.data[0]
comparison = (output.cpu().data.numpy().ravel() > 0.5) == (l.cpu().data.numpy())
acc_t += comparison.sum() / (window_size//hanning_window)
count_t += 1
losses.append(loss_t/count_t)
accuracy.append(acc_t/count_t)
loss_v = 0.0
acc_v = 0.0
count_v = 0
for c, ((data, windexes), v_l) in enumerate(test_loader):
for i, win in enumerate(windexes.numpy()[0]):
inp_subs = Variable(to_spectogram(data[:,win*window_size:(win+1)*window_size,]))
l = None
l = v_l[0, i].type(torch.FloatTensor)
if torch.cuda.is_available():
l = l.cuda()
l = Variable(l)
output = model_v1(inp_subs)
loss = criterion(output, l)
loss_v += loss.data[0]
count_v += 1
comparison = (output.cpu().data.numpy().ravel() > 0.5) == (l.cpu().data.numpy())
acc_v += comparison.sum() / (window_size//hanning_window)
v_losses.append(loss_v/count_v)
v_accuracy.append(acc_v/count_v)
print('#'*45)
print('# epoch - {:>10} | time(s) -{:>10.2f} #'.format(epoch, time.time() - start_time))
print('# T loss - {:>10.2f} | V loss - {:>10.2f} #'.format(loss_t/count_t, loss_v/count_v))
print('# T acc - {:>10.2f} | V acc - {:>10.2f} #'.format(acc_t/count_t, acc_v/count_v))
print('#'*45)
v_dataset = SleepDatasetValid('/beegfs/ga4493/projects/groupb/data/training/RECORDS',
'/beegfs/ga4493/projects/groupb/data/training/', 20, 21, window_size, hanning_window)
v_loader = torch.utils.data.DataLoader(dataset=v_dataset,
batch_size=1,
shuffle=False)
start = 0
stop = 2000000
ones = np.ones(hanning_window)
# plt.plot(all_data.cpu().view(-1).numpy())
# plt.show()
for c, ((data, windexes), v_l) in enumerate(v_loader):
out_for_plot = []
# for i in range((data.size()[1]//window_size)):
for i in range((start//window_size), (stop//window_size)):
inp_subs = Variable(to_spectogram(data[:,i*window_size:(i+1)*window_size,]))
output = model_v1(inp_subs).cpu().data.numpy()
out_for_plot = np.append(out_for_plot, output)
out_for_plot = np.repeat(out_for_plot, hanning_window)
f = plt.figure(figsize=(20, 10))
plt.plot(out_for_plot)
# plt.plot((v_l.numpy()[0][:len(v_l.numpy()[0])] > 0).astype(float)*1.1, alpha=0.3)
# plt.plot((v_l.numpy()[0][:len(v_l.numpy()[0])] < 0).astype(float)*1.1, alpha=0.3)
plt.plot((v_l.numpy()[0][(start): (stop)] > 0).astype(float)*1.1, alpha=0.3)
plt.plot((v_l.numpy()[0][(start): (stop)] < 0).astype(float)*1.1, alpha=0.3)
plt.ylim((0,1.15))
plt.axhline(y=0.5, color='r', linestyle='-')
plt.show()
plt.imshow(inp_subs.cpu().squeeze(0).squeeze(0).data.numpy(), aspect='auto')
inp_subs.cpu().squeeze(0).squeeze(0)
```
# 1A.data - Decorrelation of random variables
We build correlated Gaussian variables, then look for a way to construct decorrelated variables using matrix computations.
```
from jyquickhelper import add_notebook_menu
add_notebook_menu()
```
This tutorial applies matrix computations to vectors of [correlated](http://fr.wikipedia.org/wiki/Covariance) normal variables; see also the [singular value decomposition](https://fr.wikipedia.org/wiki/D%C3%A9composition_en_valeurs_singuli%C3%A8res).
## Creating a dataset
### Q1
The first step consists of building correlated normal random variables in an $N \times 3$ matrix. We want to build this matrix in [numpy](http://www.numpy.org/) format. The following program is one way to construct such a set using linear combinations. Complete the lines containing ``....``.
```
import random
import numpy as np
def combinaison():
    x = random.gauss(0, 1)  # draws a random number
    y = random.gauss(0, 1)  # from a normal distribution
    z = random.gauss(0, 1)  # with zero mean and unit variance
    x2 = x
    y2 = 3*x + y
    z2 = -2*x + y + 0.2*z
    return [x2, y2, z2]
# mat = [ ............. ]
# npm = np.matrix ( mat )
```
### Q2
From the matrix ``npm``, we want to build the correlation matrix.
```
npm = ...  # see the previous question
t = npm.transpose ()
a = t * npm
a /= npm.shape[0]
```
What does the matrix ``a`` correspond to?
### Correlation of matrices
### Q3
Build the correlation matrix from the matrix ``a``. If needed, you can use the [copy](https://docs.python.org/3/library/copy.html) module.
```
import copy
b = copy.copy(a)  # replace this line with b = a
b[0,0] = 44444444
print(b)          # and compare the result here
```
### Q4
Write a function that takes the matrix ``npm`` as an argument and returns its correlation matrix. This function will be used later to check that we have indeed decorrelated the variables.
```
def correlation(npm):
# ..........
return "....."
```
## A bit of mathematics
Before going further, a bit of mathematics. Let $M$ denote the matrix ``npm``. $V=\frac{1}{n}M'M$ is the *covariance* matrix and is necessarily symmetric. It is diagonal if and only if the normal variables are independent. Like any symmetric matrix, it is diagonalizable. We can write:
$$\frac{1}{n}M'M = P \Lambda P'$$
$P$ satisfies $P'P = PP' = I$. The matrix $\Lambda$ is diagonal, and one can show that all its eigenvalues are nonnegative ($\Lambda = \frac{1}{n}P'M'MP = \frac{1}{n}(MP)'(MP)$).
We then define the square root of the matrix $\Lambda$ by:
$$\begin{array}{rcl} \Lambda &=& diag(\lambda_1,\lambda_2,\lambda_3) \\ \Lambda^{\frac{1}{2}} &=& diag\left(\sqrt{\lambda_1},\sqrt{\lambda_2},\sqrt{\lambda_3}\right)\end{array}$$
We then define the square root of the matrix $V$:
$$V^{\frac{1}{2}} = P \Lambda^{\frac{1}{2}} P'$$
We check that $\left(V^{\frac{1}{2}}\right)^2 = P \Lambda^{\frac{1}{2}} P' P \Lambda^{\frac{1}{2}} P' = P \Lambda^{\frac{1}{2}}\Lambda^{\frac{1}{2}} P' = P \Lambda P' = V$.
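This identity is easy to check numerically. The sketch below (illustrative only, not part of the requested answers) builds a sample with the same linear combinations as above and verifies that the square root defined this way squares back to $V$:

```python
import numpy as np

rng = np.random.default_rng(0)
# columns are x, 3x+y, -2x+y+0.2z for independent standard normals x, y, z
M = rng.normal(size=(1000, 3)) @ np.array([[1.0, 3.0, -2.0],
                                           [0.0, 1.0, 1.0],
                                           [0.0, 0.0, 0.2]])
V = M.T @ M / M.shape[0]                 # covariance matrix V = (1/n) M'M

lam, P = np.linalg.eigh(V)               # eigendecomposition, V = P diag(lam) P'
V_sqrt = P @ np.diag(np.sqrt(lam)) @ P.T

print(np.allclose(V_sqrt @ V_sqrt, V))   # (V^{1/2})^2 recovers V
```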
## Computing the square root
### Q6
The [numpy](http://www.numpy.org/) module provides a function that returns the matrix $P$ and the vector of eigenvalues $L$:
```
L,P = np.linalg.eig(a)
```
Check that $P'P=I$. Is it rigorously equal to the identity matrix?
### Q7
What does the instruction ``np.diag(L)`` do?
### Q8
Write a function that computes the square root of the matrix $\frac{1}{n}M'M$ (recall that $M$ is the matrix ``npm``). See also [Square root of a matrix](https://fr.wikipedia.org/wiki/Racine_carr%C3%A9e_d%27une_matrice).
## Decorrelation
``np.linalg.inv(a)`` returns the inverse of the matrix ``a``.
### Q9
Each row of the matrix $M$ represents a vector of three correlated variables. The covariance matrix is $V=\frac{1}{n}M'M$. Compute (mathematically) the covariance matrix of the matrix $N=M V^{-\frac{1}{2}}$.
### Q10
Check numerically.
## Simulating correlated variables
### Q11
From the previous result, propose a method to simulate a vector of variables correlated according to a covariance matrix $V$ from a vector of independent normal variables.
### Q12
Propose a function that creates this sample:
```
def simulation(N, cov):
    # simulates a sample of correlated variables
    # N : number of variables
    # cov : covariance matrix
    # ...
    return M
```
### Q13
Check that your sample has a correlation matrix close to the one chosen to simulate the sample.
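For reference, here is one hedged sketch of a possible answer to Q11–Q13 (the names ``correlated_sample`` and ``Z`` are ours, not imposed by the exercise): if $Z$ is an $N \times 3$ matrix of independent standard normals, then $Z V^{\frac{1}{2}}$ has covariance approximately $V$.

```python
import numpy as np

def correlated_sample(N, cov, seed=0):
    """Simulate N vectors whose covariance matrix is approximately `cov`."""
    rng = np.random.default_rng(seed)
    lam, P = np.linalg.eigh(cov)                  # cov = P diag(lam) P'
    cov_sqrt = P @ np.diag(np.sqrt(lam)) @ P.T    # square root of cov
    Z = rng.normal(size=(N, cov.shape[0]))        # independent standard normals
    return Z @ cov_sqrt                           # rows now have covariance ~ cov

V = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.5]])
M = correlated_sample(100_000, V)
print(np.round(M.T @ M / len(M), 2))              # should be close to V
```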
| github_jupyter |
# CME Session
### Goals
1. Search and download some coronagraph images
2. Load into Maps
3. Basic CME front enhancement
4. Extract CME front positions
5. Convert positions to height
6. Fit some simple models to height-time data
```
%matplotlib notebook
import warnings
warnings.filterwarnings("ignore")
import astropy.units as u
import numpy as np
from astropy.time import Time
from astropy.coordinates import SkyCoord
from astropy.visualization import time_support, quantity_support
from matplotlib import pyplot as plt
from matplotlib.colors import LogNorm, SymLogNorm
from scipy import ndimage
from scipy.optimize import curve_fit
from sunpy.net import Fido, attrs as a
from sunpy.map import Map
from sunpy.coordinates.frames import Heliocentric, Helioprojective
```
# LASCO C2
## Data Search and Download
```
c2_query = Fido.search(a.Time('2017-09-10T15:10', '2017-09-10T18:00'),
a.Instrument.lasco, a.Detector.c2)
c2_query
c2_results = Fido.fetch(c2_query);
c2_results
```
## Load into maps and plot
```
# results = ['/Users/shane/sunpy/data/22650296.fts', '/Users/shane/sunpy/data/22650294.fts', '/Users/shane/sunpy/data/22650292.fts', '/Users/shane/sunpy/data/22650297.fts', '/Users/shane/sunpy/data/22650295.fts', '/Users/shane/sunpy/data/22650290.fts', '/Users/shane/sunpy/data/22650293.fts', '/Users/shane/sunpy/data/22650291.fts', '/Users/shane/sunpy/data/22650298.fts', '/Users/shane/sunpy/data/22650289.fts']
c2_maps = Map(c2_results, sequence=True);
c2_maps.plot();
```
Check the polarisation and filter to make sure they don't change
```
[(m.exposure_time, m.meta.get('polar'), m.meta.get('filter')) for m in c2_maps]
```
Rotate the maps to the standard orientation so that the pixel axes are aligned with the WCS axes
```
c2_maps = [m.rotate() for m in c2_maps];
```
# Running and Base Difference
The corona above $\sim 2 R_{Sun}$ is dominated by the F-corona (Fraunhofer corona), which is composed of photospheric radiation Rayleigh-scattered off dust particles. It forms a continuous spectrum with the Fraunhofer absorption lines superimposed. The radiation has a very low degree of polarisation.
There are a number of approaches to remove this; the most straightforward are:
* Running Difference $I(x,y)=I_i(x,y) - I_{i-1}(x,y)$
* Base Difference $I(x,y)=I_i(x,y) - I_{B}(x,y)$
* Background Subtraction $I(x,y)=I_i(x,y) - I_{BG}(x,y)$
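A minimal numpy illustration of the three formulas, using a hypothetical 3-frame, 4-pixel "image" sequence (the real coronagraph maps below are handled with sunpy, and exposure-time normalisation is omitted here):

```python
import numpy as np

# hypothetical frames: a static background plus a moving bright feature
frames = np.array([[10., 10., 10., 10.],
                   [10., 13., 10., 10.],
                   [10., 10., 13., 10.]])
background = frames[0]                    # here the first frame doubles as I_BG

running_diff = frames[1:] - frames[:-1]   # I_i - I_{i-1}
base_diff    = frames[1:] - frames[0]     # I_i - I_B
bg_sub       = frames - background        # I_i - I_BG

print(running_diff)                       # the moving feature stands out
```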
We can create a new map using the data and metadata from other maps
```
c2_bdiff_maps = Map([(c2_maps[i+1].data/c2_maps[i+1].exposure_time
- c2_maps[0].data/c2_maps[0].exposure_time, c2_maps[i+1].meta)
for i in range(len(c2_maps)-1)], sequence=True)
```
In a Jupyter notebook, sunpy has very nice preview functionality
```
c2_bdiff_maps
```
## CME front
One technique to help extract the CME front is to create a space-time plot or J-plot: we can define a region of interest and then sum over the region to increase the signal-to-noise ratio.
```
fig, ax = plt.subplots(subplot_kw={'projection': c2_bdiff_maps[3]})
c2_bdiff_maps[3].plot(clip_interval=[1,99]*u.percent, axes=ax)
bottom_left = SkyCoord(0*u.arcsec, -200*u.arcsec, frame=c2_bdiff_maps[3].coordinate_frame)
top_right = SkyCoord(6500*u.arcsec, 200*u.arcsec, frame=c2_bdiff_maps[3].coordinate_frame)
c2_bdiff_maps[3].draw_quadrangle(bottom_left, top_right=top_right, axes=ax)
```
We are going to extract the data in the region above and then sum over the y-direction to get a 1-D plot of intensity vs pixel coordinate.
```
c2_submaps = []
for m in c2_bdiff_maps:
    # define the coordinates of the bottom left and top right for each map
    # (should really define once and then transform)
bottom_left = SkyCoord(0*u.arcsec, -200*u.arcsec, frame=m.coordinate_frame)
top_right = SkyCoord(6500*u.arcsec, 200*u.arcsec, frame=m.coordinate_frame)
c2_submaps.append(m.submap(bottom_left, top_right=top_right))
c2_submaps[0].data.shape
```
Now we can create a space-time diagram by stacking these slices one after another.
```
c2_front_pix = []
def onclick(event):
global coords
ax.plot(event.xdata, event.ydata, 'o', color='r')
c2_front_pix.append((event.ydata, event.xdata))
fig, ax = plt.subplots()
ax.imshow(np.stack([m.data.mean(axis=0)/m.data.mean(axis=0).max() for m in c2_submaps]).T,
aspect='auto', origin='lower', interpolation='none', norm=SymLogNorm(0.1,vmax=2))
cid = fig.canvas.mpl_connect('button_press_event', onclick)
if not c2_front_pix:
c2_front_pix = [(209.40188156061873, 3.0291329045449533),
(391.58261749135465, 3.9464716142223724)]
pix, index = c2_front_pix[0]
index = round(index)
fig, ax = plt.subplots(subplot_kw={'projection': c2_bdiff_maps[index]})
c2_bdiff_maps[index].plot(clip_interval=[1,99]*u.percent, axes=ax)
pp = c2_submaps[index].pixel_to_world(*[pix,34/4]*u.pix)
ax.plot_coord(pp, marker='x', ms=20, color='k');
```
Extract the times and coordinates for later
```
c2_times = [m.date for m in c2_submaps[3:5] ]
c2_coords = [c2_submaps[i].pixel_to_world(*[c2_front_pix[i][0],34/4]*u.pix) for i, m in enumerate(c2_submaps[3:5])]
c2_times, c2_coords
```
# Lasco C3
## Data search and download
```
c3_query = Fido.search(a.Time('2017-09-10T15:10', '2017-09-10T19:00'),
a.Instrument.lasco, a.Detector.c3)
c3_query
```
Download
```
c3_results = Fido.fetch(c3_query);
```
Load into maps
```
c3_maps = Map(c3_results, sequence=True);
c3_maps
```
Rotate
```
c3_maps = [m.rotate() for m in c3_maps]
```
## Create Base Difference maps
```
c3_bdiff_maps = Map([(c3_maps[i+1].data/c3_maps[i+1].exposure_time.to_value('s')
- c3_maps[0].data/c3_maps[0].exposure_time.to_value('s'),
c3_maps[i+1].meta)
for i in range(0, len(c3_maps)-1)], sequence=True)
c3_bdiff_maps
```
We can use a median filter to reduce some of the noise and make the front easier to identify
```
c3_bdiff_maps = Map([(ndimage.median_filter(c3_maps[i+1].data/c3_maps[i+1].exposure_time.to_value('s')
- c3_maps[0].data/c3_maps[0].exposure_time.to_value('s'), size=5),
c3_maps[i+1].meta)
for i in range(0, len(c3_maps)-1)], sequence=True)
```
## CME front
```
fig, ax = plt.subplots(subplot_kw={'projection': c3_bdiff_maps[9]})
c3_bdiff_maps[9].plot(clip_interval=[1,99]*u.percent, axes=ax)
bottom_left = SkyCoord(0*u.arcsec, -2000*u.arcsec, frame=c3_bdiff_maps[9].coordinate_frame)
top_right = SkyCoord(29500*u.arcsec, 2000*u.arcsec, frame=c3_bdiff_maps[9].coordinate_frame)
c3_bdiff_maps[9].draw_quadrangle(bottom_left, top_right=top_right,
axes=ax)
```
Extract region of data
```
c3_submaps = []
for m in c3_bdiff_maps:
bottom_left = SkyCoord(0*u.arcsec, -2000*u.arcsec, frame=m.coordinate_frame)
top_right = SkyCoord(29500*u.arcsec, 2000*u.arcsec, frame=m.coordinate_frame)
c3_submaps.append(m.submap(bottom_left, top_right=top_right))
c3_front_pix = []
def onclick(event):
global coords
ax.plot(event.xdata, event.ydata, 'o', color='r')
c3_front_pix.append((event.ydata, event.xdata))
fig, ax = plt.subplots()
ax.imshow(np.stack([m.data.mean(axis=0)/m.data.mean(axis=0).max() for m in c3_submaps]).T,
aspect='auto', origin='lower', interpolation='none', norm=SymLogNorm(0.1,vmax=2))
cid = fig.canvas.mpl_connect('button_press_event', onclick)
if not c3_front_pix:
c3_front_pix = [(75.84577056752656, 3.007459455920803),
(124.04923377098979, 3.9872981655982223),
(173.6704458922019, 5.039717520436931),
(216.20291342466945, 5.874394939791771),
(248.81113853289455, 6.854233649469189),
(287.0903593121153, 7.797782036565963),
(328.20507792683395, 8.995362681727254),
(369.3197965415526, 9.866330423662738),
(401.92802164977775, 10.991330423662738)]
pix, index = c3_front_pix[5]
index = round(index)
fig, ax = plt.subplots(subplot_kw={'projection': c3_bdiff_maps[index]})
c3_bdiff_maps[index].plot(clip_interval=[1,99]*u.percent, axes=ax)
pp = c3_submaps[index].pixel_to_world(*[pix,37/4]*u.pix)
ax.plot_coord(pp, marker='x', ms=20, color='r');
```
Extract times and coordinates for later
```
c3_times = [m.date for m in c3_submaps[3:12] ]
c3_coords = [c3_submaps[i].pixel_to_world(*[c3_front_pix[i][0],37/4]*u.pix) for i, m in enumerate(c3_submaps[3:12])]
```
# Coordinates to Heights
```
times = Time(np.concatenate([c2_times, c3_times]))
coords = c2_coords + c3_coords
heights_pos = np.hstack([(c.observer.radius * np.tan(c.Tx)) for c in coords])
heights_pos
heights_pos_error = np.hstack([(c.observer.radius * np.tan(c.Tx + 56*u.arcsecond*5)) for c in coords])
height_err = heights_pos_error - heights_pos
heights_pos, height_err
times.shape, height_err.shape
with Helioprojective.assume_spherical_screen(center=coords[0].observer):
heights_sph = np.hstack([np.sqrt(c.transform_to('heliocentric').x**2
+ c.transform_to('heliocentric').y**2
+ c.transform_to('heliocentric').z**2) for c in coords])
heights_sph
fig, axs = plt.subplots()
axs.errorbar(times.datetime, heights_pos.to_value(u.Rsun), yerr=height_err.to_value(u.Rsun), fmt='.')
axs.plot(times.datetime, heights_sph.to(u.Rsun), '+')
```
The C2 data points look off; this is not uncommon, since it is a different telescope with a different sensitivity. We'll just drop these for the moment.
```
times = Time(np.hstack([c2_times, c3_times]))
heights_pos = heights_pos
height_err = height_err
```
# Model Fitting
### Models
Constant velocity model
\begin{align}
a = \frac{dv}{dt} = 0 \\
h(t) = h_0 + v_0 t \\
\end{align}
Constant acceleration model
\begin{align}
a = a_{0} \\
v(t) = v_0 + a_0 t \\
h(t) = h_0 + v_0 t + \frac{1}{2}a_0 t^{2}
\end{align}
```
def const_vel(t0, h0, v0):
return h0 + v0*t0
def const_accel(t0, h0, v0, a0):
return h0 + v0*t0 + 0.5 * a0*t0**2
t0 = (times-times[0]).to(u.s)
const_vel_fit = curve_fit(const_vel, t0, heights_pos, sigma=height_err,
p0=[heights_pos[0].to_value(u.m), 350000])
h0, v0 = const_vel_fit[0]
delta_h0, delta_v0 = np.sqrt(const_vel_fit[1].diagonal())
h0 = h0*u.m
v0 = v0*u.m/u.s
delta_h0 = delta_h0*u.m
delta_v0 = delta_v0*(u.m/u.s)
print(f'h0: {h0.to(u.Rsun).round(2)} +/- {delta_h0.to(u.Rsun).round(2)}')
print(f'v0: {v0.to(u.km/u.s).round(2)} +/- {delta_v0.to(u.km/u.s).round(2)}')
const_accel_fit = curve_fit(const_accel, t0, heights_pos, p0=[heights_pos[0].to_value(u.m), 600000, -5])
h0, v0, a0 = const_accel_fit[0]
delta_h0, delta_v0, delta_a0 = np.sqrt(const_accel_fit[1].diagonal())
h0 = h0*u.m
v0 = v0*u.m/u.s
a0 = a0*u.m/u.s**2
delta_h0 = delta_h0*u.m
delta_v0 = delta_v0*(u.m/u.s)
delta_a0 = delta_a0*(u.m/u.s**2)
print(f'h0: {h0.to(u.Rsun).round(2)} +/- {delta_h0.to(u.Rsun).round(2)}')
print(f'v0: {v0.to(u.km/u.s).round(2)} +/- {delta_v0.to(u.km/u.s).round(2)}')
print(f'a0: {a0.to(u.m/u.s**2).round(2)} +/- {delta_a0.to(u.m/u.s**2).round(2)}')
```
# Check against CDAW CME list
* https://cdaw.gsfc.nasa.gov/CME_list/UNIVERSAL/2017_09/univ2017_09.html
```
with quantity_support():
fig, axes = plt.subplots()
axes.errorbar(times.datetime, heights_pos.to(u.Rsun), fmt='.', yerr=height_err)
axes.plot(times.datetime, const_vel(t0.value, *const_vel_fit[0])*u.m, 'r-')
axes.plot(times.datetime, const_accel(t0.value, *const_accel_fit[0])*u.m, 'r-')
fig.autofmt_xdate()
with quantity_support(), time_support(format='isot'):
fig, axes = plt.subplots()
axes.plot(times, heights_pos.to(u.Rsun), 'x')
axes.plot(times, const_vel(t0.value, *const_vel_fit[0])*u.m, 'r-')
axes.plot(times, const_accel(t0.value, *const_accel_fit[0])*u.m, 'r-')
fig.autofmt_xdate()
```
Estimate the arrival time at an Earth-like distance (1 AU) for the constant velocity model
```
(((1*u.AU) - const_vel_fit[0][0] * u.m) / (const_vel_fit[0][1] * u.m/u.s)).decompose().to(u.hour)
roots = np.roots([(const_accel_fit[0][0] * u.m - (1*u.AU)).to_value(u.m),
                  const_accel_fit[0][1], 0.5*const_accel_fit[0][2]][::-1])
(roots*u.s).to(u.hour)
```
# GPU-accelerated LightGBM
This kernel explores a GPU-accelerated LGBM model to predict customer transactions.
## Notebook Content
1. [Re-compile LGBM with GPU support](#1)
1. [Loading the data](#2)
1. [Training the model on CPU](#3)
1. [Training the model on GPU](#4)
1. [Submission](#5)
<a id="1"></a>
## 1. Re-compile LGBM with GPU support
In Kaggle notebook setting, set the `Internet` option to `Internet connected`, and `GPU` to `GPU on`.
We first remove the existing CPU-only lightGBM library and clone the latest GitHub repo.
```
!rm -r /opt/conda/lib/python3.6/site-packages/lightgbm
!git clone --recursive https://github.com/Microsoft/LightGBM
```
Next, the Boost development library must be installed.
```
!apt-get install -y -qq libboost-all-dev
```
The next step is to build and re-install lightGBM with GPU support.
```
%%bash
cd LightGBM
rm -r build
mkdir build
cd build
cmake -DUSE_GPU=1 -DOpenCL_LIBRARY=/usr/local/cuda/lib64/libOpenCL.so -DOpenCL_INCLUDE_DIR=/usr/local/cuda/include/ ..
make -j$(nproc)
cd ../python-package/
python3 setup.py install --precompile
```
Last, carry out some post processing tricks for OpenCL to work properly, and clean up.
```
!mkdir -p /etc/OpenCL/vendors && echo "libnvidia-opencl.so.1" > /etc/OpenCL/vendors/nvidia.icd
!rm -r LightGBM
```
<a id="2"></a>
## 2. Loading the data
```
import pandas as pd
import numpy as np
from sklearn.model_selection import StratifiedKFold
import lightgbm as lgb
from sklearn import metrics
import gc
pd.set_option('display.max_columns', 200)
train_df = pd.read_csv('../input/train.csv')
test_df = pd.read_csv('../input/test.csv')
#extracting a subset for quick testing
#train_df = train_df[1:1000]
```
<a id="3"></a>
## 3. Training the model on CPU
```
param = {
'num_leaves': 10,
'max_bin': 127,
'min_data_in_leaf': 11,
'learning_rate': 0.02,
'min_sum_hessian_in_leaf': 0.00245,
'bagging_fraction': 1.0,
'bagging_freq': 5,
'feature_fraction': 0.05,
'lambda_l1': 4.972,
'lambda_l2': 2.276,
'min_gain_to_split': 0.65,
'max_depth': 14,
'save_binary': True,
'seed': 1337,
'feature_fraction_seed': 1337,
'bagging_seed': 1337,
'drop_seed': 1337,
'data_random_seed': 1337,
'objective': 'binary',
'boosting_type': 'gbdt',
'verbose': 1,
'metric': 'auc',
'is_unbalance': True,
'boost_from_average': False,
}
%%time
nfold = 2
target = 'target'
predictors = train_df.columns.values.tolist()[2:]
skf = StratifiedKFold(n_splits=nfold, shuffle=True, random_state=2019)
oof = np.zeros(len(train_df))
predictions = np.zeros(len(test_df))
i = 1
for train_index, valid_index in skf.split(train_df, train_df.target.values):
print("\nfold {}".format(i))
xg_train = lgb.Dataset(train_df.iloc[train_index][predictors].values,
label=train_df.iloc[train_index][target].values,
feature_name=predictors,
free_raw_data = False
)
xg_valid = lgb.Dataset(train_df.iloc[valid_index][predictors].values,
label=train_df.iloc[valid_index][target].values,
feature_name=predictors,
free_raw_data = False
)
clf = lgb.train(param, xg_train, 5000, valid_sets = [xg_valid], verbose_eval=50, early_stopping_rounds = 50)
oof[valid_index] = clf.predict(train_df.iloc[valid_index][predictors].values, num_iteration=clf.best_iteration)
predictions += clf.predict(test_df[predictors], num_iteration=clf.best_iteration) / nfold
i = i + 1
print("\n\nCV AUC: {:<0.2f}".format(metrics.roc_auc_score(train_df.target.values, oof)))
```
<a id="4"></a>
## 4. Training the model on GPU
First, check the GPU availability.
```
!nvidia-smi
```
In order to leverage the GPU, we need to set the following parameters:
'device': 'gpu',
'gpu_platform_id': 0,
'gpu_device_id': 0
```
param = {
'num_leaves': 10,
'max_bin': 127,
'min_data_in_leaf': 11,
'learning_rate': 0.02,
'min_sum_hessian_in_leaf': 0.00245,
'bagging_fraction': 1.0,
'bagging_freq': 5,
'feature_fraction': 0.05,
'lambda_l1': 4.972,
'lambda_l2': 2.276,
'min_gain_to_split': 0.65,
'max_depth': 14,
'save_binary': True,
'seed': 1337,
'feature_fraction_seed': 1337,
'bagging_seed': 1337,
'drop_seed': 1337,
'data_random_seed': 1337,
'objective': 'binary',
'boosting_type': 'gbdt',
'verbose': 1,
'metric': 'auc',
'is_unbalance': True,
'boost_from_average': False,
'device': 'gpu',
'gpu_platform_id': 0,
'gpu_device_id': 0
}
%%time
nfold = 2
target = 'target'
predictors = train_df.columns.values.tolist()[2:]
skf = StratifiedKFold(n_splits=nfold, shuffle=True, random_state=2019)
oof = np.zeros(len(train_df))
predictions = np.zeros(len(test_df))
i = 1
for train_index, valid_index in skf.split(train_df, train_df.target.values):
print("\nfold {}".format(i))
xg_train = lgb.Dataset(train_df.iloc[train_index][predictors].values,
label=train_df.iloc[train_index][target].values,
feature_name=predictors,
free_raw_data = False
)
xg_valid = lgb.Dataset(train_df.iloc[valid_index][predictors].values,
label=train_df.iloc[valid_index][target].values,
feature_name=predictors,
free_raw_data = False
)
clf = lgb.train(param, xg_train, 5000, valid_sets = [xg_valid], verbose_eval=50, early_stopping_rounds = 50)
oof[valid_index] = clf.predict(train_df.iloc[valid_index][predictors].values, num_iteration=clf.best_iteration)
predictions += clf.predict(test_df[predictors], num_iteration=clf.best_iteration) / nfold
i = i + 1
print("\n\nCV AUC: {:<0.2f}".format(metrics.roc_auc_score(train_df.target.values, oof)))
```
<a id="5"></a>
## 5. Submission
```
sub_df = pd.DataFrame({"ID_code": test_df.ID_code.values})
sub_df["target"] = predictions
sub_df[:10]
sub_df.to_csv("lightgbm_gpu.csv", index=False)
```
```
import matplotlib
import matplotlib.pyplot as plt
from mmcg import mmcg
import numpy as np
import operator
import pyart
radar = pyart.io.read('/home/zsherman/sgpxsaprcmacsurI5.c1.20171004.203018.nc')
grid = mmcg(radar, grid_shape=(31, 101, 101),
grid_limits=((0, 15000), (-50000, 50000), (-50000, 50000)),
z_linear_interp=True, toa=15000, weighting_function='cressman')
display = pyart.graph.GridMapDisplay(grid)
fig = plt.figure(figsize=[15, 7])
# Panel sizes.
map_panel_axes = [0.05, 0.05, .4, .80]
x_cut_panel_axes = [0.55, 0.10, .4, .25]
y_cut_panel_axes = [0.55, 0.50, .4, .25]
# Parameters.
level = 0
vmin = -8
vmax = 64
lat = 36.5
lon = -97.7
# Panel 1, basemap, radar reflectivity and NARR overlay.
ax1 = fig.add_axes(map_panel_axes)
display.plot_basemap(lon_lines = np.arange(-104, -93, 2))
display.plot_grid('reflectivity', level=level, vmin=vmin, vmax=vmax,
cmap='pyart_HomeyerRainbow')
display.plot_crosshairs(lon=lon, lat=lat)
# Panel 2, longitude slice.
ax2 = fig.add_axes(x_cut_panel_axes)
display.plot_longitude_slice('reflectivity', lon=lon, lat=lat, vmin=vmin, vmax=vmax,
cmap='pyart_HomeyerRainbow')
ax2.set_ylim([0, 15])
ax2.set_xlim([-50, 50])
ax2.set_xlabel('Distance from SGP CF (km)')
# Panel 3, latitude slice.
ax3 = fig.add_axes(y_cut_panel_axes)
ax3.set_ylim([0, 15])
ax3.set_xlim([-50, 50])
display.plot_latitude_slice('reflectivity', lon=lon, lat=lat, vmin=vmin, vmax=vmax,
cmap='pyart_HomeyerRainbow')
# plt.savefig('')
cat_dict = {}
print('##')
print('## Keys for each gate id are as follows:')
for pair_str in grid.fields['gate_id']['notes'].split(','):
print('## ', str(pair_str))
cat_dict.update({pair_str.split(':')[1]:int(pair_str.split(':')[0])})
sorted_cats = sorted(cat_dict.items(), key=operator.itemgetter(1))
cat_colors = {'rain': 'green',
'multi_trip': 'red',
'no_scatter': 'gray',
'snow': 'cyan',
'melting': 'yellow'}
lab_colors = ['red', 'cyan', 'grey', 'green', 'yellow']
if 'xsapr_clutter' in grid.fields.keys():
cat_colors['clutter'] = 'black'
lab_colors = np.append(lab_colors, 'black')
lab_colors = [cat_colors[kitty[0]] for kitty in sorted_cats]
cmap = matplotlib.colors.ListedColormap(lab_colors)
display = pyart.graph.GridMapDisplay(grid)
fig = plt.figure(figsize=[15, 7])
# Panel sizes.
map_panel_axes = [0.05, 0.05, .4, .80]
x_cut_panel_axes = [0.55, 0.10, .4, .25]
y_cut_panel_axes = [0.55, 0.50, .4, .25]
# Parameters.
level = 0
vmin = 0
vmax = 5
lat = 36.5
lon = -97.7
# Panel 1, basemap, radar reflectivity and NARR overlay.
ax1 = fig.add_axes(map_panel_axes)
display.plot_basemap(lon_lines = np.arange(-104, -93, 2))
display.plot_grid('gate_id', level=level, vmin=vmin, vmax=vmax,
cmap=cmap)
display.plot_crosshairs(lon=lon, lat=lat)
# Panel 2, longitude slice.
ax2 = fig.add_axes(x_cut_panel_axes)
display.plot_longitude_slice('gate_id', lon=lon, lat=lat, vmin=vmin,
vmax=vmax, cmap=cmap)
ax2.set_ylim([0, 15])
ax2.set_xlim([-50, 50])
ax2.set_xlabel('Distance from SGP CF (km)')
# Panel 3, latitude slice.
ax3 = fig.add_axes(y_cut_panel_axes)
ax3.set_ylim([0, 15])
ax3.set_xlim([-50, 50])
display.plot_latitude_slice('gate_id', lon=lon, lat=lat, vmin=vmin,
vmax=vmax, cmap=cmap)
level = 0
vmin = 0
vmax = 5
lat = 36.5
lon = -97.7
display.plot_basemap(lon_lines = np.arange(-104, -93, 2))
display.plot_grid('gate_id', level=level, vmin=vmin, vmax=vmax,
cmap=cmap)
```
# Predictable t-SNE
[t-SNE](https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html) is not a transformer which can produce outputs for inputs other than those used to train the transform. The proposed solution is to train a predictor afterwards, so that the results can be applied to inputs the model never saw.
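The idea can be sketched in a few lines of scikit-learn (a simplification, not mlinsights' actual implementation): fit t-SNE on the training set, then fit a regressor from the original features to the 2-D embedding, so that unseen points can be projected too.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor

X, y = load_digits(n_class=3, return_X_y=True)
X_train, X_test = train_test_split(X, random_state=0)

# t-SNE itself has no transform() for new data...
emb_train = TSNE(n_components=2, init='pca', random_state=0).fit_transform(X_train)
# ...so learn a mapping features -> embedding that does
proj = KNeighborsRegressor().fit(X_train, emb_train)
emb_test = proj.predict(X_test)           # embedding for data t-SNE never saw
print(emb_test.shape)
```

The quality of the projection then depends on the chosen predictor.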
```
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
```
## t-SNE on MNIST
Let's reuse some part of the example of [Manifold learning on handwritten digits: Locally Linear Embedding, Isomap…](https://scikit-learn.org/stable/auto_examples/manifold/plot_lle_digits.html#sphx-glr-auto-examples-manifold-plot-lle-digits-py).
```
import numpy
from sklearn import datasets
digits = datasets.load_digits(n_class=6)
Xd = digits.data
yd = digits.target
imgs = digits.images
n_samples, n_features = Xd.shape
n_samples, n_features
```
Let's split into train and test.
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test, imgs_train, imgs_test = train_test_split(Xd, yd, imgs)
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2, init='pca', random_state=0)
X_train_tsne = tsne.fit_transform(X_train, y_train)
X_train_tsne.shape
import matplotlib.pyplot as plt
from matplotlib import offsetbox
def plot_embedding(Xp, y, imgs, title=None, figsize=(12, 4)):
x_min, x_max = numpy.min(Xp, 0), numpy.max(Xp, 0)
X = (Xp - x_min) / (x_max - x_min)
fig, ax = plt.subplots(1, 2, figsize=figsize)
for i in range(X.shape[0]):
ax[0].text(X[i, 0], X[i, 1], str(y[i]),
color=plt.cm.Set1(y[i] / 10.),
fontdict={'weight': 'bold', 'size': 9})
if hasattr(offsetbox, 'AnnotationBbox'):
# only print thumbnails with matplotlib > 1.0
shown_images = numpy.array([[1., 1.]]) # just something big
for i in range(X.shape[0]):
dist = numpy.sum((X[i] - shown_images) ** 2, 1)
if numpy.min(dist) < 4e-3:
# don't show points that are too close
continue
shown_images = numpy.r_[shown_images, [X[i]]]
imagebox = offsetbox.AnnotationBbox(
offsetbox.OffsetImage(imgs[i], cmap=plt.cm.gray_r),
X[i])
ax[0].add_artist(imagebox)
ax[0].set_xticks([]), ax[0].set_yticks([])
ax[1].plot(Xp[:, 0], Xp[:, 1], '.')
if title is not None:
ax[0].set_title(title)
return ax
plot_embedding(X_train_tsne, y_train, imgs_train, "t-SNE embedding of the digits");
```
## Repeatable t-SNE
We use the class *PredictableTSNE*, but it works for other trainable transforms too.
```
from mlinsights.mlmodel import PredictableTSNE
ptsne = PredictableTSNE()
ptsne.fit(X_train, y_train)
X_train_tsne2 = ptsne.transform(X_train)
plot_embedding(X_train_tsne2, y_train, imgs_train, "Predictable t-SNE of the digits");
```
The difference now is that it can be applied on new data.
```
X_test_tsne2 = ptsne.transform(X_test)
plot_embedding(X_test_tsne2, y_test, imgs_test, "Predictable t-SNE on new digits on test database");
```
By default, the output data is normalized to get comparable results over multiple runs, such as the *loss* computed between the normalized output of *t-SNE* and its approximation.
```
ptsne.loss_
```
## Repeatable t-SNE with another predictor
The predictor is a [MLPRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html).
```
ptsne.estimator_
```
Let's replace it with a [KNeighborsRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html) and a normalizer [StandardScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html).
```
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler
ptsne_knn = PredictableTSNE(normalizer=StandardScaler(),
estimator=KNeighborsRegressor())
ptsne_knn.fit(X_train, y_train)
X_train_tsne2 = ptsne_knn.transform(X_train)
plot_embedding(X_train_tsne2, y_train, imgs_train,
"Predictable t-SNE of the digits\nStandardScaler+KNeighborsRegressor");
X_test_tsne2 = ptsne_knn.transform(X_test)
plot_embedding(X_test_tsne2, y_test, imgs_test,
"Predictable t-SNE on new digits\nStandardScaler+KNeighborsRegressor");
```
The model seems to work better as the loss is lower, but since it is evaluated on the training dataset, it is just a way to check that it is not too big.
```
ptsne_knn.loss_
```
# Wiener Filter + UNet
https://github.com/vpronina/DeepWienerRestoration
```
# Import some libraries
import numpy as np
from skimage import color, data, restoration
import matplotlib.pyplot as plt
import torch
import utils
import torch.nn as nn
from networks import UNet
import math
import os
from skimage import io
import skimage
import warnings
warnings.filterwarnings('ignore')
def show_images(im1, im1_title, im2, im2_title, im3, im3_title, font):
fig, (image1, image2, image3) = plt.subplots(1, 3, figsize=(15, 50))
image1.imshow(im1, cmap='gray')
image1.set_title(im1_title, fontsize=font)
image1.set_axis_off()
image2.imshow(im2, cmap='gray')
image2.set_title(im2_title, fontsize=font)
image2.set_axis_off()
image3.imshow(im3, cmap='gray')
image3.set_title(im3_title, fontsize=font)
image3.set_axis_off()
fig.subplots_adjust(wspace=0.02, hspace=0.2,
top=0.9, bottom=0.05, left=0, right=1)
fig.show()
```
# Load the data
```
#Load the target image
image = io.imread('./image.tif')
#Load the blurred and distorted images
blurred = io.imread('./blurred.tif')
distorted = io.imread('./distorted.tif')
#Load the kernel
psf = io.imread('./PSF.tif')
show_images(image, 'Original image', blurred, 'Blurred image',\
distorted, 'Blurred and noisy image', font=18)
```
We know that the solution is described as follows:
$\hat{\mathbf{x}} = \arg\min_\mathbf{x}\underbrace{\frac{1}{2}\|\mathbf{y}-\mathbf{K} \mathbf{x}\|_{2}^{2}+\lambda r(\mathbf{x})}_{\mathbf{J}(\mathbf{x})}$,
where $\mathbf{J}$ is the objective function.
According to the gradient descent iterative scheme,
$\hat{\mathbf{x}}_{k+1}=\hat{\mathbf{x}}_{k}-\beta \nabla \mathbf{J}(\hat{\mathbf{x}}_{k})$.
The solution is computed with the iterative gradient descent equation:
$\hat{\mathbf{x}}_{k+1} = \hat{\mathbf{x}}_{k} - \beta\left[\mathbf{K}^\top(\mathbf{K}\hat{\mathbf{x}}_{k} - \mathbf{y}) + e^\alpha f^{CNN}(\hat{\mathbf{x}}_{k})\right]$, where $\lambda = e^\alpha$ and the gradient of the regularizer $\nabla r(\hat{\mathbf{x}})$ is parametrized by a network, $f^{CNN}(\hat{\mathbf{x}})$.
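Before the learned version, here is a toy numpy sketch of the same gradient descent scheme with a plain Tikhonov regularizer $r(\mathbf{x})=\frac{1}{2}\|\mathbf{x}\|_2^2$ (whose gradient is simply $\mathbf{x}$) standing in for the CNN, and a 1-D convolution playing the role of $\mathbf{K}$ (all values hypothetical):

```python
import numpy as np

def blur(x, k):                          # K x : 'same'-size 1-D convolution
    return np.convolve(x, k, mode='same')

rng = np.random.default_rng(0)
x_true = np.zeros(64)
x_true[20:30] = 1.0                      # a box signal to recover
k = np.ones(5) / 5                       # symmetric kernel, so K^T == K here
y = blur(x_true, k) + 0.01 * rng.normal(size=64)

lam, beta = 1e-2, 0.5                    # trade-off (lambda = e^alpha) and step size
def J(x):                                # objective J(x)
    return 0.5 * np.sum((blur(x, k) - y) ** 2) + 0.5 * lam * np.sum(x ** 2)

x_hat = y.copy()
for _ in range(200):                     # x_{k+1} = x_k - beta [K^T(K x_k - y) + lam x_k]
    x_hat -= beta * (blur(blur(x_hat, k) - y, k) + lam * x_hat)

print(J(y), '->', J(x_hat))              # the objective decreases
```

In the notebook's model, the term `lam * x_hat` is replaced by `exp(alpha) * regularizer(x)`, with the regularizer gradient given by a UNet trained end-to-end.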
```
# Anscombe transform to transform Poissonian data into Gaussian
#https://en.wikipedia.org/wiki/Anscombe_transform
def anscombe(x):
'''
Compute the anscombe variance stabilizing transform.
the input x is noisy Poisson-distributed data
the output fx has variance approximately equal to 1.
Reference: Anscombe, F. J. (1948), "The transformation of Poisson,
binomial and negative-binomial data", Biometrika 35 (3-4): 246-254
'''
return 2.0*torch.sqrt(x + 3.0/8.0)
# Exact unbiased Anscombe transform to transform Gaussian data back into Poissonian
def exact_unbiased(z):
return (1.0 / 4.0 * z.pow(2) +
(1.0/4.0) * math.sqrt(3.0/2.0) * z.pow(-1) -
(11.0/8.0) * z.pow(-2) +
(5.0/8.0) * math.sqrt(3.0/2.0) * z.pow(-3) - (1.0 / 8.0))
class WienerUNet(torch.nn.Module):
def __init__(self):
'''
Deconvolution function for a batch of images. Although the regularization
term does not have a shape of Tikhonov regularizer, with a slight abuse of notations
the function is called WienerUNet.
The function is built upon the iterative gradient descent scheme:
x_k+1 = x_k - lamb[K^T(Kx_k - y) + exp(alpha)*reg(x_k)]
Initial parameters are:
regularizer: a neural network to parametrize the prior on each iteration x_k.
alpha: power of the trade-off coefficient.
lamb: step of the gradient descent algorithm.
'''
super(WienerUNet, self).__init__()
self.regularizer = UNet(mode='instance')
self.alpha = nn.Parameter(torch.FloatTensor([0.0]))
self.lamb = nn.Parameter(torch.FloatTensor([0.3]))
def forward(self, x, y, ker):
'''
Function that performs one iteration of the gradient descent scheme of the deconvolution algorithm.
:param x: (torch.(cuda.)Tensor) Image, restored with the previous iteration of the gradient descent scheme, B x C x H x W
:param y: (torch.(cuda.)Tensor) Input blurred and noisy image, B x C x H x W
:param ker: (torch.(cuda.)Tensor) Blurring kernel, B x C x H_k x W_k
:return: (torch.(cuda.)Tensor) Restored image, B x C x H x W
'''
#Calculate Kx_k
x_filtered = utils.imfilter2D_SpatialDomain(x, ker, padType='symmetric', mode="conv")
Kx_y = x_filtered - y
#Calculate K^T(Kx_k - y)
y_filtered = utils.imfilter_transpose2D_SpatialDomain(Kx_y, ker,
padType='symmetric', mode="conv")
#Calculate exp(alpha)*reg(x_k)
regul = torch.exp(self.alpha) * self.regularizer(x)
brackets = y_filtered + regul
out = x - self.lamb * brackets
return out
class WienerFilter_UNet(nn.Module):
'''
Module that uses UNet to predict individual gradient of a regularizer for each input image and then
applies gradient descent scheme with predicted gradient of a regularizers per-image.
'''
def __init__(self):
super(WienerFilter_UNet, self).__init__()
self.function = WienerUNet()
#Perform gradient descent iterations
def forward(self, y, ker, n_iter):
output = y.clone()
for i in range(n_iter):
output = self.function(output, y, ker)
return output
#Let's transform our numpy data into pytorch data
x = torch.Tensor(distorted[None, None])
ker = torch.Tensor(psf[None, None])
#Define the model
model = WienerFilter_UNet()
#Load the pretrained weights
state_dict = torch.load(os.path.join('./', 'WF_UNet_poisson'))
state_dict = state_dict['model_state_dict']
from collections import OrderedDict
new_state_dict = OrderedDict()
for k, v in state_dict.items():
name = k[7:] # remove `module.`
new_state_dict[name] = v
# load params
model.load_state_dict(new_state_dict)
model.eval()
#Perform Anscombe transform
x = anscombe(x)
#Calculate output
out = model(x, ker, 10)
#Perform inverse Anscombe transform
out = exact_unbiased(out)
#Some post-processing of data
out = out/image.max()
image = image/image.max()
show_images(image, 'Original image', distorted, 'Blurred image',\
out[0][0].detach().cpu().numpy().clip(0,1), 'Restored with WF-UNet', font=18)
```
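The `anscombe` and `exact_unbiased` helpers are used above without being shown; as a minimal sketch, the forward variance-stabilizing transform and its simple algebraic inverse (the exact unbiased inverse applied above adds further correction terms) look roughly like:

```python
import math

def anscombe(x):
    # variance-stabilizing transform for Poisson noise: f(x) = 2*sqrt(x + 3/8)
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def anscombe_inverse(y):
    # simple algebraic inverse; the exact unbiased inverse adds correction terms
    return (y / 2.0) ** 2 - 3.0 / 8.0
```

After the transform, the Poisson noise is approximately Gaussian with unit variance, which is why the denoiser can be applied in the transformed domain.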
```
import torch
import numpy as np
import torch.utils.data
from utils import dataloader
from models.casenet import casenet101 as CaseNet101
import os
import cv2
import argparse
import random
import shutil
import time
import warnings
import math
from PIL import ImageOps, Image
import pprint
import tempfile
import torch.nn as nn
import torch.nn.parallel
import torch.backends.cudnn as cudnn
import torch.distributed as dist
import torch.optim
import torch.multiprocessing as mp
import torch.utils.data.distributed
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import torchvision.models as models
import torch.nn.functional as F
from tqdm import tqdm
import matplotlib.pyplot as plt
#############################################
# This is code to generate our test dataset
#
# We take the list of Imagenet-R classes, sort the list, and
# take 100/200 of them by going through it with stride 2.
#
# This is how our distorted dataset is made, so these classes should match the
# classes used for training and the classes used for eval.
#############################################
# 200 classes used in ImageNet-R
imagenet_r_wnids = ['n01443537', 'n01484850', 'n01494475', 'n01498041', 'n01514859', 'n01518878', 'n01531178', 'n01534433', 'n01614925', 'n01616318', 'n01630670', 'n01632777', 'n01644373', 'n01677366', 'n01694178', 'n01748264', 'n01770393', 'n01774750', 'n01784675', 'n01806143', 'n01820546', 'n01833805', 'n01843383', 'n01847000', 'n01855672', 'n01860187', 'n01882714', 'n01910747', 'n01944390', 'n01983481', 'n01986214', 'n02007558', 'n02009912', 'n02051845', 'n02056570', 'n02066245', 'n02071294', 'n02077923', 'n02085620', 'n02086240', 'n02088094', 'n02088238', 'n02088364', 'n02088466', 'n02091032', 'n02091134', 'n02092339', 'n02094433', 'n02096585', 'n02097298', 'n02098286', 'n02099601', 'n02099712', 'n02102318', 'n02106030', 'n02106166', 'n02106550', 'n02106662', 'n02108089', 'n02108915', 'n02109525', 'n02110185', 'n02110341', 'n02110958', 'n02112018', 'n02112137', 'n02113023', 'n02113624', 'n02113799', 'n02114367', 'n02117135', 'n02119022', 'n02123045', 'n02128385', 'n02128757', 'n02129165', 'n02129604', 'n02130308', 'n02134084', 'n02138441', 'n02165456', 'n02190166', 'n02206856', 'n02219486', 'n02226429', 'n02233338', 'n02236044', 'n02268443', 'n02279972', 'n02317335', 'n02325366', 'n02346627', 'n02356798', 'n02363005', 'n02364673', 'n02391049', 'n02395406', 'n02398521', 'n02410509', 'n02423022', 'n02437616', 'n02445715', 'n02447366', 'n02480495', 'n02480855', 'n02481823', 'n02483362', 'n02486410', 'n02510455', 'n02526121', 'n02607072', 'n02655020', 'n02672831', 'n02701002', 'n02749479', 'n02769748', 'n02793495', 'n02797295', 'n02802426', 'n02808440', 'n02814860', 'n02823750', 'n02841315', 'n02843684', 'n02883205', 'n02906734', 'n02909870', 'n02939185', 'n02948072', 'n02950826', 'n02951358', 'n02966193', 'n02980441', 'n02992529', 'n03124170', 'n03272010', 'n03345487', 'n03372029', 'n03424325', 'n03452741', 'n03467068', 'n03481172', 'n03494278', 'n03495258', 'n03498962', 'n03594945', 'n03602883', 'n03630383', 'n03649909', 'n03676483', 'n03710193', 'n03773504', 
'n03775071', 'n03888257', 'n03930630', 'n03947888', 'n04086273', 'n04118538', 'n04133789', 'n04141076', 'n04146614', 'n04147183', 'n04192698', 'n04254680', 'n04266014', 'n04275548', 'n04310018', 'n04325704', 'n04347754', 'n04389033', 'n04409515', 'n04465501', 'n04487394', 'n04522168', 'n04536866', 'n04552348', 'n04591713', 'n07614500', 'n07693725', 'n07695742', 'n07697313', 'n07697537', 'n07714571', 'n07714990', 'n07718472', 'n07720875', 'n07734744', 'n07742313', 'n07745940', 'n07749582', 'n07753275', 'n07753592', 'n07768694', 'n07873807', 'n07880968', 'n07920052', 'n09472597', 'n09835506', 'n10565667', 'n12267677']
imagenet_r_wnids.sort()
classes_chosen = imagenet_r_wnids[::2] # Choose 100 classes for our dataset
assert len(classes_chosen) == 100
imagenet_path = "/var/tmp/namespace/hendrycks/imagenet/train"
class ImageNetSubsetDataset(datasets.ImageFolder):
"""
Dataset class to take a specified subset of some larger dataset
"""
def __init__(self, root, *args, **kwargs):
print("Using {0} classes {1}".format(len(classes_chosen), classes_chosen))
self.new_root = tempfile.mkdtemp()
for _class in classes_chosen:
orig_dir = os.path.join(root, _class)
assert os.path.isdir(orig_dir)
os.symlink(orig_dir, os.path.join(self.new_root, _class))
super().__init__(self.new_root, *args, **kwargs)
def __del__(self):
# Clean up
shutil.rmtree(self.new_root)
val_transforms = [
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(
mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]
),
]
val_dset = ImageNetSubsetDataset(
imagenet_path,
transform=transforms.Compose(val_transforms)
)
val_loader = torch.utils.data.DataLoader(
dataset=val_dset,
batch_size=1,
shuffle=False
)
def show_image(img, plt):
img = img.squeeze(0)
if img.dim() == 3:  # move channels to the last axis for RGB images
img = img.permute((1, 2, 0))
plt.imshow(img.cpu().numpy(), interpolation='nearest')
def do_test_sbd(net_, val_data_loader_, num_to_test = 1):
print('Running Inference....')
net_.eval()
# define plots to show data; squeeze=False keeps ax 2-D even when num_to_test is 1
fig, ax = plt.subplots(num_to_test, 2, squeeze=False)
fig.subplots_adjust(wspace=0.025, hspace=0.02)
fig.set_size_inches(30, 30)
for i_batch, (input_img, _) in enumerate(val_data_loader_):
print(input_img)
im = input_img.cuda()
out_masks = net_(im)
prediction = torch.sigmoid(out_masks[0])
print(prediction.shape)
# Show images
edges, _ = torch.max(prediction, dim=1, keepdim=False)
show_image(edges.detach().cpu(), ax[i_batch][0])
show_image(input_img.cpu(), ax[i_batch][1])
if i_batch + 1 >= num_to_test:
break
net = CaseNet101()
net = torch.nn.DataParallel(net.cuda())
ckpt = './checkpoints/sbd/model_checkpoint.pt'
print('loading ckpt :%s' % ckpt)
net.load_state_dict(torch.load(ckpt), strict=True)
do_test_sbd(net, val_loader, num_to_test=1)
```
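The symlink trick in `ImageNetSubsetDataset` can be exercised on its own; a self-contained sketch with fabricated class folders (the wnids here are just example names):

```python
import os
import tempfile

def make_subset_root(root, chosen_classes):
    # Build a temporary directory containing symlinks to only the chosen class folders
    new_root = tempfile.mkdtemp()
    for _class in chosen_classes:
        orig_dir = os.path.join(root, _class)
        assert os.path.isdir(orig_dir)
        os.symlink(orig_dir, os.path.join(new_root, _class))
    return new_root

# toy demonstration with fabricated class folders
base = tempfile.mkdtemp()
for name in ["n01443537", "n01484850", "n01494475"]:
    os.makedirs(os.path.join(base, name))
subset_root = make_subset_root(base, ["n01443537", "n01494475"])
```

Because `datasets.ImageFolder` only walks directories, pointing it at `subset_root` sees exactly the chosen classes without copying any image data.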
```
from pyxnat import Interface
import xmltodict
import xnat_downloader.cli.run as run
from pyxnat import Inspector
import os
```
### Play 04/11/2018
```
central = Interface(server="https://central.xnat.org", user='')
project = 'xnatDownload'
subject = 'sub-001'
proj_obj = central.select.project(project)
proj_obj.exists()
testSub = run.Subject(proj_obj, subject)
testSub.get_sessions()
testSub.ses_dict
testSub.get_scans(list(testSub.ses_dict.keys())[0])
testSub.scan_dict
testSub.download_scan('PU:anat-T1w', '/home/james/Documents/myTestBIDS')
```
### Play 04/10/2018
```
central = Interface(server="https://rpacs.iibi.uiowa.edu/xnat")
project = r'BIKE_EXTEND'
subject = r'sub-999'
proj_obj = central.select.project(project)
proj_obj.exists()
testSub = run.Subject(proj_obj, subject)
sub_objs = proj_obj.subjects()
# list the subjects by their label (e.g. sub-myname)
# instead of the RPACS_1223 ID given to them
sub_objs._id_header = 'label'
subjects = sub_objs.get()
testSub.get_sessions()
testSub.ses_dict
testSub.get_scans(list(testSub.ses_dict.keys())[0])
testSub.download_scan('PU:anat-FLAIR', '/home/james/Documents/myTestBIDS')
scan_obj = testSub.scan_dict['PU:anat-FLAIR']
scan_res = scan_obj.resources()
scan_files = scan_res.files()
exp_obj = scan_obj.parent()
help(scan_res)
scans = testSub.ses_dict['activepre'].scans()
exp_obj.scans().download()
testSub.scan_dict['PU:anat-FLAIR'].id()
inspect = Inspector(central)
inspect.experiment_values('xnat:mrSessionData')
project = 'GE3T_DEV'
subject = r'sub-voss01'
project_1 = 'VOSS_PACRAD'
subject_1 = '102'
proj_obj = central.select.project(project).subject(subject).experiments().get('')
all_subjects = central.select.project(project).subjects()
all_subjects.get()
type(proj_obj[0])
sub_xnat_path = '/project/{project}/subjects/{subject}'.format(subject=subject,
project=project)
sub_obj = central.select(sub_xnat_path)
sub_obj.exists()
sub_dict = xmltodict.parse(sub_obj.get())
session_labels = None
sessions = "ALL"
# participant
sub_dict['xnat:Subject']['@label']
# sessions
sub_dict['xnat:Subject']['xnat:experiments']['xnat:experiment']
# single session label
sub_dict['xnat:Subject']['xnat:experiments']['xnat:experiment']['@label']
# scans
sub_dict['xnat:Subject']['xnat:experiments']['xnat:experiment']['xnat:scans']['xnat:scan']
# single scan
sub_dict['xnat:Subject']['xnat:experiments']['xnat:experiment']['xnat:scans']['xnat:scan'][3]
# scan type
sub_dict['xnat:Subject']['xnat:experiments']['xnat:experiment']['xnat:scans']['xnat:scan'][3]['@type']
run.sort_sessions(sub_dict, session_labels, sessions)
ses_labels = [None]
ses_list = [sub_dict['xnat:Subject']['xnat:experiments']['xnat:experiment']]
ses_list
session_tech_labels = [ses['@label'] for ses in ses_list]
session_tech_labels
upload_date = [run.get_time(ses) for ses in ses_list]
upload_date
session_info = list(zip(ses_list, upload_date, session_tech_labels, ses_labels))
session_info
session_info.sort(key=lambda x: x[1])
session_info
ses_info = session_info[0]
ses_dict = ses_info[0]
project
central
tmp = sub_obj.resources()
tmp.get()
tmp2.get()
central.inspect.datatypes('xnat:subjectData')
central.inspect.experiment_types()
central.inspect.assessor_types()
central.inspect.scan_types()
central.inspect.structure()
central.select('//experiments').get()
constr = [('xnat:mrSessionData/PROJECT', '=', project)]
test_get = central.select('xnat:mrScanData').where(constr)
test_get.as_list()
central.inspect.datatypes()
central.inspect.datatypes('xnat:projectData')
central.inspect.datatypes('xnat:mrSessionData')
constraints = [('xnat:mrSessionData/Project','=', project)]
var = central.select('xnat:mrSessionData', ['xnat:mrSessionData/SUBJECT_ID', 'xnat:mrSessionData/AGE']).where(constraints)
var
central.inspect.datatypes('xnat:projectData')
constraints = [('xnat:otherDicomSessionData/PROJECT','=', project)]
var2 = central.select('xnat:otherDicomSessionData', ['xnat:mrSessionData/SUBJECT_ID', 'xnat:mrSessionData/AGE']).where(constraints)
central.select('//subjects').get('ID')
proj_obj = central.select.project(project)
scan_obj = central.select.project(project).subject(subject).experiment('20180131').scans().get('')[11]
all_scans = central.select.project(project).subject(subject).experiment('20180131').scans()
ses_obj = central.select.project(project_1).subject(subject_1).experiment('20160218').scans()
first_scan = ses_obj.get('')[0]
# rsrce_obj = first_scan.resources().get('')
# fil = rsrce_obj.files().get('')[0]
parent_obj = first_scan.parent().label()
# parent_obj.label()
parent_obj
first_scan.id()
all_scans.download(dest_dir='/home/james/Documents', type='task-block_bold', extract=True)
scan_rce = scan_obj.resources()
scan_files = scan_rce.files()
dicom_rce = scan_obj.resource('DICOM')
all_files = dicom_rce.files()
file_ex = all_files.get('')[0]
file_ex.get_copy('/home/james/')
sub_obj = proj_obj.subjects()
sub_obj
sub_obj.get('ID')
subject = proj_obj.subject('sub-voss01')
subject.experiments()
ses_obj = subject.experiments()
# have to run twice to get the correct result (e.g. the date)
ses_obj.get('label')
test_ses_first = ses_obj.first()
test_scans = test_ses_first.scans()
scan = test_scans.fetchone()
scan
test_ses_first.attrs.get()
scans_obj = ses_obj.scans()
scan_tmp = scans_obj.get()[4]
# how to access scan type
scan_tmp.attrs.get('type')
rsrcs = scans_obj.resources()
tmp_first = rsrcs.first()
tmp = tmp_first.files()
scan1 = scans_obj.resource('1')
scan1.files().get()
scans_obj.resources
rsrcs.get('series_description')
scans_obj._get_array
central.inspect.datatypes('xnat:mrScanData')
```
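The session-sorting cells above pair each session with its upload date and sort chronologically; note that in Python 3 the `zip` must be materialized into a list before sorting. A self-contained sketch with made-up sessions and dates:

```python
ses_list = ["sesB", "sesA", "sesC"]               # hypothetical session objects
upload_dates = ["2018-03-02", "2018-01-15", "2018-02-20"]
labels = ["b", "a", "c"]

session_info = list(zip(ses_list, upload_dates, labels))  # zip() is lazy in Python 3
session_info.sort(key=lambda x: x[1])             # chronological by upload date
```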
## Test data: 05/08/2018
```
sub_obj.attrs.get('label').zfill(3)
central = Interface(server="https://central.xnat.org")
project = 'xnatDownload'
subject = '21'
proj_obj = central.select.project(project)
proj_obj.exists()
testSub = run.Subject(proj_obj, subject)
testSub.get_sessions()
ses_label = list(testSub.ses_dict.keys())[0]
testSub.get_scans(ses_label)
testSub.scan_dict
scan = "T1rho - SL10 (NO AUTO PRESCAN)"
dest = '/home/james/Downloads'
scan_repl_dict = {
"SAG FSPGR BRAVO": "anat-T1w",
"Field Map": "fmap",
"T1rho - SL50": "anat-T1rho_acq-SL50",
"T1rho - SL10 (NO AUTO PRESCAN)": "anat-T1rho_acq-SL10",
"fMRI Resting State": "func-bold_task-rest",
"fMRI SIMON": "func-bold_task-simon",
"DTI": "dwi",
"3D ASL": "func-asl",
"PROBE-SV 35": "mrs-fid",
"PU:SAG FSPGR BRAVO": "anat-T1w_rec-pu",
"PU:fMRI Resting State": "func-bold_task-rest_rec-pu",
"PU:fMRI SIMON": "func-bold_task-simon_rec-pu",
"Cerebral Blood Flow": "func-asl_rec-cbf",
"NOT DIAGNOSTIC: PFile-PROBE-SV 35": "mrs-fid_rec-pfile"
}
bids_num_len = 3
testSub.download_scan_unformatted(scan, dest, scan_repl_dict, bids_num_len)
```
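The `scan_repl_dict` and `bids_num_len` inputs above drive a scanner-name-to-BIDS rename; a sketch of how such a mapping and zero-padding could combine into a filename (the exact output format of `download_scan_unformatted` is not shown here, so `bids_name` is a hypothetical helper):

```python
scan_repl_dict = {
    "SAG FSPGR BRAVO": "anat-T1w",
    "T1rho - SL10 (NO AUTO PRESCAN)": "anat-T1rho_acq-SL10",
}
bids_num_len = 3

def bids_name(subject_number, scan_type):
    # zero-pad the subject number and translate the scanner's scan type
    sub = "sub-" + str(subject_number).zfill(bids_num_len)
    return "{}_{}".format(sub, scan_repl_dict[scan_type])
```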
# Session #5: Automate ML workflows and focus on innovation (300)
In this session, you will learn how to use [SageMaker Pipelines](https://docs.aws.amazon.com/sagemaker/latest/dg/pipelines-sdk.html) to train a [Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html) Transformer model and deploy it. The SageMaker integration with Hugging Face makes it easy to train and deploy advanced NLP models. A Lambda step in SageMaker Pipelines enables you to easily do lightweight model deployments and other serverless operations.
You will learn how to:
1. Set up the environment and permissions
2. Define a pipeline with preprocessing, training, and deployment steps
3. Run the pipeline
4. Test inference
Let's get started! 🚀
---
*If you are going to use SageMaker in a local environment (not SageMaker Studio or Notebook Instances), you need access to an IAM Role with the required permissions for SageMaker. You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html).*
**Prerequisites**:
- Make sure your notebook environment has IAM managed policy `AmazonSageMakerPipelinesIntegrations` as well as `AmazonSageMakerFullAccess`
**Blog Post**
* [Use a SageMaker Pipeline Lambda step for lightweight model deployments](https://aws.amazon.com/de/blogs/machine-learning/use-a-sagemaker-pipeline-lambda-step-for-lightweight-model-deployments/)
# Development Environment and Permissions
## Installation & Imports
We'll start by updating the SageMaker SDK, and importing some necessary packages.
```
!pip install "sagemaker>=2.48.0" --upgrade
import boto3
import os
import numpy as np
import pandas as pd
import sagemaker
import sys
import time
from sagemaker.workflow.parameters import ParameterInteger, ParameterFloat, ParameterString
from sagemaker.lambda_helper import Lambda
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.workflow.steps import CacheConfig, ProcessingStep
from sagemaker.huggingface import HuggingFace, HuggingFaceModel
import sagemaker.huggingface
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.steps import TrainingStep
from sagemaker.processing import ScriptProcessor
from sagemaker.workflow.properties import PropertyFile
from sagemaker.workflow.step_collections import CreateModelStep, RegisterModel
from sagemaker.workflow.conditions import ConditionLessThanOrEqualTo,ConditionGreaterThanOrEqualTo
from sagemaker.workflow.condition_step import ConditionStep, JsonGet
from sagemaker.workflow.pipeline import Pipeline, PipelineExperimentConfig
from sagemaker.workflow.execution_variables import ExecutionVariables
```
## Permissions
_If you are going to use SageMaker in a local environment, you need access to an IAM Role with the required permissions for SageMaker. You can find more about it [here](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html)._
```
import sagemaker
sess = sagemaker.Session()
region = sess.boto_region_name
# sagemaker session bucket -> used for uploading data, models and logs
# sagemaker will automatically create this bucket if it does not exist
sagemaker_session_bucket=None
if sagemaker_session_bucket is None and sess is not None:
# set to default bucket if a bucket name is not given
sagemaker_session_bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
sagemaker_session = sagemaker.Session(default_bucket=sagemaker_session_bucket)
print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sagemaker_session.default_bucket()}")
print(f"sagemaker session region: {sagemaker_session.boto_region_name}")
```
# Pipeline Overview

# Defining the Pipeline
## 0. Pipeline parameters
Before defining the pipeline, it is important to parameterize it. A SageMaker Pipeline can be parameterized directly, including instance types and counts.
Read more about Parameters in the [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/build-and-manage-parameters.html)
```
# S3 prefix where all assets will be stored
s3_prefix = "hugging-face-pipeline-demo"
# s3 bucket used for storing assets and artifacts
bucket = sagemaker_session.default_bucket()
# aws region used
region = sagemaker_session.boto_region_name
# base name prefix for sagemaker jobs (training, processing, inference)
base_job_prefix = s3_prefix
# Cache configuration for workflow
cache_config = CacheConfig(enable_caching=False, expire_after="30d")
# package versions
transformers_version = "4.11.0"
pytorch_version = "1.9.0"
py_version = "py38"
model_id_="distilbert-base-uncased"
dataset_name_="imdb"
model_id = ParameterString(name="ModelId", default_value="distilbert-base-uncased")
dataset_name = ParameterString(name="DatasetName", default_value="imdb")
```
## 1. Processing Step
A SKLearn Processing step is used to invoke a SageMaker Processing job with a custom Python script, `preprocessing.py`.
### Processing Parameter
```
processing_instance_type = ParameterString(name="ProcessingInstanceType", default_value="ml.c5.2xlarge")
processing_instance_count = ParameterInteger(name="ProcessingInstanceCount", default_value=1)
processing_script = ParameterString(name="ProcessingScript", default_value="./scripts/preprocessing.py")
```
### Processor
```
processing_output_destination = f"s3://{bucket}/{s3_prefix}/data"
sklearn_processor = SKLearnProcessor(
framework_version="0.23-1",
instance_type=processing_instance_type,
instance_count=processing_instance_count,
base_job_name=base_job_prefix + "/preprocessing",
sagemaker_session=sagemaker_session,
role=role,
)
step_process = ProcessingStep(
name="ProcessDataForTraining",
cache_config=cache_config,
processor=sklearn_processor,
job_arguments=["--transformers_version",transformers_version,
"--pytorch_version",pytorch_version,
"--model_id",model_id_,
"--dataset_name",dataset_name_],
outputs=[
ProcessingOutput(
output_name="train",
destination=f"{processing_output_destination}/train",
source="/opt/ml/processing/train",
),
ProcessingOutput(
output_name="test",
destination=f"{processing_output_destination}/test",
source="/opt/ml/processing/test",
),
ProcessingOutput(
output_name="validation",
destination=f"{processing_output_destination}/validation",
source="/opt/ml/processing/validation",
),
],
code=processing_script,
)
```
## 2. Model Training Step
We use SageMaker's [Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/sagemaker.huggingface.html) Estimator class to create a model training step for the Hugging Face [DistilBERT](https://huggingface.co/distilbert-base-uncased) model. Transformer-based models such as the original BERT can be very large and slow to train. DistilBERT, however, is a small, fast, cheap and light Transformer model trained by distilling BERT base. It reduces the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster.
The Hugging Face estimator also takes hyperparameters as a dictionary. The training instance type and size are pipeline parameters that can be easily varied in future pipeline runs without changing any code.
### Training Parameter
```
# training step parameters
training_entry_point = ParameterString(name="TrainingEntryPoint", default_value="train.py")
training_source_dir = ParameterString(name="TrainingSourceDir", default_value="./scripts")
training_instance_type = ParameterString(name="TrainingInstanceType", default_value="ml.p3.2xlarge")
training_instance_count = ParameterInteger(name="TrainingInstanceCount", default_value=1)
# hyperparameters, which are passed into the training job
epochs=ParameterString(name="Epochs", default_value="1")
eval_batch_size=ParameterString(name="EvalBatchSize", default_value="32")
train_batch_size=ParameterString(name="TrainBatchSize", default_value="16")
learning_rate=ParameterString(name="LearningRate", default_value="3e-5")
fp16=ParameterString(name="Fp16", default_value="True")
```
### Hugging Face Estimator
```
huggingface_estimator = HuggingFace(
entry_point=training_entry_point,
source_dir=training_source_dir,
base_job_name=base_job_prefix + "/training",
instance_type=training_instance_type,
instance_count=training_instance_count,
role=role,
transformers_version=transformers_version,
pytorch_version=pytorch_version,
py_version=py_version,
hyperparameters={
'epochs':epochs,
'eval_batch_size': eval_batch_size,
'train_batch_size': train_batch_size,
'learning_rate': learning_rate,
'model_id': model_id,
'fp16': fp16
},
sagemaker_session=sagemaker_session,
)
step_train = TrainingStep(
name="TrainHuggingFaceModel",
estimator=huggingface_estimator,
inputs={
"train": TrainingInput(
s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
"train"
].S3Output.S3Uri
),
"test": TrainingInput(
s3_data=step_process.properties.ProcessingOutputConfig.Outputs[
"test"
].S3Output.S3Uri
),
},
cache_config=cache_config,
)
```
## 3. Model evaluation Step
A ProcessingStep is used to evaluate the performance of the trained model. Based on the results of the evaluation, either the model is created, registered, and deployed, or the pipeline stops.
In the training job, the model was evaluated against the test dataset, and the result of the evaluation was stored in the `model.tar.gz` file saved by the training job. The results of that evaluation are copied into a `PropertyFile` in this ProcessingStep so that it can be used in the ConditionStep.
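A plain-Python sketch of that handoff: the evaluation script writes `evaluation.json` and a later step reads one value back out, JsonGet-style. The file name and `eval_accuracy` key are taken from this notebook's steps; a temporary directory stands in for `/opt/ml/processing/evaluation`, and the numbers are illustrative.

```python
import json
import os
import tempfile

report_dir = tempfile.mkdtemp()  # stand-in for /opt/ml/processing/evaluation
report = {"eval_accuracy": 0.91, "eval_loss": 0.27}  # illustrative numbers

# the evaluation script writes the report ...
with open(os.path.join(report_dir, "evaluation.json"), "w") as f:
    json.dump(report, f)

# ... and a later step reads a single value back out
with open(os.path.join(report_dir, "evaluation.json")) as f:
    eval_accuracy = json.load(f)["eval_accuracy"]
```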
### Evaluation Parameter
```
evaluation_script = ParameterString(name="EvaluationScript", default_value="./scripts/evaluate.py")
```
### Evaluator
```
script_eval = SKLearnProcessor(
framework_version="0.23-1",
instance_type=processing_instance_type,
instance_count=processing_instance_count,
base_job_name=base_job_prefix + "/evaluation",
sagemaker_session=sagemaker_session,
role=role,
)
evaluation_report = PropertyFile(
name="HuggingFaceEvaluationReport",
output_name="evaluation",
path="evaluation.json",
)
step_eval = ProcessingStep(
name="HuggingfaceEvalLoss",
processor=script_eval,
inputs=[
ProcessingInput(
source=step_train.properties.ModelArtifacts.S3ModelArtifacts,
destination="/opt/ml/processing/model",
)
],
outputs=[
ProcessingOutput(
output_name="evaluation",
source="/opt/ml/processing/evaluation",
destination=f"s3://{bucket}/{s3_prefix}/evaluation_report",
),
],
code=evaluation_script,
property_files=[evaluation_report],
cache_config=cache_config,
)
```
## 4. Register the model
The trained model is registered in the Model Registry under a Model Package Group. Each time a new model is registered, it is given a new version number by default. The model is registered in the "Approved" state so that it can be deployed. Registration will only happen if the output of the [6. Condition for deployment](#6.-Condition-for-deployment) is true, i.e, the metrics being checked are within the threshold defined.
```
model = HuggingFaceModel(
model_data=step_train.properties.ModelArtifacts.S3ModelArtifacts,
role=role,
transformers_version=transformers_version,
pytorch_version=pytorch_version,
py_version=py_version,
sagemaker_session=sagemaker_session,
)
model_package_group_name = "HuggingFaceModelPackageGroup"
step_register = RegisterModel(
name="HuggingFaceRegisterModel",
model=model,
content_types=["application/json"],
response_types=["application/json"],
inference_instances=["ml.g4dn.xlarge", "ml.m5.xlarge"],
transform_instances=["ml.g4dn.xlarge", "ml.m5.xlarge"],
model_package_group_name=model_package_group_name,
approval_status="Approved",
)
```
## 5. Model Deployment
We create a custom step `ModelDeployment` derived from the provided `LambdaStep`. This step creates a Lambda function and invokes it to deploy our model as a SageMaker Endpoint.
```
# custom Helper Step for ModelDeployment
from utils.deploy_step import ModelDeployment
# we will use the iam role from the notebook session for the created endpoint
# this role will be attached to our endpoint and need permissions, e.g. to download assets from s3
sagemaker_endpoint_role=sagemaker.get_execution_role()
step_deployment = ModelDeployment(
model_name=f"{model_id_}-{dataset_name_}",
registered_model=step_register.steps[0],
endpoint_instance_type="ml.g4dn.xlarge",
sagemaker_endpoint_role=sagemaker_endpoint_role,
autoscaling_policy=None,
)
```
## 6. Condition for deployment
For the condition to be `True` and the steps after evaluation to run, the evaluated accuracy of the Hugging Face model must be greater than or equal to our `ThresholdAccuracy` parameter.
### Condition Parameter
```
threshold_accuracy = ParameterFloat(name="ThresholdAccuracy", default_value=0.8)
```
### Condition
```
cond_gte = ConditionGreaterThanOrEqualTo(
left=JsonGet(
step=step_eval,
property_file=evaluation_report,
json_path="eval_accuracy",
),
right=threshold_accuracy,
)
step_cond = ConditionStep(
name="CheckHuggingfaceEvalAccuracy",
conditions=[cond_gte],
if_steps=[step_register, step_deployment],
else_steps=[],
)
```
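Stripped of the SageMaker types, the branching above reduces to a single comparison; a plain-Python sketch (`steps_to_run` is a hypothetical helper, and the step names mirror those defined in this notebook):

```python
def steps_to_run(eval_accuracy, threshold_accuracy):
    # mirror of the ConditionStep: register + deploy only when the metric clears the bar
    if_steps = ["HuggingFaceRegisterModel", "ModelDeployment"]
    else_steps = []
    return if_steps if eval_accuracy >= threshold_accuracy else else_steps
```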
# Pipeline definition and execution
SageMaker Pipelines constructs the pipeline graph from the implicit definition created by the way pipeline steps inputs and outputs are specified. There's no need to specify that a step is a "parallel" or "serial" step. Steps such as model registration after the condition step are not listed in the pipeline definition because they do not run unless the condition is true. If so, they are run in order based on their specified inputs and outputs.
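As an illustration of that implicit ordering, here is a toy dependency resolver (not SageMaker code) applied to the step names used in this notebook: each step becomes runnable once every step whose outputs it consumes has finished.

```python
def topo_order(deps):
    # deps maps each step to the steps whose outputs it consumes
    order, done = [], set()
    while len(done) < len(deps):
        for step, inputs in deps.items():
            if step not in done and all(i in done for i in inputs):
                order.append(step)
                done.add(step)
    return order

# dependency structure implied by this notebook's step inputs/outputs
deps = {
    "ProcessDataForTraining": [],
    "TrainHuggingFaceModel": ["ProcessDataForTraining"],
    "HuggingfaceEvalLoss": ["TrainHuggingFaceModel"],
    "CheckHuggingfaceEvalAccuracy": ["HuggingfaceEvalLoss"],
}
```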
Each Parameter we defined holds a default value, which can be overwritten before starting the pipeline. [Parameter Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/build-and-manage-parameters.html)
### Overwriting Parameters
```
# define parameter which should be overwritten
pipeline_parameters=dict(
ModelId="distilbert-base-uncased",
ThresholdAccuracy=0.7,
Epochs="3",
TrainBatchSize="32",
EvalBatchSize="64",
)
```
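Conceptually, starting a run resolves each parameter to the override if one was passed and to its declared default otherwise; a plain-Python sketch of that merge (mirroring a few of this pipeline's parameters):

```python
# declared defaults
defaults = {
    "ModelId": "distilbert-base-uncased",
    "ThresholdAccuracy": 0.8,
    "Epochs": "1",
    "TrainBatchSize": "16",
}

# values passed at pipeline start time
overrides = {"ThresholdAccuracy": 0.7, "Epochs": "3", "TrainBatchSize": "32"}

resolved = {**defaults, **overrides}  # overrides win, untouched parameters keep defaults
```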
### Create Pipeline
```
pipeline = Pipeline(
name="HuggingFaceDemoPipeline",
parameters=[
model_id,
dataset_name,
processing_instance_type,
processing_instance_count,
processing_script,
training_entry_point,
training_source_dir,
training_instance_type,
training_instance_count,
evaluation_script,
threshold_accuracy,
epochs,
eval_batch_size,
train_batch_size,
learning_rate,
fp16
],
steps=[step_process, step_train, step_eval, step_cond],
sagemaker_session=sagemaker_session,
)
```
We can examine the pipeline definition in JSON format. You also can inspect the pipeline graph in SageMaker Studio by going to the page for your pipeline.
```
import json
json.loads(pipeline.definition())
```

`upsert` creates or updates the pipeline.
```
pipeline.upsert(role_arn=role)
```
### Run the pipeline
```
execution = pipeline.start(parameters=pipeline_parameters)
execution.wait()
```
## Getting predictions from the endpoint
After the previous cell completes, you can check whether the endpoint has finished deploying.
We can use the `endpoint_name` to create a `HuggingFacePredictor` object that will be used to get predictions.
```
from sagemaker.huggingface import HuggingFacePredictor
endpoint_name = f"{model_id_}-{dataset_name_}"
# check if endpoint is up and running
print(f"https://console.aws.amazon.com/sagemaker/home?region={region}#/endpoints/{endpoint_name}")
hf_predictor = HuggingFacePredictor(endpoint_name,sagemaker_session=sagemaker_session)
```
### Test data
Here are a couple of sample reviews we would like to classify as positive (`pos`) or negative (`neg`). Demonstrating the power of advanced Transformer-based models such as this Hugging Face model, the model should do quite well even though the reviews are mixed.
```
sentiment_input1 = {"inputs":"Although the movie had some plot weaknesses, it was engaging. Special effects were mind boggling. Can't wait to see what this creative team does next."}
hf_predictor.predict(sentiment_input1)
sentiment_input2 = {"inputs":"There was some good acting, but the story was ridiculous. The other sequels in this franchise were better. It's time to take a break from this IP, but if they switch it up for the next one, I'll check it out."}
hf_predictor.predict(sentiment_input2)
```
## Cleanup Resources
The following cell will delete the resources created by the Lambda function and the Lambda itself.
Deleting other resources such as the S3 bucket and the IAM role for the Lambda function is the responsibility of the notebook user.
```
sm_client = boto3.client("sagemaker")
# Delete the Lambda function
step_deployment.func.delete()
# Delete the endpoint
hf_predictor.delete_endpoint()
```
# Assignment 2
For this assignment you'll be looking at 2017 data on immunizations from the CDC. Your datafile for this assignment is in [assets/NISPUF17.csv](assets/NISPUF17.csv). A data users guide for this, which you'll need to map the variables in the data to the questions being asked, is available at [assets/NIS-PUF17-DUG.pdf](assets/NIS-PUF17-DUG.pdf). **Note: you may have to go to your Jupyter tree (click on the Coursera image) and navigate to the assignment 2 assets folder to see this PDF file.**
## Question 1
Write a function called `proportion_of_education` which returns the proportion of children in the dataset who had a mother with the education levels equal to less than high school (<12), high school (12), more than high school but not a college graduate (>12) and college degree.
*This function should return a dictionary in the form of (use the correct numbers, do not round numbers):*
```
{"less than high school":0.2,
"high school":0.4,
"more than high school but not college":0.2,
"college":0.2}
```
```
import pandas as pd
def proportion_of_education():
df = pd.read_csv("assets/NISPUF17.csv", index_col="SEQNUMC")
del df['Unnamed: 0']
df.sort_index(inplace=True)
df = df["EDUC1"].to_frame()
count = len(df.index)
lhs = df.loc[df["EDUC1"] == 1].count()["EDUC1"] / count
hs = df.loc[df["EDUC1"] == 2].count()["EDUC1"] / count
mhs = df.loc[df["EDUC1"] == 3].count()["EDUC1"] / count
college = df.loc[df["EDUC1"] == 4].count()["EDUC1"] / count
return {"less than high school": lhs,
"high school": hs,
"more than high school but not college": mhs,
"college": college
}
assert type(proportion_of_education())==type({}), "You must return a dictionary."
assert len(proportion_of_education()) == 4, "You have not returned a dictionary with four items in it."
assert "less than high school" in proportion_of_education().keys(), "You have not returned a dictionary with the correct keys."
assert "high school" in proportion_of_education().keys(), "You have not returned a dictionary with the correct keys."
assert "more than high school but not college" in proportion_of_education().keys(), "You have not returned a dictionary with the correct keys."
assert "college" in proportion_of_education().keys(), "You have not returned a dictionary with the correct keys."
```
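Under the hood, the proportion computation is just a normalized count over the `EDUC1` codes; a pandas-free sketch with a fabricated sample (not the CDC data):

```python
from collections import Counter

labels = {1: "less than high school", 2: "high school",
          3: "more than high school but not college", 4: "college"}

educ_codes = [1, 2, 2, 3, 4, 4, 4, 2]  # fabricated sample, not the CDC data
counts = Counter(educ_codes)
total = len(educ_codes)
proportions = {labels[code]: counts[code] / total for code in labels}
```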
## Question 2
Let's explore the relationship between being fed breastmilk as a child and getting a seasonal influenza vaccine from a healthcare provider. Return a tuple of the average number of influenza vaccines for those children we know received breastmilk as a child and those we know did not.
*This function should return a tuple in the form (use the correct numbers):*
```
(2.5, 0.1)
```
```
def average_influenza_doses():
    df = pd.read_csv("assets/NISPUF17.csv", index_col="SEQNUMC")
    del df['Unnamed: 0']
    df.sort_index(inplace=True)
    df = df[["CBF_01", "P_NUMFLU"]]
    df = df.dropna()
    df = df.groupby(["CBF_01"]).mean()
    return (df.loc[1]["P_NUMFLU"], df.loc[2]["P_NUMFLU"])
assert len(average_influenza_doses())==2, "Return two values in a tuple, the first for yes and the second for no."
```
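The `groupby(...).mean()` pattern above can be seen end to end on a tiny made-up frame (illustrative values only, using the same assumed 1 = yes / 2 = no coding of `CBF_01`):

```python
import pandas as pd

# Made-up rows: three breastfed children (code 1), two not (code 2)
toy = pd.DataFrame({"CBF_01":   [1, 1, 2, 2, 1],
                    "P_NUMFLU": [2, 3, 0, 1, 1]})
means = toy.groupby("CBF_01")["P_NUMFLU"].mean()
print(float(means.loc[1]), float(means.loc[2]))  # 2.0 0.5
```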
## Question 3
It would be interesting to see if there is any evidence of a link between vaccine effectiveness and sex of the child. Calculate the ratio of the number of children who contracted chickenpox but were vaccinated against it (at least one varicella dose) versus those who were vaccinated but did not contract chickenpox. Return results by sex.
*This function should return a dictionary in the form of (use the correct numbers):*
```
{"male":0.2,
"female":0.4}
```
Note: To aid in verification, the `chickenpox_by_sex()['female']` value the autograder is looking for starts with the digits `0.0077`.
```
def chickenpox_by_sex():
    df = pd.read_csv("assets/NISPUF17.csv", index_col="SEQNUMC")
    del df['Unnamed: 0']
    df.sort_index(inplace=True)
    df = df[["SEX", "HAD_CPOX", "P_NUMVRC"]]
    df["SEX"] = df["SEX"].replace({1: "Male", 2: "Female"})
    df = df.fillna(0)
    df = df[(df["P_NUMVRC"] > 0) & (df["HAD_CPOX"].isin((1, 2)))]
    # number of vaccinated males that contracted chickenpox
    nmvc = df[(df["SEX"] == "Male") & (df["HAD_CPOX"] == 1)].count()["SEX"]
    # number of vaccinated males that did not contract chickenpox
    nmvnc = df[(df["SEX"] == "Male") & (df["HAD_CPOX"] == 2)].count()["SEX"]
    # number of vaccinated females that contracted chickenpox
    nfvc = df[(df["SEX"] == "Female") & (df["HAD_CPOX"] == 1)].count()["SEX"]
    # number of vaccinated females that did not contract chickenpox
    nfvnc = df[(df["SEX"] == "Female") & (df["HAD_CPOX"] == 2)].count()["SEX"]
    return {"male": nmvc / nmvnc, "female": nfvc / nfvnc}
chickenpox_by_sex()
assert len(chickenpox_by_sex())==2, "Return a dictionary with two items, the first for males and the second for females."
```
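The four filtered counts can also be collected in one pass with `pd.crosstab`; a sketch on made-up data (same assumed 1 = contracted / 2 = did not coding for `HAD_CPOX`):

```python
import pandas as pd

# Made-up vaccinated-only rows, illustrative values
toy = pd.DataFrame({"SEX":      ["Male", "Male", "Male", "Female", "Female"],
                    "HAD_CPOX": [1, 2, 2, 1, 2]})
counts = pd.crosstab(toy["SEX"], toy["HAD_CPOX"])
ratios = counts[1] / counts[2]  # contracted / did-not-contract, per sex
print({sex: float(r) for sex, r in ratios.items()})  # {'Female': 1.0, 'Male': 0.5}
```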
## Question 4
A correlation is a statistical relationship between two variables. If we wanted to know if vaccines work, we might look at the correlation between the use of the vaccine and whether it results in prevention of the infection or disease [1]. In this question, you are to see if there is a correlation between having had the chicken pox and the number of chickenpox vaccine doses given (varicella).
Some notes on interpreting the answer. The `had_chickenpox_column` is either `1` (for yes) or `2` (for no), and the `num_chickenpox_vaccine_column` is the number of doses a child has been given of the varicella vaccine. A positive correlation (e.g., `corr > 0`) means that an increase in `had_chickenpox_column` (which means more no’s) would also increase the values of `num_chickenpox_vaccine_column` (which means more doses of vaccine). If there is a negative correlation (e.g., `corr < 0`), it indicates that having had chickenpox is related to an increase in the number of vaccine doses.
Also, `pval` is the probability of observing, by chance alone, a correlation between `had_chickenpox_column` and `num_chickenpox_vaccine_column` at least as large as the one computed. A small `pval` means that the observed correlation is highly unlikely to occur by chance. In this case, `pval` should be very small (it will end in `e-18`, indicating a very small number).
[1] This isn’t really the full picture, since we are not looking at when the dose was given. It’s possible that children had chickenpox and then their parents went to get them the vaccine. Does this dataset have the data we would need to investigate the timing of the dose?
```
def corr_chickenpox():
    import scipy.stats as stats
    import numpy as np
    import pandas as pd
    # this is just an example dataframe
    df = pd.DataFrame({"had_chickenpox_column": np.random.randint(1, 3, size=(100)),
                       "num_chickenpox_vaccine_column": np.random.randint(0, 6, size=(100))})
    # here is some stub code to actually run the correlation
    corr, pval = stats.pearsonr(df["had_chickenpox_column"], df["num_chickenpox_vaccine_column"])
    # just return the correlation
    # return corr
    df = pd.read_csv("assets/NISPUF17.csv")
    df.sort_index(inplace=True)
    df = df[["HAD_CPOX", "P_NUMVRC"]]
    df = df.dropna()
    df = df[df["HAD_CPOX"] <= 3]  # keep only the yes/no responses, dropping codes such as 77/99
    corr, pval = stats.pearsonr(df["HAD_CPOX"], df["P_NUMVRC"])
    return corr
assert -1<=corr_chickenpox()<=1, "You must return a float number between -1.0 and 1.0."
corr_chickenpox()
```
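As a quick sanity check of the sign convention (made-up numbers, not the survey data): when the higher code 2 ("no chickenpox") tends to come with more doses, `pearsonr` is positive:

```python
import scipy.stats as stats

# Code 2 ("no") rows carry more doses, so corr should be positive
had_cpox = [1, 1, 1, 2, 2, 2]
doses    = [0, 1, 0, 2, 3, 2]
corr, pval = stats.pearsonr(had_cpox, doses)
print(corr > 0)  # True
```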
## Multiple Output Models
+ Multi Task Elastic Net
+ Multi Task Models
```
import pandas as pd
import numpy as np
from sklearn.linear_model import MultiTaskElasticNet
from sklearn.multioutput import MultiOutputRegressor
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score, mean_squared_error
```
#### Read in Crime by year / month data set
```
## read in data
path = '../Homeworks/chicagoCrimesByYear.csv'
df = pd.read_csv(path).fillna(0)
## set month year as index
df = df.sort_values(by=['year', 'month'])
df.set_index(['year', 'month'], inplace=True)
df.head()
```
#### One Month Ahead Forecasting using Multi Task Elastic Net
This predicts all crime counts one month in advance, using a linear model (elastic net) at the core.
```
# shift by one month: features at month t, targets at month t+1
X = df.iloc[0:-1, :]
y = df.iloc[1:, :]
# fit the model
model = MultiTaskElasticNet().fit(X, y)
# create predictions, formats to pandas data frame
preds = pd.DataFrame(model.predict(X), columns=df.columns, index=y.index)
# create a dictionary to calculate performance for every column
r2_dict = {}
for col in df.columns:
    r2_dict[col] = r2_score(y[col], preds[col])
print('models r2 scores :')
r2_dict
```
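The one-step lag used above (features at month t, targets at month t + 1) can be checked on a tiny frame (illustrative numbers, not the crime data):

```python
import pandas as pd

# Stand-in for one monthly crime-count column
df_demo = pd.DataFrame({"theft": [10, 12, 11, 13]},
                       index=pd.RangeIndex(4, name="month"))
X = df_demo.iloc[0:-1, :]  # months 0..2 become features
y = df_demo.iloc[1:, :]    # months 1..3 become targets
pairs = [(int(a), int(b)) for a, b in zip(X["theft"], y["theft"])]
print(pairs)  # [(10, 12), (12, 11), (11, 13)]
```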
#### One Month Ahead Forecasting using DecisionTreeRegressor
This predicts all crime counts one month in advance, using one decision tree regressor per output column.
```
# shift by one month: features at month t, targets at month t+1
X = df.iloc[0:-1, :]
y = df.iloc[1:, :]
# fit the model: one decision tree per output column
model = MultiOutputRegressor(DecisionTreeRegressor()).fit(X, y)
# create predictions, formats to pandas data frame
preds = pd.DataFrame(model.predict(X), columns=df.columns, index=y.index)
# create a dictionary to calculate performance for every column
r2_dict = {}
for col in df.columns:
    r2_dict[col] = r2_score(y[col], preds[col])
print('models r2 scores :')
r2_dict
```
#### One Year Ahead Forecasting using Multi Task Elastic Net
This predicts all crime counts one year in advance, using a multi-task elastic net on yearly totals.
```
# aggregate monthly counts into yearly totals
df_grouped = df.groupby(df.index.get_level_values('year')).sum()
# filter out the partial year
df_grouped = df_grouped.loc[df_grouped.index < 2020, :]
# shift by one year: features at year t, targets at year t+1
X = df_grouped.iloc[0:-1, :]
y = df_grouped.iloc[1:, :]
# fit the model
model = MultiTaskElasticNet().fit(X, y)
# create predictions, formats to pandas data frame
preds = pd.DataFrame(model.predict(X), columns=df_grouped.columns, index=y.index)
# create a dictionary to calculate performance for every column
r2_dict = {}
for col in df_grouped.columns:
    r2_dict[col] = r2_score(y[col], preds[col])
print('models r2 scores :')
r2_dict
```
#### Predict Next Year's crime trends
Use all of the data to create a prediction for next year.
```
# shift the index forward by one year so each prediction row
# is labelled with the year it forecasts
index = list(df_grouped.index)
index.append(index[-1] + 1)
index = index[1:]
preds = pd.DataFrame(model.predict(df_grouped), columns=df_grouped.columns, index=index)
preds.tail()
```
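The index bookkeeping above can be sketched in isolation (hypothetical years): appending one new label and dropping the first relabels each prediction with the year it forecasts, so the last row is next year's forecast:

```python
# Hypothetical yearly index standing in for df_grouped.index
index = [2015, 2016, 2017, 2018, 2019]
index.append(index[-1] + 1)  # add the one-year-ahead label
index = index[1:]            # drop the year with no prediction
print(index)  # [2016, 2017, 2018, 2019, 2020]
```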