| repo_name | path | license | content |
|---|---|---|---|
mne-tools/mne-tools.github.io | stable/_downloads/4a4a8e5bd5ae7cafea93a04d8c0a0d00/psf_ctf_vertices_lcmv.ipynb | bsd-3-clause | # Author: Olaf Hauk <olaf.hauk@mrc-cbu.cam.ac.uk>
#
# License: BSD-3-Clause
import mne
from mne.datasets import sample
from mne.beamformer import make_lcmv, make_lcmv_resolution_matrix
from mne.minimum_norm import get_cross_talk
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path / 'subjects'
meg_path = data_path / 'MEG' / 'sample'
fname_fwd = meg_path / 'sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_cov = meg_path / 'sample_audvis-cov.fif'
fname_evo = meg_path / 'sample_audvis-ave.fif'
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
# Read raw data
raw = mne.io.read_raw_fif(raw_fname)
# only pick good EEG/MEG sensors
raw.info['bads'] += ['EEG 053'] # bads + 1 more
picks = mne.pick_types(raw.info, meg=True, eeg=True, exclude='bads')
# Find events
events = mne.find_events(raw)
# event_id = {'aud/l': 1, 'aud/r': 2, 'vis/l': 3, 'vis/r': 4}
event_id = {'vis/l': 3, 'vis/r': 4}
tmin, tmax = -.2, .25 # epoch duration
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=tmin, tmax=tmax,
picks=picks, baseline=(-.2, 0.), preload=True)
del raw
# covariance matrix for pre-stimulus interval
tmin, tmax = -.2, 0.
cov_pre = mne.compute_covariance(epochs, tmin=tmin, tmax=tmax,
method='empirical')
# covariance matrix for post-stimulus interval (around main evoked responses)
tmin, tmax = 0.05, .25
cov_post = mne.compute_covariance(epochs, tmin=tmin, tmax=tmax,
method='empirical')
info = epochs.info
del epochs
# read forward solution
forward = mne.read_forward_solution(fname_fwd)
# use forward operator with fixed source orientations
mne.convert_forward_solution(forward, surf_ori=True,
force_fixed=True, copy=False)
# read noise covariance matrix
noise_cov = mne.read_cov(fname_cov)
# regularize noise covariance (we used 'empirical' above)
noise_cov = mne.cov.regularize(noise_cov, info, mag=0.1, grad=0.1,
eeg=0.1, rank='info')
"""
Explanation: Compute cross-talk functions for LCMV beamformers
Visualise the cross-talk function at one vertex for LCMV beamformers computed
with different data covariance matrices, and see how the choice of covariance
matrix affects the resulting cross-talk functions.
End of explanation
"""
# compute LCMV beamformer filters for pre-stimulus interval
filters_pre = make_lcmv(info, forward, cov_pre, reg=0.05,
noise_cov=noise_cov,
pick_ori=None, rank=None,
weight_norm=None,
reduce_rank=False,
verbose=False)
# compute LCMV beamformer filters for post-stimulus interval
filters_post = make_lcmv(info, forward, cov_post, reg=0.05,
noise_cov=noise_cov,
pick_ori=None, rank=None,
weight_norm=None,
reduce_rank=False,
verbose=False)
"""
Explanation: Compute LCMV filters with different data covariance matrices
End of explanation
"""
# compute cross-talk functions (CTFs) for one target vertex
sources = [3000]
verttrue = [forward['src'][0]['vertno'][sources[0]]] # pick one vertex
rm_pre = make_lcmv_resolution_matrix(filters_pre, forward, info)
stc_pre = get_cross_talk(rm_pre, forward['src'], sources, norm=True)
del rm_pre
rm_post = make_lcmv_resolution_matrix(filters_post, forward, info)
stc_post = get_cross_talk(rm_post, forward['src'], sources, norm=True)
del rm_post
"""
Explanation: Compute resolution matrices for the two LCMV beamformers
End of explanation
"""
brain_pre = stc_pre.plot('sample', 'inflated', 'lh', subjects_dir=subjects_dir,
figure=1, clim=dict(kind='value', lims=(0, .2, .4)))
brain_pre.add_text(0.1, 0.9, 'LCMV beamformer with pre-stimulus\ndata '
'covariance matrix', 'title', font_size=16)
# mark true source location for CTFs
brain_pre.add_foci(verttrue, coords_as_verts=True, scale_factor=1., hemi='lh',
color='green')
"""
Explanation: Visualize
Pre:
End of explanation
"""
brain_post = stc_post.plot('sample', 'inflated', 'lh',
subjects_dir=subjects_dir,
figure=2, clim=dict(kind='value', lims=(0, .2, .4)))
brain_post.add_text(0.1, 0.9, 'LCMV beamformer with post-stimulus\ndata '
'covariance matrix', 'title', font_size=16)
brain_post.add_foci(verttrue, coords_as_verts=True, scale_factor=1.,
hemi='lh', color='green')
"""
Explanation: Post:
End of explanation
"""
|
arcyfelix/Courses | 17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/01-Python-Crash-Course/.ipynb_checkpoints/Python Crash Course Exercises -checkpoint.ipynb | apache-2.0 | price = 300
"""
Explanation: Python Crash Course Exercises
This is an optional exercise to test your understanding of Python basics. The questions tend to have a financial theme, but don't look too deeply into the tasks themselves; many of them don't hold any real significance. If you find this extremely challenging, then you are probably not ready for the rest of this course yet and don't have enough programming experience to continue. I would suggest you take another course more geared towards complete beginners, such as Complete Python Bootcamp.
Exercises
Answer the questions or complete the tasks outlined in bold below, use the specific method described if applicable.
Task #1
Given price = 300 , use python to figure out the square root of the price.
End of explanation
"""
stock_index = "SP500"
"""
Explanation: Task #2
Given the string:
stock_index = "SP500"
Grab '500' from the string using indexing.
End of explanation
"""
stock_index = "SP500"
price = 300
"""
Explanation: Task #3
Given the variables:
stock_index = "SP500"
price = 300
Use .format() to print the following string:
The SP500 is at 300 today.
End of explanation
"""
stock_info = {'sp500':{'today':300,'yesterday': 250}, 'info':['Time',[24,7,365]]}
"""
Explanation: Task #4
Given the variable of a nested dictionary with nested lists:
stock_info = {'sp500':{'today':300,'yesterday': 250}, 'info':['Time',[24,7,365]]}
Use indexing and key calls to grab the following items:
Yesterday's SP500 price (250)
The number 365 nested inside a list nested inside the 'info' key.
End of explanation
"""
source_finder("PRICE:345.324:SOURCE--QUANDL")
"""
Explanation: Task #5
Given strings with this form where the last source value is always separated by two dashes --
"PRICE:345.324:SOURCE--QUANDL"
Create a function called source_finder() that returns the source. For example, the above string passed into the function would return "QUANDL"
End of explanation
"""
price_finder("What is the price?")
price_finder("DUDE, WHAT IS PRICE!!!")
price_finder("The price is 300")
"""
Explanation: Task #5
Create a function called price_finder that returns True if the word 'price' is in a string. Your function should work even if 'Price' is capitalized or next to punctuation ('price!')
End of explanation
"""
s = 'Wow that is a nice price, very nice Price! I said price 3 times.'
count_price(s)
"""
Explanation: Task #6
Create a function called count_price() that counts the number of times the word "price" occurs in a string. Account for capitalization and if the word price is next to punctuation.
End of explanation
"""
avg_price([3,4,5])
"""
Explanation: Task #7
Create a function called avg_price that takes in a list of stock price numbers and calculates the average (Sum of the numbers divided by the number of elements in the list). It should return a float.
End of explanation
"""
|
ctuning/ck-math | script/explore-clblast-matrix-size/clblast-client-single-configuration-analysis.ipynb | bsd-3-clause | import os
import sys
import json
import re
"""
Explanation: [PUBLIC] Analysis of CLBlast client multiple sizes
<a id="overview"></a>
Overview
This Jupyter Notebook analyses the performance that CLBlast (single configuration) achieves across a range of matrix sizes.
<a id="data"></a>
Get the experimental data from DropBox
NB: Please ignore this section if you are not interested in re-running or modifying this notebook.
The experimental data was collected on the experimental platform and archived as follows:
$ cd `ck find ck-math:script:<...>`
$ python <...>.py
$ ck zip local:experiment:* --archive_name=<...>.zip
It can be downloaded and extracted as follows:
$ wget <...>.zip
$ ck add repo:<....> --zip=<....>.zip --quiet
<a id="code"></a>
Data wrangling code
NB: Please ignore this section if you are not interested in re-running or modifying this notebook.
Includes
Standard
End of explanation
"""
import IPython as ip
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib as mp
print ('IPython version: %s' % ip.__version__)
print ('Pandas version: %s' % pd.__version__)
print ('NumPy version: %s' % np.__version__)
print ('Seaborn version: %s' % sns.__version__) # apt install python-tk
print ('Matplotlib version: %s' % mp.__version__)
import matplotlib.pyplot as plt
from matplotlib import cm
%matplotlib inline
from IPython.display import Image
from IPython.core.display import HTML
"""
Explanation: Scientific
If some of the scientific packages are missing, please install them using:
```
pip install jupyter pandas numpy matplotlib
```
End of explanation
"""
import ck.kernel as ck
print ('CK version: %s' % ck.__version__)
"""
Explanation: Collective Knowledge
If CK is not installed, please install it using:
```
pip install ck
```
End of explanation
"""
# Return the number of floating-point operations for C = alpha * A * B + beta * C,
# where A is a MxK matrix and B is a KxN matrix.
def xgemm_flops(alpha, beta, M, K, N):
flops_AB = 2*M*N*K if alpha!=0 else 0
flops_C = 2*M*N if beta!=0 else 0
flops = flops_AB + flops_C
return flops
# Return GFLOPS (Giga floating-point operations per second) for a known kernel and -1 otherwise.
def GFLOPS(kernel, run_characteristics, time_ms):
if kernel.lower().find('xgemm') != -1:
time_ms = np.float64(time_ms)
alpha = np.float64(run_characteristics['arg_alpha'])
beta = np.float64(run_characteristics['arg_beta'])
M = np.int64(run_characteristics['arg_m'])
K = np.int64(run_characteristics['arg_k'])
N = np.int64(run_characteristics['arg_n'])
return (1e-9 * xgemm_flops(alpha, beta, M, K, N)) / (1e-3 * time_ms)
else:
return (-1.0)
def convert2int(s):
if s[-1]=='K':
return np.int64(s[0:-1])*1024
else:
return np.int64(s)
def args_str(kernel, run):
args = ''
if kernel.lower().find('xgemm') != -1:
args = 'alpha=%s, beta=%s, M=%s, K=%s, N=%s' % \
(run['arg_alpha'], run['arg_beta'], run['arg_m'], run['arg_k'], run['arg_n'])
return args
"""
Explanation: Define helper functions
End of explanation
"""
def get_experimental_results(repo_uoa='local', tags='explore-clblast-matrix-size'):
module_uoa = 'experiment'
r = ck.access({'action':'search', 'repo_uoa':repo_uoa, 'module_uoa':module_uoa, 'tags':tags})
if r['return']>0:
print ("Error: %s" % r['error'])
exit(1)
experiments = r['lst']
dfs = []
for experiment in experiments:
data_uoa = experiment['data_uoa']
r = ck.access({'action':'list_points', 'repo_uoa':repo_uoa, 'module_uoa':module_uoa, 'data_uoa':data_uoa})
if r['return']>0:
print ("Error: %s" % r['error'])
exit(1)
for point in r['points']:
with open(os.path.join(r['path'], 'ckp-%s.0001.json' % point)) as point_file:
point_data_raw = json.load(point_file)
characteristics_list = point_data_raw['characteristics_list']
num_repetitions = len(characteristics_list)
# Obtain column data.
data = [
{
'repetition_id': repetition_id,
'm': convert2int(characteristics['run']['m'][0]),
'n': convert2int(characteristics['run']['n'][0]),
'k': convert2int(characteristics['run']['k'][0]),
#'mnk': convert2int(characteristics['run']['m'][0]) * convert2int(characteristics['run']['n'][0]) * convert2int(characteristics['run']['k'][0]),
'G': np.float32(characteristics['run']['GFLOPS_1'][0])
#'strategy' : tuner_output['strategy'],
#'config_id': config_id,
#'config' : config['parameters'],
#'kernel' : config['kernel']
#'args_id' : args_str(config['kernel'], characteristics['run']),
#'ms' : np.float64(config['time']),
#'GFLOPS' : GFLOPS(config['kernel'], characteristics['run'], config['time'])
}
for (repetition_id, characteristics) in zip(range(num_repetitions), characteristics_list)
#for (m,n,k,G,) in characteristics['run']
#for (config_id, config) in zip(range(len(tuner_output['result'])), tuner_output['result'])
]
#print data
#Construct a DataFrame.
df = pd.DataFrame(data)
# Set columns and index names.
df.columns.name = 'characteristics'
df.index.name = 'index'
df = df.set_index(['m', 'n', 'k', 'repetition_id'])
# Append to the list of similarly constructed DataFrames.
dfs.append(df)
# Concatenate all constructed DataFrames (i.e. stack on top of each other).
result = pd.concat(dfs)
return result.sortlevel(result.index.names)
df = get_experimental_results(tags='explore-clblast-matrix-size-client')
pd.options.display.max_columns = len(df.columns)
pd.options.display.max_rows = len(df.index)
df
df = df.sortlevel(df.index.names[3])
#df.sort_value(level=df.index.names[3])
#df = df.sort_values('mnk')
#pd.options.display.max_columns=2
#df = df.reset_index('mnk').sort_values('mnk')
df_mean = df.groupby(level=df.index.names[:-1]).mean()
df_std = df.groupby(level=df.index.names[:-1]).std()
df_mean.T \
.plot(yerr=df_std.T, title='GFLOPS',
kind='bar', rot=0, ylim=[0,150], figsize=[20, 12], grid=True, legend=True, colormap=cm.autumn, fontsize=16)
kernel = df.iloc[0].name[0]
kernel
"""
Explanation: Access the experimental data
End of explanation
"""
|
phasedchirp/Assorted-Projects | exercises/SlideRule-DS-Intensive/Inferential Statistics/sliderule_dsi_inferential_statistics_exercise_1.ipynb | gpl-2.0 | %matplotlib inline
from matplotlib import pyplot as plot
import pandas as pd
from scipy import stats
from statsmodels.api import qqplot
import numpy as np
df = pd.read_csv('data/human_body_temperature.csv')
# adding temp in degrees celsius
df['tempC'] = df.temperature.apply(lambda x: (x-32)*(5/9.))
"""
Explanation: What is the true normal human body temperature?
Background
The mean normal body temperature was held to be 37$^{\circ}$C or 98.6$^{\circ}$F for more than 120 years since it was first conceptualized and reported by Carl Wunderlich in a famous 1868 book. In 1992, this value was revised to 36.8$^{\circ}$C or 98.2$^{\circ}$F.
Exercise
In this exercise, you will analyze a dataset of human body temperatures and employ the concepts of hypothesis testing, confidence intervals, and statistical significance.
Answer the following questions in this notebook below and submit to your Github account.
Is the distribution of body temperatures normal?
Remember that this is a condition for the CLT, and hence the statistical tests we are using, to apply.
Is the true population mean really 98.6 degrees F?
Bring out the one sample hypothesis test! In this situation, is it approriate to apply a z-test or a t-test? How will the result be different?
At what temperature should we consider someone's temperature to be "abnormal"?
Start by computing the margin of error and confidence interval.
Is there a significant difference between males and females in normal temperature?
Set up and solve for a two sample hypothesis testing.
You can include written notes in notebook cells using Markdown:
- In the control panel at the top, choose Cell > Cell Type > Markdown
- Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
Resources
Information and data sources: http://www.amstat.org/publications/jse/datasets/normtemp.txt, http://www.amstat.org/publications/jse/jse_data_archive.htm
Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
End of explanation
"""
df.describe().apply(lambda x: np.round(x,2))
df.groupby("gender").describe().apply(lambda x: np.round(x,2))
"""
Explanation: Quick summary statistics
End of explanation
"""
# ignore error message. without the .show() qqplot has some buggy behavior when run with %matplotlib inline
print "normal q-q plot for the sample"
qqplot(df.temperature,line='45',fit=True).show()
"""
Explanation: Notably the mean temperature rounds to roughly 37 °C, which then converts to 98.6, suggesting the 98.6 value may be a result of rounding followed by conversion, rather than conversion followed by rounding.
Q1: Checking for approximate normality
Using a q-q plot here
End of explanation
"""
print "normal q-q plot for women in the sample"
qqplot(df[df.gender=="F"].temperature,line='45',fit=True).show()
print "normal q-q plot for men in the sample"
qqplot(df[df.gender=="M"].temperature,line='45',fit=True).show()
"""
Explanation: The majority of the data appears to fit the normal distribution relatively well. However, the lowest and highest quantiles show less of a good fit. There's also some apparent discreetness in the data, which apparently arises from rounding in the original data (see documentation). Splitting by gender, the women show less of a good fit to the normal distribution than men.
End of explanation
"""
df.groupby("gender").hist(column=["temperature"],layout=(2,1),sharex=True)
"""
Explanation: Histograms
Upper histogram is the distribution for women, lower for men.
End of explanation
"""
genderBin = np.where(df.gender=="F",1,0)
genderBin
plot.scatter(df.temperature,df.heart_rate,c=genderBin,cmap='winter')
"""
Explanation: These aren't precisely normal looking, women moreso than men, but they aren't particularly badly off. However, given that the normality assumption is maybe not totally justified here, a t test might be more appropriate than a z test, being more robust with regard to this assumption.
Checking for relationship between heart rate and temperature
Plot below suggests not too much going on with this.
End of explanation
"""
print "Correlation between heart rate and temperature is %.3f (p = %.3f)" %stats.pearsonr(df.temperature,df.heart_rate)
"""
Explanation: Slightly more formally:
End of explanation
"""
ttest1 = stats.ttest_1samp(df.temperature,98.6)
print "The t-statistic is %.3f and the p-value is %s" % ttest1
"""
Explanation: So the relationship is significant and is large enough that it isn't necessarily something to be ignored. While some of this might be attributed to the gender differences in heart rate, there's a still a lot of overlap. Will address this is a bit more detail below.
Q2: Is mean body temp 98.6 °F?
End of explanation
"""
n = len(df.temperature)
m = df.temperature.mean()
se = stats.sem(df.temperature)
h = se * stats.t.ppf(0.975, n-1)
print "Lower: %.2f, Upper: %.2f" %(m-h,m+h)
"""
Explanation: So, the mean here is significantly different from 98.6. This is not particularly shocking, given the mean of 98.25 calculated above. Constructing a 95% confidence interval around the mean, we find that it doesn't contain 98.6
End of explanation
"""
from statsmodels.stats.weightstats import ztest
ztest(x1=df.temperature,value=98.6)
np.array(df.temperature)
"""
Explanation: For comparison, a z-test gives quite similar answers:
End of explanation
"""
fTemp = df[df["gender"] == "F"].temperature
mTemp = df[df["gender"] == "M"].temperature
ttest2 = stats.ttest_ind(fTemp,mTemp)
print "The difference in means is %.3f degrees" % (fTemp.mean() - mTemp.mean())
print "The t-statistic is %.3f and the p-value is %.3f" % ttest2
"""
Explanation: Q3: What counts as normal?
This is fairly difficult to say. Assuming nobody in the sample was seriously ill when the measurements were taken, there's a non-trivial range of variability (sample values range between 96 and 100 °F). Given that the distribution is roughly normal, 2 sds from the mean might be a useful guideline. This would give the range 96.79 to 99.71 °F, corresponding fairly well to Wikipedia's listed values for the observed range.
Q4: Gender differences?
Do women and men show a difference in mean body temperature?
End of explanation
"""
import statsmodels.formula.api as smf
lm = smf.ols(formula='temperature ~ heart_rate + gender', data=df).fit()
print(lm.summary())
"""
Explanation: So, the (small) differences in the estimated means calculated above also appear to be significant, although for the data shows a lot of overlap (see summary statistics above).
Taking into account heart rate:
End of explanation
"""
|
jerkos/cobrapy | documentation_builder/loopless.ipynb | lgpl-2.1 | from matplotlib.pylab import *
%matplotlib inline
import cobra.test
from cobra import Reaction, Metabolite, Model
from cobra.flux_analysis.loopless import construct_loopless_model
from cobra.solvers import get_solver_name
"""
Explanation: Loopless FBA
The goal of this procedure is identification of a thermodynamically consistent flux state without loops, as implied by the name.
Usually, the model has the following constraints.
$$ S \cdot v = 0 $$
$$ lb \le v \le ub $$
However, this will allow for thermodynamically infeasible loops (referred to as type 3 loops) to occur, where flux flows around a cycle without any net change of metabolites. For most cases, this is not a major issue, as solutions with these loops can usually be converted to equivalent solutions without them. However, if a flux state is desired which does not exhibit any of these loops, loopless FBA can be used. The formulation used here is modified from Schellenberger et al.
We can make the model irreversible, so that all reactions will satisfy
$$ 0 \le lb \le v \le ub \le \max(ub) $$
We will add in boolean indicators as well, such that
$$ \max(ub) i \ge v $$
$$ i \in \{0, 1\} $$
We also want to ensure that an entry in the row space of S also exists with negative values wherever v is nonzero. In this expression, $1-i$ acts as a not to indicate inactivity of a reaction.
$$ S^\mathsf T x - (1 - i) (\max(ub) + 1) \le -1 $$
We will construct an LP integrating both constraints.
$$ \left(
\begin{matrix}
S & 0 & 0\
-I & \max(ub)I & 0 \
0 & (\max(ub) + 1)I & S^\mathsf T
\end{matrix}
\right)
\cdot
\left(
\begin{matrix}
v \
i \
x
\end{matrix}
\right)
\begin{matrix}
&=& 0 \
&\ge& 0 \
&\le& \max(ub)
\end{matrix}$$
Note that these extra constraints are not applied to boundary reactions which bring metabolites in and out of the system.
End of explanation
"""
figure(figsize=(10.5, 4.5), frameon=False)
gca().axis("off")
xlim(0.5, 3.5)
ylim(0.7, 2.2)
arrow_params = {"head_length": 0.08, "head_width": 0.1, "ec": "k", "fc": "k"}
text_params = {"fontsize": 25, "horizontalalignment": "center", "verticalalignment": "center"}
arrow(0.5, 1, 0.85, 0, **arrow_params) # EX_A
arrow(1.5, 1, 0.425, 0.736, **arrow_params) # v1
arrow(2.04, 1.82, 0.42, -0.72, **arrow_params) # v2
arrow(2.4, 1, -0.75, 0, **arrow_params) # v3
arrow(2.6, 1, 0.75, 0, **arrow_params)
# reaction labels
text(0.9, 1.15, "EX_A", **text_params)
text(1.6, 1.5, r"v$_1$", **text_params)
text(2.4, 1.5, r"v$_2$", **text_params)
text(2, 0.85, r"v$_3$", **text_params)
text(2.9, 1.15, "DM_C", **text_params)
# metabolite labels
scatter(1.5, 1, s=250, color='#c994c7')
text(1.5, 0.9, "A", **text_params)
scatter(2, 1.84, s=250, color='#c994c7')
text(2, 1.95, "B", **text_params)
scatter(2.5, 1, s=250, color='#c994c7')
text(2.5, 0.9, "C", **text_params);
test_model = Model()
test_model.add_metabolites(Metabolite("A"))
test_model.add_metabolites(Metabolite("B"))
test_model.add_metabolites(Metabolite("C"))
EX_A = Reaction("EX_A")
EX_A.add_metabolites({test_model.metabolites.A: 1})
DM_C = Reaction("DM_C")
DM_C.add_metabolites({test_model.metabolites.C: -1})
v1 = Reaction("v1")
v1.add_metabolites({test_model.metabolites.A: -1, test_model.metabolites.B: 1})
v2 = Reaction("v2")
v2.add_metabolites({test_model.metabolites.B: -1, test_model.metabolites.C: 1})
v3 = Reaction("v3")
v3.add_metabolites({test_model.metabolites.C: -1, test_model.metabolites.A: 1})
DM_C.objective_coefficient = 1
test_model.add_reactions([EX_A, DM_C, v1, v2, v3])
"""
Explanation: We will demonstrate with a toy model which has a simple loop cycling A -> B -> C -> A, with A allowed to enter the system and C allowed to leave. A graphical view of the system is drawn below:
End of explanation
"""
construct_loopless_model(test_model).optimize()
"""
Explanation: While this model contains a loop, a flux state exists which has no flux through reaction v3, and is identified by loopless FBA.
End of explanation
"""
v3.lower_bound = 1
construct_loopless_model(test_model).optimize()
"""
Explanation: However, if flux is forced through v3, then there is no longer a feasible loopless solution.
End of explanation
"""
salmonella = cobra.test.create_test_model("salmonella")
construct_loopless_model(salmonella).optimize(solver=get_solver_name(mip=True))
ecoli = cobra.test.create_test_model("ecoli")
construct_loopless_model(ecoli).optimize(solver=get_solver_name(mip=True))
"""
Explanation: Loopless FBA is also possible on genome scale models, but it requires a capable MILP solver.
End of explanation
"""
|
rdhyee/nypl50 | travis_encrypt.ipynb | apache-2.0 | from Crypto.PublicKey import RSA
import base64
from github_settings import SSH_KEY_PASSWORD
my_public_key = RSA.importKey(
open('/Users/raymondyee/.ssh/id_rsa.pub', 'r').read())
my_private_key = RSA.importKey(open('/Users/raymondyee/.ssh/id_rsa','r').read(),
passphrase=SSH_KEY_PASSWORD)
message = "abcdefgh"
"""
Explanation: Encryption keys - Travis CI
End of explanation
"""
print (my_public_key.exportKey(format='PEM'))
print (open("/Users/raymondyee/.ssh/id_rsa.pem").read())
"""
Explanation: converting between ssh and pem
verify that my id_rsa.pem is actually equivalent to my id_rsa.pub
End of explanation
"""
from Crypto.PublicKey import RSA
from Crypto.Cipher import PKCS1_v1_5
from Crypto.Cipher import PKCS1_OAEP
from Crypto.Hash import SHA
from Crypto import Random
import base64
def nopadding_encrypt(message, key):
ciphertext = key.encrypt(message, 0)[0]
return base64.b64encode(ciphertext)
def nopadding_decrypt(ciphertextb64, key):
ciphertext = base64.b64decode(ciphertextb64)
return key.decrypt(ciphertext)
def pkcs1v15_encrypt(message, key):
h = SHA.new(message)
cipher = PKCS1_v1_5.new(key)
ciphertext = cipher.encrypt(message+h.digest())
return base64.b64encode(ciphertext)
def pkcs1v15_decrypt (ciphertextb64, key):
dsize = SHA.digest_size
sentinel = Random.new().read(15+dsize) # Let's assume that average data length is 15
cipher = PKCS1_v1_5.new(key)
ciphertext = base64.b64decode(ciphertextb64)
message = cipher.decrypt(ciphertext, sentinel)
digest = SHA.new(message[:-dsize]).digest()
print ("len(message): {} sentinel: {} len(digest):{} dsize: {}".format(len(message), sentinel,
len(digest), dsize))
if digest==message[-dsize:]: # Note how we DO NOT look for the sentinel
return message[:-dsize]
else:
raise Exception ('encryption was done incorrectly:{}'.format(message))
def pkcs1oaep_encrypt(message, key):
cipher = PKCS1_OAEP.new(key)
ciphertext = cipher.encrypt(message)
return base64.b64encode(ciphertext)
def pkcs1oaep_decrypt(ciphertextb64, key):
cipher = PKCS1_OAEP.new(key)
ciphertext = base64.b64decode(ciphertextb64)
return cipher.decrypt(ciphertext)
enc_data = nopadding_encrypt(message, my_public_key)
print (enc_data,
nopadding_decrypt (enc_data, my_private_key)
)
enc_data = pkcs1v15_encrypt(message, my_public_key)
print (enc_data, pkcs1v15_decrypt(enc_data,
my_private_key
))
enc_data = pkcs1oaep_encrypt(message, my_public_key)
print (enc_data,
pkcs1oaep_decrypt( enc_data,
my_private_key
))
### try decrypting output from Ruby with pkcs1v15
ruby_output = """
Upw4QQcNptfvd6t00mVLZaLMd965DqiiNOYmRStkcr1eX/v3ETkTNIqkc8WG
ajrTYM20rYw3wfcMIjbCKXBSouTYqrJ4H4Uom3BbOI11Ykmf3Lf20QhB5r9K
YwDLol3bKSqbTTNXhPm2ALSjsX5tha4jkc4VooGAA6grMMcTmS9cGgCC0Gm5
oILJzzLb5WEEN2CiUk0JVvSvadYylDyuFou8iP6GVPpOrILDNHHZKb70irXb
E846PrDg8x83fL3+OoYAtfup3fR2ZH2qVXvs4JAQqRH9ECQtUkinJ4sukKYU
R/pULVPeWI/xgX0cQ3xxXg3V8m4IcqF1nTe8TkZ1RA==
""".strip()
assert base64.b64decode(ruby_output) == base64.b64decode(ruby_output.replace("\n",""))
pkcs1v15_decrypt(ruby_output, my_private_key)
pkcs1oaep_decrypt(ruby_output, my_private_key)
%%bash
echo -n 'abcdefgh' \
| openssl rsautl \
-encrypt \
-pubin -inkey ~/.ssh/id_rsa.pem \
> /Users/raymondyee/Downloads/cipher.txt
pkcs1v15_decrypt(base64.b64encode(open("/Users/raymondyee/Downloads/cipher.txt", "rb").read()), my_private_key)
%%bash
cat /Users/raymondyee/Downloads/test_message.txt \
| base64 -D \
| openssl rsautl \
-decrypt \
-inkey ~/.ssh/id_rsa
pkcs1oaep_decrypt(base64.b64encode(open("/Users/raymondyee/Downloads/cipher.txt", "rb").read()), my_private_key)
%%bash
# using openssl
echo -n 'abcdefgh' | openssl rsautl -encrypt -pubin -inkey /Users/raymondyee/.ssh/id_rsa.pem | base64
openssl_output = """
yHkXsyDCj6eZJ7Ixf8vdXwOT7iCp9DHjVNcVmyMYR/fAsLzgLDeuNeS01hsMAVXtDiEJaMjxVaAqziRgeYB8Q36ZDGm9OUBkWahjQbvouXjS/YG5wLpW+PxnhYOIWS8La74dc50Kwqa5r6iqDJufBxJfD9g0eAngBTeIxIg1jq/r/ThNYcpb3qLVa4+h9sd4BocXxwvAwSjd0Wr1B4rogSUdxf11KU6K2tlQTjb/GHfOY7HjXaQH6jz8gRWJlNdDVaGSc+DCKiZfGrB62Ifuf94RBNjq0Y9T18PS+vVatcI2FJ8rSpV90cHYB3gTSLmBBwytW1SUt2rYR13Oi7aCUA==
""".strip()
pkcs1v15_decrypt(openssl_output, my_private_key)
"""
Explanation: Python and cryptography with pycrypto | Laurent Luce's Blog
How to match the
ruby
Base64.encode64
from travis.rb/repository.rb at dcc9f20535c811068c4ff9788ae9bd026a116351 · travis-ci/travis.rb This docs: Module: Base64 (Ruby 2_2_0):
Returns the Base64-encoded version of bin. This method complies with RFC 2045. Line feeds are added to every 60 encoded characters.
pycrypto + my own id_rsa
padding
Class: OpenSSL::PKey::RSA (Ruby 2_2_4):
Encrypt string with the public key. padding defaults to PKCS1_PADDING. The encrypted string output can be decrypted using private_decrypt.
Also in the doc:
RSA is an asymmetric public key algorithm that has been formalized in RFC 3447.
Look for how to do so in Python.
possible values for padding (see source: Ruby MRI/test/openssl/test_pkey_rsa.rb):
OpenSSL::PKey::RSA::NO_PADDING
OpenSSL::PKey::RSA::PKCS1_PADDING
Looks like there is no standard library support in Ruby libs for
Also: don't know whether PKCS1_PADDING means:
pycrypto: Module PKCS1_v1_5
Crypto.Cipher.PKCS1_v1_5
pycrypto: Module PKCS1_OAEP
Crypto.Cipher.PKCS1_OAEP
End of explanation
"""
import base64
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
digest = hashes.Hash(hashes.SHA256(), backend=default_backend())
digest.update(b"abc")
digest.update(b"123")
digest.finalize()
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.asymmetric import rsa
private_key = rsa.generate_private_key(
public_exponent=65537,
key_size=2048,
backend=default_backend()
)
private_key
from github_settings import SSH_KEY_PASSWORD
from cryptography.hazmat.primitives import serialization
with open("/Users/raymondyee/.ssh/id_rsa", "rb") as key_file:
private_key = serialization.load_pem_private_key(
key_file.read(),
password=SSH_KEY_PASSWORD,
backend=default_backend()
)
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding
public_key = private_key.public_key()
pem = public_key.public_bytes(
encoding=serialization.Encoding.PEM,
format=serialization.PublicFormat.SubjectPublicKeyInfo
)
pem.splitlines()
message = b"abcdefgh"
#OAEP
ciphertext = public_key.encrypt(
message,
padding.OAEP(
mgf=padding.MGF1(algorithm=hashes.SHA1()),
algorithm=hashes.SHA1(),
label=None
)
)
ciphertext
message = b"abcdefgh"
#PKCS1v15
ciphertext = public_key.encrypt(
message,
padding.PKCS1v15()
)
ciphertext
plaintext = private_key.decrypt(
ciphertext,
padding.PKCS1v15()
)
plaintext == message
private_key.decrypt(
base64.b64decode(openssl_output),
padding.PKCS1v15()
)
private_key.decrypt(
base64.b64decode(ruby_output),
padding.PKCS1v15()
)
"""
Explanation: python cryptography library
Background for the library:
The state of crypto in Python [LWN.net]
RSA — Cryptography 1.3.dev1 documentation
End of explanation
"""
|
Ciaran1981/geospatial-learn | example_notebooks/PointCloudClassification.ipynb | gpl-3.0 | from geospatial_learn import learning as ln
incloud = "/path/to/Llandinam.ply"
"""
Explanation: A workflow for classifying a point cloud using point features
The following example will run through the functions to classify a point cloud based on point-neighbourhood attributes. This is a very simple example, but it could of course be extended to extract very useful information using different classes and subsequent querying of the constituent segments.
The point cloud in question can be downloaded here:
https://drive.google.com/file/d/1DP7wkTqemfux2UkAD_8gZUnzm5GUfShZ/view?usp=sharing
It is derived from UAV imagery via structure from motion. Unzip it and have a look in cloud compare, making the scalar field 'training'.
The task will classify the scene into roofs, building facades, trees/vegetation and ground classes, which are represented by the training samples seen in the screenshot.
<img src="figures/llanlabel.png" style="height:300px">
Import the learning module which contains all we need
End of explanation
"""
ln.ply_features(incloud)
"""
Explanation: First we will calculate the features required to characterise the point cloud.
These are calculated at three scales, which by default are k = 10, 20 and 30 nearest neighbours.
If you wish to alter this, go ahead!
The features are:
- anisotropy
- curvature
- eigenentropy
- eigen_sum
- linearity
- omnivariance
- planarity
- sphericity
As well as these, the RGB values and normals are used as features. We do not, however, use XYZ, as these always have a negative effect on results.
End of explanation
"""
training = ln.get_training_ply(incloud)
"""
Explanation: Next we can get training as a numpy array for creating our model
End of explanation
"""
model = 'path/to/model.h5'
ln.create_model(training, model, clf='keras', cv=5)
"""
Explanation: Next we create a model; this will be a Keras-based dense net in this instance, but it does not have to be.
The net structure is 32 > 16 > 8 > 32.
This is not necessarily a good example of a dense net structure and is used merely for demo purposes.
End of explanation
"""
ln.classify_ply(incloud, model, train_field="training", class_field='label',
                rgb=True)
"""
Explanation: Finally we classify the point cloud
End of explanation
"""
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
"""
Explanation: Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
End of explanation
"""
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
"""
Explanation: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and a single 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
End of explanation
"""
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by its index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(0)
"""
Explanation: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
End of explanation
"""
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
# Inputs
net = tflearn.input_data([None, trainX.shape[1]])
# Hidden layer(s)
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 25, activation='ReLU')
# Output layer
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd',
learning_rate=0.1,
loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
"""
Explanation: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
End of explanation
"""
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=20)
"""
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
End of explanation
"""
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
"""
Explanation: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!
End of explanation
"""
import param
import paramnb
def hello(x):
print("Hello %s" % x)
class BaseClass(param.Parameterized):
x = param.Parameter(default=3.14,doc="X position")
y = param.Parameter(default="Not editable",constant=True)
string_value = param.String(default="str",doc="A string")
num_int = param.Integer(50000,bounds=(-200,100000))
unbounded_int = param.Integer(23)
float_with_hard_bounds = param.Number(8.2,bounds=(7.5,10))
float_with_soft_bounds = param.Number(0.5,bounds=(0,5),softbounds=(0,2))
unbounded_float = param.Number(30.01)
hidden_parameter = param.Number(2.718,precedence=-1)
class Example(BaseClass):
"""An example Parameterized class"""
boolean = param.Boolean(True, doc="A sample Boolean parameter")
select_string = param.ObjectSelector(default="yellow",objects=["red","yellow","green"])
select_fn = param.ObjectSelector(default=list,objects=[list,set,dict])
int_list = param.ListSelector(default=[3,5], objects=[1,3,5,7,9],precedence=0.5)
single_file = param.FileSelector(path='../*/*.py*',precedence=0.5)
multiple_files = param.MultiFileSelector(path='../*/*.py?',precedence=0.5)
msg = param.Action(hello, doc="""Print a message.""",precedence=0.7)
paramnb.Widgets(BaseClass())
"""
Explanation: What is this about?
This is brainstorming on how to extend paramnb in order to have full control over the layout of the underlying widgets. The implementation should meet several objectives:
Decouple the logic of a Python class from its dashboard representation:
The actual functionality of the class should follow the paramnb philosophy.
There should be a way to reference the widget representation of every parameter.
Offer a way to quickly define complex combinations of parameters and interactivity options.
Allow different classes representing dashboards to be combined. This would make it possible to create more complex dashboards through object inheritance and composition.
1. The paramnb way
In paramnb the parameters are defined at class level with the following syntax:
End of explanation
"""
from shaolin import shaoscript
shao_param = shaoscript('togs$description=miau&options=["paramnb","shaolin"]')#let's create a shaolin widget
shao_param,shao_param.value,shao_param.widget,shao_param.name
"""
Explanation: 1.1 Advantages
Does not pollute the domain-specific code: same domain-specific code, widgets for free.
Great inheritance mechanism.
Layout auto-tuned using the precedence parameter.
Clean way to override default parameters.
1.2 Cons
Impossible to access widget-specific parameters or tune widget interactivity.
No easy way to implement display-related logic.
Impossible to define the widget layout manually (only using the jupyter dashboards extension).
No easy way to propagate callbacks in complex dashboards.
2. The shaolin way
Shaolin tries to follow the same philosophy as paramnb while providing full control over how the user will interact with the widget GUI, at the cost of some pollution in the domain-specific code.
This is a list of cool stuff that shaolin can do. It would be great to find a way to include these features in paramnb.
2.1 accessing widget-specific parameters / tuning widget interactivity
Shaolin transforms every parameter of a class into a shaolin widget, which is a wrapper for an ipywidgets Widget.
End of explanation
"""
shao_param(),shao_param.value
"""
Explanation: This means that the domain-specific code will be polluted, because in order to access the value of the parameter we have to access the value attribute of the shaolin widget, or call the shaolin widget. The pollution is either adding a () after each parameter, or writing shao_param.value.
End of explanation
"""
shao_param.widget
shao_param[0]#indexing with any integer shows the widget representation
"""
Explanation: The ipywidgets representation of the parameter can be accessed using the widget attribute or indexing the shaolin widget with an integer.
End of explanation
"""
from shaolin import Dashboard
from IPython.core.display import clear_output
class ShaoExample(Dashboard):
def __init__(self):
dashboard = ['column$name=example',
['toggle_buttons$description=Toggle&name=toggle&options=["paramnb","shaolin"]',
'textarea$description=Some text&name=text&value=brainstorming',
'float_slider$description=Float slider&name=fslider&min=0&max=10&step=1&value=5'
],
]
Dashboard.__init__(self,dashboard)
self.fslider.observe(self._update_layout_1)
self.fslider.observe(self.callback_2)
self.toggle.observe(self.callback_1)
self.toggle.observe(self._update_layout_2)
def _update_layout_1(self,_=None):
"""Hides the textarea widget when
the slider value is higher than 5"""
if self.fslider.value >=5:
self.text.visible = False
else:
self.text.visible = True
def _update_layout_2(self,_=None):
"""Disable the text input when the
toggle buttons value is shaolin"""
if self.toggle.value =='shaolin':
self.text.widget.disabled = False
else:
self.text.widget.disabled = True
def callback_1(self,_=None):
"""Prints the textarea value"""
print(self.text.value)
clear_output(True)
def callback_2(self,event):
"""Displays the event data passed
to the callback and the slider value"""
print("event data: {}".format(event))
print(self.fslider.value)
clear_output(True)
shao_e = ShaoExample()
shao_e[0]
"""
Explanation: 2.2 multiple callbacks in the same class
It is important to be able to assign several callbacks to the same class. Generally speaking, when programming a dashboard two types of callback are needed:
- Domain-specific callbacks: the code of the function that will use the widgets as parameters.
- Layout callbacks: logic that controls how the dashboard is displayed.
In the following example we have a class with four different callbacks applied to its widgets.
End of explanation
"""
shao_e.kwargs,shao_e()
"""
Explanation: 2.3 Accessing the dashboard kwargs
Instead of printing the parameters, it comes in really handy to have a dictionary with all the dashboard parameters. This allows you to make a dashboard with no domain-specific callback, so you can use it as a way to organise parameters.
End of explanation
"""
import seaborn as sns
%matplotlib inline
sns.set(style="ticks")
data = sns.load_dataset("anscombe")
title = '#Exploratory plots$N=title&D='
marginals = "['Both','None','Histogram','KDE']$d=Marginals"
dset = "['ALL','I','II','III','IV']$D=Dataset"
x_cols = 'dd$D=X column&o='+str(data.columns.values.tolist())
y_cols = 'dd$D=Y column&o='+str(data.columns.values.tolist())+'&v='+data.columns[1]
save = "[False]$D=Save plot&n=save"
data_layout = ['c$N=data_layout',[title,['r$N=sub_row',[x_cols,y_cols]],
["c$N=sub_col",[marginals,dset,
['r$N=btn_row',['@btn$d=run&button_style="info"',save]] ]
]]
]
dash = Dashboard(data_layout,name='dash_1',mode='interactive')
dash[0]
"""
Explanation: If the only thing needed is a graphical interface capable of providing a kwargs dictionary managed by widgets, I usually use the KungFu class.
2.4 Displaying the dashboards
When creating a complex dashboard it is nice to have two different ways of organising the layout:
Programmatically.
Using the jupyter dashboards extension.
In order to define the layout programmatically, shaolin uses a syntax based on a list of lists that mimics how the flex layout of the ipywidgets package is defined.
The convention is the following:
[box_widget, [children_1, children_2]]
where each child can also be a list of lists with the structure defined above.
End of explanation
"""
dash.title[0]
dash.sub_col[0]
dash.sub_row[0]
"""
Explanation: Each child is defined as a string following the shaolin syntax. This string representation makes it possible to quickly set the parameters and interactivity of every widget, and acts as a proxy for defining shaolin widgets.
This way we are able to define how we want our dashboard to be displayed. Once a dashboard is created, it is also possible to alter how it will be displayed using the jupyter dashboards extension.
In order to do that, you can render each component of the dashboard in a different cell, and then rearrange the cells using the dashboards extension:
End of explanation
"""
#regression
r_title = '#Regression options$N=r_title&D='
reg = '@[True]$D=Plot Regression&n=regression'
robust = '@False$D=Robust'
reg_order = "@(1,10,1,1)$D=Reg order"
ci = "@(1,100,1,95)$N=ci&d=Confidence intervals"
reg_layout = ['c$N=reg_layout',[r_title,reg,reg_order,ci,robust]]
dash_2 = Dashboard(reg_layout,name='dash_2',mode="interactive")
dash_2[0]
combined_dashboard = ['row$n=combined',[dash,dash_2]]
comb_dash = Dashboard(combined_dashboard,mode='interactive')
comb_dash[0]
comb_dash.dash_1[0]
comb_dash.dash_2[0]
"""
Explanation: 2.5 combining different dashboards
It is also really useful to be able to combine different dashboards. Shaolin allows a child to be a shaolin dashboard, as shown in the following example:
End of explanation
"""
def callback(event):
print(event)
clear_output(True)
#print(event,other)
comb_dash.observe(callback)
comb_dash[0]
"""
Explanation: 2.6 propagating callbacks to children
In shaolin there are three different types of interactivity mode that a parameter can have:
Interactive (@): interactive widgets will be assigned the target callback function when dashboard.observe(callback) is called. Interactive parameters are included in the kwargs dictionary.
Active: the default mode. Active widgets won't get any callback applied when dashboard.observe is called, but they will appear in the kwargs dictionary.
Passive (/): passive widgets won't be included in the kwargs dictionary and don't get callbacks automatically applied.
In order to apply a callback to an active/interactive widget, target_widget.observe(callback) must be explicitly called.
In the following example the callback function will be propagated to any interactive widget (one whose definition starts with "@"). This means that dash_1 will execute the callback when the run button is pressed, and dash_2 will execute it whenever any of its widgets is used.
End of explanation
"""
class BaseClass(param.Parameterized):
x = param.Parameter(default=3.14,doc="X position")
y = param.Parameter(default="Not editable",constant=True)
string_value = param.String(default="str",doc="A string")
num_int = param.Integer(50000,bounds=(-200,100000))
unbounded_int = param.Integer(23)
float_with_hard_bounds = param.Number(8.2,bounds=(7.5,10))
float_with_soft_bounds = param.Number(0.5,bounds=(0,5),softbounds=(0,2))
unbounded_float = param.Number(30.01)
hidden_parameter = param.Number(2.718,precedence=-1)
class BaseClass_(param.Parameterized):
#custom syntax highlighting using pygments would be really awesome. Red is hard to read
layout = ['r$N=baseclass',[['c$N=text_col',['ft$d=X&v=3.14&doc="X position"',
'ft$d=Unbounded float&v=30.01',
'it$d=Unbounded int&v=23',
'text$d=String value&v=str&doc="A string"']
],
['c$N=sliders_col',['(7.5,10,0.1,8.2)$D=Float with hard bounds',
'(0.,2.,0.1,0.5)$D=Float with soft bounds&hb=(0.,5.)',
'(-200,100000,1,50000)$d=Num int&v=23',
'text$d=y&v=Not editable&disabled=True']
]
]
]
paramnb.Widgets(BaseClass)
"""
Explanation: 3. Brainstorming: How to add these features to paramnb
It would be nice to find a way to implement these features without breaking the paramnb philosophy. I don't know which is the best way to do it, but here are some ideas:
Accessing the widgets
Maybe it would be useful to create additional attributes on a class containing the widget representation of each variable.
For example, in the Example class it would be great to access the example.boolean (True) value this way, and the checkbox widget could be accessed like this: example._boolean (CheckBox).
Another option would be:
example.boolean (True)
example.layout.boolean (CheckBox)
It would be great to find a good naming convention.
Widget definition
The shaolin string syntax may look really messy and complex, but once you get used to it, it literally saves hundreds of lines of code and makes it really easy to maintain the layout and make changes with some clever copy-paste.
It would be great to standardize some sort of pseudo programming language for creating layouts with the ipywidgets package. This way, the BaseClass from the following example could also be defined as follows:
End of explanation
"""
layout = ['r$N=baseclass',[['c$N=text_col',['ft$d=X&v=3.14&doc="X position"',
'ft$d=Unbounded float&v=30.01',
'it$d=Unbounded int&v=23',
'text$d=String value&v=str&doc="A string"']
],
['c$N=sliders_col',['(7.5,10,0.1,8.2)$D=Float with hard bounds',
'(0.,2.,0.1,0.5)$D=Float with soft bounds&hb=(0.,5.)',
'(-200,100000,1,50000)$d=Num int&v=23',
'text$d=y&v=Not editable&disabled=True']
]
]
]
wannabe_dash = Dashboard(layout)
wannabe_dash[0]
wannabe_dash()
"""
Explanation: this is how it looks
End of explanation
"""
# We start by importing the libraries to be used:
import numpy as np
import pandas as pd
import seaborn as sns; sns.set()
%matplotlib inline
# Importing the microdata from the .zip file:
rs = pd.read_table('/mnt/part/Data/RAIS/2014/RS2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
rs.head() # show the first 5 rows of the DataFrame
print('The DataFrame shape is: ', rs.shape, '(rows, columns)')
print('\n', 'The type of each column:', '\n\n', rs.dtypes)
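The garbled column names ('Municφpio', 'MΩs Admissπo', 'Raτa Cor') are an artifact of decoding the file with the cp860 codec. Assuming the raw headers are actually Latin-1 (ISO-8859-1) encoded, the mapping can be reproduced like this:

```python
# 'í' is byte 0xED in Latin-1, which the cp860 codec renders as the Greek 'φ';
# likewise 'ê' (0xEA) -> 'Ω' and 'ã' (0xE3) -> 'π'
raw = "Município Mês Admissão".encode("latin-1")
garbled = raw.decode("cp860")  # -> 'Municφpio MΩs Admissπo'
```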
# Visualizing what the unique values of these columns are
print('Bairros Fortaleza unique values:', rs['Bairros Fortaleza'].unique())
print('Bairros RJ unique values:', rs['Bairros RJ'].unique())
print('CBO 2002 unique values:', rs['CBO Ocupaτπo 2002'].unique())
print('Distritos SP unique values:', rs['Distritos SP'].unique())
print('Tipo Estab.1 unique values:', rs['Tipo Estab.1'].unique())
# Assign '-1' to the '0000-1' values, then change the dtype from 'object' to 'float':
rs[rs['CBO Ocupaτπo 2002'] == '0000-1'] = -1
# rs['CBO Ocupaτπo 2002'].dropna(inplace = True)
rs['CBO Ocupaτπo 2002'] = rs['CBO Ocupaτπo 2002'].astype(dtype = float)
rs['CBO Ocupaτπo 2002'].dtypes
"""
Explanation: 1. Context
On June 21, 2017, the Instituto Nacional de Estudos e Pesquisas Educacionais Anísio Teixeira (Inep) released a study on the average pay of teachers working in basic education. The study is the result of a new Inep methodology that cross-references the Censo Escolar with the 2014 Relação Anual de Informações Sociais (Rais) of the Ministry of Labour and Social Security. The survey covered a population of 2,080,619 teachers.$^{[1]}$
According to the survey, based on 2014 data, a 40-hour work week in the Porto Alegre school network corresponds to average monthly pay of R\$ 10,947.15. Porto Alegre would be the best-paying network among the state capitals (whose average is R\$ 3,116). It would also offer pay above that of federal, state and private schools.$^{[2]}$
The Inep study is cited in several news pieces, for example on G1 and MEC ("10 anos de metodologia de coleta de dados individualizada dos censos educacionais"). The full study unfortunately could not be found, but the presentation "Remuneração Média dos Docentes da Educação Básica" can be downloaded in its entirety.
Sources:
$^{[1]}$ Inep divulga estudo sobre salário de professor da educação básica
$^{[2]}$ Itamar Melo. Salário em alta, ensino em baixa. Zero Hora, June 30, 2017.
2. Data source:
When I went looking for the RAIS microdata to download, I had the (not so) great surprise of facing enormous difficulty finding them. The Ministry of Labour website makes no reference to RAIS (or anything that remotely resembles it). A news item on the Federal Government website points to a new link for consulting RAIS data; unfortunately the link ends in a 'Not Found' page. The (official) RAIS site only offers downloads of GDRAIS 2016, GDRAIS Genérico (1976-2015), the Manual de Orientação, the Layout and the Portaria for RAIS base year 2016; nothing about the microdata. Fortunately, the website of the Laboratório de Estudos Econômicos of the Universidade Federal de Juiz de Fora lists links to the original microdata of several Brazilian surveys, among them RAIS!
After clicking the RAIS 2014 link$^{[1]}$, the microdata for each Brazilian state can be downloaded manually (on Linux, you can open a terminal and type wget -m ftp://ftp.mtps.gov.br/pdet/microdados/RAIS/2014/ and the data will be downloaded automatically to your home folder).
Notes on the RAIS worker files$^{[2]}$: the files use ';' as field separator; the values '-1', '{ñ class}', '{ñclass}' or partial text should be treated as missing; the decimal separator is ','.
Sources:
$^{[1]}$ ftp://ftp.mtps.gov.br/pdet/microdados/RAIS/2014/
$^{[2]}$ ftp://ftp.mtps.gov.br/pdet/microdados/RAIS/Layouts/v%C3%ADnculos/RAIS_vinculos_layout.xls
3. Software:
I used the Python 3.6.1 programming language with the Anaconda distribution from Continuum Analytics: a free, open-source language that is also fast and versatile, guaranteeing the full reproducibility of this study. This document is a Jupyter notebook mixing text (Markdown and HTML), code (Python) and visualizations (tables, charts, images etc.).
Libraries used: Pandas (data structures and data analysis), Jupyter (create and share documents that contain live code, equations, visualizations and explanatory text), StatsModels.
End of explanation
"""
df = pd.DataFrame({'Munic': rs['Municφpio'],
'Remuneracao': rs['Vl Remun MΘdia Nom'],
'CBO': rs['CBO Ocupaτπo 2002'],
'CNAE': rs['CNAE 95 Classe'],
'Natureza': rs['Natureza Jurφdica'],
'Horas': rs['Qtd Hora Contr'],
'Tempo': rs['Tempo Emprego'],
'Admissao': rs['MΩs Admissπo'],
'Desligamento': rs['MΩs Desligamento'],
'Sexo': rs['Sexo Trabalhador'],
'Idade': rs['Faixa Etßria'],
'Raca': rs['Raτa Cor']})
print('Number of observations in the original data:', len(df))
df.dropna(axis = 0, how = 'any', inplace = True)
print('Number of observations after dropping missing values: ', len(df))
df.head()
print('Mun Trab equals Municφpio in', round(((rs['Mun Trab'] == rs['Municφpio']).sum() / len(rs)) * 100, 4), '% of cases')
"""
Explanation: According to the presentation, the RAIS variables used are: UF (state), municipality, the worker's average annual pay, the Brazilian Classification of Occupations (CBO), the Economic Activity Class (CNAE), CPF, year, Legal Nature (CONCLA), weekly contractual hours, the worker's job tenure, the hiring date and the month of separation.
There are two municipality variables ('Mun Trab' and 'Municφpio'); since they agree in most cases (99%), I use 'Mun Trab'. There are several variables for workers' pay; I use 'Vl Remun MΘdia Nom' (variable description: the worker's average remuneration, nominal value, from 1999 onwards). There is only one column for the CBO. For the CNAE, I use 'CNAE 95 Classe'. The microdata downloaded from the site above contain no column with workers' CPF numbers. The year is 2014 for all rows since I use only the 2014 RAIS. There is a single Legal Nature column, 'Natureza Jurφdica'. There is a single column for contractual hours, 'Qtd Hora Contr'. Job tenure is 'Tempo de Emprego', the hiring month is 'MΩs Admissπo' and the separation month is 'MΩs Desligamento'.
End of explanation
"""
cnae = [75116, 80136, 80144, 80152, 80209, 80969, 80977, 80993]
cbo = [23, 33]
df1 = df[df['CNAE'].isin(cnae) | (df['CBO'] / 10000).apply(np.floor).isin(cbo)]
"""
Explanation: According to slide 10, teachers were identified using the following CNAE and CBO codes:
CNAE:
75116: Administração Pública em Geral (general public administration)
80136: Educação Infantil creche (early childhood education, daycare)
80144: Educação Infantil pré-escola (early childhood education, pre-school)
80152: Ensino Fundamental (primary education)
80209: Ensino Médio (secondary education)
80969: Educação Profissional de Nível Técnico (technical-level professional education)
80977: Educação Profissional de Nível Tecnológico (technology-level professional education)
80993: Outras Atividades de Ensino (other teaching activities)
CBO:
23: Profissionais do Ensino (teaching professionals)
33: Professores leigos e de nível médio (lay and mid-level teachers)
We can use a list of values to slice the DataFrame. Source: Wouter Overmeire, https://stackoverflow.com/questions/12096252/use-a-list-of-values-to-select-rows-from-a-pandas-dataframe
End of explanation
"""
ed_basica = [231105, 231110, 231205, 231210, 231305, 231310, 231315,
231320, 231325, 231330, 231335, 231340, 331105, 332105]
print('Average pay of teachers in Rio Grande do Sul: R$', df1['Remuneracao'].mean())
print('Average pay of basic-education teachers in Rio Grande do Sul: R$',
      df1[df1['CBO'].isin(ed_basica)]['Remuneracao'].mean())
print('Average pay of basic-education teachers in Porto Alegre: R$',
      df1[df1['CBO'].isin(ed_basica) & (df1['Munic'] == 431490)]['Remuneracao'].mean())
df1[df1['CBO'].isin(ed_basica) & (df1['Munic'] == 431490)]['Remuneracao'].hist();
poa = df1[(df1['Munic'] == 431490)]
"""
Explanation: However, we can note that CBO categories 23 and 33 are subdivided into several specific occupations, according to the 'ocupação' sheet of the RAIS_vinculos_layout file$^{[1]}$. The last occupations in category 23 are linked to education, but seem to be administrative and/or support roles rather than actual teaching positions (Coordenador Pedagogico, Orientador Educacional, Pedagogo, Psicopedagogo, Supervisor de Ensino, Designer Educacional). Category 33 is even more heterogeneous, including Auxiliar de Desenvolvimento Infantil, Instrutor de Auto-Escola, Instrutor de Cursos Livres, Inspetor de Alunos de Escola Privada, Inspetor de Alunos de Escola Publica and Monitor de Transporte Escolar.
231105:Professor de Nivel Superior na Educacao Infantil (Quatro a Seis Anos)
231110:Professor de Nivel Superior na Educacao Infantil (Zero a Tres Anos)
231205:Professor da Educacao de Jovens e Adultos do Ensino Fundamental (Primeira a Quarta Serie)
231210:Professor de Nivel Superior do Ensino Fundamental (Primeira a Quarta Serie)
231305:Professor de Ciencias Exatas e Naturais do Ensino Fundamental
231310:Professor de Educacao Artistica do Ensino Fundamental
231315:Professor de Educacao Fisica do Ensino Fundamental
231320:Professor de Geografia do Ensino Fundamental
231325:Professor de Historia do Ensino Fundamental
231330:Professor de Lingua Estrangeira Moderna do Ensino Fundamental
231335:Professor de Lingua Portuguesa do Ensino Fundamental
231340:Professor de Matematica do Ensino Fundamental
232105:Professor de Artes no Ensino Medio
232110:Professor de Biologia no Ensino Medio
232115:Professor de Disciplinas Pedagogicas no Ensino Medio
232120:Professor de Educacao Fisica no Ensino Medio
232125:Professor de Filosofia no Ensino Medio
232130:Professor de Fisica no Ensino Medio
232135:Professor de Geografia no Ensino Medio
232140:Professor de Historia no Ensino Medio
232145:Professor de Lingua e Literatura Brasileira no Ensino Medio
232150:Professor de Lingua Estrangeira Moderna no Ensino Medio
232155:Professor de Matematica no Ensino Medio
232160:Professor de Psicologia no Ensino Medio
232165:Professor de Quimica no Ensino Medio
232170:Professor de Sociologia no Ensino Medio
233105:Professor da Area de Meio Ambiente
233110:Professor de Desenho Tecnico
233115:Professor de Tecnicas Agricolas
233120:Professor de Tecnicas Comerciais e Secretariais
233125:Professor de Tecnicas de Enfermagem
233130:Professor de Tecnicas Industriais
233135:Professor de Tecnologia e Calculo Tecnico
233205:Instrutor de Aprendizagem e Treinamento Agropecuario
233210:Instrutor de Aprendizagem e Treinamento Industrial
233215:Professor de Aprendizagem e Treinamento Comercial
233220:Professor Instrutor de Ensino e Aprendizagem Agroflorestal
233225:Professor Instrutor de Ensino e Aprendizagem em Servicos
234105:Professor de Matematica Aplicada (No Ensino Superior)
234110:Professor de Matematica Pura (No Ensino Superior)
234115:Professor de Estatistica (No Ensino Superior)
234120:Professor de Computacao (No Ensino Superior)
234125:Professor de Pesquisa Operacional (No Ensino Superior)
234205:Professor de Fisica (Ensino Superior)
234210:Professor de Quimica (Ensino Superior)
234215:Professor de Astronomia (Ensino Superior)
234305:Professor de Arquitetura
234310:Professor de Engenharia
234315:Professor de Geofisica
234320:Professor de Geologia
234405:Professor de Ciencias Biologicas do Ensino Superior
234410:Professor de Educacao Fisica no Ensino Superior
234415:Professor de Enfermagem do Ensino Superior
234420:Professor de Farmacia e Bioquimica
234425:Professor de Fisioterapia
234430:Professor de Fonoaudiologia
234435:Professor de Medicina
234440:Professor de Medicina Veterinaria
234445:Professor de Nutricao
234450:Professor de Odontologia
234455:Professor de Terapia Ocupacional
234460:Professor de Zootecnia do Ensino Superior
234505:Professor de Ensino Superior na Area de Didatica
234510:Professor de Ensino Superior na Area de Orientacao Educacional
234515:Professor de Ensino Superior na Area de Pesquisa Educacional
234520:Professor de Ensino Superior na Area de Pratica de Ensino
234604:Professor de Lingua Alema
234608:Professor de Lingua Italiana
234612:Professor de Lingua Francesa
234616:Professor de Lingua Inglesa
234620:Professor de Lingua Espanhola
234624:Professor de Lingua Portuguesa
234628:Professor de Literatura Brasileira
234632:Professor de Literatura Portuguesa
234636:Professor de Literatura Alema
234640:Professor de Literatura Comparada
234644:Professor de Literatura Espanhola
234648:Professor de Literatura Francesa
234652:Professor de Literatura Inglesa
234656:Professor de Literatura Italiana
234660:Professor de Literatura de Linguas Estrangeiras Modernas
234664:Professor de Outras Linguas e Literaturas
234668:Professor de Linguas Estrangeiras Modernas
234672:Professor de Linguistica e Linguistica Aplicada
234676:Professor de Filologia e Critica Textual
234680:Professor de Semiotica
234684:Professor de Teoria da Literatura
234705:Professor de Antropologia do Ensino Superior
234710:Professor de Arquivologia do Ensino Superior
234715:Professor de Biblioteconomia do Ensino Superior
234720:Professor de Ciencia Politica do Ensino Superior
234725:Professor de Comunicacao Social do Ensino Superior
234730:Professor de Direito do Ensino Superior
234735:Professor de Filosofia do Ensino Superior
234740:Professor de Geografia do Ensino Superior
234745:Professor de Historia do Ensino Superior
234750:Professor de Jornalismo
234755:Professor de Museologia do Ensino Superior
234760:Professor de Psicologia do Ensino Superior
234765:Professor de Servico Social do Ensino Superior
234770:Professor de Sociologia do Ensino Superior
234805:Professor de Economia
234810:Professor de Administracao
234815:Professor de Contabilidade
234905:Professor de Artes do Espetaculo no Ensino Superior
234910:Professor de Artes Visuais no Ensino Superior (Artes Plasticas e Multimidia)
234915:Professor de Musica no Ensino Superior
239205:Professor de Alunos com Deficiencia Auditiva e Surdos
239210:Professor de Alunos com Deficiencia Fisica
239215:Professor de Alunos com Deficiencia Mental
239220:Professor de Alunos com Deficiencia Multipla
239225:Professor de Alunos com Deficiencia Visual
239405:Coordenador Pedagogico
239410:Orientador Educacional
239415:Pedagogo
239420:Professor de Tecnicas e Recursos Audiovisuais
239425:Psicopedagogo
239430:Supervisor de Ensino
239435:Designer Educacional
331105:Professor de Nivel Medio na Educacao Infantil
331110:Auxiliar de Desenvolvimento Infantil
331205:Professor de Nivel Medio no Ensino Fundamental
331305:Professor de Nivel Medio no Ensino Profissionalizante
332105:Professor Leigo no Ensino Fundamental
332205:Professor Pratico no Ensino Profissionalizante
333105:Instrutor de Auto-Escola
333110:Instrutor de Cursos Livres
333115:Professores de Cursos Livres
334105:Inspetor de Alunos de Escola Privada
334110:Inspetor de Alunos de Escola Publica
334115:Monitor de Transporte Escolar
$^{[1]}$ ftp://ftp.mtps.gov.br/pdet/microdados/RAIS/Layouts/v%C3%ADnculos/RAIS_vinculos_layout.xls
End of explanation
"""
# Count the number of observations in the 'poa' DataFrame by 'Natureza' (legal nature) group:
poa['Admissao'].groupby(by = poa['Natureza']).count()
"""
Explanation: Legal nature ("Natureza Jurídica") labels and codes
|Label|Code|Label|Code|Label|Code|Label|Code|
|---|---|---|---|---|---|---|---|
|POD EXEC FE|1015|SOC MISTA|2038|SOC SIMP PUR|2232|FUN DOM EXT|3212|
|POD EXEC ES|1023|SA ABERTA|2046|SOC SIMP LTD|2240|ORG RELIG|3220|
|POD EXEC MU|1031|SA FECH|2054|SOC SIMP COL|2259|COMUN INDIG|3239|
|POD LEG FED|1040|SOC QT LTDA|2062|SOC SIMP COM|2267|FUNDO PRIVAD|3247|
|POD LEG EST|1058|SOC COLETV|2070|EMPR BINAC|2275|OUTR ORG|3999|
|POD LEG MUN|1066|OC COLETV07|2076|CONS EMPREG|2283|EMP IND IMO|4014|
|POD JUD FED|1074|SOC COMD SM|2089|CONS SIMPLES|2291|SEG ESPEC|4022|
|POD JUD EST|1082|SOC COMD AC|2097|CARTORIO|3034|CONTR IND|4080|
|AUTARQ FED|1104|SOC CAP IND|2100|ORG SOCIAL|3042|CONTR IND07|4081|
|AUTARQ EST|1112|SOC CIVIL|2119|OSCIP|3050|CAN CARG POL|4090|
|AUTARQ MUN|1120|SOC CTA PAR|2127|OUT FUND PR|3069|LEILOEIRO|4111|
|FUNDAC FED|1139|FRM MER IND|2135|SERV SOC AU|3077|ORG INTERN|5002|
|FUNDAC EST|1147|COOPERATIVA|2143|CONDOMIN|3085|ORG INTERNAC|5010|
|FUNDAC MUN|1155|CONS EMPRES|2151|UNID EXEC|3093|REPR DIPL ES|5029|
|ORG AUT FED|1163|GRUP SOC|2160|COM CONC|3107|OUT INST EXT|5037|
|ORG AUT EST|1171|FIL EMP EXT|2178|ENT MED ARB|3115|IGNORADO|-1|
|COM POLINAC|1198|FIL ARG-BRA|2194|PART POLIT|3123|
|FUNDO PUBLIC|1201|ENT ITAIPU|2208|ENT SOCIAL|3130|
|ASSOC PUBLIC|1210|EMP DOM EXT|2216|ENT SOCIAL07|3131|
|EMP PUB|2011|FUN INVEST|2224|FIL FUN EXT|3204|
End of explanation
"""
df1[df1['CBO'].isin(ed_basica) & (df1['Munic'] == 431490)]['Remuneracao'].groupby(by = df1['Idade']).mean()
from statsmodels.stats.weightstats import ztest
# Source: http://www.statsmodels.org/dev/generated/statsmodels.stats.weightstats.ztest.html#statsmodels.stats.weightstats.ztest
print(ztest(x1 = df1[(df1['CBO'].isin(ed_basica)) & (df1['Munic'] == 431490)]['Remuneracao'], x2=None,
value=10000, alternative='two-sided', usevar='pooled', ddof=1.0))
print(ztest(x1 = df1[(df1['CBO'].isin(ed_basica)) & (df1['Munic'] == 431490)]['Remuneracao'], x2=None,
value=10000, alternative='smaller', usevar='pooled', ddof=1.0))
from statsmodels.stats.weightstats import DescrStatsW
# Source: http://www.statsmodels.org/dev/generated/statsmodels.stats.weightstats.DescrStatsW.html#statsmodels.stats.weightstats.DescrStatsW
stats = DescrStatsW(df1[(df1['CBO'].isin(ed_basica)) & (df1['Munic'] == 431490)]['Remuneracao'])
print(stats.mean)
print(stats.var)
print(stats.std_mean)
print(stats.ttest_mean(value = 10000, alternative = 'larger'))
print(stats.ztest_mean(value = 10000, alternative = 'larger'))
# tstat, pval, df
# TOST equivalence bounds: low must be below upp
print(stats.ttost_mean(low = 4000, upp = 6000))
print(stats.ztost_mean(low = 4000, upp = 6000))
# TOST: two one-sided t (or z) tests
# null hypothesis: m < low or m > upp; alternative hypothesis: low < m < upp
# returns: pvalue; (t1, pv1, df1); (t2, pv2, df2)
# DescrStatsW.ttest()
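For reference, the one-sample z test used above boils down to $z = (\bar{x} - \mu_0) / (s/\sqrt{n})$ with a normal two-sided p-value. A small sketch with made-up numbers (not the RAIS data), using only the standard library and numpy:

```python
import math
import numpy as np

def one_sample_z(x, value):
    """z statistic and two-sided p-value for H0: mean(x) == value."""
    x = np.asarray(x, dtype=float)
    se = x.std(ddof=1) / math.sqrt(len(x))   # ddof=1: sample standard deviation
    z = (x.mean() - value) / se
    # two-sided p-value from the standard normal CDF, via erf
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

z, p = one_sample_z([9.8, 10.1, 10.3, 9.9, 10.4], value=10.0)
print(z, p)
```
With a large p-value like this one, the null hypothesis (mean equal to `value`) cannot be rejected.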
"""
Explanation: According to the Municipal Education Department (Secretaria Municipal de Educação): "The Municipal School Network (RME) comprises 99 schools with about 4,000 teachers and 900 staff members, and serves more than 50,000 students in early childhood education, primary education (Ensino Fundamental), secondary education (Ensino Médio), technical-level vocational education, youth and adult education (EJA) and special education."
However, the RAIS data show only 148 teachers in the Municipal Executive Branch ("POD EXEC MU", code 1031). Most teachers (106,728) belong to the State Executive Branch ("POD EXEC ES", code 1023).
End of explanation
"""
import statsmodels.api as sm
# Dummies:
# The statsmodels docs example below for sm.categorical is left commented out:
# `nsample` is undefined in this notebook and X is rebuilt from the RAIS
# columns immediately afterwards.
# groups = df1['Idade']
# dummy = sm.categorical(groups, drop=True)
# x = np.linspace(0, 20, nsample)
# X = np.column_stack((x, dummy[:, 1:]))  # drop reference category
# X = sm.add_constant(X, prepend=False)
X = pd.concat([df1['Natureza'], df1['Horas'], df1['Tempo'], df1['Sexo'], df1['Idade'], df1['Raca']], axis = 1)
X = sm.add_constant(X)
y = df1['Remuneracao']
model = sm.OLS(y, X)
results = model.fit()
print(results.summary())
# #Open all the .csv file in a directory
# #Author: Gaurav Singh
# #Source: https://stackoverflow.com/questions/20906474/import-multiple-csv-files-into-pandas-and-concatenate-into-one-dataframe
# import pandas as pd
# import glob
# #get data file names
# path =r'/mnt/part/Data/RAIS/2014/'
# allFiles = glob.glob(path + "/*.zip")
# df = pd.DataFrame()
# list_ = []
# for file_ in allFiles:
# frame = pd.read_csv(file_, sep = ';', encoding = 'cp860', decimal = ',')
# list_.append(frame)
# df = pd.concat(list_, axis=0)
"""
Explanation: Dummy variables according to the layout$^{[1]}$:
|Code|Age bracket (FAIXA ETÁRIA)|
|---|---|
|01|10 to 14 years|
|02|15 to 17 years|
|03|18 to 24 years|
|04|25 to 29 years|
|05|30 to 39 years|
|06|40 to 49 years|
|07|50 to 64 years|
|08|65 years or more|
|{ñ class}|not classified|
|Code|Contractual hours bracket (FAIXA HORA CONTRATUAL)|
|---|---|
|01|Up to 12 hours|
|02|13 to 15 hours|
|03|16 to 20 hours|
|04|21 to 30 hours|
|05|31 to 40 hours|
|06|41 to 44 hours|
|Education level (grau de instrução)|Code (schooling, post-2005)|
|---|---|
|ANALFABETO | 1|
|ATE 5.A INC| 2|
|5.A CO FUND| 3|
|6. A 9. FUND| 4|
|FUND COMPL | 5|
|MEDIO INCOMP| 6|
|MEDIO COMPL| 7|
|SUP. INCOMP| 8|
|SUP. COMP | 9|
|MESTRADO | 10|
|DOUTORADO | 11|
|IGNORADO | -1|
$^{[1]}$ ftp://ftp.mtps.gov.br/pdet/microdados/RAIS/Layouts/v%C3%ADnculos/RAIS_vinculos_layout.xls
End of explanation
"""
# ac = pd.read_csv('/mnt/part/Data/RAIS/2014/AC2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# al = pd.read_csv('/mnt/part/Data/RAIS/2014/AL2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# am = pd.read_csv('/mnt/part/Data/RAIS/2014/AM2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# ap = pd.read_csv('/mnt/part/Data/RAIS/2014/AP2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# ba = pd.read_csv('/mnt/part/Data/RAIS/2014/BA2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# ce = pd.read_csv('/mnt/part/Data/RAIS/2014/CE2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# df = pd.read_csv('/mnt/part/Data/RAIS/2014/DF2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# es = pd.read_csv('/mnt/part/Data/RAIS/2014/ES2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# go = pd.read_csv('/mnt/part/Data/RAIS/2014/GO2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# ma = pd.read_csv('/mnt/part/Data/RAIS/2014/MA2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# mg = pd.read_csv('/mnt/part/Data/RAIS/2014/MG2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# ms = pd.read_csv('/mnt/part/Data/RAIS/2014/MS2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# mt = pd.read_csv('/mnt/part/Data/RAIS/2014/MT2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# pa = pd.read_csv('/mnt/part/Data/RAIS/2014/PA2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# pb = pd.read_csv('/mnt/part/Data/RAIS/2014/PB2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# pe = pd.read_csv('/mnt/part/Data/RAIS/2014/PE2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# pi = pd.read_csv('/mnt/part/Data/RAIS/2014/PI2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# pr = pd.read_csv('/mnt/part/Data/RAIS/2014/PR2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# rj = pd.read_csv('/mnt/part/Data/RAIS/2014/RJ2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# rn = pd.read_csv('/mnt/part/Data/RAIS/2014/RN2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# ro = pd.read_csv('/mnt/part/Data/RAIS/2014/RO2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# rr = pd.read_csv('/mnt/part/Data/RAIS/2014/RR2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# rs = pd.read_csv('/mnt/part/Data/RAIS/2014/RS2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# sc = pd.read_csv('/mnt/part/Data/RAIS/2014/SC2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# se = pd.read_csv('/mnt/part/Data/RAIS/2014/SE2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# sp = pd.read_csv('/mnt/part/Data/RAIS/2014/SP2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# to = pd.read_csv('/mnt/part/Data/RAIS/2014/TO2014.zip', sep = ';', encoding = 'cp860', decimal = ',')
# Concatenate the individual DataFrames from a list:
# lista = [ac, al, am, ap, ba, ce, df, es, go, ma, mg, ms, mt, pa, pb, pe, pi, pr, rj, rn, ro, rr, rs, sc, se, sp, to]
# rais = pd.concat(lista)
# Deriving a state (UF) variable from the municipality code:
# np.floor(rais['Municφpio'] / 10000).unique()
"""
Explanation: Working with data from all states (UFs)
End of explanation
"""
|
pygeo/pycmbs | demo/dataset_comparison.ipynb | mit | # read in the data
from pycmbs.data import Data
h_file = 'hoaps-g.t63.m01.rain.1987-2008_monmean.nc'
m_file = 'pr_Amon_MPI-ESM-LR_amip_r1i1p1_197901-200812_2000-01-01_2007-09-30_T63_monmean.nc'
hoaps = Data(h_file, 'rain', read=True)
model = Data(m_file, 'pr', read=True, scale_factor=86400.) # note the scale factor to convert directly to [mm/d]
model.unit = '$mm d^{-1}$'
"""
Explanation: Comparison of two datasets
Here we compare two rainfall datasets with each other. The first is a satellite observation dataset, the so-called HOAPS climatology, and the second is a CMIP5 model result. Both datasets have already been regridded to the same spatial grid (T63).
End of explanation
"""
%matplotlib inline
# initial plotting
from pycmbs.mapping import map_plot
vmin=0.
vmax=15.
f = map_plot(hoaps,vmin=vmin,vmax=vmax)
f = map_plot(model,vmin=vmin,vmax=vmax)
"""
Explanation: Now we compare the two datasets in different ways.
End of explanation
"""
msk = hoaps.get_valid_mask(frac=0.1) # at least 10% of valid data for all timesteps
model._apply_mask(msk)
hoaps._apply_mask(msk) # apply mask also to HOAPS dataset to be entirely consistent
f = map_plot(model,vmin=vmin,vmax=vmax)
"""
Explanation: From this we see that one dataset covers the entire earth, while the second covers only the open ocean. We would therefore like to apply a consistent mask to both of them.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(7,10))
ax1 = fig.add_subplot(211)
ax2 = fig.add_subplot(212)
model_m = model.timmean(return_object=True)
hoaps_m = hoaps.timmean(return_object=True)
xx = map_plot(model_m.sub(hoaps_m,copy=True),ax=ax1,cmap_data='RdBu_r',vmin=-5.,vmax=5., use_basemap=True, title='Absolute difference')
xx = map_plot(model_m.sub(hoaps_m,copy=True).div(hoaps_m,copy=True),ax=ax2,cmap_data='RdBu_r',vmin=-1.,vmax=1., use_basemap=True, title='Relative difference')
fig.savefig('precipitation_difference.pdf', bbox_inches='tight')
fig.savefig('precipitation_difference.png', bbox_inches='tight', dpi=200)
"""
Explanation: Let's have a look at the temporal mean differences first
End of explanation
"""
from pycmbs.plots import ScatterPlot
S = ScatterPlot(hoaps_m)
S.plot(model_m,fldmean=False)
S.ax.grid()
S.ax.set_xlim(0.,15.)
S.ax.set_ylim(S.ax.get_xlim())
S.ax.set_title('Mean global precipitation')
S.ax.set_ylabel('model')
S.ax.set_xlabel('HOAPS')
S.ax.figure.savefig('scatterplot.png')
"""
Explanation: Scatterplot
A simple scatterplot is generated using the ScatterPlot class. Here we need to provide a reference dataset as the independent variable. All other datasets are then plotted against that reference.
End of explanation
"""
from pycmbs.plots import LinePlot
f = plt.figure(figsize=(15,3))
ax = f.add_subplot(111)
L = LinePlot(ax=ax,regress=False)
L.plot(hoaps)
ax.grid()
f.savefig('time_series.png')
"""
Explanation: Timeseries
End of explanation
"""
|
mbway/Bayesian-Optimisation | prototypes/Optimisation.ipynb | gpl-3.0 | f = lambda x: x * np.cos(x)
x = np.linspace(0, 12, 100)
y = f(x)
plt.figure(figsize=(16,8))
plt.plot(x, y, 'g-')
plt.margins(0.1, 0.1)
plt.xlabel('x')
plt.ylabel('cost')
plt.show()
ranges = {
'x' : np.linspace(0, 12, 100)
}
class TestEvaluator(op.LocalEvaluator):
def test_config(self, config):
#time.sleep(1)
return f(config.x)
optimiser = op.GridSearchOptimiser(ranges, queue_size=100)
optimiser.poll_interval = 0 # run the optimiser loop faster; not recommended unless the evaluator is near-instant
evaluator = TestEvaluator(optimiser)
optimiser.start(run_async=True)
evaluator.start(run_async=True)
optimiser.wait_for()
evaluator.wait_for()
optimiser.report()
print('-'*25)
print(optimiser.log_record)
print('-'*25)
print(evaluator.log_record)
test_xs, test_ys = zip(*[(s.config.x, s.cost) for s in optimiser.samples])
best_config, best_y = optimiser.best_sample()
best_x = best_config.x
plt.figure(figsize=(16,8))
plt.plot(x, y, 'g-')
plt.plot(test_xs, test_ys, 'bo', markersize=5)
plt.plot([best_x], [best_y], 'ro', markersize=10)
plt.margins(0.1, 0.1)
plt.xlabel('x')
plt.ylabel('cost')
plt.show()
optimiser.plot_param('x', plot_boxplot=False, plot_samples=True, plot_means=True)
"""
Explanation: 1 Parameter - Grid Search
End of explanation
"""
import math
n = 100
base = 10
xs = np.logspace(math.log(1e-4, base), math.log(1e4, base), num=n, base=base)
ys = [0.1] * n
base = 2
xs2 = np.logspace(math.log(1e-4, base), math.log(1e4, base), num=n, base=base)
ys2 = [-0.1] * n
plt.figure(figsize=(16,3))
plt.plot(xs, ys, 'bo', markersize=5)
plt.plot(xs2, ys2, 'ro', markersize=5)
plt.axes().set_ylim((-2,2))
plt.margins(0.1, 0.1)
plt.show()
"""
Explanation: Note
np.logspace can be used to space points logarithmically rather than linearly; remember that the start and end points are $\mathrm{base}^{\mathrm{start}}$ and $\mathrm{base}^{\mathrm{end}}$, which is why log(start), log(end) is used below.
Also note that the base of the logarithm is essentially irrelevant to the resulting points (as seen by the blue vs. red points).
Finally, the ranges passed to the optimisers need only be numpy arrays, so you can shuffle them or pass custom arrays instead of using linspace.
End of explanation
"""
def f(x,y):
return 1.5 * (np.sin(0.5*x)**2 * np.cos(y) + 0.1*x + 0.2*y)
X = np.linspace(-6, 6, 100)
Y = np.linspace(-5, 5, 100)
X, Y = np.meshgrid(X, Y)
Z = f(X,Y)
plot3D.surface3D(X,Y,Z)
ranges = {
'x' : np.linspace(-6, 6, num=10),
'y' : np.linspace(-5, 5, num=10),
'z' : np.array([-0.5, 0.5]),
'another' : np.array(['a', 'b'])
}
#np.random.shuffle(ranges['x'])
#np.random.shuffle(ranges['y'])
order = ['x', 'y', 'z', 'another']
class TestEvaluator(op.LocalEvaluator):
def test_config(self, config):
#time.sleep(1)
return f(config.x, config.y) + config.z
optimiser = op.GridSearchOptimiser(ranges, queue_size=100, order=order)
optimiser.poll_interval = 0 # run the optimiser loop faster; not recommended unless the evaluator is near-instant
#optimiser = op.RandomSearchOptimiser(ranges, queue_size=100, allow_re_tests=False)
evaluator = TestEvaluator(optimiser)
optimiser.start(run_async=True)
evaluator.start(run_async=True)
optimiser.wait_for()
evaluator.wait_for()
optimiser.report()
print(optimiser.log_record)
"""
Explanation: 3 Parameters - Grid Search
End of explanation
"""
optimiser.scatter_plot('x', 'y', interactive=True, color_by='cost')
"""
Explanation: Scatter plot of parameters against cost
End of explanation
"""
optimiser.surface_plot('x', 'y')
optimiser.plot_param('x', plot_boxplot=True, plot_samples=True, plot_means=True)
optimiser.plot_param('y', plot_boxplot=True, plot_samples=True, plot_means=True)
optimiser.plot_param('z', plot_boxplot=True, plot_samples=True, plot_means=True)
"""
Explanation: Plot as a surface
End of explanation
"""
|
IBMDecisionOptimization/docplex-examples | examples/mp/jupyter/progress.ipynb | apache-2.0 | from docplex.mp.model import Model
def build_hearts(r, **kwargs):
# initialize the model
mdl = Model('love_hearts_%d' % r, **kwargs)
# the dictionary of decision variables, one variable
# for each circle with i in (1 .. r) as the row and
# j in (1 .. i) as the position within the row
idx = [(i, j) for i in range(1, r + 1) for j in range(1, i + 1)]
a = mdl.binary_var_dict(idx, name=lambda ij: "a_%d_%d" % ij)
# the constraints - enumerate all equilateral triangles
# and prevent any such triangles being formed by keeping
# the number of included circles at its vertexes below 3
# for each row except the last
for i in range(1, r):
# for each position in this row
for j in range(1, i + 1):
# for each triangle of side length (k) with its upper vertex at
# (i, j) and its sides parallel to those of the overall shape
for k in range(1, r - i + 1):
# the sets of 3 points at the same distances clockwise along the
# sides of these triangles form k equilateral triangles
for m in range(k):
u, v, w = (i + m, j), (i + k, j + m), (i + k - m, j + k - m)
mdl.add(a[u] + a[v] + a[w] <= 2)
mdl.maximize(mdl.sum(a))
return mdl
"""
Explanation: Using the Progress Listeners with CPLEX Optimizer
This tutorial includes everything you need to set up decision optimization engines, build a mathematical programming model, then use the progress listeners to monitor progress, capture intermediate solutions and stop the solve on your own criteria.
When you finish this tutorial, you'll have a foundational knowledge of Prescriptive Analytics.
This notebook is part of Prescriptive Analytics for Python
It requires either an installation of CPLEX Optimizers or it can be run on IBM Cloud Pak for Data as a Service (Sign up for a free IBM Cloud account
and you can start using IBM Cloud Pak for Data as a Service right away).
CPLEX is available on <i>IBM Cloud Pak for Data</i> and <i>IBM Cloud Pak for Data as a Service</i>:
- <i>IBM Cloud Pak for Data as a Service</i>: Depends on the runtime used:
- <i>Python 3.x</i> runtime: Community edition
- <i>Python 3.x + DO</i> runtime: full edition
- <i>Cloud Pak for Data</i>: Community edition is installed by default. Please install the DO add-on in Watson Studio Premium for the full edition.
Table of contents:
Describe the business problem
How decision optimization (prescriptive analytics) can help
Use decision optimization
Step 1: Set up the prescriptive model
Step 2: Monitoring CPLEX progress
Step 3: Aborting the search with a custom progress listener
Variant: using matplotlib to plot a chart of gap vs. time
Summary
How decision optimization can help
Prescriptive analytics (decision optimization) technology recommends actions that are based on desired outcomes. It takes into account specific scenarios, resources, and knowledge of past and current events. With this insight, your organization can make better decisions and have greater control of business outcomes.
Prescriptive analytics is the next step on the path to insight-based actions. It creates value through synergy with predictive analytics, which analyzes data to predict future outcomes.
Prescriptive analytics takes that insight to the next level by suggesting the optimal way to handle that future situation. Organizations that can act fast in dynamic conditions and make superior decisions in uncertain environments gain a strong competitive advantage.
<br/>
<u>With prescriptive analytics, you can:</u>
Automate the complex decisions and trade-offs to better manage your limited resources.
Take advantage of a future opportunity or mitigate a future risk.
Proactively update recommendations based on changing events.
Meet operational goals, increase customer loyalty, prevent threats and fraud, and optimize business processes.
Use decision optimization
Step 1: Set up the prescriptive model
We need a scalable MIP model in order to show how to leverage progress listeners in Docplex MP API.
Progress listeners are designed to monitor the progress of complex MIP search in docplex MP,
that is, linear programs with integer variables.
This model is easily scalable, and thus is appropriate to demonstrate the progress listener API, but any other scalable MIP model would do.
End of explanation
"""
m5 = build_hearts(5)
m5.print_information()
"""
Explanation: Let's try to build a small instance of the 'hearts' program and print its characteristics.
End of explanation
"""
from docplex.mp.progress import *
"""
Explanation: Step 2: Monitoring CPLEX progress
MIP search can take some time for large (or complex) problems. Setting the log_output=True in a solve() lets
you display the CPLEX log, which provides a lot of information. In certain cases, you might want to take control of what happens at intermediate points in the search, and this is what listeners are designed for.
An introduction to progress listeners
Progress listeners are objects, sub-classes of the docplex.mp.progress.ProgressListener class. Once a listener has been attached to a model instance (using Model.add_progress_listener), it receives method calls from within the CPLEX MIP search. CPLEX code decides when listeners are called, and this baseline logic cannot be changed.
However, progress listeners let you select certain types of events.
First, we have to import the docplex.mp.progress module, which contains everything about progress listeners.
End of explanation
"""
# connect a listener to the model
m5.add_progress_listener(TextProgressListener())
"""
Explanation: Monitoring MIP search progress
The simplest listener class is TextProgressListener, which prints a message to stdout each time it is called. Let's see what this does on our small model.
End of explanation
"""
m5.solve(clean_before_solve=True);
"""
Explanation: Solve with Decision Optimization
If you're using a Community Edition of CPLEX runtimes, depending on the size of the problem, the solve stage may fail and will need a paying subscription or product installation.
Here, we solve with the clean_before_solve flag set to True, as we want each solve to produce the same output. Without this flag, a second solve on the model would start from the first solve's solution, and would not have the same output.
End of explanation
"""
for l, listener in enumerate(m5.iter_progress_listeners(), start=1):
print("listener #{0} has type '{1}', clock={2}".format(l, listener.__class__.__name__, listener.clock))
"""
Explanation: The listener prints one line each time it is called by CPLEX code; from this we can see that:
the listener is called several times from the same node (0)
the listener is called several times at the same iteration (ItCnt=42)
the listener is called several times with the same objective 7.0
In each line, the '+' indicates that an intermediate solution is available at the time of the call. In this case, an intermediate solution was available at each call, but this is not always the case.
Looking closer, we also see that the listener reacts to events which improve either the objective or the best bound.
This is due to the value of the clock attribute of the listener.
Listener clocks
Clocks are values of the enumerated type docplex.mp.progress.ProgressClock, which defines types of events to listen to. Every listener has a clock, the default being ProgressClock.Gap, which reacts when an event satisfies the following conditions:
an intermediate solution is available
either the objective has improved or the best bound has improved
Let's check this on our model:
End of explanation
"""
m5.clear_progress_listeners()
m5.add_progress_listener(TextProgressListener(clock='all'))
m5.solve(clean_before_solve=True);
"""
Explanation: Now, let's experiment with a text progress listener listening to the All clock, that is the baseline clock that reacts to all calls from CPLEX. To do so, we first clear all progress listeners and add a new one.
Note the constructor also accepts strings, interpreted as clock name.
End of explanation
"""
m5.clear_progress_listeners()
m5.add_progress_listener(TextProgressListener(clock='objective', absdiff=1, reldiff=0))
m5.solve(clean_before_solve=True);
"""
Explanation: In this case, the listener is called more often, sometimes with identical objective and bound (see lines 3 and 4). This explains why the default clock is Gap: it focuses on actual changes.
Other possible values for the clock enumerated type are:
Solutions: listen to all intermediate solutions, whether or not they improve objective or best bound.
Objective: listen to intermediate solutions, which improve the objective.
How exactly is improvement measured? A listener constructor can specify absdiff and reldiff parameters, which are interpreted as the minimal absolute (resp. relative) improvement required to accept a call from CPLEX.
Let us demonstrate this with a third TextProgressListener with clock set to 'objective' and an absolute diff of 1. We expect this listener to react whenever the objective has improved by an amount of at least 1:
End of explanation
"""
from docplex.mp.progress import SolutionRecorder
sol_recorder = SolutionRecorder()
m5.clear_progress_listeners()
m5.add_progress_listener(sol_recorder)
"""
Explanation: As expected, the listener accepted three events, with objective values of 6, 7 and 8.
Monitor progress: manage intermediate solutions
This is done by another predefined listener class: SolutionRecorder, a subclass of SolutionListener. Again, this listener has a clock parameter (default is Gap) which controls which events are accepted.
The default behavior is to accept only solutions which improve either the objective or the best bound
End of explanation
"""
m5.solve(clean_before_solve=True);
"""
Explanation: Again, we solve with the clean_before_solve flag set to True to ensure a deterministic behavior.
End of explanation
"""
# utility function to display recorded solutions in a recorder.
def display_recorded_solutions(rec):
print('* The recorder contains {} solutions'.format(rec.number_of_solutions))
for s, sol in enumerate(rec.iter_solutions(), start=1):
sumvals = sum(v for _, v in sol.iter_var_values())
print(' - solution #{0}, obj={1}, non-zero-values={2}, total={3}'.format(
s, sol.objective_value, sol.number_of_var_values, sumvals))
display_recorded_solutions(sol_recorder)
"""
Explanation: At the end of the solve, the recorder contains all the intermediate solutions.
Now, let's display some information about those intermediate solutions.
End of explanation
"""
sol_recorder2 = SolutionRecorder(clock='objective')
m5.clear_progress_listeners()
m5.add_progress_listener(sol_recorder2)
m5.solve(clean_before_solve=True)
display_recorded_solutions(sol_recorder2)
"""
Explanation: Now, let's try a solution recorder with a different clock: Objective. This recorder will record only intermediate solutions which improve the objective, regardless of the best bound. Such changes occur less frequently than with the Gap clock, so we expect fewer solutions to be recorded.
End of explanation
"""
from docplex.mp.progress import ProgressListener, ProgressClock
class AutomaticAborter(ProgressListener):
""" a simple implementation of an automatic search stopper.
"""
def __init__(self, max_no_improve_time=10.):
super(AutomaticAborter, self).__init__(ProgressClock.All)
self.last_obj = None
self.last_obj_time = None
self.max_no_improve_time = max_no_improve_time
def notify_start(self):
super(AutomaticAborter, self).notify_start()
self.last_obj = None
self.last_obj_time = None
def is_improving(self, new_obj, eps=1e-4):
last_obj = self.last_obj
return last_obj is None or (abs(new_obj- last_obj) >= eps)
def notify_progress(self, pdata):
super(AutomaticAborter, self).notify_progress(pdata)
if pdata.has_incumbent and self.is_improving(pdata.current_objective):
self.last_obj = pdata.current_objective
self.last_obj_time = pdata.time
print('----> #new objective={0}, time={1}s'.format(self.last_obj, self.last_obj_time))
else:
# a non improving move
last_obj_time = self.last_obj_time
this_time = pdata.time
if last_obj_time is not None:
elapsed = (this_time - last_obj_time)
if elapsed >= self.max_no_improve_time:
print('!! aborting cplex, elapsed={0} >= max_no_improve: {1}'.format(elapsed,
self.max_no_improve_time))
self.abort()
else:
print('----> non improving time={0}s'.format(elapsed))
"""
Explanation: As expected, the 'objective' recorder stored only 3 solutions instead of 5. Only one solution with objective 7 is recorded, instead of three with the 'Gap' recorder.
Step 3: Aborting the search with a custom progress listener
MIP search can be time-consuming; in some cases, a 'good-enough' solution can be sufficient.
For example, when the gap is converging very slowly, it may be a good idea to stop and use the last solution instead of waiting for a long time to prove optimality.
Let's assume we want to implement the following behavior:
stop the search when no improvement in the objective has occurred for N seconds since the latest improvement.
The first question to ask is: which clock do we listen to? As we want to stop as soon as the
elapsed time without improvement is greater than our limit, we listen to the highest-frequency clock, the All clock.
Second, as we do not need the intermediate solutions themselves, we subclass ProgressListener, not SolutionListener.
What do we need to code this aborter? We need to know whether an incumbent solution is present and what its objective value is, then check whether the objective has improved.
If it has improved, we store the value of the objective and the time (obtained through ProgressData.time);
if not, we check whether the elapsed time is greater than the limit and, if so, call the abort() method.
The code is as follows:
End of explanation
"""
large_hearts = build_hearts(12)
#large_hearts.add_progress_listener(TextProgressListener(clock='gap'))
# maximum non-improving time is 4 seconds.
large_hearts.add_progress_listener(AutomaticAborter(max_no_improve_time=4))
# again use clean_before_solve to ensure deterministic run of this cell.
large_hearts.solve(clean_before_solve=True, log_output=False);
"""
Explanation: We demonstrate the aborter on a bigger problem:
End of explanation
"""
large_s = large_hearts.solution
print('* solution has objective {0}'.format(large_s.objective_value))
print("* solve status is '{}'".format(large_hearts.solve_details.status))
class MipGapPrinter(ProgressListener):
def __init__(self):
ProgressListener.__init__(self, ProgressClock.Gap)
def notify_progress(self, pdata):
gap = pdata.mip_gap
ms_time = 1000* pdata.time
print('-- new gap: {0:.1%}, time: {1:.0f} ms'.format(gap, ms_time))
m8 = build_hearts(8)
m8.add_progress_listener(MipGapPrinter())
m8.solve(clean_before_solve=True);
"""
Explanation: Though the solve has been aborted, it returned the latest solution,
but the status of the solve shows it has been aborted.
End of explanation
"""
import matplotlib.pyplot as plt
%matplotlib inline
from docplex.mp.progress import ProgressListener, ProgressClock
from IPython import display
class MipGapPlotter(ProgressListener):
def __init__(self):
ProgressListener.__init__(self, ProgressClock.Gap)
plt.ion()
self.fig = plt.figure(figsize=(10,4))
self.ax = self.fig.add_subplot(1,1,1)
def notify_start(self):
self.times =[]
self.gaps = []
#self.lines, = ax.plot([],[], 'o')
plt.xlabel('time (ms)')
plt.ylabel('gap (%)')
def notify_progress(self, pdata):
gap = pdata.mip_gap
time = pdata.time
self.times.append(1000* time)
self.gaps.append(100*gap)
plt.plot(self.times, self.gaps, 'go-')
display.display(plt.gcf())
display.clear_output(wait=True)
m9 = build_hearts(9)
m9.add_progress_listener(MipGapPlotter())
m9.solve(clean_before_solve=True);
"""
Explanation: Variant: using matplotlib to plot a chart of gap vs. time
In this variant, we use matplotlib to chart the evolution of the gap over time. The logic of the custom listener is identical to that of the gap printer, except that we call matplotlib's plot instead of printing a message.
End of explanation
"""
import graphlab
"""
Explanation: Using deep features to build an image classifier
Fire up GraphLab Create
End of explanation
"""
image_train = graphlab.SFrame('image_train_data/')
image_test = graphlab.SFrame('image_test_data/')
"""
Explanation: Load a common image analysis dataset
We will use a popular benchmark dataset in computer vision called CIFAR-10.
(We've reduced the data to just 4 categories = {'cat','bird','automobile','dog'}.)
This dataset is already split into a training set and test set.
End of explanation
"""
graphlab.canvas.set_target('ipynb')
image_train['image'].show()
"""
Explanation: Exploring the image data
End of explanation
"""
raw_pixel_model = graphlab.logistic_classifier.create(image_train,target='label',
features=['image_array'])
"""
Explanation: Train a classifier on the raw image pixels
We first start by training a classifier on just the raw pixels of the image.
End of explanation
"""
image_test[0:3]['image'].show()
image_test[0:3]['label']
raw_pixel_model.predict(image_test[0:3])
"""
Explanation: Make a prediction with the simple model based on raw pixels
End of explanation
"""
raw_pixel_model.evaluate(image_test)
"""
Explanation: The model makes wrong predictions for all three images.
Evaluating raw pixel model on test data
End of explanation
"""
len(image_train)
"""
Explanation: The accuracy of this model is poor, getting only about 46% accuracy.
Can we improve the model using deep features?
We only have 2005 data points, so it is not possible to train a deep neural network effectively with so little data. Instead, we will use transfer learning: using deep features trained on the full ImageNet dataset, we will train a simple model on this small dataset.
End of explanation
"""
#deep_learning_model = graphlab.load_model('http://s3.amazonaws.com/GraphLab-Datasets/deeplearning/imagenet_model_iter45')
#image_train['deep_features'] = deep_learning_model.extract_features(image_train)
"""
Explanation: Computing deep features for our images
The two lines below allow us to compute deep features. This computation takes a little while, so we have already computed them and saved the results as a column in the data you loaded.
(Note that if you would like to compute such deep features and have a GPU on your machine, you should use the GPU enabled GraphLab Create, which will be significantly faster for this task.)
End of explanation
"""
image_train.head()
"""
Explanation: As we can see, the column deep_features already contains the pre-computed deep features for this data.
End of explanation
"""
deep_features_model = graphlab.logistic_classifier.create(image_train,
features=['deep_features'],
target='label')
"""
Explanation: Given the deep features, let's train a classifier
End of explanation
"""
image_test[0:3]['image'].show()
deep_features_model.predict(image_test[0:3])
"""
Explanation: Apply the deep features model to first few images of test set
End of explanation
"""
deep_features_model.evaluate(image_test)
"""
Explanation: The classifier with deep features gets all of these images right!
Compute test_data accuracy of deep_features_model
As we can see, deep features provide us with significantly better accuracy (about 78%)
End of explanation
"""
from pyspark import SparkContext
sc = SparkContext(master = 'local')
from pyspark.sql import SparkSession
spark = SparkSession.builder \
.appName("Python Spark SQL basic example") \
.config("spark.some.config.option", "some-value") \
.getOrCreate()
"""
Explanation: NLP Information Extraction
SparkContext and SparkSession
End of explanation
"""
import nltk
from nltk.corpus import gutenberg
milton_paradise = gutenberg.raw('milton-paradise.txt')
"""
Explanation: Simple NLP pipeline architecture
Reference: Bird, Steven, Ewan Klein, and Edward Loper. Natural language processing with Python: analyzing text with the natural language toolkit. " O'Reilly Media, Inc.", 2009.
Example data
The raw text is from the gutenberg corpus from the nltk package. The fileid is milton-paradise.txt.
Get the data
Raw text
End of explanation
"""
import pandas as pd
pdf = pd.DataFrame({
'sentences': nltk.sent_tokenize(milton_paradise)
})
df = spark.createDataFrame(pdf)
df.show(n=5)
"""
Explanation: Create a spark data frame to store raw text
Use the nltk.sent_tokenize() function to split text into sentences.
End of explanation
"""
from pyspark.sql.functions import udf
from pyspark.sql.types import *
## define udf function
def sent_to_tag_words(sent):
wordlist = nltk.word_tokenize(sent)
tagged_words = nltk.pos_tag(wordlist)
return(tagged_words)
## define schema for returned result from the udf function
## the returned result is a list of tuples.
schema = ArrayType(StructType([
StructField('f1', StringType()),
StructField('f2', StringType())
]))
## the udf function
sent_to_tag_words_udf = udf(sent_to_tag_words, schema)
"""
Explanation: Tokenization and POS tagging
End of explanation
"""
df_tagged_words = df.select(sent_to_tag_words_udf(df.sentences).alias('tagged_words'))
df_tagged_words.show(5)
"""
Explanation: Transform data
End of explanation
"""
import nltk
from pyspark.sql.functions import udf
from pyspark.sql.types import *
# define a udf function to chunk noun phrases from pos-tagged words
grammar = "NP: {<DT>?<JJ>*<NN>}"
chunk_parser = nltk.RegexpParser(grammar)
chunk_parser_udf = udf(lambda x: str(chunk_parser.parse(x)), StringType())
"""
Explanation: Chunking
Chunking is the process of segmenting and labeling multitokens. The following example shows how to do a noun phrase chunking on the tagged words data frame from the previous step.
First we define a udf function which chunks noun phrases from a list of pos-tagged words.
End of explanation
"""
df_NP_chunks = df_tagged_words.select(chunk_parser_udf(df_tagged_words.tagged_words).alias('NP_chunk'))
df_NP_chunks.show(2, truncate=False)
"""
Explanation: Transform data
End of explanation
"""
from __future__ import print_function
import mne
import numpy as np
"""
Explanation: Creating MNE-Python's data structures from scratch
End of explanation
"""
# Create some dummy metadata
n_channels = 32
sampling_rate = 200
info = mne.create_info(32, sampling_rate)
print(info)
"""
Explanation: Creating :class:Info <mne.Info> objects
<div class="alert alert-info"><h4>Note</h4><p>for full documentation on the `Info` object, see
`tut_info_objects`. See also
`sphx_glr_auto_examples_io_plot_objects_from_arrays.py`.</p></div>
Normally, :class:mne.Info objects are created by the various
data import functions <ch_convert>.
However, if you wish to create one from scratch, you can use the
:func:mne.create_info function to initialize the minimally required
fields. Further fields can be assigned later as one would with a regular
dictionary.
The following creates the absolute minimum info structure:
End of explanation
"""
# Names for each channel
channel_names = ['MEG1', 'MEG2', 'Cz', 'Pz', 'EOG']
# The type (mag, grad, eeg, eog, misc, ...) of each channel
channel_types = ['grad', 'grad', 'eeg', 'eeg', 'eog']
# The sampling rate of the recording
sfreq = 1000 # in Hertz
# The EEG channels use the standard naming strategy.
# By supplying the 'montage' parameter, approximate locations
# will be added for them
montage = 'standard_1005'
# Initialize required fields
info = mne.create_info(channel_names, sfreq, channel_types, montage)
# Add some more information
info['description'] = 'My custom dataset'
info['bads'] = ['Pz'] # Names of bad channels
print(info)
"""
Explanation: You can also supply more extensive metadata:
End of explanation
"""
# Generate some random data
data = np.random.randn(5, 1000)
# Initialize an info structure
info = mne.create_info(
ch_names=['MEG1', 'MEG2', 'EEG1', 'EEG2', 'EOG'],
ch_types=['grad', 'grad', 'eeg', 'eeg', 'eog'],
sfreq=100
)
custom_raw = mne.io.RawArray(data, info)
print(custom_raw)
"""
Explanation: <div class="alert alert-info"><h4>Note</h4><p>When assigning new values to the fields of an
:class:`mne.Info` object, it is important that the
fields are consistent:
- The length of the channel information field `chs` must be
`nchan`.
- The length of the `ch_names` field must be `nchan`.
- The `ch_names` field should be consistent with the `name` field
of the channel information contained in `chs`.</p></div>
Creating :class:Raw <mne.io.Raw> objects
To create a :class:mne.io.Raw object from scratch, you can use the
:class:mne.io.RawArray class, which implements raw data that is backed by a
numpy array. Its constructor simply takes the data matrix and
:class:mne.Info object:
End of explanation
"""
# Generate some random data: 10 epochs, 5 channels, 2 seconds per epoch
sfreq = 100
data = np.random.randn(10, 5, sfreq * 2)
# Initialize an info structure
info = mne.create_info(
ch_names=['MEG1', 'MEG2', 'EEG1', 'EEG2', 'EOG'],
ch_types=['grad', 'grad', 'eeg', 'eeg', 'eog'],
sfreq=sfreq
)
"""
Explanation: Creating :class:Epochs <mne.Epochs> objects
To create an :class:mne.Epochs object from scratch, you can use the
:class:mne.EpochsArray class, which uses a numpy array directly without
wrapping a raw object. The array must be of shape (n_epochs, n_chans,
n_times).
End of explanation
"""
# Create an event matrix: 10 events with a duration of 1 sample, alternating
# event codes
events = np.array([
[0, 1, 1],
[1, 1, 2],
[2, 1, 1],
[3, 1, 2],
[4, 1, 1],
[5, 1, 2],
[6, 1, 1],
[7, 1, 2],
[8, 1, 1],
[9, 1, 2],
])
"""
Explanation: It is necessary to supply an "events" array in order to create an Epochs
object. This is of shape (n_events, 3), where the first column is the sample index of the
event, the second column is the value of the trigger channel just before the event, and the
third column is the event id.
End of explanation
"""
event_id = dict(smiling=1, frowning=2)
"""
Explanation: More information about the event codes: subject was either smiling or
frowning
End of explanation
"""
# Trials were cut from -0.1 to 1.0 seconds
tmin = -0.1
"""
Explanation: Finally, we must specify the beginning of an epoch (the end will be inferred
from the sampling frequency and n_samples)
End of explanation
"""
custom_epochs = mne.EpochsArray(data, info, events, tmin, event_id)
print(custom_epochs)
# We can treat the epochs object as we would any other
_ = custom_epochs['smiling'].average().plot()
"""
Explanation: Now we can create the :class:mne.EpochsArray object
End of explanation
"""
# The averaged data
data_evoked = data.mean(0)
# The number of epochs that were averaged
nave = data.shape[0]
# A comment to describe the evoked (usually the condition name)
comment = "Smiley faces"
# Create the Evoked object
evoked_array = mne.EvokedArray(data_evoked, info, tmin,
comment=comment, nave=nave)
print(evoked_array)
_ = evoked_array.plot()
"""
Explanation: Creating :class:Evoked <mne.Evoked> Objects
If you already have data that is collapsed across trials, you may also
directly create an evoked array. Its constructor accepts an array of
shape (n_chans, n_times) in addition to some bookkeeping parameters.
End of explanation
"""
def hurst(data):
tau, lagvec = [], []
# Step through the different lags
for lag in range(2,20):
        # Produce the price difference at this lag
pp = np.subtract(data[lag:],data[:-lag])
# Write the different lags into a vector
lagvec.append(lag)
        # Record the square root of the standard deviation of the differences
tau.append(np.sqrt(np.std(pp)))
# Linear fit to a double-log graph to get power
m = np.polyfit(np.log10(lagvec),np.log10(tau),1)
# Calculate hurst
hurst = m[0]*2
return hurst
H=hurst(dataset.mid.tail(100))
H
"""
Explanation: Momentum
Hurst exponent helps test whether the time series is:
(1) A Random Walk (H ~ 0.5)
(2) Trending (H > 0.5)
(3) Mean reverting (H < 0.5)
https://www.quantopian.com/posts/hurst-exponent
https://www.quantopian.com/posts/neural-network-that-tests-for-mean-reversion-or-momentum-trending
End of explanation
"""
from statsmodels import regression
import statsmodels.api as sm
import scipy.stats as stats
import scipy.spatial.distance as distance
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
short_mavg = pd.rolling_mean(dataset.mid,5)
long_mavg = pd.rolling_mean(dataset.mid,15)
dataset.mid.tail(100).plot(alpha = 0.5)
dataset.high.tail(100).plot()
dataset.low.tail(100).plot()
dataset.vwap.tail(100).plot()
dataset.km.tail(100).plot()
short_mavg.tail(100).plot()
long_mavg.tail(100).plot(alpha = 0.5)
plt.ylabel('Price')
plt.show()
"""
Explanation: https://www.quantopian.com/research/notebooks/Cloned%20from%20%22Quantopian%20Lecture%20Series%3A%20Measuring%20Momentum%22.ipynb
End of explanation
"""
asset=dataset.mid.tail(200)
asset.plot(alpha = 0.5)
rolling_means = {}
for i in np.linspace(10, 100, 10):
X = pd.rolling_mean(asset,int(i))
rolling_means[i] = X
X.plot(alpha = 0.7)
rolling_means = pd.DataFrame(rolling_means).dropna()
plt.show()
"""
Explanation: Moving Average Crossover Ribbons
End of explanation
"""
asset=dataset.mid.tail(50)
scores = pd.Series(index=asset.index)
for date in rolling_means.index:
mavg_values = rolling_means.loc[date]
ranking = stats.rankdata(mavg_values.values)
d = distance.hamming(ranking, range(1, 11))
scores[date] = d
# Normalize the score
(10 * scores).plot();
asset.plot()
plt.legend(['Signal', 'Asset Price']);
plt.show()
"""
Explanation: Information from the above Ribbon
Distance Metric
End of explanation
"""
asset=dataset.mid.tail(100)
scores = pd.Series(index=asset.index)
for date in rolling_means.index:
mavg_values = rolling_means.loc[date]
ranking = stats.rankdata(mavg_values.values)
_, d = stats.spearmanr(ranking, range(1, 11))
scores[date] = d
# Normalize the score
(10 * scores).plot();
asset.plot()
plt.legend(['Signal', 'Asset Price']);
plt.show()
"""
Explanation: Correlation metric.
End of explanation
"""
asset=dataset.km.tail(100)
scores = pd.Series(index=asset.index)
for date in rolling_means.index:
mavg_values = rolling_means.loc[date]
d = np.max(mavg_values) - np.min(mavg_values)
scores[date] = d
# Normalize the score
(10 * scores).plot();
asset.plot()
plt.legend(['Signal', 'Asset Price']);
plt.show()
"""
Explanation: Measuring Thickness
End of explanation
"""
k = 30
pricing=dataset.mid.tail(1000)
x = np.log(pricing)
v = x.diff()
m = (0.5*(dataset.askSize.tail(1000)+dataset.bidSize.tail(1000)))/((max(dataset.askSize)+max(dataset.bidSize))*0.5)
p0 = pd.rolling_sum(v, k)
p1 = pd.rolling_sum(m*v, k)
p2 = p1/pd.rolling_sum(m, k)
p3 = pd.rolling_mean(v, k)/pd.rolling_std(v, k)
f, (ax1, ax2) = plt.subplots(2,1)
ax1.plot(p0)
ax1.plot(p1)
ax1.plot(p2)
ax1.plot(p3)
ax1.set_title('Momentum of SPY')
ax1.legend(['p(0)', 'p(1)', 'p(2)', 'p(3)'], bbox_to_anchor=(1.1, 1))
ax2.plot(p0)
ax2.plot(p1)
ax2.plot(p2)
ax2.plot(p3)
#ax2.axis([0, 300, -0.005, 0.005])
ax2.set_xlabel('Time');
plt.show()
def get_p(prices, m, d, k):
""" Returns the dth-degree rolling momentum of data using lookback window length k """
x = np.log(prices)
v = x.diff()
m = np.array(m)
if d == 0:
return pd.rolling_sum(v, k)
elif d == 1:
return pd.rolling_sum(m*v, k)
elif d == 2:
return pd.rolling_sum(m*v, k)/pd.rolling_sum(m, k)
elif d == 3:
return pd.rolling_mean(v, k)/pd.rolling_std(v, k)
def backtest_get_p(prices, m, d):
""" Returns the dth-degree rolling momentum of data"""
v = np.diff(np.log(prices))
m = np.array(m)
if d == 0:
return np.sum(v)
elif d == 1:
return np.sum(m*v)
elif d == 2:
return np.sum(m*v)/np.sum(m)
elif d == 3:
return np.mean(v)/np.std(v)
k = 30
d=3
prices=dataset.mid.tail(1000)
x = np.log(prices)
v = x.diff()
m = (0.5*(dataset.askSize.tail(1000)+dataset.bidSize.tail(1000)))/((max(dataset.askSize)+max(dataset.bidSize))*0.5)
p0 = pd.rolling_sum(v, k)
p1 = pd.rolling_sum(m*v, k)
p2 = p1/pd.rolling_sum(m, k)
p3 = pd.rolling_mean(v, k)/pd.rolling_std(v, k)
"""
Explanation: Momentum From Physics
End of explanation
"""
from __future__ import print_function
import matplotlib.pyplot as plt
import os
import pandas as pd
import re
import seaborn as sns
try:
from urllib2 import Request, urlopen
except ImportError:
from urllib.request import Request, urlopen
from bs4 import BeautifulSoup
%matplotlib inline
"""
Explanation: 1. Introduction
Data science used to be called data mining, and for good reasons. It is a dirty, soul-wrenching job in which many die. Machine learning is the only enjoyable part, but whatever happens before you can deploy a learning algorithm is drudgery. The renaming of the subject was a PR move, and it is about as pretentious as calling coal mining "carbon extraction science".
Today we will focus on the drudgery part, which is inevitable if you want to obtain any non-trivial result with machine learning. My hope is that the topics covered translate to other workloads you might encounter in your daily work. The topics will be:
I/O operations, which includes pulling files off the net and elementary loading/saving. We will not work on scale, as the IT policy explicitly bans this.
Lots of text processing. This alone is a compelling reason to use Python 3: you will avoid most of the problems associated with the rabbit hole known as UTF-8. You'd also better make friends with regular expressions.
Database-like operations, dataframes.
Visual inspection of data to get a feeling of it.
The modules we import reflect the above: numpy or TensorFlow does not have a place here. We will scrape stuff from arXiv to analyze correlation patterns between authors, metadata, and Impact Factor. There are high-level libraries to do scraping (e.g. Scrapy) and also to work with arXiv (e.g.arxiv.py), but we will build things bottom-up, so we import low-level I/O, networking, text processing and parsing libraries.
End of explanation
"""
url = "http://bist.eu/"
req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})
content = urlopen(req).read()
"""
Explanation: 2. Scraping
The amount of insight you can gain by scraping free information from the net is astonishing, especially given how easy it is to scrape. Whatever you want to learn about, there is free information out there (and probably a Python package to help).
We will work on the simplest scenario: pulling off files that do not need authentication or setting cookies. We will ignore robot rules and pretend that we are a Firefox browser. For instance, this will get you BIST's main site:
End of explanation
"""
content[:50]
"""
Explanation: If we inspect what it is, it looks like HTML:
End of explanation
"""
type(content)
"""
Explanation: The way we read it, the content is a byte array:
End of explanation
"""
print(content.decode("utf-8")[:135])
"""
Explanation: Convert it to UTF-8 to see what it contains in a nicer way:
End of explanation
"""
with open("bist.html", 'wb') as file:
file.write(content)
file.close()
"""
Explanation: You can save it to a file so that you do not have to scrape it again. This is important when you scrape millions of files, and the scraping has to be restarted. Have mercy on the webserver, or you risk getting blacklisted.
End of explanation
"""
from subprocess import run
run(["firefox", "-new-tab", "bist.html"])
"""
Explanation: You can technically open this in a browser, but it is only the HTML part of the site: the style files, images, and a whole lot of other things that make a page working are missing. Lazy people can launch a browser from Python to see what the page looks like:
End of explanation
"""
def get_file(url, filename=None):
if filename is None:
slash = url.rindex("/")
filename = url[slash+1:]
if os.path.isfile(filename):
with open(filename, 'rb') as file:
content = file.read()
else:
req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})
content = urlopen(req).read()
with open(filename, 'wb') as file:
file.write(content)
file.close()
return content
"""
Explanation: Exercise 1. Copy an image location from the BIST website. Scrape it and display it. For the display, you can import Image from IPython.display, and then use Image(data=content). Technically you could use imshow from Matplotlib, but then you would have decompress the image first.
We put everything together in a function that only attempts to download a file if it is not present locally. If you do not specify a filename, it tries to extract it from the URL by taking everything after the last "/" character as the filename.
End of explanation
"""
abbreviations = get_file("https://github.com/JabRef/abbrv.jabref.org/raw/master/journals/journal_abbreviations_webofscience.txt")
"""
Explanation: 3. Cleaning structured data
We will integrate several data sources, as this is the most common pattern you encounter in day to day data science. We will have two structured sources of data, and one semistructured:
A table of journal titles and matching abbreviations. This will help us standardize journal titles in the data, as some will be given as full titles, others as abbreviations.
A table of journals and matching Impact Factors.
Search results in HTML format that contain the metadata of the papers we want to study
We start with the easier part, obtaining and cleaning the structured data. Let us get the journal abbrevations:
End of explanation
"""
print(abbreviations.decode("utf-8")[:1000])
"""
Explanation: Let's see what it contains:
End of explanation
"""
abbs = pd.read_csv("journal_abbreviations_webofscience.txt", sep='=',
comment='#', names=["Full Journal Title", "Abbreviation"])
"""
Explanation: This is fairly straightforward: the file starts with a bunch of comments, then two columns containing the full title and the abbreviation, separated by the "=" character. We can load it with Pandas without thinking much:
End of explanation
"""
abbs.head()
"""
Explanation: In principle, it looks okay:
End of explanation
"""
abbs.iloc[0, 1]
"""
Explanation: If we take a closer look at a particular element, we will notice that something is off:
End of explanation
"""
abbs = abbs.applymap(lambda x: x.upper().strip())
"""
Explanation: The string starts with an unnecessary white space. We also do not want to miss any possible match because of differences in the use of capitalization. So we strip starting and trailing white space and convert every entry to upper case:
End of explanation
"""
get_file("https://www.researchgate.net/file.PostFileLoader.html?id=558730995e9d9735688b4631&assetKey=AS%3A273803718922244%401442291301717",
"2014_SCI_IF.xlsx");
"""
Explanation: This is done: it was an easy and clean data set.
Let us go for the next one. The Journal Citation Reports reduce scientific contributions to a single number, allowing incompetent people to make unfounded judgements about the quality of your research output. So when you encounter a question like "I want to know which JCR journal is paid and easy to get publication", you do not even blink an eye. Fortunately, the same thread has a download link for the 2015 report in an XLS file. We go for it:
End of explanation
"""
run(["soffice", "2014_SCI_IF.xlsx"])
"""
Explanation: We can open it in LibreOffice or OpenOffice, using the lazy way again:
End of explanation
"""
ifs = pd.read_excel("2014_SCI_IF.xlsx", skiprows=2, index_col=0)
"""
Explanation: The two first lines are junk, and there is an index column. Let us load it in Pandas:
End of explanation
"""
ifs.head()
"""
Explanation: If this fails, you do not have the package for reading Excel files. Install it with import pip; pip.main(["install", "xlrd"]).
Now this table definitely looks fishy:
End of explanation
"""
ifs.tail()
"""
Explanation: What are those empty columns? They do not seem to be in the file. The answer lies at the end of the table:
End of explanation
"""
skip = len(ifs) - ifs["Journal Impact Factor"].last_valid_index() + 1
ifs = pd.read_excel("2014_SCI_IF.xlsx", skiprows=2, index_col=0,
skip_footer=skip, parse_cols="A,B,E")
"""
Explanation: Some stupid multi-column copyright notice at the bottom distorts the entire collection. This is why spreadsheets are the most hated data format for structured data, as the uncontrolled blend of content and formatting lets ignorant people give you a headache.
The journals are listed according to their ranking, in decreasing order. We know that we do not care about journals that do not have an Impact Factor. We take this as a cut-off for rows, and re-read only the rows and columns we are interested in:
End of explanation
"""
ifs.head()
"""
Explanation: This looks much better:
End of explanation
"""
ifs.sample(n=10)
"""
Explanation: But not perfect:
End of explanation
"""
def get_search_result(author):
filename = author + ".html"
url = "https://arxiv.org/find/all/1/au:+" + author + "/0/1/0/all/0/1?per_page=400"
page = get_file(url, filename)
return BeautifulSoup(page, "html.parser")
"""
Explanation: Now you see why we converted everything to upper case before.
Exercise 2. Convert everything in the column "Full Journal Title" to upper case. You will encounter a charming inconsistency in Pandas.
4. Cleaning and filtering of semi-structured data
The horror starts when even more control is given to humans who create the data. We move on to studying the search results for the authors Lewenstein and Acín on arXiv. The metadata is entered by humans, which makes it inconsistent and chaotic.
The advanced search on arXiv is simple and handy. The search URL has a well-defined structure that we can exploit. We create a wrapper function around our scraping engine, which gets the first 400 results for a requested author and returns a parsed HTML file. The library BeautifulSoup does the parsing: it understands the hierarchical structure of the file and allows you to navigate the hierarchy with a handful of convenience functions.
End of explanation
"""
lewenstein = get_search_result("Lewenstein_M")
"""
Explanation: It is a common pattern to exploit search URLs to get what you want. Many don't let you pull off too many results in a single page, and you have to step through the "Next" links over and over again to get all the results you want. This is also not difficult, but it has the extra complication that you have to extract the "Next" link and call it recursively.
Let us study the results:
End of explanation
"""
run(["firefox", "-new-tab", "view-source:file://" + os.getcwd() + "/Lewenstein_M.html"])
"""
Explanation: We can view the source in a browser:
End of explanation
"""
titles = []
for dd in lewenstein.find_all("dd"):
titles.append(dd.find("div", class_="list-title mathjax"))
len(titles)
"""
Explanation: Skipping the boring header part, the source code of the search result looks like this:
```html
<h3>Showing results 1 through 389 (of 389 total) for
<a href="/find/all/1/au:+Lewenstein_M/0/1/0/all/0/1?skip=0&query_id=ff0631708b5d0dd5">au:Lewenstein_M</a></h3>
<dl>
<dt>1. <span class="list-identifier"><a href="/abs/1703.09814" title="Abstract">arXiv:1703.09814</a> [<a href="/pdf/1703.09814" title="Download PDF">pdf</a>, <a href="/ps/1703.09814" title="Download PostScript">ps</a>, <a href="/format/1703.09814" title="Other formats">other</a>]</span></dt>
<dd>
<div class="meta">
<div class="list-title mathjax">
<span class="descriptor">Title:</span> Efficient Determination of Ground States of Infinite Quantum Lattice Models in Three Dimensions
</div>
<div class="list-authors">
<span class="descriptor">Authors:</span>
<a href="/find/cond-mat/1/au:+Ran_S/0/1/0/all/0/1">Shi-Ju Ran</a>,
<a href="/find/cond-mat/1/au:+Piga_A/0/1/0/all/0/1">Angelo Piga</a>,
<a href="/find/cond-mat/1/au:+Peng_C/0/1/0/all/0/1">Cheng Peng</a>,
<a href="/find/cond-mat/1/au:+Su_G/0/1/0/all/0/1">Gang Su</a>,
<a href="/find/cond-mat/1/au:+Lewenstein_M/0/1/0/all/0/1">Maciej Lewenstein</a>
</div>
<div class="list-comments">
<span class="descriptor">Comments:</span> 11 pages, 9 figures
</div>
<div class="list-subjects">
<span class="descriptor">Subjects:</span> <span class="primary-subject">Strongly Correlated Electrons (cond-mat.str-el)</span>; Computational Physics (physics.comp-ph)
</div>
</div>
</dd>
```
This is the entire first result. It might look intimidating, but as long as you know that in HTML a mark-up starts with `<whatever>` and ends with `</whatever>`, you will find regular and hierarchical patterns. If you stare hard enough, you will see that the `<dd>` tag contains most of the information we want: it has the authors, the title, the journal reference if there is one (not in the example shown), and the primary subject. It does not actually matter what `<dd>` is: we are not writing a browser, we are scraping data.
As a quick sanity check, we can easily extract the titles and verify that their number matches the reported count of search results. The `lewenstein` object is an instance of the BeautifulSoup class, which has methods to find all instances of a given mark-up. We use this to find the titles:
End of explanation
"""
titles[0]
"""
Explanation: So far so good, although the titles do not yet look the way we expect:
End of explanation
"""
def extract_title(title):
start = title.index(" ")
return title[start+1:-1]
titles = []
for dd in lewenstein.find_all("dd"):
titles.append(extract_title(dd.find("div", class_ = "list-title mathjax").text))
titles[0]
"""
Explanation: We define a helper function to extract the title:
End of explanation
"""
def extract_subject(long_subject):
start = long_subject.index("(")
return long_subject[start+1:-1]
def drop_punctuation(string):
result = string.replace(".", " ")
return " ".join(result.split())
true_lewenstein = ["Maciej Lewenstein", "M Lewenstein"]
impostors = set()
primary_subjects = set()
for dd in lewenstein.find_all("dd"):
div = dd.find("div", class_="list-authors")
subject = extract_subject(dd.find("span", class_ = "primary-subject").text)
names = [drop_punctuation(a.text) for a in div.find_all("a")]
for name in names:
if re.search("M.* Lewenstein", name):
if name not in true_lewenstein:
impostors.add(name + " " + subject)
elif "Maciej" not in name:
primary_subjects.add(subject)
print(impostors)
print(primary_subjects)
"""
Explanation: The next problem we face is that not all of these papers belong to Maciej Lewenstein: some impostors have the same abbreviated name M. Lewenstein. They are easy to detect when they use the non-abbreviated name. Let us run through the page again, noting which subjects the impostors publish in. For this, let us introduce another auxiliary function that extracts the short name of the subject. We also note the primary subject whenever the abbreviated form of the name appears. We use another function to drop "." characters and merge multiple whitespace into a single space. Finally, a simple regular expression finds the candidate Lewensteins.
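To see what the regular expression in the loop accepts, here is a quick self-contained check (the example names are invented):

```python
import re

# An "M" somewhere, then anything, then " Lewenstein":
pattern = "M.* Lewenstein"
for name in ["Maciej Lewenstein", "M Lewenstein", "Mark Lewenstein", "Jan Lewenstein"]:
    print(name, "->", bool(re.search(pattern, name)))
# Maciej Lewenstein -> True
# M Lewenstein -> True
# Mark Lewenstein -> True
# Jan Lewenstein -> False
```

Any capitalized M before the surname matches, which is exactly what catches both the abbreviated and the full first name.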
End of explanation
"""
def extract_journal(journal):
start = journal.index(" ")
raw = journal[start+1:-1]
m = re.search(r"\d", raw)
return drop_punctuation(raw[:m.start()]).strip().upper()
def extract_title(title):
start = title.index(" ")
return title[start+1:-1]
def extract_id_and_year(arXiv):
start = arXiv.index(":")
if "/" in arXiv:
year_index = arXiv.index("/")
else:
year_index = start
year = arXiv[year_index+1:year_index+3]
if year[0] == "9":
year = int("19" + year)
else:
year = int("20" + year)
return arXiv[start+1:], year
papers = []
for dd, dt in zip(lewenstein.find_all("dd"), lewenstein.find_all("dt")):
id_, year = extract_id_and_year(dt.find("a", attrs={"title": "Abstract"}).text)
div = dd.find("div", class_="list-authors")
subject = extract_subject(dd.find("span", class_ = "primary-subject").text)
journal = dd.find("div", class_ = "list-journal-ref")
if journal:
names = [drop_punctuation(a.text) for a in div.find_all("a")]
for i, name in enumerate(names):
if re.search("M.* Lewenstein", name):
if name not in true_lewenstein:
break
else:
names[i] = "Maciej Lewenstein"
else:
papers.append([id_, extract_title(dd.find("div", class_ = "list-title mathjax").text),
names, subject, year, extract_journal(journal.text)])
"""
Explanation: So there is only one impostor, and we can be reasonably confident that the real Maciej Lewenstein does not publish in these subjects. The other good news is that all the short forms of the name belong to physics papers, and not computer science. Armed with this knowledge, we can filter out the correct manuscripts.
We need to filter one more thing: we are only interested in papers for which a journal reference is given. Further digging in the HTML code lets us find the correct tag. While we are putting together the correct records, we also normalize his name.
Annoyingly, the `<dd>` tag we have been focusing on does not contain the arXiv ID or the year. We zip the main loop's iterator with the `<dt>` tags, because only these contain the arXiv ID, from which the year can be extracted. Each `<dt>` always comes paired with a `<dd>` tag, completing the metadata of the manuscripts.
We define yet another set of auxiliary functions to extract everything we need. The routine for extracting the journal title already performs stripping and conversion to upper case. We assume that the name of the journal is whatever appears on the matching line before the first digit (which is probably the volume or some other bibliographic information).
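To see these helpers in action, here is a self-contained rerun (condensed slightly; the journal reference string and the arXiv IDs are illustrative inputs):

```python
import re

def drop_punctuation(string):
    result = string.replace(".", " ")
    return " ".join(result.split())

def extract_journal(journal):
    start = journal.index(" ")
    raw = journal[start + 1:-1]
    m = re.search(r"\d", raw)
    return drop_punctuation(raw[:m.start()]).strip().upper()

def extract_id_and_year(arXiv):
    start = arXiv.index(":")
    # Old-style IDs contain "/", new-style ones encode the year right after ":".
    year_index = arXiv.index("/") if "/" in arXiv else start
    year = arXiv[year_index + 1:year_index + 3]
    year = int(("19" if year[0] == "9" else "20") + year)
    return arXiv[start + 1:], year

print(extract_journal("Journal-ref: Phys. Rev. Lett. 98, 010401 (2007)"))  # PHYS REV LETT
print(extract_id_and_year("arXiv:1703.09814"))        # ('1703.09814', 2017)
print(extract_id_and_year("arXiv:quant-ph/9705032"))  # ('quant-ph/9705032', 1997)
```

Note how the two-digit year is disambiguated: a leading "9" means the 1990s, anything else the 2000s.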
End of explanation
"""
for i, paper in enumerate(papers):
journal = paper[-1]
long_name = abbs[abbs["Abbreviation"] == journal]
if len(long_name) > 0:
papers[i][-1] = long_name["Full Journal Title"].values[0]
"""
Explanation: We would be almost done if journal names were all entered the same way. Of course they were not. Let us try to standardize them: this is where we first combine two sources of data. We use the journal abbreviations table to replace abbreviations with the full journal title in our paper collection.
End of explanation
"""
def find_rotten_apples(paper_list):
rotten_apples = []
for paper in paper_list:
match = ifs[ifs["Full Journal Title"] == paper[-1]]
if len(match) == 0:
rotten_apples.append(paper[-1])
return sorted(rotten_apples)
rotten_apples = find_rotten_apples(papers)
rotten_apples
"""
Explanation: There will still be some rotten apples:
End of explanation
"""
def drop_punctuation(string):
result = string.replace(".", " ")
result = result.replace(",", " ")
result = result.replace("(", " ")
result = result.replace(": ", "-")
return " ".join(result.split())
"""
Explanation: Now you start to feel the pain of being a data scientist. The sloppiness of manual data entry is unbounded. Your duty is to clean up this mess. A quick fix is to tinker with the drop_punctuation function to handle the awkward encoding of JRC, then go through the creation of the papers array and the standardization again.
End of explanation
"""
len(rotten_apples)
"""
Explanation: Exercise 3. Cut the number of rotten apples in half by defining a replacement dictionary and doing another round of standardization.
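One possible shape for such a replacement dictionary, as a sketch (the variant spellings and canonical titles below are invented placeholders, and `apply_replacements` is an illustrative helper, not part of the notebook; build the real dictionary by inspecting your own rotten_apples list):

```python
# Hypothetical variant -> canonical title pairs:
replacements = {
    "NEW J PHYS": "NEW JOURNAL OF PHYSICS",
    "PHYS REV LETT": "PHYSICAL REVIEW LETTERS",
}

def apply_replacements(papers, replacements):
    # The journal title is the last field of each paper record.
    for paper in papers:
        paper[-1] = replacements.get(paper[-1], paper[-1])
    return papers

demo = [["1703.09814", "A title", ["Maciej Lewenstein"], "quant-ph", 2017, "NEW J PHYS"]]
print(apply_replacements(demo, replacements)[0][-1])  # NEW JOURNAL OF PHYSICS
```

Titles without an entry in the dictionary are left untouched, so you can grow the dictionary incrementally.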
End of explanation
"""
acin = get_search_result("Acin_A")
for dd, dt in zip(acin.find_all("dd"), acin.find_all("dt")):
id_, year = extract_id_and_year(dt.find("a", attrs={"title": "Abstract"}).text)
div = dd.find("div", class_="list-authors")
subject = extract_subject(dd.find("span", class_ = "primary-subject").text)
journal = dd.find("div", class_ = "list-journal-ref")
if journal:
names = [drop_punctuation(a.text) for a in div.find_all("a")]
journal = extract_journal(journal.text)
long_name = abbs[abbs["Abbreviation"] == journal]
if len(long_name) > 0:
journal = long_name["Full Journal Title"].values[0]
papers.append([id_, extract_title(dd.find("div", class_ = "list-title mathjax").text),
names, subject, year, journal])
for paper in papers:
names = paper[2]
for i, name in enumerate(names):
if re.search("A.* Ac.n", name):
names[i] = "Antonio Acín"
rotten_apples = find_rotten_apples(papers)
rotten_apples
"""
Explanation: It is the same drill with our other contender, except that the short version of his name is uniquely his. On the other hand, that single accent in the surname introduces N+1 spelling variants, which we should standardize.
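A quick check of the accent-tolerant pattern (the spellings below are illustrative):

```python
import re

# The "." in "Ac.n" matches both "i" and the accented "í":
for name in ["Antonio Acín", "Antonio Acin", "A Acin", "Bob Smith"]:
    print(name, "->", bool(re.search("A.* Ac.n", name)))
# Antonio Acín -> True
# Antonio Acin -> True
# A Acin -> True
# Bob Smith -> False
```

A single wildcard character thus absorbs the accented and unaccented spellings in one pattern.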
End of explanation
"""
db = pd.merge(pd.DataFrame(papers, columns=["arXiv", "Title", "Authors", "Primary Subject", "Year", "Full Journal Title"]),
ifs, how="inner", on=["Full Journal Title"])
"""
Explanation: Finally, we do another combination of sources. We merge the data set with the table of Impact Factors. The merge will be done based on the full journal titles.
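Because this is an inner join, papers whose journal title finds no counterpart in the Impact Factor table are silently dropped. A toy, pure-Python picture of what an inner join on a single key does (the data below is made up):

```python
papers_demo = [("1703.09814", "NEW JOURNAL OF PHYSICS"),
               ("1801.00001", "SOME OBSCURE PROCEEDINGS")]
ifs_demo = {"NEW JOURNAL OF PHYSICS": 3.8}  # full title -> impact factor (invented value)

# Inner join on the journal title: rows without a match are dropped.
merged = [(arxiv, title, ifs_demo[title])
          for arxiv, title in papers_demo
          if title in ifs_demo]
print(merged)  # [('1703.09814', 'NEW JOURNAL OF PHYSICS', 3.8)]
```

This dropping behavior is exactly why the row count can shrink after the merge, as we will see shortly.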
End of explanation
"""
db = db.drop_duplicates(subset="arXiv")
"""
Explanation: Since the two of them co-authored papers, there are duplicates. They are easy to drop, since they share the same arXiv ID:
End of explanation
"""
print(len(papers), len(db))
"""
Explanation: Notice that we lost a few papers:
End of explanation
"""
def identify_key_authors(authors):
if "Maciej Lewenstein" in authors and "Antonio Acín" in authors:
return "AAML"
elif "Maciej Lewenstein" in authors:
return "ML"
else:
return "AA"
db["Group"] = db["Authors"].apply(lambda x: identify_key_authors(x))
"""
Explanation: This can be for one of two reasons: the papers were published in journals that are not in JCR (very unlikely), or we failed to match the full journal name (very likely). It is getting tedious, so we leave it here for now.
5. Visual analysis
Since we only focus on two authors, we can add an additional column to help identify which of them (co-)authored each paper. We also care about the co-authored papers.
End of explanation
"""
groups = ["AA", "ML", "AAML"]
fig, ax = plt.subplots(ncols=1)
for group in groups:
data = db[db["Group"] == group]["Journal Impact Factor"]
sns.distplot(data, kde=False, label=group)
ax.legend()
ax.set_yscale("log")
plt.show()
"""
Explanation: Let's start plotting distributions:
End of explanation
"""
db[db["Journal Impact Factor"] == db["Journal Impact Factor"].max()]
"""
Explanation: The logarithmic scale makes the raw number of papers appear more balanced, which is fair given the difference in age between the two authors. A single Nature paper makes a great outlier:
End of explanation
"""
subjects = db["Primary Subject"].drop_duplicates()
fig, ax = plt.subplots(ncols=1)
for subject in subjects:
data = db[db["Primary Subject"] == subject]["Journal Impact Factor"]
sns.distplot(data, kde=False, label=subject)
ax.legend()
ax.set_yscale("log")
plt.show()
"""
Explanation: Actually, Toni has another Nature paper, but that one is not on arXiv yet. Not all of Maciej's papers are on arXiv either, especially not the old ones.
We can do the same plots with subjects:
End of explanation
"""
fig, ax = plt.subplots(ncols=1)
for subject in subjects:
data = db[(db["Primary Subject"] == subject) & (db["Group"] != "ML")]
if len(data) > 1:
sns.distplot(data["Journal Impact Factor"], kde=False, label=subject)
ax.legend()
ax.set_yscale("log")
plt.show()
"""
Explanation: You are safe with quant-ph and quantum gases, but steer clear of atomic physics. It is amusing to restrict the histogram to Professor Acín's subset:
End of explanation
"""
db["#Authors"] = db["Authors"].apply(lambda x: len(x))
sns.stripplot(x="#Authors", y="Journal Impact Factor", data=db)
"""
Explanation: His topics are somewhat predictable.
Let's add one more column to indicate the number of authors:
End of explanation
"""
db["Length of Title"] = db["Title"].apply(lambda x: len(x))
db["Number of Words in Title"] = db["Title"].apply(lambda x: len(x.split()))
fig, axes = plt.subplots(ncols=2, figsize=(12, 5))
sns.stripplot(x="Length of Title", y="Journal Impact Factor", data=db, ax=axes[0])
sns.stripplot(x="Number of Words in Title", y="Journal Impact Factor", data=db, ax=axes[1])
plt.show()
"""
Explanation: How about the length of title? Number of words in title?
End of explanation
"""
fig, axes = plt.subplots(nrows=2, figsize=(10, 5))
data = db[db["Group"] != "ML"]
sns.stripplot(x="Year", y="Journal Impact Factor", data=data, ax=axes[0])
axes[0].set_title("AA")
data = db[db["Group"] != "AA"].sort_values(by="Year")
sns.stripplot(x="Year", y="Journal Impact Factor", data=data, ax=axes[1])
axes[1].set_title("ML")
plt.tight_layout()
plt.show()
"""
Explanation: Do our authors maintain IF over time?
End of explanation
"""
GoogleCloudPlatform/asl-ml-immersion | notebooks/text_models/labs/text_generation.ipynb | apache-2.0

import os
import time
import numpy as np
import tensorflow as tf
"""
Explanation: Text generation with an RNN
Learning Objectives
Learn how to generate text using a RNN
Create training examples and targets for text generation
Build a RNN model for sequence generation using Keras Subclassing
Create a text generator and evaluate the output
This tutorial demonstrates how to generate text using a character-based RNN. You will work with a dataset of Shakespeare's writing from Andrej Karpathy's The Unreasonable Effectiveness of Recurrent Neural Networks. Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.
Below is sample output when the model in this tutorial was trained for 30 epochs and started with the prompt "Q":
<pre>
QUEENE:
I had thought thou hadst a Roman; for the oracle,
Thus by All bids the man against the word,
Which are so weak of care, by old care done;
Your children were in your holy love,
And the precipitation through the bleeding throne.
BISHOP OF ELY:
Marry, and will, my lord, to weep in such a one were prettiest;
Yet now I was adopted heir
Of the world's lamentable day,
To watch the next way with his father with his face?
ESCALUS:
The cause why then we are all resolved more sons.
VOLUMNIA:
O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,
And love and pale as any will to that word.
QUEEN ELIZABETH:
But how long have I heard the soul for this world,
And show his hands of life be proved to stand.
PETRUCHIO:
I say he look'd on, if I must be content
To stay him from the fatal of our country's bliss.
His lordship pluck'd from this sentence then for prey,
And then let us twain, being the moon,
were she such a case as fills m
</pre>
While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but here are some things to consider:
The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.
The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.
As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure.
Setup
Import TensorFlow and other libraries
End of explanation
"""
path_to_file = tf.keras.utils.get_file(
"shakespeare.txt",
"https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt",
)
"""
Explanation: Download the Shakespeare dataset
Change the following line to run this code on your own data.
End of explanation
"""
text = open(path_to_file, "rb").read().decode(encoding="utf-8")
print(f"Length of text: {len(text)} characters")
"""
Explanation: Read the data
First, we'll download the file and then decode it.
End of explanation
"""
print(text[:250])
"""
Explanation: Let's take a look at the first 250 characters in text
End of explanation
"""
vocab = sorted(set(text))
print(f"{len(vocab)} unique characters")
"""
Explanation: Let's check to see how many unique characters are in our corpus/document.
End of explanation
"""
example_texts = ["abcdefg", "xyz"]
# TODO 1
chars = #insert code here
"""
Explanation: Process the text
Vectorize the text
Before training, you need to convert the strings to a numerical representation.
TODO 1
Using tf.keras.layers.StringLookup layer can convert each character into a numeric ID. It just needs the text to be split into tokens first.
End of explanation
"""
ids_from_chars = tf.keras.layers.StringLookup(
vocabulary=list(vocab), mask_token=None
)
"""
Explanation: Now create the tf.keras.layers.StringLookup layer:
End of explanation
"""
ids = ids_from_chars(chars)
ids
"""
Explanation: It converts from tokens to character IDs:
End of explanation
"""
chars_from_ids = tf.keras.layers.StringLookup(
vocabulary=ids_from_chars.get_vocabulary(), invert=True, mask_token=None
)
"""
Explanation: Since the goal of this tutorial is to generate text, it will also be important to invert this representation and recover human-readable strings from it. For this you can use tf.keras.layers.StringLookup(..., invert=True).
Note: Here, instead of passing the original vocabulary generated with sorted(set(text)), use the get_vocabulary() method of the tf.keras.layers.StringLookup layer so that the [UNK] token is set the same way.
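The round trip between characters and IDs can be illustrated without TensorFlow at all. A plain-Python sketch with illustrative names (the notebook's ids_from_chars / chars_from_ids layers play these roles):

```python
vocab_demo = sorted(set("First Citizen"))
char_to_id = {c: i for i, c in enumerate(vocab_demo, start=1)}  # 0 reserved for [UNK]
id_to_char = {i: c for c, i in char_to_id.items()}

ids = [char_to_id[c] for c in "First"]
print(ids)
print("".join(id_to_char[i] for i in ids))  # First
```

Inverting the lookup table recovers the original string exactly, which is what the second StringLookup layer does.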
End of explanation
"""
chars = chars_from_ids(ids)
chars
"""
Explanation: This layer recovers the characters from the vectors of IDs, and returns them as a tf.RaggedTensor of characters:
End of explanation
"""
tf.strings.reduce_join(chars, axis=-1).numpy()
def text_from_ids(ids):
return tf.strings.reduce_join(chars_from_ids(ids), axis=-1)
"""
Explanation: You can tf.strings.reduce_join to join the characters back into strings.
End of explanation
"""
# TODO 2
all_ids = # insert code here
all_ids
ids_dataset = tf.data.Dataset.from_tensor_slices(all_ids)
for ids in ids_dataset.take(10):
print(chars_from_ids(ids).numpy().decode("utf-8"))
seq_length = 100
examples_per_epoch = len(text) // (seq_length + 1)
"""
Explanation: The prediction task
Given a character, or a sequence of characters, what is the most probable next character? This is the task you're training the model to perform. The input to the model will be a sequence of characters, and you train the model to predict the output—the following character at each time step.
Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character?
Create training examples and targets
Next divide the text into example sequences. Each input sequence will contain seq_length characters from the text.
For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.
So break the text into chunks of seq_length+1. For example, say seq_length is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".
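The "Hello" example above can be checked in plain Python (split_example is an illustrative stand-in for the split_input_target function defined later in this notebook):

```python
def split_example(chunk):
    # Shift-by-one split: everything but the last char vs. everything but the first.
    return chunk[:-1], chunk[1:]

print(split_example("Hello"))  # ('Hell', 'ello')
```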
TODO 2
First use the tf.data.Dataset.from_tensor_slices function to convert the text vector into a stream of character indices.
End of explanation
"""
sequences = ids_dataset.batch(seq_length + 1, drop_remainder=True)
for seq in sequences.take(1):
print(chars_from_ids(seq))
"""
Explanation: The batch method lets you easily convert these individual characters to sequences of the desired size.
End of explanation
"""
for seq in sequences.take(5):
print(text_from_ids(seq).numpy())
"""
Explanation: It's easier to see what this is doing if you join the tokens back into strings:
End of explanation
"""
def split_input_target(sequence):
input_text = sequence[:-1]
target_text = sequence[1:]
return input_text, target_text
split_input_target(list("Tensorflow"))
dataset = sequences.map(split_input_target)
for input_example, target_example in dataset.take(1):
print("Input :", text_from_ids(input_example).numpy())
print("Target:", text_from_ids(target_example).numpy())
"""
Explanation: For training you'll need a dataset of (input, label) pairs, where input and label are sequences. At each time step the input is the current character and the label is the next character.
Here's a function that takes a sequence as input, duplicates, and shifts it to align the input and label for each timestep:
End of explanation
"""
# Batch size
BATCH_SIZE = 64
# Buffer size to shuffle the dataset
# (TF data is designed to work with possibly infinite sequences,
# so it doesn't attempt to shuffle the entire sequence in memory. Instead,
# it maintains a buffer in which it shuffles elements).
BUFFER_SIZE = 10000
dataset = (
dataset.shuffle(BUFFER_SIZE)
.batch(BATCH_SIZE, drop_remainder=True)
.prefetch(tf.data.experimental.AUTOTUNE)
)
dataset
"""
Explanation: Create training batches
You used tf.data to split the text into manageable sequences. But before feeding this data into the model, you need to shuffle the data and pack it into batches.
End of explanation
"""
# Length of the vocabulary in chars
vocab_size = len(vocab)
# The embedding dimension
embedding_dim = 256
# Number of RNN units
rnn_units = 1024
"""
Explanation: Build The Model
This section defines the model as a keras.Model subclass (For details see Making new Layers and Models via subclassing).
TODO 3 Build a model with the following layers
tf.keras.layers.Embedding: The input layer. A trainable lookup table that will map each character-ID to a vector with embedding_dim dimensions;
tf.keras.layers.GRU: A type of RNN with size units=rnn_units (You can also use an LSTM layer here.)
tf.keras.layers.Dense: The output layer, with vocab_size outputs. It outputs one logit for each character in the vocabulary. These are the log-likelihood of each character according to the model.
End of explanation
"""
class MyModel(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, rnn_units):
super().__init__(self)
# TODO - Create an embedding layer
self.embedding = #insert code here
# TODO - Create a GRU layer
self.gru = #insert code here
# TODO - Finally connect it with a dense layer
self.dense = #insert code here
def call(self, inputs, states=None, return_state=False, training=False):
x = self.embedding(inputs, training=training)
# since we are training a text generation model,
# we use the previous state, in training. If there is no state,
# then we initialize the state
if states is None:
states = self.gru.get_initial_state(x)
x, states = self.gru(x, initial_state=states, training=training)
x = self.dense(x, training=training)
if return_state:
return x, states
else:
return x
model = MyModel(
# Be sure the vocabulary size matches the `StringLookup` layers.
vocab_size=len(ids_from_chars.get_vocabulary()),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
)
"""
Explanation: The class below does the following:
- We derive a class from tf.keras.Model
- The constructor is used to define the layers of the model
- We define the pass forward using the layers defined in the constructor
End of explanation
"""
for input_example_batch, target_example_batch in dataset.take(1):
example_batch_predictions = model(input_example_batch)
print(
example_batch_predictions.shape,
"# (batch_size, sequence_length, vocab_size)",
)
"""
Explanation: For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character.
Try the model
Now run the model to see that it behaves as expected.
First check the shape of the output:
End of explanation
"""
model.summary()
"""
Explanation: In the above example the sequence length of the input is 100 but the model can be run on inputs of any length:
End of explanation
"""
sampled_indices = tf.random.categorical(
example_batch_predictions[0], num_samples=1
)
sampled_indices = tf.squeeze(sampled_indices, axis=-1).numpy()
"""
Explanation: To get actual predictions from the model you need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary.
Note: It is important to sample from this distribution as taking the argmax of the distribution can easily get the model stuck in a loop.
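A plain-Python sketch of such sampling, no TensorFlow required (the function name is illustrative; tf.random.categorical plays this role in the notebook):

```python
import math
import random

def sample_from_logits(logits, temperature=1.0, rng=None):
    # Sample an index from unnormalized logits (a single categorical draw).
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(l - m) for l in scaled]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]

# With one dominant logit, sampling almost always picks it -- but unlike
# argmax it keeps a small chance of escaping, which prevents loops.
print(sample_from_logits([5.0, 0.0, -5.0]))
```

Lowering the temperature sharpens the distribution toward argmax; raising it flattens the distribution toward uniform.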
Try it for the first example in the batch:
End of explanation
"""
sampled_indices
"""
Explanation: This gives us, at each timestep, a prediction of the next character index:
End of explanation
"""
print("Input:\n", text_from_ids(input_example_batch[0]).numpy())
print()
print("Next Char Predictions:\n", text_from_ids(sampled_indices).numpy())
"""
Explanation: Decode these to see the text predicted by this untrained model:
End of explanation
"""
# TODO - add a loss function here
loss = #insert code here
example_batch_mean_loss = loss(target_example_batch, example_batch_predictions)
print(
"Prediction shape: ",
example_batch_predictions.shape,
" # (batch_size, sequence_length, vocab_size)",
)
print("Mean loss: ", example_batch_mean_loss)
"""
Explanation: Train the model
At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character.
TODO 4 Attach an optimizer, and a loss function
The standard tf.keras.losses.sparse_categorical_crossentropy loss function works in this case because it is applied across the last dimension of the predictions.
Because your model returns logits, you need to set the from_logits flag.
End of explanation
"""
tf.exp(example_batch_mean_loss).numpy()
"""
Explanation: A newly initialized model shouldn't be too sure of itself; the output logits should all have similar magnitudes. To confirm this you can check that the exponential of the mean loss is approximately equal to the vocabulary size. A much higher loss means the model is sure of its wrong answers, and is badly initialized:
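The reasoning behind this check: a uniform prediction over V classes has cross-entropy -ln(1/V) = ln V, so exponentiating the loss recovers V. With an assumed vocabulary of 66 entries (65 distinct characters plus [UNK]):

```python
import math

V = 66  # assumed vocabulary size: 65 distinct characters plus [UNK]
uniform_loss = -math.log(1.0 / V)  # cross-entropy of a uniform prediction
print(uniform_loss)            # ln(66), roughly 4.19
print(math.exp(uniform_loss))  # roughly 66.0
```

If your printed mean loss is much larger than ln V, the initialization is confidently wrong rather than merely uninformed.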
End of explanation
"""
model.compile(optimizer="adam", loss=loss)
"""
Explanation: Configure the training procedure using the tf.keras.Model.compile method. Use tf.keras.optimizers.Adam with default arguments and the loss function.
End of explanation
"""
# Directory where the checkpoints will be saved
checkpoint_dir = "./training_checkpoints"
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_prefix, save_weights_only=True
)
"""
Explanation: Configure checkpoints
Use a tf.keras.callbacks.ModelCheckpoint to ensure that checkpoints are saved during training:
End of explanation
"""
EPOCHS = 10
history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback])
"""
Explanation: Execute the training
To keep training time reasonable, use 10 epochs to train the model. In Colab, set the runtime to GPU for faster training.
End of explanation
"""
class OneStep(tf.keras.Model):
def __init__(self, model, chars_from_ids, ids_from_chars, temperature=1.0):
super().__init__()
self.temperature = temperature
self.model = model
self.chars_from_ids = chars_from_ids
self.ids_from_chars = ids_from_chars
# Create a mask to prevent "[UNK]" from being generated.
skip_ids = self.ids_from_chars(["[UNK]"])[:, None]
sparse_mask = tf.SparseTensor(
# Put a -inf at each bad index.
values=[-float("inf")] * len(skip_ids),
indices=skip_ids,
# Match the shape to the vocabulary
dense_shape=[len(ids_from_chars.get_vocabulary())],
)
self.prediction_mask = tf.sparse.to_dense(sparse_mask)
#TODO 5 - Fill in the code below to generate text
@tf.function
def generate_one_step(self, inputs, states=None):
# Convert strings to token IDs.
input_chars = tf.strings.unicode_split(inputs, "UTF-8")
input_ids = self.ids_from_chars(input_chars).to_tensor()
# Run the model.
# predicted_logits.shape is [batch, char, next_char_logits]
predicted_logits, states = #insert code here
# Only use the last prediction.
predicted_logits = predicted_logits[:, -1, :]
predicted_logits = predicted_logits / self.temperature
# Apply the prediction mask: prevent "[UNK]" from being generated.
predicted_logits = #insert code here
# Sample the output logits to generate token IDs.
predicted_ids = #insert code here
predicted_ids = #insert code here
# Convert from token ids to characters
predicted_chars = self.chars_from_ids(predicted_ids)
# Return the characters and model state.
return predicted_chars, states
one_step_model = OneStep(model, chars_from_ids, ids_from_chars)
"""
Explanation: TODO 5 Generate text
The simplest way to generate text with this model is to run it in a loop, and keep track of the model's internal state as you execute it.
Each time you call the model you pass in some text and an internal state. The model returns a prediction for the next character and its new state. Pass the prediction and state back in to continue generating text.
The following makes a single step prediction:
End of explanation
"""
start = time.time()
states = None
next_char = tf.constant(["ROMEO:"])
result = [next_char]
for n in range(1000):
next_char, states = one_step_model.generate_one_step(
next_char, states=states
)
result.append(next_char)
result = tf.strings.join(result)
end = time.time()
print(result[0].numpy().decode("utf-8"), "\n\n" + "_" * 80)
print("\nRun time:", end - start)
"""
Explanation: Run it in a loop to generate some text. Looking at the generated text, you'll see the model knows when to capitalize, make paragraphs and imitates a Shakespeare-like writing vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.
End of explanation
"""
start = time.time()
states = None
next_char = tf.constant(["ROMEO:", "ROMEO:", "ROMEO:", "ROMEO:", "ROMEO:"])
result = [next_char]
for n in range(1000):
next_char, states = one_step_model.generate_one_step(
next_char, states=states
)
result.append(next_char)
result = tf.strings.join(result)
end = time.time()
print(result, "\n\n" + "_" * 80)
print("\nRun time:", end - start)
"""
Explanation: The easiest thing you can do to improve the results is to train it for longer (try EPOCHS = 30).
You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions.
If you want the model to generate text faster the easiest thing you can do is batch the text generation. In the example below the model generates 5 outputs in about the same time it took to generate 1 above.
End of explanation
"""
tf.saved_model.save(one_step_model, "one_step")
one_step_reloaded = tf.saved_model.load("one_step")
states = None
next_char = tf.constant(["ROMEO:"])
result = [next_char]
for n in range(100):
next_char, states = one_step_reloaded.generate_one_step(
next_char, states=states
)
result.append(next_char)
print(tf.strings.join(result)[0].numpy().decode("utf-8"))
"""
Explanation: Export the generator
This single-step model can easily be saved and restored, allowing you to use it anywhere a tf.saved_model is accepted.
End of explanation
"""
class CustomTraining(MyModel):
@tf.function
def train_step(self, inputs):
inputs, labels = inputs
with tf.GradientTape() as tape:
predictions = self(inputs, training=True)
loss = self.loss(labels, predictions)
grads = tape.gradient(loss, model.trainable_variables)
self.optimizer.apply_gradients(zip(grads, model.trainable_variables))
return {"loss": loss}
"""
Explanation: Advanced: Customized Training
The above training procedure is simple, but does not give you much control.
It uses teacher-forcing which prevents bad predictions from being fed back to the model, so the model never learns to recover from mistakes.
So now that you've seen how to run the model manually next you'll implement the training loop. This gives a starting point if, for example, you want to implement curriculum learning to help stabilize the model's open-loop output.
The most important part of a custom training loop is the train step function.
Use tf.GradientTape to track the gradients. You can learn more about this approach by reading the eager execution guide.
The basic procedure is:
Execute the model and calculate the loss under a tf.GradientTape.
Calculate the updates and apply them to the model using the optimizer.
End of explanation
"""
model = CustomTraining(
vocab_size=len(ids_from_chars.get_vocabulary()),
embedding_dim=embedding_dim,
rnn_units=rnn_units,
)
model.compile(
optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(dataset, epochs=1)
"""
Explanation: The above implementation of the train_step method follows Keras' train_step conventions. This is optional, but it allows you to change the behavior of the train step and still use Keras' Model.compile and Model.fit methods.
End of explanation
"""
EPOCHS = 10
mean = tf.metrics.Mean()
for epoch in range(EPOCHS):
start = time.time()
mean.reset_states()
for (batch_n, (inp, target)) in enumerate(dataset):
logs = model.train_step([inp, target])
mean.update_state(logs["loss"])
if batch_n % 50 == 0:
template = (
f"Epoch {epoch+1} Batch {batch_n} Loss {logs['loss']:.4f}"
)
print(template)
# saving (checkpoint) the model every 5 epochs
if (epoch + 1) % 5 == 0:
model.save_weights(checkpoint_prefix.format(epoch=epoch))
print()
print(f"Epoch {epoch+1} Loss: {mean.result().numpy():.4f}")
print(f"Time taken for 1 epoch {time.time() - start:.2f} sec")
print("_" * 80)
model.save_weights(checkpoint_prefix.format(epoch=epoch))
"""
Explanation: Or if you need more control, you can write your own complete custom training loop:
End of explanation
"""
GoogleCloudPlatform/ml-design-patterns | 05_resilience/continuous_eval.ipynb | apache-2.0
# change these to try this notebook out
PROJECT = 'munn-sandbox'
BUCKET = 'munn-sandbox'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['TFVERSION'] = '2.1'
import shutil
import pandas as pd
import tensorflow as tf
from google.cloud import bigquery
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow_hub import KerasLayer
from tensorflow.keras.layers import Dense, Input, Lambda
from tensorflow.keras.models import Model
print(tf.__version__)
%matplotlib inline
"""
Explanation: Continuous Evaluation
This notebook demonstrates how to use Cloud AI Platform to execute continuous evaluation of a deployed machine learning model. You'll need to have a project set up with Google Cloud Platform.
Set up
Start by creating environment variables for your Google Cloud project and bucket. Also, import the libraries we'll need for this notebook.
End of explanation
"""
DATASET_NAME = "titles_full.csv"
COLUMNS = ['title', 'source']
titles_df = pd.read_csv(DATASET_NAME, header=None, names=COLUMNS)
titles_df.head()
"""
Explanation: Train and deploy the model
For this notebook, we'll build a text classification model using the Hacker News dataset. Each training example consists of an article title and the article source. The model will be trained to classify a given article title as belonging to either nytimes, github or techcrunch.
Load the data
End of explanation
"""
CLASSES = {
'github': 0,
'nytimes': 1,
'techcrunch': 2
}
N_CLASSES = len(CLASSES)
def encode_labels(sources):
classes = [CLASSES[source] for source in sources]
one_hots = to_categorical(classes, num_classes=N_CLASSES)
return one_hots
encode_labels(titles_df.source[:4])
"""
Explanation: We one-hot encode the label...
End of explanation
"""
N_TRAIN = int(len(titles_df) * 0.80)
titles_train, sources_train = (
titles_df.title[:N_TRAIN], titles_df.source[:N_TRAIN])
titles_valid, sources_valid = (
titles_df.title[N_TRAIN:], titles_df.source[N_TRAIN:])
X_train, Y_train = titles_train.values, encode_labels(sources_train)
X_valid, Y_valid = titles_valid.values, encode_labels(sources_valid)
X_train[:3]
"""
Explanation: ...and create a train/test split.
End of explanation
"""
SWIVEL = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1"
swivel_module = KerasLayer(SWIVEL, output_shape=[20], input_shape=[], dtype=tf.string, trainable=True)
"""
Explanation: Swivel Model
We'll build a simple text classification model using a TensorFlow Hub embedding module derived from Swivel. Swivel is an algorithm that essentially factorizes word co-occurrence matrices to create the word embeddings.
TF-Hub hosts the pretrained gnews-swivel-20dim-with-oov 20-dimensional Swivel module.
End of explanation
"""
def build_model(hub_module, model_name):
inputs = Input(shape=[], dtype=tf.string, name="text")
module = hub_module(inputs)
h1 = Dense(16, activation='relu', name="h1")(module)
outputs = Dense(N_CLASSES, activation='softmax', name='outputs')(h1)
model = Model(inputs=inputs, outputs=[outputs], name=model_name)
model.compile(
optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy']
)
return model
def train_and_evaluate(train_data, val_data, model, batch_size=5000):
tf.random.set_seed(33)
X_train, Y_train = train_data
history = model.fit(
X_train, Y_train,
epochs=100,
batch_size=batch_size,
validation_data=val_data,
callbacks=[EarlyStopping()],
)
return history
txtcls_model = build_model(swivel_module, model_name='txtcls_swivel')
txtcls_model.summary()
"""
Explanation: The build_model function is written so that the TF Hub module can easily be exchanged with another module.
End of explanation
"""
# set up train and validation data
train_data = (X_train, Y_train)
val_data = (X_valid, Y_valid)
"""
Explanation: Train and evaluate the model
With the model defined and data set up, next we'll train and evaluate the model.
End of explanation
"""
txtcls_history = train_and_evaluate(train_data, val_data, txtcls_model)
history = txtcls_history
pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()
"""
Explanation: For training we'll call train_and_evaluate on txtcls_model.
End of explanation
"""
txtcls_model.predict(x=["YouTube introduces Video Chapters to make it easier to navigate longer videos"])
"""
Explanation: Calling predict on the model produces the output of the final dense layer. This final layer is what is used to compute the categorical cross-entropy during training.
End of explanation
"""
tf.saved_model.save(txtcls_model, './txtcls_swivel/')
"""
Explanation: We can save the model artifacts in the local directory called ./txtcls_swivel.
End of explanation
"""
!saved_model_cli show \
--tag_set serve \
--signature_def serving_default \
--dir ./txtcls_swivel/
"""
Explanation: ...and examine the model's serving default signature. As expected, the model takes as input a text string (e.g. an article title) and returns a 3-dimensional vector of floats (i.e. the softmax output layer).
End of explanation
"""
@tf.function(input_signature=[tf.TensorSpec([None], dtype=tf.string)])
def source_name(text):
labels = tf.constant(['github', 'nytimes', 'techcrunch'], dtype=tf.string)
probs = txtcls_model(text, training=False)
indices = tf.argmax(probs, axis=1)
pred_source = tf.gather(params=labels, indices=indices)
pred_confidence = tf.reduce_max(probs, axis=1)
return {'source': pred_source,
'confidence': pred_confidence}
"""
Explanation: To simplify the returned predictions, we'll modify the model signature so that the model outputs the predicted article source (either nytimes, techcrunch, or github) rather than the final softmax layer. We'll also return the 'confidence' of the model's prediction. This will be the softmax value corresponding to the predicted article source.
End of explanation
"""
shutil.rmtree('./txtcls_swivel', ignore_errors=True)
txtcls_model.save('./txtcls_swivel', signatures={'serving_default': source_name})
"""
Explanation: Now, we'll re-save the new Swivel model that has this updated model signature by referencing the source_name function for the model's serving_default.
End of explanation
"""
!saved_model_cli show \
--tag_set serve \
--signature_def serving_default \
--dir ./txtcls_swivel/
"""
Explanation: Examine the model signature to confirm the changes:
End of explanation
"""
title1 = "House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force"
title2 = "YouTube introduces Video Chapters to make it easier to navigate longer videos"
title3 = "As facebook turns 10 zuckerberg wants to change how tech industry works"
restored = tf.keras.models.load_model('./txtcls_swivel')
infer = restored.signatures['serving_default']
outputs = infer(text=tf.constant([title1, title2, title3]))
print(outputs['source'].numpy())
print(outputs['confidence'].numpy())
"""
Explanation: Now when we call predictions using the updated serving input function, the model will return the predicted article source as a readable string, and the model's confidence for that prediction.
End of explanation
"""
%%bash
MODEL_NAME="txtcls"
MODEL_VERSION="swivel"
MODEL_LOCATION="./txtcls_swivel/"
gcloud ai-platform models create ${MODEL_NAME}
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--staging-bucket gs://${BUCKET} \
--runtime-version=2.1
"""
Explanation: Deploy the model for online serving
Once the model is trained and the assets saved, deploying the model to GCP is straightforward. After some time you should be able to see your deployed model and its version on the model page of the GCP console.
End of explanation
"""
%load_ext google.cloud.bigquery
%%bigquery --project $PROJECT
SELECT * FROM `txtcls_eval.swivel`
"""
Explanation: Set up the Evaluation job on CAIP
Now that the model is deployed, go to Cloud AI Platform to see the model version you've deployed and set up an evaluation job by clicking on the button called "Create Evaluation Job". You will be asked to provide some relevant information:
- Job description: txtcls_swivel_eval
- Model objective: text classification
- Classification type: single-label classification
- Prediction label file path for the annotation specification set: When you create an evaluation job on CAIP, you must specify a CSV file that defines your annotation specification set. This file must have one row for every possible label your model outputs during prediction. Each row should be a comma-separated pair containing the label and a description of the label: label-name,description
- Daily sample percentage: We'll set this to 100% so that all online predictions are captured for evaluation.
- BigQuery table to house online prediction requests: We'll use the BQ dataset and table txtcls_eval.swivel. If you enter a BigQuery table that doesn’t exist, one with that name will be created with the correct schema.
- Prediction input
- Data key: the key for the raw prediction data. From examining our deployed model signature, the input data key is text.
- Data reference key: this is only used for image models, so we can ignore it
- Prediction output
- Prediction labels key: This is the prediction key which contains the predicted label (i.e. the article source). For our model, the label key is source.
- Prediction score key: This is the prediction key which contains the predicted scores (i.e. the model confidence). For our model, the score key is confidence.
- Ground-truth method: Check the box that indicates we will provide our own labels, and not use a Human data labeling service.
Once the evaluation job is set up, the table will be made in BigQuery to capture the online prediction requests.
End of explanation
"""
%%writefile input.json
{"text": "YouTube introduces Video Chapters to make it easier to navigate longer videos"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
%%writefile input.json
{"text": "A Filmmaker Put Away for Tax Fraud Takes Us Inside a British Prison"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
%%writefile input.json
{"text": "A native Mac app wrapper for WhatsApp Web"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
%%writefile input.json
{"text": "Astronauts Dock With Space Station After Historic SpaceX Launch"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
%%writefile input.json
{"text": "House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
%%writefile input.json
{"text": "Scrollability"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
%%writefile input.json
{"text": "iOS 14 lets deaf users set alerts for important sounds, among other clever accessibility perks"}
!gcloud ai-platform predict \
--model txtcls \
--json-instances input.json \
--version swivel
"""
Explanation: Now, every time this model version receives an online prediction request, this information will be captured and stored in the BQ table. Note, this happens every time because we set the sampling percentage to 100%.
Send prediction requests to your model
Here are some article titles and their groundtruth sources that we can test with prediction.
| title | groundtruth |
|---|---|
| YouTube introduces Video Chapters to make it easier to navigate longer videos | techcrunch |
| A Filmmaker Put Away for Tax Fraud Takes Us Inside a British Prison | nytimes |
| A native Mac app wrapper for WhatsApp Web | github |
| Astronauts Dock With Space Station After Historic SpaceX Launch | nytimes |
| House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force | nytimes |
| Scrollability | github |
| iOS 14 lets deaf users set alerts for important sounds, among other clever accessibility perks | techcrunch |
End of explanation
"""
%%bigquery --project $PROJECT
SELECT * FROM `txtcls_eval.swivel`
"""
Explanation: Summarizing the results from our model:
| title | groundtruth | predicted
|---|---|---|
| YouTube introduces Video Chapters to make it easier to navigate longer videos | techcrunch | techcrunch |
| A Filmmaker Put Away for Tax Fraud Takes Us Inside a British Prison | nytimes | techcrunch |
| A native Mac app wrapper for WhatsApp Web | github | techcrunch |
| Astronauts Dock With Space Station After Historic SpaceX Launch | nytimes | techcrunch |
| House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force | nytimes | nytimes |
| Scrollability | github | techcrunch |
| iOS 14 lets deaf users set alerts for important sounds, among other clever accessibility perks | techcrunch | nytimes |
End of explanation
"""
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "techcrunch"}]}'
WHERE
raw_data = '{"instances": [{"text": "YouTube introduces Video Chapters to make it easier to navigate longer videos"}]}';
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "nytimes"}]}'
WHERE
raw_data = '{"instances": [{"text": "A Filmmaker Put Away for Tax Fraud Takes Us Inside a British Prison"}]}';
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "github"}]}'
WHERE
raw_data = '{"instances": [{"text": "A native Mac app wrapper for WhatsApp Web"}]}';
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "nytimes"}]}'
WHERE
raw_data = '{"instances": [{"text": "Astronauts Dock With Space Station After Historic SpaceX Launch"}]}';
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "nytimes"}]}'
WHERE
raw_data = '{"instances": [{"text": "House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force"}]}';
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "github"}]}'
WHERE
raw_data = '{"instances": [{"text": "Scrollability"}]}';
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET
groundtruth = '{"predictions": [{"source": "techcrunch"}]}'
WHERE
raw_data = '{"instances": [{"text": "iOS 14 lets deaf users set alerts for important sounds, among other clever accessibility perks"}]}';
"""
Explanation: Provide the ground truth for the raw prediction input
Notice the groundtruth is missing. We'll update the evaluation table to contain the ground truth.
End of explanation
"""
%%bigquery --project $PROJECT
SELECT * FROM `txtcls_eval.swivel`
"""
Explanation: We can confirm that the ground truth has been properly added to the table.
End of explanation
"""
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_recall_fscore_support as score
from sklearn.metrics import classification_report
"""
Explanation: Compute evaluation metrics
With the raw prediction input, the model output and the groundtruth in one place, we can evaluate how our model performs, and how performance varies across different dimensions (e.g. over time, across model versions, across labels, etc.).
End of explanation
"""
%%bigquery --project $PROJECT
SELECT
model,
model_version,
time,
REGEXP_EXTRACT(raw_data, r'.*"text": "(.*)"') AS text,
REGEXP_EXTRACT(raw_prediction, r'.*"source": "(.*?)"') AS prediction,
REGEXP_EXTRACT(raw_prediction, r'.*"confidence": (0\.\d{2}).*') AS confidence,
REGEXP_EXTRACT(groundtruth, r'.*"source": "(.*?)"') AS groundtruth
FROM
`txtcls_eval.swivel`
query = '''
SELECT
model,
model_version,
time,
REGEXP_EXTRACT(raw_data, r'.*"text": "(.*)"') AS text,
REGEXP_EXTRACT(raw_prediction, r'.*"source": "(.*?)"') AS prediction,
REGEXP_EXTRACT(raw_prediction, r'.*"confidence": (0\.\d{2}).*') AS confidence,
REGEXP_EXTRACT(groundtruth, r'.*"source": "(.*?)"') AS groundtruth
FROM
`txtcls_eval.swivel`
'''
client = bigquery.Client()
df_results = client.query(query).to_dataframe()
df_results.head(20)
prediction = list(df_results.prediction)
groundtruth = list(df_results.groundtruth)
precision, recall, fscore, support = score(groundtruth, prediction)
from tabulate import tabulate
sources = list(CLASSES.keys())
results = list(zip(sources, precision, recall, fscore, support))
print(tabulate(results, headers = ['source', 'precision', 'recall', 'fscore', 'support'],
tablefmt='orgtbl'))
"""
Explanation: Using regex we can extract the model predictions into an easier-to-read format:
End of explanation
"""
print(classification_report(y_true=groundtruth, y_pred=prediction))
"""
Explanation: Or a full classification report from the sklearn library:
End of explanation
"""
cm = confusion_matrix(groundtruth, prediction, labels=sources)
ax = plt.subplot()
sns.heatmap(cm, annot=True, ax=ax, cmap="Blues")
# labels, title and ticks
ax.set_xlabel('Predicted labels')
ax.set_ylabel('True labels')
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(sources)
ax.yaxis.set_ticklabels(sources)
plt.savefig("./txtcls_cm.png")
"""
Explanation: We can also examine a confusion matrix:
End of explanation
"""
now = pd.Timestamp.now(tz='UTC')
one_week_ago = now - pd.DateOffset(weeks=1)
one_month_ago = now - pd.DateOffset(months=1)
df_prev_week = df_results[df_results.time > one_week_ago]
df_prev_month = df_results[df_results.time > one_month_ago]
df_prev_month
"""
Explanation: Examine eval metrics by model version or timestamp
By specifying the same evaluation table, two different model versions can be evaluated. Also, since the timestamp is captured, it is straightforward to evaluate model performance over time.
End of explanation
"""
msramalho/pyhparser | examples/Pyhparser.ipynb | mit
# To install the most recent version
!pip install git+https://github.com/msramalho/pyhparser --upgrade
"""
Explanation: Installing Pyhparser
Pyhparser can be installed directly from its GitHub repository.
End of explanation
"""
from pyhparser import Pyhparser, readFile
"""
Explanation: Import pyhparser
And one useful function that comes with it (readFile)
End of explanation
"""
inputVar = "10" #if the data is in a file do readFile("input.txt")
parserVar = "(int, n)" #if the parser is in a file do readFile("parser.txt")
p = Pyhparser(inputVar, parserVar) #create a pyhparser instance
p.parse() #execute the parsing
p.printRecursive(p.parserRead) #display the parse grammar hierarchy
tempGlobals = p.getVariables() #get the values of the created variables
for key, value in tempGlobals.items():
globals()[key] = value #make the created var accessible from every scope
print(n) #print the value of n, the parsed variable
"""
Explanation: Hello World
Reading an int into a variable n that can later be used
End of explanation
"""
inputText = """
3 Bartholomew JoJo Simpson
101 120 5455
Andrew American
Bernard Bolivian
Carl Canadian
10 11 12
20 21 22
30 31 32
69 lol
169 lel
333 threeHundredAndThirtyThree
666 sixHundredAndSixtySix
this is my first tuple
5
3 limited to three
2 only two
2 two more
6 what a big old string list
8 this sentence is the biggest of them all
10.23 55
3.141592653 70 1300 veryNice
3.141592653 70 4 string with four words
"""
parserText = """
(int, n) (str, name, {n})
[list, {n}, (int), myInts]
[list, {n}, (str, 2), Names]
[list, {n},
[list, {n}, (int)]
,Numbers]
{(int), (str), myDicts, 2}
[list, 2, {(int), (str)}, myDictsList]
[tuple, 5, (str), myTuple]
(int, total)
[list, {total}, {(int, sizeOfLine), (str, {sizeOfLine})}, sizesList]
#classes
[class, Complex, {realpart: (float), imagpart: (int)}, cn1]
[class, Complex, {realpart: (float), imagpart: (int), special: {(int), (str)}}, cn2]
[class, Complex, {realpart: (float), imagpart: (int), special: {(int, myInt), (str, {myInt})}}, cn3]
"""
"""
Explanation: Multiple data types example
Requires:
- Input data
- Parser format
- Python code that invokes Pyhparser
End of explanation
"""
class Complex:
def __init__(self, realpart, imagpart, special = "not special"):
self.realpart = realpart
self.imagpart = imagpart
self.special = special
def __str__(self):
return ("%s Is the special of %di + %d" % (self.special, self.realpart, self.imagpart))
p = Pyhparser(inputText, parserText, [Complex]) #create a pyhparser instance
p.parse() #execute the parsing
#p.printRecursive(p.parserRead) #display the parse grammar hierarchy
tempGlobals = p.getVariables() #get the values of the created variables
for key, value in tempGlobals.items():
globals()[key] = value #make the created var accessible from every scope
print(cn3)
"""
Explanation: Define the Complex class used to parse the data
End of explanation
"""
RaspberryJamBe/ipython-notebooks | notebooks/en-gb/104 - Remote door bell - Using a cloud API to send messages.ipynb | cc0-1.0
APPKEY = "******"
"""
Explanation: Requirements
For this exercise you need a (free) account at http://www.realtime.co/; if you create an account and start a "Realtime Messaging Free" subscription, you can put its "Application Key" in the variable below. This key will then be used in the communications we'll set up further on.
You will also have to have your Raspberry Pi connected to the internet for the communications to work.
End of explanation
"""
import RPi.GPIO as GPIO
GPIO.setmode(GPIO.BCM)
"""
Explanation: <img src="Doorbell01.png" height="300" />
We will again use the "RPi.GPIO" library in our code
And for the GPIO pins we will, as usual, use the BCM numbering (cfr numbers on the case)
End of explanation
"""
import ortc
oc = ortc.OrtcClient()
oc.cluster_url = "http://ortc-developers.realtime.co/server/2.1"
# Connecting to the account
oc.connect(APPKEY)
"""
Explanation: ortc is a Python library that is linked to the realtime.co cloud communication system (http://www.realtime.co/)
You can compare it to e-mail, Twitter or Facebook Messenger, but intended to be used by computers or devices.
End of explanation
"""
person_pins = {
17: "Maleficent",
27: "BigBadWolf",
}
"""
Explanation: With person_pins we create a dictionary that links each of the GPIO pins to the person who the door bell of that pin belongs to.
End of explanation
"""
print(person_pins[17])
"""
Explanation: Let's see if that works:
End of explanation
"""
def send_message(pin):
oc.send("deurbel", person_pins[pin])
"""
Explanation: Now let's create a function with a parameter called "pin" (no confusion possible there...); we intend to use this function to send a message, specific to the relevant pin.
If the value of pin is 17, we'll send a message "Maleficent" on the "doorbell" channel.
End of explanation
"""
for pin in person_pins:
GPIO.setup(pin, GPIO.IN, GPIO.PUD_UP)
GPIO.add_event_detect(pin, GPIO.FALLING, send_message, 200)
"""
Explanation: Last piece of the setup: for each of the possible pin values, we'll:
1. set the pin as input (to listen for button presses)
2. register an event detection (if the pin goes from 1 to 0 -GPIO.FALLING-, send_message is executed)
End of explanation
"""
for pin in person_pins:
GPIO.remove_event_detect(pin)
GPIO.cleanup()
"""
Explanation: Time to test is out: press the button and see if your message appears in the cloud!
Follow the following procedure to check it:
log in on the Realtime website
click on your name to bring op your account
click on Subscriptions (the shopping cart in the left upper corner)
open the "Realtime Messaging Free" subscription and click "Console"
Input "doorbell" as Channel and click [Subscribe]
now you can press a button on your Raspberry Pi and you'll see the message appear
And clean up again:
End of explanation
"""
|
tritemio/FRETBursts | notebooks/FRETBursts - us-ALEX smFRET burst analysis.ipynb | gpl-2.0 | from fretbursts import *
"""
Explanation: FRETBursts - μs-ALEX smFRET burst analysis
This notebook is part of a tutorial series for the FRETBursts burst analysis software.
In this notebook, we present a typical FRETBursts
workflow for μs-ALEX smFRET burst analysis.
Briefly, we show how to perform background estimation, burst search and burst selection,
compute FRET and ALEX histograms, select sub-populations and, finally, fit the FRET efficiency.
<br>
<div class="alert alert-success">
If you are new to the notebook interface see
<a href="1. First Steps - Start here if new to Jupyter Notebooks.ipynb">First Steps</a>
before continuing.
</div>
Before running the notebook, you can click on menu Cell -> All Output -> Clear
to clear all previous output. This will avoid mixing output from current execution and
the previously saved one.
Loading the software
We start by loading FRETBursts:
End of explanation
"""
sns = init_notebook()
"""
Explanation: <div class="alert alert-info">
Thanks in advance for remembering to <b>cite</b> FRETBursts in publications or presentations!
</div>
Note that FRETBursts version string tells you the exact FRETBursts version (and revision) in use.
Storing the version in the notebook helps with reproducibility and
tracking software regressions.
Next, we initialize the default plot style for the notebook
(under the hood it uses seaborn):
End of explanation
"""
import lmfit; lmfit.__version__
import phconvert; phconvert.__version__
"""
Explanation: Note that the previous command has no output. Finally, we print the version of some dependencies:
End of explanation
"""
url = 'http://files.figshare.com/2182601/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5'
"""
Explanation: Downloading the data file
The full list of smFRET measurements used in the FRETBursts tutorials
can be found on Figshare.
This is the file we will download:
End of explanation
"""
download_file(url, save_dir='./data')
"""
Explanation: <div class="alert alert-success">
You can change the <code>url</code> variable above to download your own data file.
This is useful if you are executing FRETBursts online and you want to use
your own data file. See
<a href="1. First Steps - Start here if new to Jupyter Notebooks.ipynb">First Steps</a>.
</div>
Here, we download the data file and put it in a folder named data,
inside the notebook folder:
End of explanation
"""
filename = "./data/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5"
filename
"""
Explanation: NOTE: If you modified the url variable providing an invalid URL
the previous command will fail. In this case, edit the cell containing
the url and re-try the download.
Selecting the data file
Use one of the following 2 methods to select a data file.
Option 1: Paste the file-name
Here, we can directly define the file name to be loaded:
End of explanation
"""
# filename = OpenFileDialog()
# filename
"""
Explanation: Now filename contains the path of the file you just selected.
Run the previous cell again to select a new file. In a following cell
we will check if the file actually exists.
<div class="alert alert-success">
When running the notebook online and using your own data file,
make sure to modify the previous cell.
<br>
See
<a href="1. First Steps - Start here if new to Jupyter Notebooks.ipynb">First Steps</a>.
</div>
Option 2: Use an "Open File" dialog
Alternatively, you can select a data file with an "Open File" window.
Note that, since this only works in a local installation, the next commands
are commented (so nothing will happen when running the cell).
If you want to try the File Dialog, you need to remove the # signs:
End of explanation
"""
import os
if os.path.isfile(filename):
print("Perfect, file found!")
else:
print("Sorry, file:\n%s not found" % filename)
"""
Explanation: Check that the data file exists
Let's check that the file exists:
End of explanation
"""
d = loader.photon_hdf5(filename)
"""
Explanation: Load the selected file
We can finally load the data and store it in a variable called d:
End of explanation
"""
d.leakage = 0.11
d.dir_ex = 0.04
d.gamma = 1.
"""
Explanation: If you don't get any message, the file is loaded successfully.
We can also set the 3 correction coefficients:
leakage or bleed-through: leakage
direct excitation: dir_ex (ALEX-only)
gamma-factor gamma
End of explanation
"""
d.ph_times_t, d.det_t
"""
Explanation: NOTE: at any later moment, after burst search, a simple
reassignment of these coefficients will update the burst
data with the new correction values.
μs-ALEX parameters
At this point, timestamps and detector numbers are contained in the ph_times_t and det_t attributes of d. Let's print them:
End of explanation
"""
d.add(det_donor_accept = (0, 1),
alex_period = 4000,
offset = 700,
D_ON = (2180, 3900),
A_ON = (200, 1800))
"""
Explanation: We need to define some ALEX parameters:
End of explanation
"""
bpl.plot_alternation_hist(d)
"""
Explanation: Here the parameters are:
det_donor_accept: donor and acceptor channels
alex_period: length of excitation period (in timestamp units)
D_ON and A_ON: donor and acceptor excitation windows
offset: the offset between the start of alternation and start of timestamping
(see also Definition of alternation periods).
To check that the above parameters are correct, we need to plot the histogram of timestamps (modulo the alternation period) and superimpose the two excitation period definitions to it:
End of explanation
"""
loader.alex_apply_period(d)
"""
Explanation: If the previous alternation histogram looks correct,
the corresponding definitions of the excitation periods can be applied to the data using the following command:
End of explanation
"""
d
"""
Explanation: If the previous histogram does not look right, the parameters in the d.add(...) cell can be modified and checked by running the histogram plot cell until everything looks fine. Don't forget to apply the
parameters with loader.alex_apply_period(d) as a last step.
NOTE: After applying the ALEX parameters a new array of
timestamps containing only photons inside the excitation periods
is created (named d.ph_times_m). To save memory, by default,
the old timestamps array (d.ph_times_t) is deleted. Therefore,
in the following, when we talk about all-photon selection we always
refer to all photons inside both excitation periods.
Measurement infos
The entire measurement data is now stored in the variable d. Printing it
will give a compact representation containing the file-name and additional parameters:
End of explanation
"""
d.time_max
"""
Explanation: To check the measurement duration (in seconds) run:
End of explanation
"""
dplot(d, timetrace);
"""
Explanation: Plotting basics
In this section we introduce the basic concepts of plotting with FRETBursts, using the
timetrace plot as an example.
To plot a timetrace of the measurement we use:
End of explanation
"""
dplot(d, timetrace, binwidth=0.5e-3);
"""
Explanation: Here, dplot is a generic wrapper (the same for all plots)
that takes care of setting up the figure, title and axis
(in the multispot case dplot creates multi-panel plot).
The second argument, timetrace, is the actual plot function.
Any additional arguments passed to dplot are,
in turn, passed to the plot function (e.g. timetrace).
If we look at the documentation for timetrace
function we notice that it accepts a long list of arguments.
In Python, when an argument is not specified, it takes the default
value given in the function definition (see the previous link).
As an example, to change the bin size (i.e. duration) of the timetrace histogram,
we can look up in the timetrace documentation
and find that the argument we need to modify is binwidth
(we can also see that the default value is 0.001 seconds).
We can then re-plot the timetrace using a bin of 0.5 ms:
End of explanation
"""
dplot(d, timetrace, binwidth=0.5e-3, tmin=50, tmax=150)
plt.xlim(51, 52);
"""
Explanation: The timetrace is computed between tmin and tmax (by default 0 and 200 s),
but, as you can see, it is displayed only between 0 and 1 second, simply because these
are the default x-axis limits. The axis limits can be changed using the
standard matplotlib command plt.xlim().
On the other hand, to change the range where the timetrace is computed,
we pass the additional arguments tmin and tmax as follows:
End of explanation
"""
d.calc_bg(bg.exp_fit, time_s=1000, tail_min_us=500)
"""
Explanation: When using FRETBursts in a notebook, all plots are static by default.
This is because we use the so-called inline backend of matplotlib.
If you want to manipulate figures interactively, you can switch
to the interactive notebook backend with:
%matplotlib notebook
To go back to inline, use:
%matplotlib inline
NOTE: Currently, the notebook backend is incompatible with the QT backend.
If in a session you activate the notebook backend, then switching to the QT backend requires
restarting the notebook. Conversely, you can switch between the inline and notebook
backends, or between the inline and qt4 backends, in the same session without issues.
See also:
bpl.timetrace
function documentation
bpl.ratetrace
function documentation
Intensity timetrace and Rate-timetrace, a later section in this tutorial.
Background estimation
As a first step of the analysis, we need to estimate the background.
The assumption is that the background is a Poisson process and therefore
the corresponding inter-photon delays are exponentially distributed. Since the
background can change during the measurement, a new estimation is
computed every time_s seconds (this interval is called the background period).
The inter-photon delay distribution contains
both single-molecule signal and background; the latter is the only one we are interested in,
and the former produces in general much shorter delays. Therefore, a threshold is needed
to discriminate between the exponential tail and the single-molecule peak.
Choosing a threshold and fitting the exponential tail are two
different problems.
FRETBursts provides several ways to specify the minimum threshold
and different functions to fit the exponential tail.
We will go over three different methods in increasing order of complexity (and recommendability).
For more information see:
Documentation for Data.calc_bg()
Documentation for background (e.g. bg) module
Documentation for exp_fitting module.
Single threshold
Let's start with a standard maximum likelihood (ML)
background fit with a minimum tail threshold of 500 μs:
End of explanation
"""
dplot(d, hist_bg, show_fit=True)
"""
Explanation: We can look at how the fit looks with:
End of explanation
"""
d.calc_bg(bg.exp_fit, time_s=1000, tail_min_us=(800, 4000, 1500, 1000, 3000))
dplot(d, hist_bg, show_fit=True);
"""
Explanation: Note that the fits are not very good. This is understandable because
we used a single threshold for all the photon streams, each one
having a quite different background.
Multiple thresholds
To improve the fit, we can try specifying a threshold for each channel.
This method is a bit ad hoc, but it may work well when the
thresholds are properly chosen.
End of explanation
"""
d.ph_streams
"""
Explanation: For ALEX measurements, the tuple passed to
tail_min_us in order to define the thresholds needs to contain
5 values corresponding to the 5 distinct photon streams (the all-photon stream
+ the 4 base alternation streams).
The order of these 5 values needs to match the order of photon streams in
the Data.ph_streams attribute:
End of explanation
"""
d.calc_bg(bg.exp_fit, time_s=1000, tail_min_us='auto', F_bg=1.7)
"""
Explanation: Automatic threshold
Finally, it is possible to let FRETBursts infer the threshold automatically with:
End of explanation
"""
dplot(d, hist_bg, show_fit=True);
"""
Explanation: Which results in the following fit plot:
End of explanation
"""
d.calc_bg(bg.exp_fit, time_s=30, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
"""
Explanation: Under the hood, this method estimates the threshold
automatically according to this formula:
threshold_auto = F_bg / coarse_background_rate
where F_bg is the fit function input argument (by default 1.7)
and coarse_background_rate is an initial background estimation
performed with a fixed threshold. This method is conceptually an
iterative computation of the threshold that is stopped
after the second iteration (this is usually more than enough for
accurate estimates).
Of the three methods here described, the latter is the recommended one
since it works well and without user intervention in
a wide range of experimental conditions.
Background timetrace
It is a good practice to monitor background rates as a function of time.
Here, we compute background in adjacent 30s windows (called background periods)
and plot the estimated rates as a function of time.
End of explanation
"""
d.bg
"""
Explanation: Getting the background rates
The background rates are stored in Data() attribute
bg, a dict with photon streams (Ph_sel objects) as key.
Each item in the dict contains a list of fitted background rates
for each channel and period.
End of explanation
"""
d.bg_mean
"""
Explanation: We can also get the average background for each channel:
End of explanation
"""
d.burst_search(L=10, m=10, F=6)
"""
Explanation: Burst analysis
The first step of burst analysis is the burst search.
We will use the sliding-window algorithm on all photons. Note
that "all photons", as mentioned before, means all photons selected in the
alternation histogram.
An important variation compared to the classical sliding window
is that the threshold rate for a burst start is computed as
a function of the background, and changes when the background
changes during the measurement.
To perform a burst search evaluating the photon rate with
10 photons (m=10), and selecting a minimum rate 6 times larger than
the background rate (F=6) calculated with all photons (default):
End of explanation
"""
dplot(d, hist_fret);
"""
Explanation: The previous command performs the burst search, corrects
the bursts sizes for background, spectral leakage and direct excitation,
and computes $\gamma$-corrected FRET and Stoichiometry.
See the
burst_search documentation for more details.
We can plot the resulting FRET histogram using the following command:
End of explanation
"""
dplot(d, hist_fret, show_kde=True);
dplot(d, hist_fret, show_kde=True, weights='size');
"""
Explanation: All pre-defined plots follow this pattern:
call the generic dplot() function, passing 2 parameters:
the measurement data (d in this case)
the plot function (hist_fret)
In some case we can add other optional parameters to tweak the plot.
All plot functions start with hist_ for histograms,
scatter_ for scatter-plots or timetrace_ for plots as a function
of measurement time. You can use autocompletion to find all
plot function or you can look in bursts_plot.py where
all plot functions are defined.
Instead of hist_fret we can use hist_fret_kde to add a KDE overlay. Also, we can plot a weighted histogram by passing an additional parameter weights:
End of explanation
"""
ds = d.select_bursts(select_bursts.size, th1=30)
"""
Explanation: You can experiment with different weighting schemes (for all
supported weights see the get_weights() function in fret_fit.py).
Burst selection
When we performed the burst search, we specified L=10 without
explaining what this parameter means. L is traditionally the minimum size
(number of photons) for a burst: smaller bursts will be rejected.
By setting L=m (10 in this case) we choose not to discard
any burst (because the smallest detected burst has at least m counts).
Selecting the bursts in a second step, by applying a minimum burst-size criterion,
results in a more accurate and unbiased selection.
For example, we can select bursts with more than 30 photons (after
background, gamma, leakage and direct excitation corrections)
and store the result in a new
Data() variable ds:
End of explanation
"""
ds = d.select_bursts(select_bursts.size, add_naa=True, th1=30)
"""
Explanation: By default, the burst size includes donor and acceptor photons
detected during donor excitation. To add acceptor photons detected during
acceptor excitation (naa), we add the parameter add_naa=True:
End of explanation
"""
dplot(ds, hist_fret);
"""
Explanation: Similar to plot functions, all selection functions
are defined in select_bursts.py and you can access them by typing
select_bursts. and using the TAB key for autocompletion.
See also:
* Burst selection in the documentation.
In particular the function select_bursts.size and Data.select_bursts.
To replot the FRET histogram after selection (note that now
we are passing ds to the plot function):
End of explanation
"""
ds.E_fitter.fit_histogram(mfit.factory_three_gaussians(), verbose=False)
dplot(ds, hist_fret, show_model=True);
"""
Explanation: Note how the histogram exhibits much more clearly defined peaks after burst selection.
Histogram fitting and plotting style
Under the hood the previous hist_fret plot creates a MultiFitter
object for $E$ values. This object, stored as ds.E_fitter, operates
on multi-channel data and computes the histogram, KDE and can fit
the histogram with a model (lmfit.Model).
Now, just for illustration purposes, we fit the previous histogram with 3 Gaussians, using the already created ds.E_fitter object:
End of explanation
"""
dplot(ds, hist_fret, show_model=True, hist_style='line')
"""
Explanation: The bin width can be changed with the binwidth argument. Alternatively,
an arbitrary array of bin edges can be passed in bins
(overriding binwidth).
We can customize the appearance of this plot (type
hist_fret? for the complete set of arguments).
For example to change from a bar plot to a line-plot
we use the hist_style argument:
End of explanation
"""
dplot(ds, hist_fret, show_model=True, hist_style='bar', show_kde=True,
kde_plot_style = dict(linewidth=5, color='orange', alpha=0.6),
hist_plot_style = dict(linewidth=3, markersize=8, color='b', alpha=0.6))
plt.legend();
"""
Explanation: We can customize the line-plot, bar-plot, the model
plot and the KDE plot by passing dictionaries with matplotlib
styles. The names of the arguments are:
hist_plot_style: style for the histogram line-plot
hist_bar_style: style for the histogram bar-plot
model_plot_style: style for the model plot
kde_plot_style: style for the KDE plot
As an example:
End of explanation
"""
dplot(ds, hist_size, add_naa=True);
"""
Explanation: Other plots
Similarly, we can plot the burst size using all photons
(type hist_size? to learn about all plot options):
End of explanation
"""
dplot(ds, hist_size_all);
"""
Explanation: Or plot the burst size histogram for the different components:
End of explanation
"""
dplot(ds, scatter_fret_nd_na)
xlim(-1, 2)
"""
Explanation: NOTE: The previous plot may generate a benign warning
due to the presence of zeroes when switching to log scale. Just ignore it.
A scatterplot of Size vs FRET is created by:
End of explanation
"""
ds2 = ds.select_bursts(select_bursts.size, th2=300)
"""
Explanation: Study of different populations
Removing multi-mers
We can further select only bursts smaller than 300 photons
to get rid of multi-molecule events:
End of explanation
"""
ax = dplot(ds2, hist_fret, hist_style='bar', show_kde=True,
hist_bar_style = dict(facecolor='r', alpha=0.5,
label='Hist. no large bursts'),
kde_plot_style = dict(lw=3, color='m',
label='KDE no large bursts'))
dplot(ds, hist_fret, ax=ax, hist_style='bar', show_kde=True,
hist_bar_style = dict(label='Hist. with large bursts'),
kde_plot_style = dict(lw=3, label='KDE with large bursts'))
plt.legend();
"""
Explanation: and superimpose the two histograms before and after selection to see the difference:
End of explanation
"""
ds.E_fitter.find_kde_max(np.r_[0:1:0.0002], xmin=0.2, xmax=0.6)
"""
Explanation: NOTE: It is not necessarily true that bursts with more than 300 photons
represent multiple molecules.
To assess the validity of this assumption, it can be useful to
plot the peak count rates in each burst. See hist_burst_phrate
for this kind of plot.
Fit and plot peak positions
We can find the KDE peak position in a range (let say 0.2 ... 0.6):
End of explanation
"""
dplot(ds, hist_fret, hist_style='line',
show_fit_value=True,
show_kde=True, show_kde_peak=True);
"""
Explanation: We can plot it with show_kde_peak=True; we also use show_fit_value=True to show a box with the fitted value:
End of explanation
"""
ds.E_fitter.fit_histogram(mfit.factory_three_gaussians(), verbose=False)
"""
Explanation: Instead of using the KDE, we can use the peak position as fitted from a Gaussian model.
End of explanation
"""
dplot(ds, hist_fret, hist_style='line',
show_fit_value=True, fit_from='p2_center',
show_model=True);
"""
Explanation: To select which peak to show, we use the fit_from argument (here fit_from='p2_center'):
End of explanation
"""
ds.E_fitter.params # <-- pandas DataFrame, one row per channel
"""
Explanation: The string 'p2_center' is the name of the parameter of the
gaussian fit that we want to show in the text box. To see all
the parameters of the model we look in:
End of explanation
"""
dplot(ds, scatter_alex, figsize=(4,4), mew=1, ms=4, mec='black', color='purple');
"""
Explanation: ALEX plots
We can create a simple E-S scatter plot with scatter_alex:
End of explanation
"""
dplot(ds, hist2d_alex);
"""
Explanation: We can also plot the ALEX histogram with a scatterplot overlay using hist2d_alex:
End of explanation
"""
alex_jointplot(ds);
"""
Explanation: Finally, we can also plot an ALEX histogram with marginals
(a joint plot) as follows (for more options see:
Example - usALEX histogram):
End of explanation
"""
ds = d.select_bursts(select_bursts.size, th1=20)
ds2 = ds.select_bursts(select_bursts.naa, th1=10)
alex_jointplot(ds2);
"""
Explanation: To get rid of the large donor-only population, we can simply
select bursts with at least 10 photons in the acceptor channel
(during acceptor excitation). At the same time,
with a burst size selection using Dex photons we get rid
of the A-only population:
End of explanation
"""
# Switches to open plot in external window
#%matplotlib qt
"""
Explanation: As you can see, only the FRET sub-populations remain.
The next sections show how to select a region on the E-S
histogram to isolate a sub-population.
Graphical selection of an E-S region
<br>
<div class="alert alert-warning">
The graphical selection of E-S regions requires a local FRETBursts installation.
Therefore the commands below are commented by default.
<br><br>
If you have a local installation and you want to try commands below,
please uncomment (i.e. remove the initial <code>#</code>) the lines
containing the <code>%matplotlib</code> command.
</div>
To select bursts graphically, we need to open the ALEX histogram
in a new (QT) window, drag the mouse to define a selection and
have it printed here in the notebook.
Here we describe how to do it in 3 steps.
Step 1 Switch the plot modality to QT (i.e. opens graphs in a separate window):
End of explanation
"""
# ALEX histogram with GUI selection enabled
dplot(ds, hist2d_alex, gui_sel=True)
"""
Explanation: Step 2 Plot the E-S histogram in an external windows. There you can
use the mouse to select a region:
End of explanation
"""
# Switch back to show plots inline in the notebook
#%matplotlib inline
"""
Explanation: The region selection is printed above.
Step 3 Restore the normal inline plotting (no more external windows).
End of explanation
"""
roi = dict(E1=-0.07, E2=1.17, S1=0.18, S2=0.70, rect=False)
d_fret_mix = ds.select_bursts(select_bursts.ES, **roi)
"""
Explanation: Selecting bursts by E-S values
To apply a selection based on E-S values,
we can paste the values obtained from the previous plot
(or we can type them in manually).
The selection function used here is select_bursts.ES.
The same E and S boundaries can define either a rectangular
or an elliptical selection by using
respectively rect=True or rect=False.
Here we use the elliptical selection:
End of explanation
"""
g = alex_jointplot(d_fret_mix)
bpl.plot_ES_selection(g.ax_joint, **roi);
"""
Explanation: By plotting the FRET histogram we can double check that
the selection has been applied:
End of explanation
"""
roi_high_fret = dict(E1=0.65, E2=1.09, S1=-0.13, S2=0.96)
d_high_fret = d_fret_mix.select_bursts(select_bursts.ES, **roi_high_fret)
roi_low_fret = dict(E1=-0.19, E2=0.64, S1=-0.05, S2=0.92)
d_low_fret = d_fret_mix.select_bursts(select_bursts.ES, **roi_low_fret)
alex_jointplot(d_high_fret);
alex_jointplot(d_low_fret);
"""
Explanation: Now we can further separate the high- and low-FRET sub-populations.
End of explanation
"""
d_low_fret.num_bursts / d_high_fret.num_bursts
"""
Explanation: We can, for example, compute the ratio of low- to high-FRET bursts:
End of explanation
"""
dplot(d_low_fret, hist_width)
"""
Explanation: Burst Width analysis
To plot a burst-width histogram, we use hist_width instead of hist_fret:
End of explanation
"""
ax = dplot(d_high_fret, hist_width, bins=(0, 10, 0.2))
dplot(d_low_fret, hist_width, bins=(0, 10, 0.2), ax=ax)
plt.legend(['High-FRET population', 'Low-FRET population']);
"""
Explanation: To use a larger bin size, plot the two sub-populations (in different colors) and add a legend:
End of explanation
"""
millisec = d.clk_p * 1e3
mean_b_width_low_fret = d_low_fret.mburst[0].width.mean() * millisec
mean_b_width_high_fret = d_high_fret.mburst[0].width.mean() * millisec
print('Mean burst width: %.1f ms (high-fret), %.1f (low-fret)' %
(mean_b_width_high_fret, mean_b_width_low_fret))
"""
Explanation: Finally, we compute the mean burst width for each subpopulation:
End of explanation
"""
E_fitter = bext.bursts_fitter(d_fret_mix, 'E', binwidth=0.03, bandwidth=0.03,
model=mfit.factory_two_gaussians())
S_fitter = bext.bursts_fitter(d_fret_mix, 'S', binwidth=0.03, bandwidth=0.03,
model=mfit.factory_gaussian())
"""
Explanation: FRET fit: in-depth example
We can fit a FRET distribution to any model. For example,
we will fit the FRET selection (2 FRET sub-populations) with 2 Gaussians.
The fitting model is a Model object
from the lmfit library.
The first step, previously performed implicitly by the hist_fret()
plot function, is to create a MultiFitter
object for $E$ and/or $S$, by calling bext.bursts_fitter().
With MultiFitter objects, we can compute histograms,
KDEs and fit the histogram in one single step, as in the following example:
End of explanation
"""
E_fitter = bext.bursts_fitter(d_fret_mix, 'E', binwidth=0.03, bandwidth=0.03)
S_fitter = bext.bursts_fitter(d_fret_mix, 'S', binwidth=0.03, bandwidth=0.03)
"""
Explanation: However, if we want to modify the model (for example to add a
constraint), we need to perform the fit in a second step.
To skip the fitting, we simply avoid passing a model:
End of explanation
"""
model = mfit.factory_two_gaussians(add_bridge=True)
"""
Explanation: Now we create a model and initialize the parameters
using mfit.factory_two_gaussians()
(see Model factory functions
for a list of pre-defined models in FRETBursts):
End of explanation
"""
model.print_param_hints()
"""
Explanation: We can see the list of parameters, initial values and constraints:
End of explanation
"""
model.set_param_hint('p1_center', value=0.3, min=-0.1, max=0.6)
model.set_param_hint('p2_center', value=0.85, min=0.5, max=1.1)
model.print_param_hints()
"""
Explanation: We can change initial values and/or constraints (bounds):
End of explanation
"""
E_fitter.fit_histogram(model=model, verbose=False) # default method is 'leastsq'
#E_fitter.fit_histogram(model=model, method='nelder') # example using simplex method
"""
Explanation: Finally, we fit the histogram with one
of the supported minimization methods
(the default is least-squares):
End of explanation
"""
dplot(d_fret_mix, hist_fret, show_model=True);
"""
Explanation: To plot the model with the fitted parameters on top of the FRET
histogram, we add show_model=True as seen before:
End of explanation
"""
E_fitter.fit_res[0]
"""
Explanation: The fitting results of lmfit models are stored in lmfit.ModelResult objects. In FRETBursts, these objects are available in MultiFitter.fit_res:
End of explanation
"""
results = E_fitter.fit_res[0]
results.params.pretty_print(columns=['value'])
"""
Explanation: Here the [0] indicates CH=0. This index is used in multispot measurements, in which
there are several channels.
The values of the fitted parameters are available in the best_values
dictionary:
End of explanation
"""
print(results.fit_report(min_correl=0.5))
"""
Explanation: We can also print a complete report of fitted parameters
including reduced $\chi^2$, error ranges
($\pm 1 \sigma$) and correlations:
End of explanation
"""
results.init_params.pretty_print()
"""
Explanation: To customize the printed report, see lmfit.fit_report() docs.
We can also take a look at the initial parameters:
End of explanation
"""
ci = results.conf_interval()
lmfit.report_ci(ci)
"""
Explanation: We can compute more accurate confidence-intervals (note that
it can take several seconds,
see lmfit docs
for the method details):
End of explanation
"""
E_fitter.params
"""
Explanation: Finally, FRETBursts's
MultiFitter object
(e.g. E_fitter) stores the fitted parameters in E_fitter.params as pandas DataFrame:
End of explanation
"""
d.nd[0]
"""
Explanation: For more information on fitting see:
FRETBursts: Fit Framework
lmfit Documentation.
Exporting Data
To export data computed by FRETBursts, you first need to find where
the data is stored or, for data computed on the fly, which function/method
is used to compute it.
Most data computed by FRETBursts is stored in the Data object
(see the Data docs for the list of attributes).
For example:
corrected burst counts are stored in Data.nd, Data.na, Data.naa
timestamps are stored in Data.ph_times_m
All these attributes are lists of arrays, one per excitation spot.
This means that, even for single-spot data, we need to use indexing ([0])
to obtain the array. For example:
End of explanation
"""
d.nd[0].tofile('n_dd.csv', sep=',')
"""
Explanation: Any numpy array can be saved to disk with the method .tofile(). For example:
End of explanation
"""
bursts = bext.burst_data(ds)
bursts
"""
Explanation: The background data is stored in the Data.bg (and Data.bg_mean) attribute (see Getting the background rates in this notebook). This attribute is a dict containing lists of arrays (scalars) and can be saved to disk similarly.
Exporting burst data
Part of the burst data is stored in Data attributes (mburst, nd, na, naa)
and part is computed on the fly (duration, raw counts, etc.).
To put all burst data in a single "table" (a pandas.DataFrame), one row
per burst, we can use bext.burst_data:
End of explanation
"""
bursts.to_csv('bursts.csv')
"""
Explanation: NOTE: For multi-spot data use the ich argument to get burst data
for the different spots.
This table (as any pandas.DataFrame) can be saved in a CSV text file with:
End of explanation
"""
#%matplotlib qt
#dplot(ds, ratetrace, scroll=True, bursts=True)
#dplot(ds, timetrace, tmax=600, scroll=True, bursts=True)
#%matplotlib inline
"""
Explanation: Intensity timetrace and Rate timetrace
For an initial inspection of a data file, it is common to compute an intensity timetrace plot (or timetrace for short).
We can also compute a similar plot, called a ratetrace, which does not bin counts
but shows the instantaneous count rate. In both cases, it is convenient to scroll
along the plot interactively.
In FRETBursts, both the timetrace and ratetrace plots support
interactive scrolling. We just need to switch from the inline backend
to QT, as we did before.
<br>
<div class="alert alert-warning">
The graphical scrolling of timetraces requires a local FRETBursts installation.
Therefore the commands below are commented by default.
<br><br>
If you have a local installation and you want to try commands below,
please uncomment them by removing the initial <code>#</code>.
</div>
Here are the corresponding commands:
End of explanation
"""
|
tum-pbs/PhiFlow | docs/Staggered_Grids.ipynb | mit | # !pip install --quiet phiflow
from phi.flow import *
grid = StaggeredGrid(0, extrapolation.BOUNDARY, x=10, y=10)
grid.values
"""
Explanation: Staggered grids
Staggered grids are a key component of the marker-and-cell (MAC) method [Harlow and Welch 1965].
They sample the velocity components not at the cell centers but in staggered form at the corresponding face centers.
Their main advantage is that the divergence of a cell can be computed exactly.
Φ<sub>Flow</sub> only stores valid velocity values in memory.
This may require non-uniform tensors for the values since the numbers of horizontal and vertical faces are generally not equal.
Depending on the boundary conditions, the outer-most values may also be redundant and, thus, not stored.
Φ<sub>Flow</sub> represents staggered grids as instances of StaggeredGrid.
They have the same properties as CenteredGrid but the values field may reference a
non-uniform tensor
to reflect the varying number of x, y and z sample points.
End of explanation
"""
domain = dict(x=10, y=10, bounds=Box(x=1, y=1), extrapolation=extrapolation.ZERO)
grid = StaggeredGrid((1, -1), **domain) # from constant vector
grid = StaggeredGrid(Noise(), **domain) # sample analytic field
grid = StaggeredGrid(grid, **domain) # resample existing field
grid = StaggeredGrid(lambda x: math.exp(-x), **domain) # function value(location)
grid = StaggeredGrid(Sphere([0, 0], radius=1), **domain) # no anti-aliasing
grid = StaggeredGrid(SoftGeometryMask(Sphere([0, 0], radius=1)), **domain) # with anti-aliasing
"""
Explanation: Here, each component of the values tensor has one more sample point in the direction it is facing.
If the extrapolation were extrapolation.ZERO, it would have one fewer (see the image above).
Creating Staggered Grids
The StaggeredGrid constructor supports two modes:
Direct construction StaggeredGrid(values: Tensor, extrapolation, bounds).
All required fields are passed as arguments and stored as-is.
The values tensor must have the correct shape considering the extrapolation.
Construction by resampling StaggeredGrid(values: Any, extrapolation, bounds, resolution, **resolution).
When specifying the resolution as a Shape or via keyword arguments, non-Tensor values can be passed for values,
such as geometries, other fields, constants or functions (see the documentation).
Examples:
End of explanation
"""
grid.vector['x'] # select component
"""
Explanation: Staggered grids can also be created from other fields using field.at() or @ by passing an existing StaggeredGrid.
Some field functions also return StaggeredGrids:
spatial_gradient() with type=StaggeredGrid
stagger()
Values Tensor
For non-periodic staggered grids, the values tensor is non-uniform
to reflect the different number of sample points for each component.
Functions to get a uniform tensor:
at_centers() interpolates the staggered values to the cell centers and returns a CenteredGrid
staggered_tensor() pads the internal tensor to an invariant shape with n+1 entries along all dimensions.
Slicing
Like tensors, grids can be sliced using the standard syntax.
When selecting a vector component, such as x or y, the result is represented as a CenteredGrid with shifted locations.
End of explanation
"""
grid.values.x[3:4] # spatial slice
grid.values.x[0] # spatial slice
"""
Explanation: Grids do not support slicing along spatial dimensions because the result would be ambiguous with StaggeredGrids.
Instead, slice the values directly.
End of explanation
"""
grid.batch[0] # batch slice
"""
Explanation: Slicing along batch dimensions has no special effect, this just slices the values.
End of explanation
"""
|
seblabbe/MATH2010-Logiciels-mathematiques | NotesDeCours/15-fonctions.ipynb | gpl-3.0 | from __future__ import division, print_function # Python 3
"""
Explanation: $$
\def\CC{\bf C}
\def\QQ{\bf Q}
\def\RR{\bf R}
\def\ZZ{\bf Z}
\def\NN{\bf N}
$$
Functions: def
End of explanation
"""
def FONCTION( PARAMETRES ):
    INSTRUCTIONS
"""
Explanation: A function gathers a set of instructions that together achieve a common goal. Functions make it possible to split a program into pieces that correspond to the way we think about solving a problem.
The syntax for defining a function is:
End of explanation
"""
def somme(a, b):
    return a + b
somme(4,7)
"""
Explanation: The header line starts with def and ends with a colon. The choice of the function name follows exactly the same rules as the choice of a variable name. The body is a block of one or more Python statements, each indented by the same number of spaces (the convention is 4 spaces) with respect to the header line. We have already seen the for loop, which follows this pattern.
The function name is followed by the parameters, in parentheses. The parameter list may be empty, or it may contain any number of parameters separated by commas. In both cases, the parentheses are required. The parameters specify the information, if any, that we must provide in order to use the new function.
The return value(s) of a function are returned with the return statement. For example, the function that returns the sum of two values is written:
End of explanation
"""
def volume(largeur, hauteur, profondeur):
    return largeur * hauteur * profondeur
v = volume(2,3,4)
v
"""
Explanation: The function that computes the volume of a rectangular cuboid is written:
End of explanation
"""
def etat_de_leau(temperature):
    if temperature < 0:
        print("L'eau est solide")
    elif temperature == 0:
        print("L'eau est en transition de phase solide-liquide")
    elif temperature < 100:
        print("L'eau est liquide")
    elif temperature == 100:
        print("L'eau est en transition de phase liquide-gaz")
    else:
        print("L'eau est un gaz")
"""
Explanation: We can gather the code about the temperature of water written earlier into a function etat_de_leau that depends on the parameter temperature:
End of explanation
"""
etat_de_leau(23)
etat_de_leau(-23)
etat_de_leau(0)
etat_de_leau(0.1)
etat_de_leau(102)
"""
Explanation: This function makes it easier to test the code on the water temperature:
End of explanation
"""
|
pombredanne/gensim | docs/notebooks/Corpora_and_Vector_Spaces.ipynb | lgpl-2.1 | import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
"""
Explanation: Tutorial 1: Corpora and Vector Spaces
See this gensim tutorial on the web here.
Don’t forget to set:
End of explanation
"""
from gensim import corpora
documents = ["Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
"Graph minors A survey"]
"""
Explanation: if you want to see logging events.
From Strings to Vectors
This time, let’s start from documents represented as strings:
End of explanation
"""
# remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stoplist]
for document in documents]
# remove words that appear only once
from collections import defaultdict
frequency = defaultdict(int)
for text in texts:
for token in text:
frequency[token] += 1
texts = [[token for token in text if frequency[token] > 1] for text in texts]
from pprint import pprint # pretty-printer
pprint(texts)
"""
Explanation: This is a tiny corpus of nine documents, each consisting of only a single sentence.
First, let’s tokenize the documents, remove common words (using a toy stoplist) as well as words that only appear once in the corpus:
End of explanation
"""
dictionary = corpora.Dictionary(texts)
dictionary.save('/tmp/deerwester.dict') # store the dictionary, for future reference
print(dictionary)
"""
Explanation: Your way of processing the documents will likely vary; here, I only split on whitespace to tokenize, followed by lowercasing each word. In fact, I use this particular (simplistic and inefficient) setup to mimic the experiment done in Deerwester et al.’s original LSA article (Table 2).
The ways to process documents are so varied and application- and language-dependent that I decided to not constrain them by any interface. Instead, a document is represented by the features extracted from it, not by its “surface” string form: how you get to the features is up to you. Below I describe one common, general-purpose approach (called bag-of-words), but keep in mind that different application domains call for different features, and, as always, it’s garbage in, garbage out...
To convert documents to vectors, we’ll use a document representation called bag-of-words. In this representation, each document is represented by one vector where each vector element represents a question-answer pair, in the style of:
"How many times does the word system appear in the document? Once"
It is advantageous to represent the questions only by their (integer) ids. The mapping between the questions and ids is called a dictionary:
End of explanation
"""
print(dictionary.token2id)
"""
Explanation: Here we assigned a unique integer id to all words appearing in the corpus with the gensim.corpora.dictionary.Dictionary class. This sweeps across the texts, collecting word counts and relevant statistics. In the end, we see there are twelve distinct words in the processed corpus, which means each document will be represented by twelve numbers (i.e., by a 12-D vector). To see the mapping between words and their ids:
End of explanation
"""
new_doc = "Human computer interaction"
new_vec = dictionary.doc2bow(new_doc.lower().split())
print(new_vec) # the word "interaction" does not appear in the dictionary and is ignored
"""
Explanation: To actually convert tokenized documents to vectors:
End of explanation
"""
corpus = [dictionary.doc2bow(text) for text in texts]
corpora.MmCorpus.serialize('/tmp/deerwester.mm', corpus) # store to disk, for later use
for c in corpus:
print(c)
"""
Explanation: The function doc2bow() simply counts the number of occurrences of each distinct word, converts the word to its integer word id and returns the result as a sparse vector. The sparse vector [(0, 1), (1, 1)] therefore reads: in the document “Human computer interaction”, the words computer (id 0) and human (id 1) appear once; the other ten dictionary words appear (implicitly) zero times.
End of explanation
"""
class MyCorpus(object):
def __iter__(self):
for line in open('mycorpus.txt'):
# assume there's one document per line, tokens separated by whitespace
yield dictionary.doc2bow(line.lower().split())
"""
Explanation: By now it should be clear that the vector feature with id=10 stands for the question “How many times does the word graph appear in the document?” and that the answer is “zero” for the first six documents and “one” for the remaining three. As a matter of fact, we have arrived at exactly the same corpus of vectors as in the Quick Example.
Corpus Streaming – One Document at a Time
Note that corpus above resides fully in memory, as a plain Python list. In this simple example, it doesn’t matter much, but just to make things clear, let’s assume there are millions of documents in the corpus. Storing all of them in RAM won’t do. Instead, let’s assume the documents are stored in a file on disk, one document per line. Gensim only requires that a corpus must be able to return one document vector at a time:
End of explanation
"""
# download the file mycorpus.txt with wget
# alternatively you can download the file manually and skip the following command
!wget http://radimrehurek.com/gensim/mycorpus.txt
corpus_memory_friendly = MyCorpus() # doesn't load the corpus into memory!
print(corpus_memory_friendly)
"""
Explanation: Download the sample mycorpus.txt file (if you have wget installed you can simply run the following cell to do this). The assumption that each document occupies one line in a single file is not important; you can mold the __iter__ function to fit your input format, whatever it is. Walking directories, parsing XML, accessing network... Just parse your input to retrieve a clean list of tokens in each document, then convert the tokens via a dictionary to their ids and yield the resulting sparse vector inside __iter__.
End of explanation
"""
for vector in corpus_memory_friendly: # load one vector into memory at a time
print(vector)
"""
Explanation: Corpus is now an object. We didn't define any way to print it, so print just outputs the address of the object in memory. Not very useful. To see the constituent vectors, let's iterate over the corpus and print each document vector (one at a time):
End of explanation
"""
from six import iteritems
# collect statistics about all tokens
dictionary = corpora.Dictionary(line.lower().split() for line in open('mycorpus.txt'))
# remove stop words and words that appear only once
stop_ids = [dictionary.token2id[stopword] for stopword in stoplist
if stopword in dictionary.token2id]
once_ids = [tokenid for tokenid, docfreq in iteritems(dictionary.dfs) if docfreq == 1]
# remove stop words and words that appear only once
dictionary.filter_tokens(stop_ids + once_ids)
# remove gaps in id sequence after words that were removed
dictionary.compactify()
print(dictionary)
"""
Explanation: Although the output is the same as for the plain Python list, the corpus is now much more memory friendly, because at most one vector resides in RAM at a time. Your corpus can now be as large as you want.
Similarly, to construct the dictionary without loading all texts into memory:
End of explanation
"""
# create a toy corpus of 2 documents, as a plain Python list
corpus = [[(1, 0.5)], []] # make one document empty, for the heck of it
corpora.MmCorpus.serialize('/tmp/corpus.mm', corpus)
"""
Explanation: And that is all there is to it! At least as far as bag-of-words representation is concerned. Of course, what we do with such corpus is another question; it is not at all clear how counting the frequency of distinct words could be useful. As it turns out, it isn’t, and we will need to apply a transformation on this simple representation first, before we can use it to compute any meaningful document vs. document similarities. Transformations are covered in the next tutorial, but before that, let’s briefly turn our attention to corpus persistency.
Corpus Formats
There exist several file formats for serializing a Vector Space corpus (~sequence of vectors) to disk. Gensim implements them via the streaming corpus interface mentioned earlier: documents are read from (resp. stored to) disk in a lazy fashion, one document at a time, without the whole corpus being read into main memory at once.
One of the more notable file formats is the Matrix Market format. To save a corpus in the Matrix Market format:
End of explanation
"""
corpora.SvmLightCorpus.serialize('/tmp/corpus.svmlight', corpus)
corpora.BleiCorpus.serialize('/tmp/corpus.lda-c', corpus)
corpora.LowCorpus.serialize('/tmp/corpus.low', corpus)
"""
Explanation: Other formats include Joachim’s SVMlight format, Blei’s LDA-C format and GibbsLDA++ format.
End of explanation
"""
corpus = corpora.MmCorpus('/tmp/corpus.mm')
"""
Explanation: Conversely, to load a corpus iterator from a Matrix Market file:
End of explanation
"""
print(corpus)
"""
Explanation: Corpus objects are streams, so typically you won’t be able to print them directly:
End of explanation
"""
# one way of printing a corpus: load it entirely into memory
print(list(corpus)) # calling list() will convert any sequence to a plain Python list
"""
Explanation: Instead, to view the contents of a corpus:
End of explanation
"""
# another way of doing it: print one document at a time, making use of the streaming interface
for doc in corpus:
print(doc)
"""
Explanation: or
End of explanation
"""
corpora.BleiCorpus.serialize('/tmp/corpus.lda-c', corpus)
"""
Explanation: The second way is obviously more memory-friendly, but for testing and development purposes, nothing beats the simplicity of calling list(corpus).
To save the same Matrix Market document stream in Blei’s LDA-C format,
End of explanation
"""
import gensim
import numpy as np
numpy_matrix = np.random.randint(10, size=[5,2])
corpus = gensim.matutils.Dense2Corpus(numpy_matrix)
numpy_matrix_dense = gensim.matutils.corpus2dense(corpus, num_terms=10)
"""
Explanation: In this way, gensim can also be used as a memory-efficient I/O format conversion tool: just load a document stream using one format and immediately save it in another format. Adding new formats is dead easy, check out the code for the SVMlight corpus for an example.
Compatibility with NumPy and SciPy
Gensim also contains efficient utility functions to help converting from/to numpy matrices:
End of explanation
"""
import scipy.sparse
scipy_sparse_matrix = scipy.sparse.random(5,2)
corpus = gensim.matutils.Sparse2Corpus(scipy_sparse_matrix)
scipy_csc_matrix = gensim.matutils.corpus2csc(corpus)
"""
Explanation: and from/to scipy.sparse matrices:
End of explanation
"""
|
n-witt/MachineLearningWithText_SS2017 | exercises/solutions/2 Matplotlib.ipynb | gpl-3.0 | import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 10, 100)
y = np.cos(x), np.cos(x + 1), np.cos(x + 2)
names = ['Signal 1', 'Signal 2', 'Signal 3']
"""
Explanation: 1. Reproduce this figure
<img src="images/exercise_1-1.png">
Here's the data and some code to get you started.
End of explanation
"""
fig, axes = plt.subplots(nrows=3, ncols=1)
for f, name, ax in zip(y, names, axes):
ax.plot(x, f, color='k')
ax.set(xticks=[], yticks=[], title=name, xlim=(0, 10))
plt.tight_layout()
plt.show()
"""
Explanation: Solution
End of explanation
"""
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1)
# Generate data...
y_raw = np.random.randn(1000).cumsum() + 15
x_raw = np.linspace(0, 24, y_raw.size)
# Get averages of every 100 samples...
x_pos = x_raw.reshape(-1, 100).min(axis=1)
y_avg = y_raw.reshape(-1, 100).mean(axis=1)
y_err = y_raw.reshape(-1, 100).ptp(axis=1)
bar_width = x_pos[1] - x_pos[0]
# Make a made up future prediction with a fake confidence
x_pred = np.linspace(0, 30)
y_max_pred = y_avg[0] + y_err[0] + 2.3 * x_pred
y_min_pred = y_avg[0] - y_err[0] + 1.2 * x_pred
# Just so you don't have to guess at the colors...
barcolor, linecolor, fillcolor = 'wheat', 'salmon', 'lightblue'
# Now you're on your own!
"""
Explanation: 2. Reproduce this figure
<img src="images/exercise_2.1-bar_and_fill_between.png">
Here is some code to start with:
End of explanation
"""
fig, axis = plt.subplots()
axis.set(xlim=(0, 30), xlabel='Minutes since class began', ylabel='Snarkiness (snark units)')
axis.plot(x_raw, y_raw, color=linecolor)
axis.bar(x_pos, y_avg, width=bar_width, edgecolor='grey', color=barcolor, yerr=y_err, ecolor='grey', capsize=3, align='edge')
axis.fill_between(x_pred, y_min_pred, y_max_pred, color=fillcolor)
plt.show()
"""
Explanation: Solution
End of explanation
"""
|
statsmodels/statsmodels.github.io | v0.12.2/examples/notebooks/generated/tsa_arma_1.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import pandas as pd
from statsmodels.graphics.tsaplots import plot_predict
from statsmodels.tsa.arima_process import arma_generate_sample
from statsmodels.tsa.arima.model import ARIMA
np.random.seed(12345)
"""
Explanation: Autoregressive Moving Average (ARMA): Artificial data
End of explanation
"""
arparams = np.array([.75, -.25])
maparams = np.array([.65, .35])
"""
Explanation: Generate some data from an ARMA process:
End of explanation
"""
arparams = np.r_[1, -arparams]
maparams = np.r_[1, maparams]
nobs = 250
y = arma_generate_sample(arparams, maparams, nobs)
"""
Explanation: The conventions of the arma_generate function require that we specify a 1 for the zero-lag of the AR and MA parameters and that the AR parameters be negated.
End of explanation
"""
dates = pd.date_range('1980-1-1', freq="M", periods=nobs)
y = pd.Series(y, index=dates)
arma_mod = ARIMA(y, order=(2, 0, 2), trend='n')
arma_res = arma_mod.fit()
print(arma_res.summary())
y.tail()
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10,8))
fig = plot_predict(arma_res, start='1999-06-30', end='2001-05-31', ax=ax)
legend = ax.legend(loc='upper left')
"""
Explanation: Now, optionally, we can add some dates information. For this example, we'll use a pandas time series.
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst | courses/machine_learning/deepdive2/introduction_to_tensorflow/labs/quickstart.ipynb | apache-2.0 | # Install required packages.
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
"""
Explanation: Working with TensorFlow Recommenders: Quickstart
Learning Objectives
Read the data and build vocabularies.
Define a model.
Create and train a model.
Introduction
In this tutorial, you build a simple matrix factorization model using the MovieLens 100K dataset with TFRS. You can use this model to recommend movies for a given user.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Import TFRS
First, install and import TFRS:
End of explanation
"""
# Import necessary libraries.
from typing import Dict, Text
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
"""
Explanation: <strong>Note:</strong> Please ignore any incompatibility warnings and errors and re-run the above cell before proceeding.
End of explanation
"""
# TODO 1
# Ratings data.
ratings = tfds.load('movielens/100k-ratings', split="train")
# Features of all the available movies.
movies = tfds.load('movielens/100k-movies', split="train")
# Select the basic features.
# TODO - Your code goes here
"""
Explanation: Read the data
End of explanation
"""
# Build the vocabularies
user_ids_vocabulary = tf.keras.layers.StringLookup(mask_token=None)
user_ids_vocabulary.adapt(ratings.map(lambda x: x["user_id"]))
movie_titles_vocabulary = tf.keras.layers.StringLookup(mask_token=None)
movie_titles_vocabulary.adapt(movies)
"""
Explanation: Build vocabularies to convert user ids and movie titles into integer indices for embedding layers:
End of explanation
"""
class MovieLensModel(tfrs.Model):
# We derive from a custom base class to help reduce boilerplate. Under the hood,
# these are still plain Keras Models.
def __init__(
self,
user_model: tf.keras.Model,
movie_model: tf.keras.Model,
task: tfrs.tasks.Retrieval):
super().__init__()
# Set up user and movie representations.
self.user_model = user_model
self.movie_model = movie_model
# Set up a retrieval task.
self.task = task
def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
# Define how the loss is computed.
user_embeddings = self.user_model(features["user_id"])
movie_embeddings = self.movie_model(features["movie_title"])
return self.task(user_embeddings, movie_embeddings)
"""
Explanation: Define a model
We can define a TFRS model by inheriting from tfrs.Model and implementing the compute_loss method:
End of explanation
"""
# TODO 2
# Define user and movie models.
user_model = tf.keras.Sequential([
user_ids_vocabulary,
tf.keras.layers.Embedding(user_ids_vocabulary.vocabulary_size(), 64)
])
movie_model = tf.keras.Sequential([
movie_titles_vocabulary,
tf.keras.layers.Embedding(movie_titles_vocabulary.vocabulary_size(), 64)
])
# Define your objectives.
task = # TODO - Your code goes here
"""
Explanation: Define the two models and the retrieval task.
End of explanation
"""
# TODO 3
# Create a retrieval model.
model = MovieLensModel(user_model, movie_model, task)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.5))
# Train for 3 epochs.
# TODO - Your code goes here
# Use brute-force search to set up retrieval using the trained representations.
index = # TODO - Your code goes here
# Get some recommendations.
_, titles = index(np.array(["42"]))
print(f"Top 3 recommendations for user 42: {titles[0, :3]}")
"""
Explanation: Fit and evaluate it.
Create the model, train it, and generate predictions:
End of explanation
"""
|
phanrahan/magmathon | notebooks/tutorial/coreir/Combinational and Sequential.ipynb | mit | import magma as m
import inspect
import fault
from hwtypes import BitVector
"""
Explanation: In this notebook we will discuss the combinational and sequential syntaxes in more detail. See https://magma.readthedocs.io/en/latest/circuit_definitions/ for the full documentation
End of explanation
"""
@m.circuit.combinational
def basic_if(I: m.Bits[2], S: m.Bit) -> m.Bit:
if S:
x = I[0]
else:
x = I[1]
return x
print(repr(basic_if.circuit_definition))
"""
Explanation: Combinational
The combinational syntax allows you to use if/else statements. These conditional statements are not executed in Python, instead they are lowered to hardware muxes.
End of explanation
"""
m.compile("build/basic_if", basic_if)
with open('.magma/basic_if.py', 'r') as f:
print(f.read())
"""
Explanation: Magma implements this syntax by converting the function to SSA form, and using the mux circuit to implement the phi nodes. We can inspect the intermediate Python code used by magma.
End of explanation
"""
tester = fault.PythonTester(basic_if)
assert tester(BitVector[2]([0, 1]), 0) == 1
assert tester(BitVector[2]([0, 1]), 1) == 0
tester = fault.Tester(basic_if)
tester(BitVector[2]([0, 1]), 0).expect(1)
tester(BitVector[2]([0, 1]), 1).expect(0)
tester.compile_and_run("verilator")
"""
Explanation: Let's test our function using fault
End of explanation
"""
from mantle import Not
@m.circuit.combinational
def invert(a: m.Bit) -> m.Bit:
return Not()(a)
print(repr(invert.circuit_definition))
tester = fault.PythonTester(invert)
assert tester(0) == 1
assert tester(1) == 0
tester = fault.Tester(invert)
tester(1).expect(0)
tester(0).expect(1)
# Need coreir commonlib since we are compiling multiple circuits so the namespace already has references to mux
tester.compile_and_run("verilator", magma_opts={"coreir_libs": {"commonlib"}})
"""
Explanation: You can instance magma circuits inside combinational functions.
End of explanation
"""
@m.circuit.combinational
def return_py_tuple(I: m.Bits[2]) -> (m.Bit, m.Bit):
return I[0], I[1]
print(repr(return_py_tuple.circuit_definition))
"""
Explanation: We can return multiple values as Python tuples. These will create output ports named O{i} where i is the index in the tuple
End of explanation
"""
@m.circuit.combinational
def return_magma_tuple(I: m.Bits[2]) -> m.Tuple[m.Bit, m.Bit]:
return m.tuple_([I[0], I[1]])
print(repr(return_magma_tuple.circuit_definition))
"""
Explanation: You can also return a magma tuple (this will only create one output)
End of explanation
"""
@m.circuit.combinational
def return_magma_named_tuple(I: m.Bits[2]) -> m.Product.from_fields("anon", {"x": m.Bit, "y": m.Bit}):
return m.product(x=I[0], y=I[1])
print(repr(return_magma_named_tuple.circuit_definition))
"""
Explanation: You can also return a magma Product (useful if you'd like to name the outputs)
End of explanation
"""
import ast_tools
from ast_tools.passes import begin_rewrite, loop_unroll, end_rewrite
n = 4
@m.circuit.combinational
@end_rewrite()
@loop_unroll()
@begin_rewrite()
def logic(a: m.Bits[n]) -> m.Bits[n]:
O = []
for i in ast_tools.macros.unroll(range(n)):
O.append(a[n - 1 - i])
return m.bits(O, n)
print(repr(logic.circuit_definition))
"""
Explanation: Statically elaborated for loops are supported using the ast_tools loop unrolling macro. Here's an example:
End of explanation
"""
@m.circuit.sequential(async_reset=True)
class Counter:
def __init__(self):
self.count : m.UInt[16] = 0
def __call__(self, inc : m.Bit) -> m.UInt[16]:
if inc:
self.count = self.count + 1
O = self.count
return O
m.compile("Counter", Counter, inline=True)
!coreir -i Counter.json -p instancecount -l commonlib
"""
Explanation: Sequential
The @m.circuit.sequential decorator extends the @m.circuit.combinational syntax with the ability to use Python's class system to describe stateful circuits.
The basic pattern uses the __init__ method to declare state, and a __call__ function that uses @m.circuit.combinational syntax to describe the transition function from the current state to the next state, as well as a function from the inputs to the outputs. State is referenced using the first argument self and is implicitly updated by writing to attributes of self (e.g. self.x = 3).
Here's an example of a Counter with an enable input inc.
End of explanation
"""
tester = fault.PythonTester(Counter, Counter.CLK)
tester.poke(Counter.inc, True)
tester.eval()
for i in range(4):
print(tester.peek(Counter.O))
assert tester.peek(Counter.O) == i + 1
tester.step(2)
tester.poke(Counter.inc, False)
tester.eval()
for i in range(4):
print(tester.peek(Counter.O))
assert tester.peek(Counter.O) == 4
tester.step(2)
tester.poke(Counter.ASYNCRESET, 1)
tester.eval()
print(tester.peek(Counter.O))
assert tester.peek(Counter.O) == 0
"""
Explanation: In the __init__ method, the circuit declares a statement self.count with an annotated type m.UInt[16] and an initial value 0. The __call__ method accepts an input inc of type Bit which acts as an enable on the counter logic. The __call__ method updates the counter state if the enable is high, and returns the next value of the counter (so when enable is high, it will output the state value plus one). Writes to state elements use Python semantics (Verilog blocking). Notice that the input and output of the __call__ method have type annotations just like m.circuit.combinational functions. The __call__ method should be treated as a standard @m.circuit.combinational function, with the special parameter self that provides access to the state.
End of explanation
"""
@m.circuit.sequential(async_reset=True)
class Register:
def __init__(self):
self.value: m.Bits[2] = m.bits(0, 2)
def __call__(self, I: m.Bits[2]) -> m.Bits[2]:
O = self.value
self.value = I
return O
@m.circuit.sequential(async_reset=True)
class TestShiftRegister:
def __init__(self):
self.x: Register = Register()
self.y: Register = Register()
def __call__(self, I: m.Bits[2]) -> m.Bits[2]:
x_prev = self.x(I)
y_prev = self.y(x_prev)
return y_prev
print(repr(TestShiftRegister))
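# A behavioral Python model of the two-stage shift register above may help:
# each register emits its previous value before latching the new input.

```python
# Illustrative software model of the Register/TestShiftRegister composition:
class RegModel:
    def __init__(self):
        self.value = 0

    def __call__(self, i):
        out, self.value = self.value, i  # emit old value, latch new input
        return out

x, y = RegModel(), RegModel()
assert [y(x(v)) for v in (1, 2, 3, 4)] == [0, 0, 1, 2]
```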
# Need coreir commonlib since we are compiling multiple circuits so the namespace already has references to mux
m.compile("build/TestShiftRegister", TestShiftRegister, inline=True, coreir_libs={"commonlib"})
!cat build/TestShiftRegister.v
"""
Explanation: Sequential supports hierarchical composition
End of explanation
"""
|
mamrehn/machine-learning-tutorials | ipynb/[python] cheatsheet.ipynb | cc0-1.0 | sentence = 'the quick brown fox jumps over the lazy dog'
words = sentence.split()
word_lengths = [len(word) for word in words if 'the' != word]
print(word_lengths)
"""
Explanation: First Steps with Python
Source: learnpython.org
List comprehensions
End of explanation
"""
def foo(first, second, third, *therest):
print('First: %s' % first)
print('Second: %s' % second, end=' or ')
print('Second: {}'.format(second)) # more modern approach
print('Third: %s' % third)
print('And all the rest... %s' % list(therest))
return
print(foo(1,2,3,4,5))
"""
Explanation: Variable amount of parameters
End of explanation
"""
def bar(first, second, third, **options):
print('Options is a variable of {}.'.format(type(options)))
if 'sum' == options.get('action'):
print('The sum is: %d' % (first + second + third))
if 'first' == options.get('number'):
return first
result = bar(1, 2, 3, action='sum', number='first')
print('Result: %d' % result)
"""
Explanation: Variable amount of named parameters
End of explanation
"""
myExp = r'^(From|To|Cc).*?python-list@python.org'
import re
pattern = re.compile(myExp)
result = re.match(pattern, 'From sometextpython-list@python.org and some more')
if result:
print(result)
print('Whole result:', result.group(0), sep='\t')
print('First part:', result.group(1), sep='\t')
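# A follow-on sketch: re.match (used above) only matches at the start of the
# string, while re.search scans the whole string.

```python
import re

text = 'prefix To python-list@python.org'
assert re.match(r'(From|To|Cc)', text) is None       # 'prefix' is not a match
m = re.search(r'(From|To|Cc)', text)                 # found mid-string
assert m is not None and m.group(1) == 'To'
```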
"""
Explanation: Regular expressions
End of explanation
"""
a = set(('Jake', 'John', 'Eric')) # generate a set from a tuple
b = set(['John', 'Jill']) # or generate a set from a list
print(a.intersection(b)) # in both sets
print(a.difference(b)) # in a but not in b
print(a.symmetric_difference(b)) # distinct
print(a.union(b)) # joined set
"""
Explanation: Sets
End of explanation
"""
import json
json_string = json.dumps([1, 2, 3, 'a', 'b', 'c'])
print(json_string)
print(json.loads(json_string))
json_string = json.dumps([1, 2, 3, 'a', 'b', 'c'], indent=2, sort_keys=True, separators=(',', ':'))
print(json_string)
"""
Explanation: JSON/Pickle serialization
Using JSON.<br/>
Properties of JSON serialization: json = {binary: false, humanReadable: true, pythonSpecific: false, serializeCustomClasses: false}
End of explanation
"""
writeFp = open('config.json', 'w')
json.dump({'b':1, 'a':2}, writeFp, sort_keys=True)
writeFp.close()
readFp = open('config.json', 'r')
for line in readFp:
print(line)
readFp.close()
# separators without spaces reduce json file size
print(json.dumps({'b':1, 'a':2}, sort_keys=True, separators=(',', ':')))
"""
Explanation: write JSON to and from a file
End of explanation
"""
import pickle # or cPickle for a faster implementation
pickled_string = pickle.dumps([1, 2, 3, 'a', 'b', 'c'])
print(pickle.loads(pickled_string), end='\n\n')
class MyTestClass:
def say(self):
return 'hello'
pickled_string = pickle.dumps(MyTestClass())
print('Pickled:\t{}\nUnpickled:\t{}\nInstance call:\t{}'.format(
pickled_string, pickle.loads(pickled_string), pickle.loads(pickled_string).say()
)
)
"""
Explanation: Using cPickle, Python's proprietary object serialization method. With pickle, objects can be serialized.<br/>
Properties of pickle serialization: pickle = {binary: true, humanReadable: false, pythonSpecific: true, serializeCustomClasses: true}
End of explanation
"""
|
rsterbentz/phys202-2015-work | assignments/assignment08/InterpolationEx02.ipynb | mit | %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
sns.set_style('white')
from scipy.interpolate import griddata
"""
Explanation: Interpolation Exercise 2
End of explanation
"""
x = np.array([-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-4,-4,-3,-3,-2,-2,-1,-1,0,0,0,1,1,2,2,3,3,4,4,5,5,5,5,5,5,5,5,5,5,5])
y = np.array([-5,-4,-3,-2,-1,0,1,2,3,4,5,-5,5,-5,5,-5,5,-5,5,-5,0,5,-5,5,-5,5,-5,5,-5,5,-5,-4,-3,-2,-1,0,1,2,3,4,5])
f = np.array([0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0])
"""
Explanation: Sparse 2d interpolation
In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain:
The square domain covers the region $x\in[-5,5]$ and $y\in[-5,5]$.
The values of $f(x,y)$ are zero on the boundary of the square at integer spaced points.
The value of $f$ is known at a single interior point: $f(0,0)=1.0$.
The function $f$ is not known at any other points.
Create arrays x, y, f:
x should be a 1d array of the x coordinates on the boundary and the 1 interior point.
y should be a 1d array of the y coordinates on the boundary and the 1 interior point.
f should be a 1d array of the values of f at the corresponding x and y coordinates.
You might find that np.hstack is helpful.
End of explanation
"""
plt.scatter(x, y);
assert x.shape==(41,)
assert y.shape==(41,)
assert f.shape==(41,)
assert np.count_nonzero(f)==1
"""
Explanation: The following plot should show the points on the boundary and the single point in the interior:
End of explanation
"""
xnew = np.linspace(-5,5,100)
ynew = np.linspace(-5,5,100)
Xnew, Ynew = np.meshgrid(xnew,ynew)
Fnew = griddata((x,y),f,(Xnew,Ynew), method='cubic')
assert xnew.shape==(100,)
assert ynew.shape==(100,)
assert Xnew.shape==(100,100)
assert Ynew.shape==(100,100)
assert Fnew.shape==(100,100)
"""
Explanation: Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain:
xnew and ynew should be 1d arrays with 100 points between $[-5,5]$.
Xnew and Ynew should be 2d versions of xnew and ynew created by meshgrid.
Fnew should be a 2d array with the interpolated values of $f(x,y)$ at the points (Xnew,Ynew).
Use cubic spline interpolation.
End of explanation
"""
f = plt.figure(figsize=(8.5,7))
plt.contour(Xnew,Ynew,Fnew)
plt.colorbar()
plt.set_cmap('autumn')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.title('Approximate Contour Map of $f(x,y)$');
assert True # leave this to grade the plot
"""
Explanation: Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
End of explanation
"""
|
UWashington-Astro300/Astro300-A17 | Python_ReadingData.ipynb | mit | import os
"""
Explanation: Reading Data
Python has a large number of different ways to read data from external files.
Python supports almost any type of file you can think of, from simple text files to complex binary formats.
In this class we are going to mainly use the pakages Astropy and Pandas to load extrnal files.
Both of these packages create a python object called a Table.
Tables are very useful, since there are lots of built-in methods that allow us to easily manipulate the data.
End of explanation
"""
import numpy as np
from astropy.table import QTable
os.listdir()
planet_table = QTable.read('Planets.csv', format='ascii.csv')
planet_table
print(planet_table)
"""
Explanation: The AstroPy package - QTable
End of explanation
"""
planet_table.rename_column('col2', 'ecc')
print(planet_table)
planet_table['Name']
planet_table['Name'][0]
"""
Explanation: Renaming columns
End of explanation
"""
planet_table.sort(['ecc'])
planet_table
"""
Explanation: Sorting
End of explanation
"""
planet_table.sort(['a']) # re-sort our table
mask1 = np.where(planet_table['a'] > 5)
mask1
planet_table[mask1]
mask2 = ((planet_table['a'] > 5) &
(planet_table['ecc'] < 0.05))
planet_table[mask2]
"""
Explanation: Masking
End of explanation
"""
perihelion = planet_table['a'] * (1.0 - planet_table['ecc'])
perihelion
planet_table['Peri'] = perihelion
planet_table
"""
Explanation: Adding a column to the Table
End of explanation
"""
planet_table.write('NewPlanets.csv', format='ascii.csv')
os.listdir()
"""
Explanation: Saving a table
End of explanation
"""
import pandas as pd
planet_table2 = pd.read_csv('Planets.csv')
planet_table2
print(planet_table2)
"""
Explanation: The Pandas package - DataFrame
End of explanation
"""
planet_table2.rename(columns={'Unnamed: 2': 'ecc'}, inplace=True)
planet_table2
planet_table2['Name']
planet_table2['Name'][0]
"""
Explanation: Renaming columns
End of explanation
"""
planet_table2.sort_values(['ecc'])
planet_table2
planet_table2.sort_values(['ecc'], ascending=False)
"""
Explanation: Sorting
End of explanation
"""
mask3 = planet_table2['a'] > 5
mask3
planet_table2[mask3]
mask4 = ((planet_table2['a'] > 5) &
(planet_table2['ecc'] < 0.05))
planet_table2[mask4]
"""
Explanation: Masking
End of explanation
"""
perihelion = planet_table2['a'] * (1.0 - planet_table2['ecc'])
perihelion
planet_table2['Peri'] = perihelion
planet_table2
"""
Explanation: Adding a column to the Table
End of explanation
"""
planet_table2.to_csv('NewPlanets2.csv', index=False)
os.listdir()
"""
Explanation: Saving a table
End of explanation
"""
import datetime
doctor_table = pd.read_csv('Doctor.csv')
doctor_table
doctor_table.sort_values(['BirthDate'])
doctor_table['BirthDate'] = pd.to_datetime(doctor_table['BirthDate'])
doctor_table.sort_values(['BirthDate'])
today = datetime.date.today()
today
age = today - doctor_table['BirthDate']
age
doctor_table['AgeToday'] = age / np.timedelta64(1, 'Y')
doctor_table
doctor_table.describe()
"""
Explanation: QTables vs. DataFrames
As you can see, the astropy QTable and the pandas DataFrame are very similar.
There are some important differences that we will discover this quarter.
Astronomers use both packages, depending on the situation.
Pandas is the dominant package outside astronomy.
Part I - Advantage Pandas
Pandas is really good for working with dates!
End of explanation
"""
messy_table = pd.read_csv('Mess.csv')
"""
Explanation: Messy Data
Pandas is a good choice when working with messy data files.
In the "real world" all data is messy.
For example, here is the contents of the file Mess.csv:
This is not going to end well ...
End of explanation
"""
messy_table = pd.read_csv('Mess.csv', skiprows = 6)
messy_table
"""
Explanation: Skip the header
End of explanation
"""
messy_table = pd.read_csv('Mess.csv', skiprows = 6, header= None)
messy_table
"""
Explanation: NaN = Not_A_Number, python's null value
Column names are messed up
Option 1 - Turn off the header
End of explanation
"""
cols = ["Name", "Size"]
messy_table = pd.read_csv('Mess.csv', skiprows = 6, names = cols)
messy_table
"""
Explanation: Option 2 - Add the column names
End of explanation
"""
messy_table['Name'].fillna("unknown", inplace=True)
messy_table['Size'].fillna(999.0, inplace=True)
messy_table
"""
Explanation: Deal with the missing data with fillna()
End of explanation
"""
|
anujjamwal/learning | cs231n/Lecture-2.ipynb | mit |
import numpy as np
import matplotlib.pylab as plt
import math
from scipy.stats import mode
%matplotlib inline
from sklearn.datasets import fetch_mldata
from sklearn.model_selection import train_test_split
mnist = fetch_mldata('MNIST original', data_home='../data')
mnist.data.shape
X = np.append(np.ones((mnist.data.shape[0],1)), mnist.data, axis = 1)
Y = mnist.target
def display(x, label):
pixels = x.reshape((28, 28))
plt.title('{label}'.format(label=label))
plt.imshow(pixels, cmap='gray')
plt.show()
display(X[1][1:785], Y[1])
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.33, random_state=42)
w = np.random.rand(10, X_train.shape[1])
w_orig = w
w.dot(X_train[30000])
"""
Explanation: Linear Classification
Lecture 2 on Linear Classification
Making use of the MNIST dataset again.
End of explanation
"""
def manhattan_distance(s1, s2):
return np.sum(np.abs(s1 - s2), axis=1)
"""
Explanation: Loss Function
Manhattan Distance
Also known as L1 distance.
End of explanation
"""
class NearestNeighbour:
def __init__(self, k, loss = manhattan_distance):
self.k = k
self.loss = loss
def train(self, X, Y):
self.X = X
self.Y = Y
def test(self, X):
losses = self.loss(self.X, X)
        return mode(self.Y[losses.argsort()[:self.k]])[0][0]
n = NearestNeighbour(1)
n.train(X_train, Y_train)
n.test(X_test[0])
display(X_test[0][1:785], "%s"%Y_test[0])
def test(X_test, Y_test):
    num_tests = X_test.shape[0]
    count_failed = 0
    for i in range(num_tests):
        if Y_test[i] != n.test(X_test[i]):
            count_failed += 1
    return (count_failed, num_tests)
count_failed, num_tests = test(X_test, Y_test)
print("\n Results:")
print("Total: %s " % num_tests)
print("Failed: %s " % count_failed)
print("Failed: %s " % (1.0 * count_failed / num_tests))
"""
Explanation: Nearest Neighbour Classification
k-nearest neighbour
The method makes use of a distance or loss measure. Training is O(1) and testing is O(n). At test time, we pick the k training examples which have the least distance to the test point. The mode of the k points is picked as the class for the test point.
End of explanation
"""
|
leriomaggio/numpy_euroscipy2015 | 07_ubiquitous_numpy.ipynb | mit |
from IPython.core.display import Image, display
display(Image(filename='images/iris_setosa.jpg'))
print("Iris Setosa\n")
display(Image(filename='images/iris_versicolor.jpg'))
print("Iris Versicolor\n")
display(Image(filename='images/iris_virginica.jpg'))
print("Iris Virginica")
"""
Explanation: Ubiquitous NumPy
I called this notebook ubiquitous numpy as the main goal of this section is to show examples of how much is the impact of NumPy over the Scientific Python Ecosystem.
Later on, see also this extra notebook: Extra Torch Tensor - Requires PyTorch
1. pandas and pandas.DataFrame
Machine Learning (and Numpy Arrays)
Machine Learning is about building programs with tunable parameters (typically an
array of floating point values) that are adjusted automatically so as to improve
their behavior by adapting to previously seen data.
Machine Learning can be considered a subfield of Artificial Intelligence since those
algorithms can be seen as building blocks to make computers learn to behave more
intelligently by somehow generalizing rather that just storing and retrieving data items
like a database system would do.
We'll take a look at a very simple machine learning tasks here: the clustering task
Data for Machine Learning Algorithms
Data in machine learning algorithms, with very few exceptions, is assumed to be stored as a
two-dimensional array, of size [n_samples, n_features].
The arrays can be
either numpy arrays, or in some cases scipy.sparse matrices.
The size of the array is expected to be [n_samples, n_features]
n_samples: The number of samples: each sample is an item to process (e.g. classify).
A sample can be a document, a picture, a sound, a video, an astronomical object,
a row in database or CSV file,
or whatever you can describe with a fixed set of quantitative traits.
n_features: The number of features or distinct traits that can be used to describe each
item in a quantitative manner. Features are generally real-valued, but may be boolean or
discrete-valued in some cases.
The number of features must be fixed in advance. However it can be very high dimensional
(e.g. millions of features) with most of them being zeros for a given sample.
This is a case where scipy.sparse matrices can be useful, in that they are much more memory-efficient than numpy arrays.
Addendum
There is a dedicated notebook in the training material, explicitly dedicated to scipy.sparse: 07_1_Sparse_Matrices
A Simple Example: the Iris Dataset
End of explanation
"""
from sklearn.datasets import load_iris
iris = load_iris()
"""
Explanation: Features in the Iris dataset:
sepal length in cm
sepal width in cm
petal length in cm
petal width in cm
Target classes to predict:
Iris Setosa
Iris Versicolour
Iris Virginica
End of explanation
"""
print(iris.keys())
print(iris.DESCR)
print(type(iris.data))
X = iris.data
print(X.size, X.shape)
y = iris.target
type(y)
"""
Explanation: Try for yourself one of the following commands, where iris is the variable containing the dataset:
print(iris.keys()) # Structure of the contained data
print(iris.DESCR) # A complete description of the dataset
print(iris.data.shape) # [n_samples, n_features]
print(iris.target.shape) # [n_samples,]
print(iris.feature_names)
datasets.get_data_home() # This is where the datasets are stored
End of explanation
"""
from sklearn.cluster import KMeans
kmean = KMeans(n_clusters=3)
kmean.fit(iris.data)
kmean.cluster_centers_
kmean.cluster_centers_.shape
"""
Explanation: Clustering
Clustering example on iris dataset data using sklearn.cluster.KMeans
End of explanation
"""
from itertools import combinations
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
rgb = np.empty(shape=y.shape, dtype='<U1')
rgb[y==0] = 'r'
rgb[y==1] = 'g'
rgb[y==2] = 'b'
for cols in combinations(range(4), 2):
f, ax = plt.subplots(figsize=(7.5, 7.5))
ax.scatter(X[:, cols[0]], X[:, cols[1]], c=rgb)
ax.scatter(kmean.cluster_centers_[:, cols[0]],
kmean.cluster_centers_[:, cols[1]], marker='*', s=250,
color='black', label='Centers')
feature_x = iris.feature_names[cols[0]]
feature_y = iris.feature_names[cols[1]]
ax.set_title("Features: {} vs {}".format(feature_x.title(),
feature_y.title()))
ax.set_xlabel(feature_x)
ax.set_ylabel(feature_y)
ax.legend(loc='best')
plt.show()
"""
Explanation: Plotting using matplotlib
Matplotlib is one of the most popular and widely used plotting libraries in Python. Matplotlib is tightly integrated with NumPy, as all its functions expect ndarrays as input.
End of explanation
"""
|
eniltonangelim/data-science | m4ml/Exercicios-Cap01/Exercicios01.ipynb | mit |
from math import pi, sqrt
from random import sample
from collections import Counter
x = 2
y = 5
def soma(x, y): print(x+y)
def subtrair(x, y): print(x-y)
def multi(x, y): print(x*y)
def dividir(x, y): print (x/y)
soma(x, y)
subtrair(x, y)
multi(x, y)
dividir(x, y)
"""
Explanation: <font color='blue'>Data Science Academy</font>
<font color='blue'>Mathematics for Machine Learning</font>
Exercise List - Chapter 1
The goal of this exercise list is to practice some basic mathematical operations while developing your programming-logic skills with the Python language.
If you have questions, that is absolutely normal; do some research to review the form of the mathematical operations.
When you find the form of an operation that solves the proposed exercise, use the Python language to represent that operation. In essence, this is how we apply Mathematics for Machine Learning: building algorithms and representing those algorithms in a programming language.
Have fun!!
Part 1 - Basic Operations
Exercise 1 - Read 2 numbers from the terminal and print their sum, difference, product, and quotient.
Tip: Create Python functions to format the output of the operations
End of explanation
"""
def radiano(grau): print(2 * pi * (grau/360))
radiano(45)
"""
Explanation: Exercise 2 - Write a Python program to convert a measure in degrees to radians.
Note: The radian is the standard unit of angular measure, used in many areas of mathematics. The measure of an angle in radians is numerically equal to the length of the corresponding arc of a unit circle; one radian is just under 57.3 degrees (when the arc length equals the radius).
End of explanation
"""
def retangulo(b, h): print(b*h)
def losango(D, d): print(D*d / 2)
retangulo(9, 9) #81
losango(5, 4) #10
"""
Explanation: Exercise 3 - Write a Python program to compute the area of a parallelogram.
Note: A parallelogram is a quadrilateral with opposite sides parallel (and therefore opposite angles equal). A quadrilateral with equal sides is called a rhombus, and a parallelogram whose angles are all right angles is called a rectangle.
End of explanation
"""
def vol_superfice(raio): print((4/3)*pi*raio**3)
def area_esferica(raio): print(4*pi*raio**2)
vol_superfice(25)
area_esferica(25)
"""
Explanation: Exercise 4 - Write a Python program to compute the volume and the surface area of a sphere.
Note: A sphere is a perfectly round geometric object in three-dimensional space that is the surface of a completely round ball.
End of explanation
"""
from IPython.display import Image
Image("images/image02.png")
def permutacao(n):
if len(n) == 0: return []
if len(n) == 1: return [n]
l = []
for i in range(len(n)):
m = n[i]
extractList = n[:i] + n[i+1:]
for p in permutacao(extractList):
l.append([m] + p)
return l
data = list('ABCD')
for p in permutacao(data):
print(p)
"""
Explanation: Exercise 5 - Write a Python program to print all permutations of a given string (including duplicates). For example, the string: ABCD.
In mathematics, the notion of permutation refers to the act of arranging all the members of a set into some sequence or order, or, if the set is already ordered, rearranging (reordering) its elements, a process called permutation. These differ from combinations, which are selections of some members of a set where order is disregarded.
In the following image, each of the six rows is a different permutation of three distinct balls
End of explanation
"""
from IPython.display import Image
Image("images/image01.png")
def discriminant():
x = 12
y = 4
z = 8
discriminant = (y**2) - 4*x*z
if discriminant > 0:
print("Two Solutions. Discriminant value is:", discriminant)
elif discriminant == 0:
print ("One solution. Discriminant value is:", discriminant)
elif discriminant < 0:
print ("No real solutions. Discriminant value is:", discriminant)
discriminant()
"""
Explanation: Exercise 6 - Write a Python program to compute the value of the discriminant.
Note: The discriminant is the name given to the expression that appears under the square-root (radical) sign in the quadratic formula (widely used in Machine Learning).
Tip: Create a Python function based on the rule below
End of explanation
"""
def babylonianAlgorithm(number):
if (number == 0):
return 0
g = number/2.0
g2 = g+1
while (g != g2):
n = number/g
g2 = g
g = (g+n)/2
return g
babylonianAlgorithm(0.3)
"""
Explanation: Exercise 7 - Write a Python program to compute square roots using the Babylonian method.
Perhaps the first algorithm used to approximate √S is known as the Babylonian method, named after the Babylonians, or "Hero's method", named after the first-century Greek mathematician Hero of Alexandria, who gave the first explicit description of the method.
It can be derived from (but predates by 16 centuries) Newton's method. The basic idea is that if x is an overestimate of the square root of a non-negative real number S, then S/x will be an underestimate, and therefore the average of these two numbers can be expected to provide a better approximation.
Graphical representation of the Babylonian method
https://en.wikipedia.org/wiki/Methods_of_computing_square_roots#/media/File:Babylonian_method.svg
End of explanation
"""
# Solution
def create_classes(numbers, n):
low = min(numbers)
high = max(numbers)
    # Width of each class
    width = (high - low)/n
    a = low
    b = low + width
    classes = []
while a < (high-width):
classes.append((a, b))
a = b
b = a + width
    # The last class may be narrower than the width
classes.append((a, high+1))
return classes
def classify(numbers, classes):
    # Create a list with as many elements as there are classes
count = [0]*len(classes)
for n in numbers:
for index, c in enumerate(classes):
if n >= c[0] and n < c[1]:
count[index] += 1
break
return count
def read_data(filename):
numbers = []
with open(filename) as f:
for line in f:
numbers.append(float(line))
return numbers
if __name__ == '__main__':
    num_classes = int(input('Enter the desired number of classes: '))
numbers = read_data('arquivos/notas.txt')
classes = create_classes(numbers, num_classes)
count = classify(numbers, classes)
for c, cnt in zip(classes, count):
print('\n{0:.2f} - {1:.2f} \t {2}'.format(c[0], c[1], cnt))
"""
Explanation: Exercise 8 - Write a Python program that reads a file with test scores and divides the scores into frequency classes.
Tip: your program should let the user choose the number of classes and should read the file
End of explanation
"""
sample_dt = sample(range(1, 900), 300)
# sample_dt = [52, 61, 63, 66, 75, 77, 82, 86, 100, 193]
# sample_dt = [3, 5, 7, 10, 3, 3, 9, 2, 5, 10, 9]
def media(data): return sum(data) / len(data)
def mediana(data):
size = len(data)
center = size//2
if size%2:
return sorted(data)[center]
else:
return sum(sorted(data)[center-1:center+1]) / 2
def moda(data):
    c = Counter(data)
    max_count = c.most_common(1)[0][1]  # the highest frequency, not the value itself
    return [k for k, v in c.items() if v == max_count]
def amplitude(data):
orders = sorted(data)
return max(orders)-min(orders)
def variancia(data, m):
size = len(data)
return sum([(data[i]-m)**2 for i in range(size)])/(size-1)
def desvio_padrao(data, m):
return sqrt(variancia(data, m))
print("Media:", media(sample_dt))
print("Mediana:", mediana(sample_dt))
print("Variancia:", round(variancia(sample_dt, media(sample_dt)), 2))
print("Desvio padrão:", round(desvio_padrao(sample_dt, media(sample_dt)), 2))
print("Amplitude:", amplitude(sample_dt))
print("Moda:", moda(sample_dt))
"""
Explanation: Exercise 9 - Write a Python program that reads a file with values and computes the main statistical measures: mean, median, mode, variance, and standard deviation.
Tip: Create a Python function for each statistical measure and a function to read the file. Then call each function to compute the statistics on the file.
End of explanation
"""
def equacaoQ(a, b , c):
D = (b**2 - 4*a*c)
x1 = (-b + D**(1/2)) / (2*a)
x2 = (-b - D**(1/2)) / (2*a)
return x1, x2
equacaoQ(3, 4, 5)
"""
Explanation: Exercise 10 - Write a Python program that, given 3 numbers, solves the quadratic equation
Note: In mathematics, a quadratic equation or second-degree equation is a polynomial equation of degree two.
Tip: You can use the cmath module to solve the quadratic equation in Python. This is because the roots of quadratic equations can be complex in nature.
Quadratic equation: ax**2 + bx + c = 0
End of explanation
"""
# NumPy http://www.numpy.org/
import numpy as np
"""
Explanation: Part 2 - Scalars, Vectors, Matrices, and Tensors
End of explanation
"""
s = np.array([range(10)])
"""
Explanation: Data Types and Shapes
The most common way to work with numbers in NumPy is through ndarray objects. They are similar to lists in Python, but can have any number of dimensions. In addition, ndarray supports fast mathematical operations, which is exactly what we want.
Since you can store any number of dimensions, you can use ndarrays to represent any of the data types we covered before: scalars, vectors, matrices, or tensors.
Scalar
Scalars in NumPy are more efficient than in Python. Instead of Python's basic types such as int, float, etc., NumPy lets you specify more specific types as well as different sizes. So instead of using Python's int, you have access to types such as uint8, int8, uint16, int16, and so on, when using NumPy.
These types are important because every object you create (vectors, matrices, tensors) ends up storing scalars. And when you create a NumPy array, you can specify the type (but every item in the array must have the same type). In this sense, NumPy arrays are more like C arrays than Python lists.
Exercise 11 - Create a NumPy array that contains a scalar, using NumPy's array function
End of explanation
"""
s.shape
"""
Explanation: You can still do math between ndarrays, NumPy scalars, and normal Python scalars, as we will see further on.
Exercise 12 - What is the shape of a Scalar?
End of explanation
"""
x = s[0][5] - 2
x
"""
Explanation: Even though scalars are inside arrays, you still use them like a normal scalar for mathematical operations:
End of explanation
"""
type(x)
"""
Explanation: And x would equal 3. If you check the type of x, you will notice it is numpy.int64, because you are working with NumPy types rather than Python types.
Even scalar types support most array functions. So you can call x.shape and it returns () because it has zero dimensions, even though it is not an array. If you try to use the object as a normal Python scalar, you will get an error.
End of explanation
"""
vec = np.array([1, 2, 3])
vec.shape
"""
Explanation: Vectors
Exercise 13 - Create a vector with NumPy from a Python list and inspect its shape
End of explanation
"""
vec[1]
"""
Explanation: If you check the vector's shape attribute, it returns a single number representing the vector's one-dimensional length. In the example above, vec.shape returns (3,).
Now that there is a number there, you can see that the shape is a tuple with the sizes of each of the ndarray's dimensions. For scalars it was just an empty tuple, but vectors have one dimension, so the tuple includes a number and a comma. (Python does not interpret (3) as a tuple with one item, so it requires the comma.) Official Python 3.6 documentation on tuples: https://docs.python.org/3/tutorial/datastructures.html#tuples-and-sequences
You can access an element within the vector using indices, as below (as you can see, indexing in Python starts at 0, and index 1 refers to the second element of the array).
End of explanation
"""
vec[1:]
"""
Explanation: NumPy also supports advanced indexing techniques. For example, to access the items from the second element onward, you would use:
End of explanation
"""
m = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
"""
Explanation: NumPy slicing is quite powerful, allowing you to access any combination of items in an ndarray. Official documentation on array indexing and slicing: https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html
Matrices
You create matrices using NumPy's array() function, exactly as you did with vectors. However, instead of just passing a list, you need to provide a list of lists, where each inner list represents a row.
Exercise 14 - Create a 3x3 matrix containing the numbers one through nine and inspect its shape
End of explanation
"""
m
m[1][2]
"""
Explanation: Checking the shape attribute returns the tuple (3, 3), indicating that the matrix has two dimensions, each of length 3.
You can access elements of matrices just like vectors, but using additional index values. So, to find the number 6 in the matrix above, you would use:
End of explanation
"""
# Solution
t = np.array([[[[1],[2]],[[3],[4]],[[5],[6]]],
              [[[7],[8]],[[9],[10]],[[11],[12]]],
              [[[13],[14]],[[15],[16]],[[17],[18]]]])
"""
Explanation: Tensors
Tensors are like vectors and matrices, but can have more dimensions.
Exercise 15 - Create a 3x3x2x1 tensor and inspect its shape
End of explanation
"""
t[2][2][1][0]
"""
Explanation: Exercise 16 - Print a single element of the tensor
Note: To access an element of the tensor, we use indexing in the same way as with vectors and matrices
End of explanation
"""
vec = np.array([1,2,3,4])
vec.shape
"""
Explanation: Changing the Shape
Sometimes you will need to change the shape of your data without actually changing its contents. For example, you may have a vector, which is one-dimensional, but need a matrix, which is two-dimensional. There are two ways you can do this.
Say you have the following vector:
End of explanation
"""
vec.reshape(2,2)
"""
Explanation: Exercise 17 - Change the shape of the vector above
End of explanation
"""
# List of values
valores = [1, 2, 3, 4, 5]
# for loop to add 5 to each element of the list
for i in range(len(valores)):
valores[i] += 5
print(valores)
"""
Explanation: Exercise 18 - Suppose you have a list of numbers and you want to add 5 to each item in the list. Without NumPy, you might do something like this:
End of explanation
"""
valores = np.array([1, 2, 3, 4, 5])
valores + 5
"""
Explanation: But using NumPy, how would you solve this same problem?
End of explanation
"""
valores * 0
"""
Explanation: Exercise 19 - Say you have an object called "valores" and you want to reuse it, but first you need to set all of its values to zero. Easy: just multiply by zero and assign the result back to the object, right? How would you do that?
End of explanation
"""
x = np.array([[1,3],[5,7]])
x
y = np.array([[2,4],[6,8]])
y
x + y
"""
Explanation: Exercise 20 - Considering the two matrices below, how would you compute their sum?
End of explanation
"""
import numpy as np
m = np.array([[1,2,3],[4,5,6]])
m
m * .5
"""
Explanation: Part 3 - Matrix Operations
Exercise 21 - Given the matrix below, multiply each element of the matrix by 0.5
End of explanation
"""
a = np.array([[1,2,3,4],[5,6,7,8]])
a
b = np.array([[1,2,3],[4,5,6],[7,8,9],[10,11,12]])
b
np.matmul(b, a)
"""
Explanation: Exercise 22 - What is the reason for the error message when trying to run the multiplication of the two matrices below?
End of explanation
"""
m = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
m
m.T
"""
Explanation: Exercise 23 - Getting the transpose of a matrix is really easy in NumPy. Just access its T attribute. There is also a transpose() function that returns the same thing, but you will rarely see it used anywhere because typing T is much easier. What does the transpose of a matrix represent?
End of explanation
"""
inputs = np.array([[-0.27, 0.45, 0.64, 0.31]])
inputs
inputs.shape
pesos = np.array([[0.02, 0.001, -0.03, 0.036], \
[0.04, -0.003, 0.025, 0.009], [0.012, -0.045, 0.28, -0.067]])
pesos
pesos.shape
"""
Explanation: An Example from Neural Networks
Say you have the following two matrices, called inputs and pesos (weights):
End of explanation
"""
np.matmul(inputs, pesos)
"""
Explanation: The weights are multiplied by the inputs in the layers of neural networks, so we need to multiply pesos by inputs. A matrix multiplication. That is very easy with NumPy.
End of explanation
"""
np.matmul(inputs, pesos.T)
"""
Explanation: Exercise 24 - Oops, what went wrong? If you did the matrix multiplication exercise, then you have seen this error before. The matrices have incompatible shapes because the number of columns in the left matrix, 4, does not equal the number of rows in the right matrix, 3.
How do we fix this? We can use the transpose of the matrix:
End of explanation
"""
np.matmul(pesos, inputs.T)
"""
Explanation: It also works if you take the transpose of the inputs matrix and swap the order of the arguments to the function:
End of explanation
"""
import numpy as np
import numpy.linalg as linalg
# Create the matrices in NumPy format
A = np.array([ [3, 2, -1], [6, 4, -2], [5, 0, 3]])
B = np.array([ [2, 3, 2], [3, -4, -2], [4, -1, 1]])
# Compute the rank of matrix A
linalg.matrix_rank(A)
"""
Explanation: The two answers are transposes of each other, so the multiplication you use really depends only on the shape you want for the output.
Challenge
Use the two matrices below to solve exercises 25 through 28
$$
\mathbf{A} = \left[\begin{array}{lcr}
3 & 2 & -1\
6 & 4 & -2\
5 & 0 & 3\
\end{array}\right]
\quad
\mathbf{B} = \left[\begin{array}{lcr}
2 & 3 & 2\
3 & -4 & -2\
4 & -1 & 1\
\end{array}\right]
$$
Exercise 25 - What is the Rank of matrix $\mathbf{A}$?
In linear algebra, the rank of a matrix A is the dimension of the vector space generated (or spanned) by its columns. This corresponds to the maximum number of linearly independent columns of A. That, in turn, is identical to the dimension of the space spanned by its rows. Rank is therefore a measure of the "non-degeneracy" of the system of linear equations and of the linear transformation encoded by A. There are several equivalent definitions of rank, and the rank of a matrix is one of its most fundamental characteristics.
End of explanation
"""
A @ B
np.dot(A, B)
"""
Explanation: Exercise 26 - Compute $\mathbf{A \cdot B}$
End of explanation
"""
eig_vals, eig_vecs = linalg.eig(B)
eig_vecs[:,1]
"""
Explanation: Exercise 27 - What is the second eigenvector of $\mathbf{B}$?
End of explanation
"""
# Solution
b = np.array([14, -1, 11])
linalg.solve(B, b)
"""
Explanation: Exercise 28 - Solve $\mathbf{B}\vec{x} = \vec{b}$ where $\vec{b} = \left[14, -1, 11\right]$
End of explanation
"""
# Create the matrices in Python
a_matrix = [[3, 2, 1],
[2,-1,0],
[1,1,-2]]
b_matrix = [5, 4, 12]
# Convert the matrices to NumPy format
np_a_matrix = np.array(a_matrix)
np_b_matrix = np.array(b_matrix).transpose()
# Solve the problem
np_a_inv = linalg.inv(np_a_matrix)
np_x_matrix = np_a_inv.dot(np_b_matrix)
# Print
print(np_x_matrix)
"""
Explanation: Exercise 29 - Compute the inverse of the matrix below
A square matrix A is said to be invertible when there exists another matrix, denoted A^-1, such that
A^-1 . A = I
and
A . A^-1 = I
The inverse of a matrix can be found using the linalg.inv command. Consider the following system of equations:
$$\begin{array}{lr}
3 x + 2 y + z & = 5\
2 x - y & = 4 \
x + y - 2z & = 12 \
\end{array}$$
We can encode it as a matrix equation:
$$\left[\begin{array}{lcr}
3 & 2 & 1\
2 & -1 & 0\
1 & 1 & -2\
\end{array}\right]
\left[\begin{array}{l}
x\
y\
z\
\end{array}\right]
=
\left[\begin{array}{l}
5\
4\
12\
\end{array}\right]$$
$$\mathbf{A}\mathbf{x} = \mathbf{b}$$
$$\mathbf{A}^{-1}\mathbf{b} = \mathbf{x}$$
Write Python code to solve the matrix equation above.
End of explanation
"""
# Solution
x = [-0.42,1.34,1.6,2.65,3.53,4.48,5.48,6.21,7.49,8.14,8.91,10.1]
y = [1.58,1.61,2.04,5.47,9.8,16.46,25.34,33.32,49.7,58.79,71.26,93.34]
x = np.array(x)
y = np.array(y)
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(x,y, 'o-')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
"""
Explanation: Exercise 30 - Create a plot, using Matplotlib, that shows the relationship between x and y
End of explanation
"""
|
BrentDorsey/pipeline | gpu.ml/notebooks/06a_Train_Model_XLA_GPU.ipynb | apache-2.0 |
import tensorflow as tf
from tensorflow.python.client import timeline
import pylab
import numpy as np
import os
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
tf.logging.set_verbosity(tf.logging.INFO)
"""
Explanation: Train Model with XLA_GPU (and CPU*)
Some operations do not have XLA_GPU equivalents, so we still need to use CPU.
IMPORTANT: You Must STOP All Kernels and Terminal Session
The GPU is wedged at this point. We need to set it free!!
End of explanation
"""
tf.reset_default_graph()
"""
Explanation: Reset TensorFlow Graph
Useful in Jupyter Notebooks
End of explanation
"""
config = tf.ConfigProto(
log_device_placement=True,
)
config.gpu_options.allow_growth=True
config.graph_options.optimizer_options.global_jit_level \
= tf.OptimizerOptions.ON_1
print(config)
sess = tf.Session(config=config)
print(sess)
"""
Explanation: Create TensorFlow Session
End of explanation
"""
from datetime import datetime
version = int(datetime.now().strftime("%s"))
"""
Explanation: Generate Model Version (current timestamp)
End of explanation
"""
num_samples = 100000
import numpy as np
import pylab
x_train = np.random.rand(num_samples).astype(np.float32)
print(x_train)
noise = np.random.normal(scale=0.01, size=len(x_train))
y_train = x_train * 0.1 + 0.3 + noise
print(y_train)
pylab.plot(x_train, y_train, '.')
x_test = np.random.rand(len(x_train)).astype(np.float32)
print(x_test)
noise = np.random.normal(scale=.01, size=len(x_train))
y_test = x_test * 0.1 + 0.3 + noise
print(y_test)
pylab.plot(x_test, y_test, '.')
with tf.device("/cpu:0"):
W = tf.get_variable(shape=[], name='weights')
print(W)
b = tf.get_variable(shape=[], name='bias')
print(b)
with tf.device("/device:XLA_GPU:0"):
x_observed = tf.placeholder(shape=[None],
dtype=tf.float32,
name='x_observed')
print(x_observed)
y_pred = W * x_observed + b
print(y_pred)
learning_rate = 0.025
with tf.device("/device:XLA_GPU:0"):
y_observed = tf.placeholder(shape=[None], dtype=tf.float32, name='y_observed')
print(y_observed)
loss_op = tf.reduce_mean(tf.square(y_pred - y_observed))
optimizer_op = tf.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer_op.minimize(loss_op)
print("Loss Scalar: ", loss_op)
print("Optimizer Op: ", optimizer_op)
print("Train Op: ", train_op)
"""
Explanation: Load Model Training and Test/Validation Data
End of explanation
"""
with tf.device("/cpu:0"):
init_op = tf.global_variables_initializer()
print(init_op)
sess.run(init_op)
print("Initial random W: %f" % sess.run(W))
print("Initial random b: %f" % sess.run(b))
"""
Explanation: Randomly Initialize Variables (Weights and Bias)
The goal is to learn more accurate Weights and Bias during training.
End of explanation
"""
def test(x, y):
return sess.run(loss_op, feed_dict={x_observed: x, y_observed: y})
test(x_train, y_train)
"""
Explanation: View Accuracy of Pre-Training, Initial Random Variables
We want this to be close to 0, but it's relatively far away. This is why we train!
End of explanation
"""
loss_summary_scalar_op = tf.summary.scalar('loss', loss_op)
loss_summary_merge_all_op = tf.summary.merge_all()
train_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/xla_gpu/%s/train' % version,
graph=tf.get_default_graph())
test_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/xla_gpu/%s/test' % version,
graph=tf.get_default_graph())
"""
Explanation: Setup Loss Summary Operations for Tensorboard
End of explanation
"""
%%time
from tensorflow.python.client import timeline
with tf.device("/device:XLA_GPU:0"):
run_metadata = tf.RunMetadata()
max_steps = 401
for step in range(max_steps):
if (step < max_steps - 1):
test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})
train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train})
else:
test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})
train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train},
options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE),
run_metadata=run_metadata)
trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline-xla-gpu.json', 'w') as trace_file:
trace_file.write(trace.generate_chrome_trace_format(show_memory=True))
if step % 10 == 0:
print(step, sess.run([W, b]))
train_summary_writer.add_summary(train_summary_log, step)
train_summary_writer.flush()
test_summary_writer.add_summary(test_summary_log, step)
test_summary_writer.flush()
pylab.plot(x_train, y_train, '.', label="target")
pylab.plot(x_train, sess.run(y_pred,
feed_dict={x_observed: x_train,
y_observed: y_train}),
".",
label="predicted")
pylab.legend()
pylab.ylim(0, 1.0)
"""
Explanation: Train Model
End of explanation
"""
import os
optimize_me_parent_path = '/root/models/optimize_me/linear/xla_gpu'
saver = tf.train.Saver()
os.system('rm -rf %s' % optimize_me_parent_path)
os.makedirs(optimize_me_parent_path)
unoptimized_model_graph_path = '%s/unoptimized_xla_gpu.pb' % optimize_me_parent_path
tf.train.write_graph(sess.graph_def,
'.',
unoptimized_model_graph_path,
as_text=False)
print(unoptimized_model_graph_path)
model_checkpoint_path = '%s/model.ckpt' % optimize_me_parent_path
saver.save(sess,
save_path=model_checkpoint_path)
print(model_checkpoint_path)
print(optimize_me_parent_path)
os.listdir(optimize_me_parent_path)
sess.close()
"""
Explanation: View Loss Summaries in Tensorboard
Navigate to the Scalars and Graphs tab at this URL:
http://[ip-address]:6006
Save Graph For Optimization
We will use this later.
End of explanation
"""
%%bash
summarize_graph --in_graph=/root/models/optimize_me/linear/xla_gpu/unoptimized_xla_gpu.pb
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import re
from google.protobuf import text_format
from tensorflow.core.framework import graph_pb2
def convert_graph_to_dot(input_graph, output_dot, is_input_graph_binary):
graph = graph_pb2.GraphDef()
with open(input_graph, "rb") as fh:
if is_input_graph_binary:
graph.ParseFromString(fh.read())
else:
text_format.Merge(fh.read(), graph)
with open(output_dot, "wt") as fh:
print("digraph graphname {", file=fh)
for node in graph.node:
output_name = node.name
print(" \"" + output_name + "\" [label=\"" + node.op + "\"];", file=fh)
for input_full_name in node.input:
parts = input_full_name.split(":")
input_name = re.sub(r"^\^", "", parts[0])
print(" \"" + input_name + "\" -> \"" + output_name + "\";", file=fh)
print("}", file=fh)
print("Created dot file '%s' for graph '%s'." % (output_dot, input_graph))
input_graph='/root/models/optimize_me/linear/xla_gpu/unoptimized_xla_gpu.pb'
output_dot='/root/notebooks/unoptimized_xla_gpu.dot'
convert_graph_to_dot(input_graph=input_graph, output_dot=output_dot, is_input_graph_binary=True)
%%bash
dot -T png /root/notebooks/unoptimized_xla_gpu.dot \
-o /root/notebooks/unoptimized_xla_gpu.png > /tmp/a.out
from IPython.display import Image
Image('/root/notebooks/unoptimized_xla_gpu.png', width=1024, height=768)
"""
Explanation: Show Graph
End of explanation
"""
%%bash
dot -T png /tmp/hlo_graph_1.*.dot -o /root/notebooks/hlo_graph_1.png &>/dev/null
dot -T png /tmp/hlo_graph_10.*.dot -o /root/notebooks/hlo_graph_10.png &>/dev/null
dot -T png /tmp/hlo_graph_50.*.dot -o /root/notebooks/hlo_graph_50.png &>/dev/null
dot -T png /tmp/hlo_graph_75.*.dot -o /root/notebooks/hlo_graph_75.png &>/dev/null
"""
Explanation: XLA JIT Visualizations
End of explanation
"""
|
lknelson/text-analysis-2017 | 04-Dictionaries/00-DictionaryMethod_ExerciseSolutions.ipynb | bsd-3-clause | #import the necessary packages
import pandas
import nltk
from nltk import word_tokenize
import string
#read the Music Reviews corpus into a Pandas dataframe
df = pandas.read_csv("../Data/BDHSI2016_music_reviews.csv", encoding='utf-8', sep = '\t')
df['body'] = df['body'].apply(lambda x: ''.join([i for i in x if not i.isdigit()]))
df['body_tokens'] = df['body'].str.lower()
df['body_tokens'] = df['body_tokens'].apply(nltk.word_tokenize)
df['body_tokens'] = df['body_tokens'].apply(lambda x: [word for word in x if word not in string.punctuation])
df['token_count'] = df['body_tokens'].apply(lambda x: len(x))
#view the dataframe
df
#Read in dictionary files
pos_sent = open("../Data/positive_words.txt", encoding='utf-8').read()
neg_sent = open("../Data/negative_words.txt", encoding='utf-8').read()
#view part of the pos_sent variable, to see how it's formatted.
print(pos_sent[:101])
#remember the split function? We'll split on the newline character (\n) to create a list
positive_words=pos_sent.split('\n')
negative_words=neg_sent.split('\n')
#view the first elements in the lists
print(positive_words[:10])
print(negative_words[:10])
"""
Explanation: Dictionary Method: Exercise Solutions
First I'll recreate what we did in the tutorial.
End of explanation
"""
#exercise code here
#1. Create a column with the number of positive words and another with the proportion of positive words
df['pos_num'] = df['body_tokens'].apply(lambda x: len([word for word in x if word in positive_words]))
df['pos_prop'] = df['pos_num']/df['token_count']
#2. Create a column with the number of negative words, and another with the proportion of negative words
df['neg_num'] = df['body_tokens'].apply(lambda x: len([word for word in x if word in negative_words]))
df['neg_prop'] = df['neg_num']/df['token_count']
df
#3. Print the average proportion of negative and positive words by genre
grouped = df.groupby('genre')
print("Averge proportion of positive words by genre")
print(grouped['pos_prop'].mean().sort_values(ascending=False))
print()
print("Averge proportion of negative words by genre")
grouped['neg_prop'].mean().sort_values(ascending=False)
# 4. Compare this to the average score by genre
print("Averge score by genre")
grouped['score'].mean().sort_values(ascending=False)
"""
Explanation: Great! You know what to do now.
Exercise:
1. Create a column with the number of positive words, and another with the proportion of positive words
2. Create a column with the number of negative words, and another with the proportion of negative words
3. Print the average proportion of negative and positive words by genre
4. Compare this to the average score by genre
End of explanation
"""
#import the function CountVectorizer
from sklearn.feature_extraction.text import CountVectorizer
countvec = CountVectorizer()
#create our document term matrix as a pandas dataframe
dtm_df = pandas.DataFrame(countvec.fit_transform(df.body).toarray(), columns=countvec.get_feature_names(), index = df.index)
"""
Explanation: 3. Dictionary Method using Scikit-learn
We can also do this using the document term matrix. We'll again do this in pandas, to make it conceptually clear. As you get more comfortable with programming you may want to eventually shift over to working with sparse matrix format.
End of explanation
"""
#create a columns variable that is a list of all column names
columns = list(dtm_df)
pos_columns = [word for word in columns if word in positive_words]
#create a dtm from our dtm_df that keeps only positive sentiment columns
dtm_pos = dtm_df[pos_columns]
#count the number of positive words for each document
dtm_pos['pos_count'] = dtm_pos.sum(axis=1)
#dtm_pos.drop('pos_count',axis=1, inplace=True)
dtm_pos['pos_count']
"""
Explanation: Now we can keep only those columns that occur in our positive words list. To do this, we'll first save a list of the columns names as a variable, and then only keep the elements of the list that occur in our positive words list. We'll then create a new dataframe keeping only those select columns.
End of explanation
"""
#EX: Do the same for negative words.
neg_columns = [word for word in columns if word in negative_words]
dtm_neg = dtm_df[neg_columns]
dtm_neg['neg_count'] = dtm_neg.sum(axis=1)
dtm_neg['neg_count']
#EX: Calculate the proportion of negative and positive words for each document.
dtm_pos['pos_proportion'] = dtm_pos['pos_count']/dtm_df.sum(axis=1)
print(dtm_pos['pos_proportion'])
print()
dtm_neg['neg_proportion'] = dtm_neg['neg_count']/dtm_df.sum(axis=1)
print(dtm_neg['neg_proportion'])
"""
Explanation: EX: Do the same for negative words.
EX: Calculate the proportion of negative and positive words for each document.
End of explanation
"""
|
vaxherra/vaxherra.github.io | _files/bacterial_names/RNNs_KERAS.ipynb | mit | import keras
from keras.layers import Concatenate,Dense,Embedding
rnn_num_units = 64
embedding_size = 16
#Let's create layers for our recurrent network
#Note: we create layers but we don't "apply" them yet
embed_x = Embedding(n_tokens,embedding_size) # an embedding layer that converts character ids into embeddings
#a dense layer that maps input and previous state to new hidden state, [x_t,h_t]->h_t+1
get_h_next = Dense(rnn_num_units, activation="tanh")
#a dense layer that maps current hidden state to probabilities of characters [h_t+1]->P(x_t+1|h_t+1)
get_probas = Dense(n_tokens, activation="softmax")
def rnn_one_step(x_t, h_t):
"""
Recurrent neural network step that produces next state and output
given prev input and previous state.
We'll call this method repeatedly to produce the whole sequence.
"""
#convert character id into embedding
x_t_emb = embed_x(tf.reshape(x_t,[-1,1]))[:,0]
#print(tf.shape(x_t_emb)) #Tensor("Shape_16:0", shape=(2,), dtype=int32)
#print(tf.shape(h_t)) #Tensor("Shape_16:0", shape=(2,), dtype=int32)
#concatenate x embedding and previous h state
#x_and_h = Concatenate()([x_t_emb, h_t])###YOUR CODE HERE <keras.layers.merge.Concatenate object at 0x7f87e5bfc6a0>
x_and_h = tf.concat([x_t_emb, h_t], 1)
#compute next state given x_and_h
h_next = get_h_next(x_and_h)
#get probabilities for language model P(x_next|h_next)
output_probas = get_probas(h_next)
return output_probas,h_next
input_sequence = tf.placeholder('int32',(MAX_LENGTH,None))
batch_size = tf.shape(input_sequence)[1]
predicted_probas = []
h_prev = tf.zeros([batch_size,rnn_num_units]) #initial hidden state
for t in range(MAX_LENGTH): #for every time-step 't' ( each character)
x_t = input_sequence[t]
probas_next,h_next = rnn_one_step(x_t,h_prev)
h_prev = h_next
predicted_probas.append(probas_next)
predicted_probas = tf.stack(predicted_probas)
predictions_matrix = tf.reshape(predicted_probas[:-1],[-1,len(tokens)])
answers_matrix = tf.one_hot(tf.reshape(input_sequence[1:],[-1]), n_tokens)
from keras.objectives import categorical_crossentropy
loss = tf.reduce_mean(categorical_crossentropy(answers_matrix, predictions_matrix))
optimize = tf.train.AdamOptimizer().minimize(loss)
from IPython.display import clear_output
from random import sample
s = keras.backend.get_session()
s.run(tf.global_variables_initializer())
history = []
for i in range(5000):
batch = to_matrix(sample(names,32),max_len=MAX_LENGTH)
loss_i,_ = s.run([loss,optimize],{input_sequence:batch})
history.append(loss_i)
if (i+1)%100==0:
clear_output(True)
plt.plot(history,label='loss')
plt.legend()
plt.show()
assert np.mean(history[:10]) > np.mean(history[-10:]), "RNN didn't converge."
"""
Explanation: II. RNN
<img src="rnnAML.png" width=900>
End of explanation
"""
x_t = tf.placeholder('int32',(None,))
h_t = tf.Variable(np.zeros([1,rnn_num_units],'float32'))
next_probs,next_h = rnn_one_step(x_t,h_t)
def generate_sample(seed_phrase=None,max_length=MAX_LENGTH):
'''
The function generates text given a phrase of length at least SEQ_LENGTH.
parameters:
The phrase is set using the variable seed_phrase
The optional input "N" is used to set the number of characters of text to predict.
'''
if seed_phrase==None:
seed_phrase=' '
else:
seed_phrase=' ' + str(seed_phrase).strip().lower()
x_sequence = [token_to_id[token] for token in seed_phrase]
s.run(tf.assign(h_t,h_t.initial_value))
#feed the seed phrase, if any
for ix in x_sequence[:-1]:
s.run(tf.assign(h_t,next_h),{x_t:[ix]})
#start generating
for _ in range(max_length-len(seed_phrase)):
x_probs,_ = s.run([next_probs,tf.assign(h_t,next_h)],{x_t:[x_sequence[-1]]})
x_sequence.append(np.random.choice(n_tokens,p=x_probs[0]))
return ''.join([tokens[ix] for ix in x_sequence])
for i in range(3):
print(str(i+1) + ". " + generate_sample())
for i in range(5):
print(str(i+1) + ". " + generate_sample())
for i in range(3):
print(str(i+1) + ". " + generate_sample("trump"))
for i in range(5):
print(str(i+1) + ". " + generate_sample("trump"))
for i in range(10):
print(str(i+1) + ". " + generate_sample("Kwapich"))
"""
Explanation: III. Sampling
End of explanation
"""
|
yl565/statsmodels | examples/notebooks/formulas.ipynb | bsd-3-clause | from __future__ import print_function
import numpy as np
import statsmodels.api as sm
"""
Explanation: Formulas: Fitting models using R-style formulas
Since version 0.5.0, statsmodels allows users to fit statistical models using R-style formulas. Internally, statsmodels uses the patsy package to convert formulas and data to the matrices that are used in model fitting. The formula framework is quite powerful; this tutorial only scratches the surface. A full description of the formula language can be found in the patsy docs:
Patsy formula language description
Loading modules and functions
End of explanation
"""
from statsmodels.formula.api import ols
"""
Explanation: Import convention
You can import explicitly from statsmodels.formula.api
End of explanation
"""
sm.formula.ols
"""
Explanation: Alternatively, you can just use the formula namespace of the main statsmodels.api.
End of explanation
"""
import statsmodels.formula.api as smf
"""
Explanation: Or you can use the following convention
End of explanation
"""
sm.OLS.from_formula
"""
Explanation: These names are just a convenient way to get access to each model's from_formula classmethod. See, for instance
End of explanation
"""
dta = sm.datasets.get_rdataset("Guerry", "HistData", cache=True)
df = dta.data[['Lottery', 'Literacy', 'Wealth', 'Region']].dropna()
df.head()
"""
Explanation: All of the lower case models accept formula and data arguments, whereas upper case ones take endog and exog design matrices. formula accepts a string which describes the model in terms of a patsy formula. data takes a pandas data frame or any other data structure that defines a __getitem__ for variable names like a structured array or a dictionary of variables.
dir(sm.formula) will print a list of available models.
Formula-compatible models have the following generic call signature: (formula, data, subset=None, *args, **kwargs)
OLS regression using formulas
To begin, we fit the linear model described on the Getting Started page. Download the data, subset columns, and list-wise delete to remove missing observations:
End of explanation
"""
mod = ols(formula='Lottery ~ Literacy + Wealth + Region', data=df)
res = mod.fit()
print(res.summary())
"""
Explanation: Fit the model:
End of explanation
"""
res = ols(formula='Lottery ~ Literacy + Wealth + C(Region)', data=df).fit()
print(res.params)
"""
Explanation: Categorical variables
Looking at the summary printed above, notice that patsy determined that elements of Region were text strings, so it treated Region as a categorical variable. patsy's default is also to include an intercept, so we automatically dropped one of the Region categories.
If Region had been an integer variable that we wanted to treat explicitly as categorical, we could have done so by using the C() operator:
End of explanation
"""
res = ols(formula='Lottery ~ Literacy + Wealth + C(Region) -1 ', data=df).fit()
print(res.params)
"""
Explanation: Patsy's more advanced features for categorical variables are discussed in: Patsy: Contrast Coding Systems for categorical variables
Operators
We have already seen that "~" separates the left-hand side of the model from the right-hand side, and that "+" adds new columns to the design matrix.
Removing variables
The "-" sign can be used to remove columns/variables. For instance, we can remove the intercept from a model by:
End of explanation
"""
res1 = ols(formula='Lottery ~ Literacy : Wealth - 1', data=df).fit()
res2 = ols(formula='Lottery ~ Literacy * Wealth - 1', data=df).fit()
print(res1.params, '\n')
print(res2.params)
"""
Explanation: Multiplicative interactions
":" adds a new column to the design matrix with the interaction of the other two columns. "*" will also include the individual columns that were multiplied together:
End of explanation
"""
res = smf.ols(formula='Lottery ~ np.log(Literacy)', data=df).fit()
print(res.params)
"""
Explanation: Many other things are possible with operators. Please consult the patsy docs to learn more.
Functions
You can apply vectorized functions to the variables in your model:
End of explanation
"""
def log_plus_1(x):
return np.log(x) + 1.
res = smf.ols(formula='Lottery ~ log_plus_1(Literacy)', data=df).fit()
print(res.params)
"""
Explanation: Define a custom function:
End of explanation
"""
import patsy
f = 'Lottery ~ Literacy * Wealth'
y,X = patsy.dmatrices(f, df, return_type='matrix')
print(y[:5])
print(X[:5])
"""
Explanation: Any function that is in the calling namespace is available to the formula.
Using formulas with models that do not (yet) support them
Even if a given statsmodels function does not support formulas, you can still use patsy's formula language to produce design matrices. Those matrices
can then be fed to the fitting function as endog and exog arguments.
To generate numpy arrays:
End of explanation
"""
f = 'Lottery ~ Literacy * Wealth'
y,X = patsy.dmatrices(f, df, return_type='dataframe')
print(y[:5])
print(X[:5])
print(sm.OLS(y, X).fit().summary())
"""
Explanation: To generate pandas data frames:
End of explanation
"""
|
robblack007/clase-metodos-numericos | Practicas/P5/Practica 5 - Interpolacion.ipynb | mit | from matplotlib.pyplot import plot
"""
Explanation: Graficación
Antes que nada, tenemos que aprender a graficar en Python, lo manera mas fácil de graficar es usando la función plot de la libería matplotlib, asi que importamos esta función:
End of explanation
"""
plot([0,1], [2,3])
"""
Explanation: y la usamos como cualquier función, dandole dos listas, una con todos los valores de $x$, y otra con todos los valores de $y$:
End of explanation
"""
%matplotlib inline
"""
Explanation: However, nothing shows up; for that we need to tell the matplotlib library to display the plots inline with our notebook, so we use:
End of explanation
"""
plot([0,1],[2,3])
"""
Explanation: and if we plot now:
End of explanation
"""
from numpy import sin
from numpy import linspace
"""
Explanation: In the same way, we can plot any function, whether created or imported. Let's start with:
$$
y=\sin{x}
$$
First of all, let's import this function from the numpy library, along with the linspace function:
End of explanation
"""
xs = linspace(0, 10, 100)
xs
"""
Explanation: The linspace function will help us create a linearly spaced array of data that defines the $x$ axis:
End of explanation
"""
ys = sin(xs)
ys
"""
Explanation: Now we just have to feed this data to the $\sin$ function:
End of explanation
"""
plot(xs, ys)
"""
Explanation: With that we see we have our data, but the main thing is to plot it, so we pass these two arrays to plot and obtain:
End of explanation
"""
plot(xs, ys, "o")
"""
Explanation: One option I can use to format my plot is "o", which shows the data as points instead of a line connecting them:
End of explanation
"""
datos_x = [0, 1, 3, 6]
datos_y = [-3, 0, 5, 7]
"""
Explanation: and after this brief introduction to plotting in Python, let's start with our problem:
What we want is a function that passes exactly through the following points:
$i$ | 0|1|2|3
---------|--|-|-|-
$f(x_i)$ |-3|0|5|7
$x_i$ | 0|1|3|6
The first step will be to store this data in Python variables, specifically lists:
End of explanation
"""
L0 = lambda x: ((x - datos_x[1])*(x - datos_x[2])*(x - datos_x[3]))/((datos_x[0] - datos_x[1])*(datos_x[0] - datos_x[2])*(datos_x[0] - datos_x[3]))
L0(5)
"""
Explanation: Using the Lagrange interpolation formula for our four data points, we have:
$$
p(x) = L_0(x)f(x_0) + L_1(x)f(x_1) + L_2(x)f(x_2) + L_3(x)f(x_3)
$$
where each of the Lagrange polynomials is computed with the formula:
$$
L_i(x) = \prod_{j=0, j\ne i}^n \frac{x-x_j}{x_i-x_j}
$$
in a single line, this looks like:
End of explanation
"""
L01 = lambda x: (x - datos_x[1])/(datos_x[0] - datos_x[1])
L02 = lambda x: (x - datos_x[2])/(datos_x[0] - datos_x[2])
L03 = lambda x: (x - datos_x[3])/(datos_x[0] - datos_x[3])
L0 = lambda x: L01(x)*L02(x)*L03(x)
L01(5)
L02(5)
L03(5)
L0(5)
"""
Explanation: However, this solution only computes one of the Lagrange polynomials and only works for the case of $4$ data points; the first step toward a complete solution is to split these polynomials into each of their factors:
End of explanation
"""
L0
L01, L02, L03
"""
Explanation: So we see this method is equivalent; we just have to create an iterative procedure to find these polynomials.
I want to point out that the objects we created are functions, just like their factors:
End of explanation
"""
dato_x0 = datos_x[0]
datos_x0 = datos_x[1:]
dato_x0
datos_x0
"""
Explanation: Also note that, to create these functions, I used a subset of the data that excluded the first element; I will create a list that excludes exactly that first element:
End of explanation
"""
L0s = []
for i in range(len(datos_x0)):
L0s.append(lambda x, i=i: (x - datos_x0[i])/(dato_x0 - datos_x0[i]))
"""
Explanation: and with a for loop, I will append functions to a list, which will hold each of the factors of the first Lagrange polynomial:
End of explanation
"""
L0s
"""
Explanation: Note that L0s is made up of functions, exactly like L01, L02 and L03:
End of explanation
"""
from functools import reduce
L0 = reduce(lambda x, y: lambda z :x(z)*y(z), L0s)
L0(5)
"""
Explanation: Now I just have to multiply these functions to obtain L0. I will use a function called reduce, which evaluates them pairwise according to the rule I give it (multiplication):
End of explanation
"""
dato_x1 = datos_x[1]
datos_x1 = datos_x[:1] + datos_x[2:]
L1s = []
for i in range(len(datos_x1)):
L1s.append(lambda x, i=i: (x - datos_x1[i])/(dato_x1 - datos_x1[i]))
L1 = reduce(lambda x, y: lambda z :x(z)*y(z), L1s)
L1(5)
"""
Explanation: Once I have L0, I can obtain the remaining polynomials:
End of explanation
"""
Ls = []
for j in range(len(datos_x)):
dato_xi = datos_x[j]
datos_xi = datos_x[:j] + datos_x[j+1:]
Lis = []
for i in range(len(datos_xi)):
Lis.append(lambda x, i=i, dato_xi=dato_xi, datos_xi=datos_xi: (x - datos_xi[i])/(dato_xi - datos_xi[i]))
Li = reduce(lambda x, y: lambda z: x(z)*y(z), Lis)
Ls.append(Li)
Ls
"""
Explanation: But this would force me to write code as many times as I have data points, which is not acceptable; just as I put the factors into a list, I will append these Lagrange polynomials to a list:
End of explanation
"""
Lf0 = lambda x: Ls[0](x)*datos_y[0]
Lf0(5)
"""
Explanation: With that we have the functions associated with each of the Lagrange polynomials; these, in turn, have to be multiplied by the datos_y values, so for the first polynomial we create:
End of explanation
"""
Lfs = []
for j in range(len(datos_y)):
Lfi = lambda x, j=j: Ls[j](x)*datos_y[j]
Lfs.append(Lfi)
"""
Explanation: Or, for all of them in one go:
End of explanation
"""
interp = reduce(lambda x, y: lambda z: x(z)+y(z), Lfs)
"""
Explanation: Finally, all these terms stored in Lfs can be added together using reduce again, but now with the addition rule:
$$
p(x) = \sum_{i=0}^n L_i(x) f(x_i)
$$
End of explanation
"""
interp(5)
"""
Explanation: And when evaluated at a real number, it behaves as we expect:
End of explanation
"""
xs = linspace(0, 10, 100)
ys = interp(xs)
plot(xs, ys)
"""
Explanation: If we now create an array of data from $0$ to $10$, just to make sure we see all the data:
End of explanation
"""
plot(xs, ys)
plot(datos_x, datos_y, "o")
"""
Explanation: And plotting this function together with the original data is easy:
End of explanation
"""
def interpolacion_Lagrange(datos_x, datos_y):
Ls = []
for j in range(len(datos_x)):
dato_xi = datos_x[j]
datos_xi = datos_x[:j] + datos_x[j+1:]
Lis = []
for i in range(len(datos_xi)):
Lis.append(lambda x, i=i, dato_xi=dato_xi, datos_xi=datos_xi: (x - datos_xi[i])/(dato_xi - datos_xi[i]))
Li = reduce(lambda x, y: lambda z: x(z)*y(z), Lis)
Ls.append(Li)
Lfs = []
for j in range(len(datos_y)):
Lfi = lambda x, j=j: Ls[j](x)*datos_y[j]
Lfs.append(Lfi)
interp = reduce(lambda x, y: lambda z: x(z)+y(z), Lfs)
return interp
dx = [0, 1, 3, 6]
dy = [-3, 0, 5, 7]
poli = interpolacion_Lagrange(dx, dy)
xs = linspace(0, 6, 100)
ys = poli(xs)
plot(xs, ys)
plot(dx, dy, "o")
"""
Explanation: In this way, we can create a function that does all the work for us:
End of explanation
"""
|
google/compass | packages/propensity/01.eda_ga.ipynb | apache-2.0 | # Uncomment to install required python modules
# !sh ../utils/setup.sh
# Add custom utils module to Python environment
import os
import sys
sys.path.append(os.path.abspath(os.pardir))
import pandas as pd
from gps_building_blocks.cloud.utils import bigquery as bigquery_utils
from utils import eda_ga
from utils import helpers
"""
Explanation: 1. Exploratory Data Analysis (EDA) for Propensity Modeling
This notebook helps to:
check feasibility of building a propensity model;
inspect dataset fields in order to identify relevant information for features and targets (labels);
perform initial exploratory data analysis to identify insights that help with building a propensity model.
Google Merchandise Store GA360 dataset is used as an example.
Requirements
Google Analytics dataset stored in BigQuery.
Install and import required modules
End of explanation
"""
# Prints all the outputs from cell (instead of using display each time)
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = 'all'
"""
Explanation: Notebook custom settings
End of explanation
"""
configs = helpers.get_configs('config.yaml')
source_configs, dest_configs = configs.source, configs.destination
# GCP project ID where queries and other computation will be run.
PROJECT_ID = dest_configs.project_id
# BigQuery dataset name to store query results (if needed).
DATASET_NAME = dest_configs.dataset_name
# To specify how many rows to display when examining dataframes
N_ROWS = 5
params = {
'project': PROJECT_ID,
'dataset_path': f'{source_configs.project_id}.{source_configs.dataset_name}',
'verbose': True
}
"""
Explanation: Configuration
Edit config.yaml to update GCP configuration that is used across the notebook.
Set parameters
End of explanation
"""
bq_utils = bigquery_utils.BigQueryUtils(project_id=PROJECT_ID)
eda = eda_ga.Analysis(bq_utils=bq_utils, params=params)
"""
Explanation: First, we initialize Analysis with config parameters.
End of explanation
"""
schema_html = 'https://support.google.com/analytics/answer/3437719?hl=en#'
df_schema = pd.read_html(schema_html)[0]
df_schema
"""
Explanation: 1. Define the business and ML problem
Before proceeding into EDA for Propensity Modeling, define the business problem and questions that need to be addressed by the Propensity Model. Following are some high-level questions to answer before doing EDA:
* What is the business problem you are trying to solve?
* What are the success criteria of the project?
* What target do you want to predict?
* What are the essential fields to consider as the potential features?
2. Extract dataset schema and field descriptions
Following is an example of the GA360 dataset schema and field descriptions (see the help page loaded above for more details), read into a Pandas DataFrame for reference:
End of explanation
"""
table_options, description = eda.get_ds_description()
"""
Explanation: 3. Understand Dataset Structure
This section helps to answer the following questions:
Is the dataset description available, and what does it say?
How long does the dataset stretch for, i.e., what is the entire period, and how many daily tables does it have?
How big are the daily tables?
Are there any missing days?
If the data is stored in BigQuery, then its schema can be extracted via INFORMATION_SCHEMA.
End of explanation
"""
tables = eda.get_tables_stats()
"""
Explanation: Check daily tables
End of explanation
"""
# First set of tables.
tables[:N_ROWS]
# Last set of tables.
tables[-N_ROWS:]
"""
Explanation: Inspect sizes of the tables
End of explanation
"""
# Filter tables to analyse permanent `daily sessions` only
mask_not_intraday = (~tables['is_intraday'])
mask_sessions = (tables['table_id'].str.startswith('ga_sessions_'))
tables_permanent = tables[mask_sessions & mask_not_intraday].sort_values(
'table_id', ascending=True)
helpers.generate_date_range_stats(tables_permanent['last_suffix'])
"""
Explanation: Check if there are missing tables
End of explanation
"""
|
harrywang/pgm | course-s2016/bn-student.ipynb | mit | from pgmpy.models import BayesianModel
student_model = BayesianModel()
"""
Explanation: This is the program for a student Bayesian network
End of explanation
"""
student_model.add_nodes_from(['difficulty', 'intelligence', 'grade', 'sat', 'letter'])
student_model.nodes()
student_model.add_edges_from([('difficulty', 'grade'), ('intelligence', 'grade'), ('intelligence', 'sat'), ('grade', 'letter')])
student_model.edges()
"""
Explanation: Add nodes and edges
End of explanation
"""
from pgmpy.factors import TabularCPD
#TabularCPD?
cpd_difficulty = TabularCPD('difficulty', 2, [[0.6], [0.4]])
cpd_intelligence = TabularCPD('intelligence', 2, [[0.7], [0.3]])
cpd_sat = TabularCPD('sat', 2, [[0.95, 0.2],
[0.05, 0.8]], evidence=['intelligence'], evidence_card=[2])
cpd_grade = TabularCPD('grade', 3, [[0.3, 0.05, 0.9, 0.5],
[0.4, 0.25, 0.08, 0.3],
[0.3, 0.7, 0.02, 0.2]],
evidence=['intelligence', 'difficulty'], evidence_card=[2, 2])
cpd_letter = TabularCPD('letter', 2, [[0.1, 0.4, 0.99], [0.9, 0.6, 0.01]], evidence=['grade'], evidence_card=[3])
student_model.add_cpds(cpd_difficulty, cpd_intelligence, cpd_sat, cpd_grade, cpd_letter)
student_model.get_cpds()
print(cpd_difficulty) # 0:easy, 1:hard
print(cpd_intelligence) # 0:low, 1:high
print(cpd_grade) # 0:A, 1:B, 2:C
print(cpd_sat) # 0:low, 1:high
print(cpd_letter) # 0:weak, 1:strong
"""
Explanation: In a Bayesian network, each node has an associated CPD (conditional probability distribution).
End of explanation
"""
student_model.check_model()
student_model.get_independencies()
"""
Explanation: To check the consistency of the model and associated CPDs
End of explanation
"""
student_model.is_active_trail('difficulty', 'intelligence')
student_model.is_active_trail('difficulty', 'intelligence',
observed='grade')
"""
Explanation: If influence can flow along a trail in a network, the trail is known as an active trail
End of explanation
"""
from pgmpy.inference import VariableElimination
student_infer = VariableElimination(student_model)
# marginal probabilities of grade and letter
probs = student_infer.query(['grade', 'letter'])
print(probs['grade'])
print(probs['letter'])
"""
Explanation: You can query the network as follows: query(variables, evidence=None, elimination_order=None)
variables: list :
list of variables for which you want to compute the probability
evidence: dict :
a dict key, value pair as {var: state_of_var_observed} None if no evidence
elimination_order: list :
order of variable eliminations (if nothing is provided) order is computed automatically
End of explanation
"""
# probs of grades given knowing nothing about course difficulty and intelligence
print(probs['grade'])
# probs of grades knowing course is hard
prob_grade_hard = student_infer.query(['grade'], {'difficulty':1})
print(prob_grade_hard['grade'])
# probs of getting an A knowing course is easy, and intelligence is low
prob_grade_easy_smart = student_infer.query(['grade'], {'difficulty':0, 'intelligence':1})
print(prob_grade_easy_smart['grade'])
"""
Explanation: Direct Causal Influence
End of explanation
"""
# probs of letter knowing nothing
print(probs['letter'])
# probs of letter knowing course is difficult
prob_letter_hard = student_infer.query(['letter'], {'difficulty':1})
print(prob_letter_hard['letter'])
"""
Explanation: Indirect Causal Influence
End of explanation
"""
|
tensorflow/workshops | extras/archive/00_test_install.ipynb | apache-2.0 | import tensorflow as tf
print("You have version %s" % tf.__version__)
"""
Explanation: You can press shift + enter to quickly advance through each line of a notebook. Try it!
Check that you have a recent version of TensorFlow installed, v1.3 or higher.
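A programmatic guard for the version requirement can be written with a small helper (a sketch; it only handles plain numeric version strings like '1.4.0'):

```python
def version_tuple(v):
    """'1.4.0' -> (1, 4, 0); good enough for plain numeric versions."""
    return tuple(int(part) for part in v.split('.'))

def is_recent_enough(version, minimum='1.3'):
    # tuple comparison gives numeric, not lexicographic, ordering
    return version_tuple(version) >= version_tuple(minimum)

print(is_recent_enough('1.4.0'))  # True
print(is_recent_enough('1.2.1'))  # False
```

You could then guard the notebook with `assert is_recent_enough(tf.__version__)` right after the import.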
End of explanation
"""
%matplotlib inline
import pylab
import numpy as np
# create some data using numpy. y = x * 0.1 + 0.3 + noise
x = np.random.rand(100).astype(np.float32)
noise = np.random.normal(scale=0.01, size=len(x))
y = x * 0.1 + 0.3 + noise
# plot it
pylab.plot(x, y, '.')
"""
Explanation: Check if Matplotlib is working. After running this cell, you should see a plot appear below.
End of explanation
"""
import PIL.Image as Image
import numpy as np
from matplotlib.pyplot import imshow
image_array = np.random.rand(200,200,3) * 255
img = Image.fromarray(image_array.astype('uint8')).convert('RGBA')
imshow(np.asarray(img))
"""
Explanation: Check if Numpy and Pillow are working. After running this cell, you should see a random image appear below.
End of explanation
"""
import pandas as pd
names = ['Bob','Jessica','Mary','John','Mel']
births = [968, 155, 77, 578, 973]
BabyDataSet = list(zip(names,births))
pd.DataFrame(data = BabyDataSet, columns=['Names', 'Births'])
"""
Explanation: Check if Pandas is working. After running this cell, you should see a table appear below.
End of explanation
"""
|
fluentpython/pythonic-api | pythonic-api-notebook.ipynb | mit | s = 'Fluent'
L = [10, 20, 30, 40, 50]
print(list(s)) # list constructor iterates over its argument
a, b, *middle, c = L # tuple unpacking iterates over right side
print((a, b, c))
for i in L:
print(i, end=' ')
"""
Explanation: Pythonic APIs: the workshop notebook
Tutorial overview
Introduction
A simple but full-featured Pythonic class
Exercise: custom formatting and alternate constructor
A Pythonic sequence
Exercise: implementing sequence behavior
Coffee break
A Pythonic sequence (continued)
Exercise: custom formatting
Operator overloading
Exercise: implement @ for dot product
Wrap-up
What is Pythonic?
Pythonic code is concise and expressive. It leverages Python features and idioms to accomplish maximum effect with minimum effort, without being unreadable. It uses the language as it's designed to be used, so it is most readable to the fluent Pythonista.
Real example 1: the requests API
requests is a pleasant HTTP client library. It's great, but it would be awesome if it were asynchronous (could it be pleasant and asynchronous at the same time?). The examples below are from Kenneth Reitz, the author of requests (source).
Pythonic, using requests
```python
import requests
r = requests.get('https://api.github.com', auth=('user', 'pass'))
print r.status_code
print r.headers['content-type']
------
200
'application/json'
```
Unpythonic, using urllib2
```python
import urllib2
gh_url = 'https://api.github.com'
req = urllib2.Request(gh_url)
password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_manager.add_password(None, gh_url, 'user', 'pass')
auth_manager = urllib2.HTTPBasicAuthHandler(password_manager)
opener = urllib2.build_opener(auth_manager)
urllib2.install_opener(opener)
handler = urllib2.urlopen(req)
print handler.getcode()
print handler.headers.getheader('content-type')
------
200
'application/json'
```
Real example 2: classes are optional in py.test and nosetests
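A complete py.test test can be a plain module-level function — no class required (a minimal sketch, not taken from either project's docs):

```python
# contents of a hypothetical test_sample.py -- py.test collects
# any module-level function named test_*; no TestCase subclass needed.
def inc(x):
    return x + 1

def test_inc():
    assert inc(3) == 4  # run with: pytest test_sample.py
```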
Features of idiomatic Python APIs
Let the user apply previous knowledge of the standard types and operations
Make it easy to leverage existing libraries
Come with “batteries included”
Use duck typing for enhanced interoperation with user-defined types
Provide ready to use objects (no instantiation needed)
Don't require subclassing for basic usage
Leverage standard language objects: containers, functions, classes, modules
Make proper use of the Data Model (i.e. special methods)
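Most of the points above reduce to implementing the right special methods. A small sketch (a made-up card-deck class in the spirit of this workshop's sequence example, not necessarily its exact code): by defining only `__len__` and `__getitem__`, the class gets `len()`, indexing, slicing and iteration for free.

```python
import collections

Card = collections.namedtuple('Card', ['rank', 'suit'])

class Deck:
    ranks = [str(n) for n in range(2, 11)] + list('JQKA')
    suits = 'spades diamonds clubs hearts'.split()

    def __init__(self):
        self._cards = [Card(rank, suit) for suit in self.suits
                       for rank in self.ranks]

    def __len__(self):
        return len(self._cards)

    def __getitem__(self, position):
        return self._cards[position]

deck = Deck()
print(len(deck))   # 52
print(deck[:2])    # slicing works because __getitem__ delegates to a list
for card in deck:  # iteration falls back to __getitem__
    pass
```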
Introduction
One of the keys to consistent, Pythonic, behavior in Python is understanding and leveraging the Data Model. The Python Data Model defines standard APIs which enable...
Iteration
End of explanation
"""
len(s), len(L)
s.__len__(), L.__len__()
"""
Explanation: Sizing with len()
End of explanation
"""
a = 2
b = 3
a * b, a.__mul__(b)
L = [1, 2, 3]
L.append(L)
L
"""
Explanation: Arithmetic
End of explanation
"""
x = 2**.5
x
format(x, '.3f')
from datetime import datetime
agora = datetime.now()
print(agora)
print(format(agora, '%H:%M'))
'{1:%H}... {0:.3f}!'.format(x, agora)
"""
Explanation: A simple but full-featured Pythonic class
String formatting mini-language
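User-defined classes can plug into this mini-language by implementing `__format__` (a sketch with a made-up `Coordinate` class — it is not part of the workshop code):

```python
class Coordinate:
    def __init__(self, lat, long):
        self.lat = lat
        self.long = long

    def __format__(self, fmt_spec=''):
        # reuse float formatting for the numeric part of the spec
        fmt = fmt_spec or '.1f'
        ns = 'N' if self.lat >= 0 else 'S'
        ew = 'E' if self.long >= 0 else 'W'
        return '{}{}, {}{}'.format(format(abs(self.lat), fmt), ns,
                                   format(abs(self.long), fmt), ew)

cle = Coordinate(41.5, -81.7)
print(format(cle))           # 41.5N, 81.7W
print('{:.3f}'.format(cle))  # 41.500N, 81.700W
```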
End of explanation
"""
|
vipmunot/Data-Science-Course | Data Visualization/Lab 5/w05_Vipul_Munot.ipynb | mit | import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
sns.set_style('white')
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
"""
Explanation: W5 Lab Assignment
This lab covers some fundamental plots of 1-D data.
End of explanation
"""
print( np.random.randn(10) )
"""
Explanation: Q1 1-D Scatter Plot
Using fake data
Remember that, if you want to play with visualization tools, you can use not only the real data, but also fake data. Actually it is a nice way to experiment because you can control every aspect of data. Let's create some random numbers.
The function np.random.randn() generates a sample with size $N$ from the standard normal distribution.
End of explanation
"""
def generate_many_numbers(N=10, mean=5, sigma=3):
return mean + sigma * np.random.randn(N)
"""
Explanation: The following small function generates $N$ normally distributed numbers:
End of explanation
"""
data = generate_many_numbers(N=10)
print(data)
"""
Explanation: Generate 10 normally distributed numbers with mean 5 and sigma 3:
End of explanation
"""
x = np.arange(1,11)
y = x + 5
print(x)
print(y)
plt.scatter(x, y)
"""
Explanation: The most immediate method to visualize 1-D data is just plotting it. Here we can use the scatter() function to draw a scatter plot. The most basic usage of this function is to provide x and y.
End of explanation
"""
print(np.zeros_like(data))
"""
Explanation: But here we only have x (the generated data). We can set the y values to 0. The np.zeros_like(data) function creates a numpy array (list) that have the same dimension as the argument.
End of explanation
"""
plt.figure(figsize=(10,1)) # set figure size, width = 10, height = 1
plt.scatter(data, np.zeros_like(data), s=50) # set size of symbols to 50. Change it and see what happens.
plt.gca().axes.get_yaxis().set_visible(False) # set y axis invisible
"""
Explanation: Now let's plot the generated 1-D data.
End of explanation
"""
# TODO: generate 100 numbers and plot them in the same way.
data = generate_many_numbers(N=100)
plt.figure(figsize=(10,1))
plt.scatter(data, np.zeros_like(data), s = 50)
plt.gca().axes.get_yaxis().set_visible(False)
"""
Explanation: Ok, I think we can see all data points. But what if we have more numbers?
End of explanation
"""
data = generate_many_numbers(N=100)
# TODO: create a list of 100 random numbers using np.random.rand()
# zittered_ypos = ??
zittered_ypos = np.random.rand(100)
plt.figure(figsize=(10,1))
plt.scatter(data, zittered_ypos, s=50)
plt.gca().axes.get_yaxis().set_visible(False)
"""
Explanation: Of course we can't see much at the center. We can add "jitters" using the np.random.rand() function.
End of explanation
"""
data = generate_many_numbers(N=200)
# From the last question
# zittered_ypos = ??
# TODO: implement this
# plt.figure(figsize=(10,1))
# plt.scatter( ?? )
# plt.gca().axes.get_yaxis().set_visible(False)
# TODO: implement this
zittered_ypos = np.random.rand(200)
plt.figure(figsize=(10,1))
plt.scatter(data, zittered_ypos, s = 50, alpha = 0.35)
plt.gca().axes.get_yaxis().set_visible(False)
"""
Explanation: Let's also make the symbol transparent. Here is a useful Google query, and the documentation of scatter() also helps.
End of explanation
"""
# TODO: implement this
# data = ??
# zittered_ypos = ??
# TODO: implement this
# plt.figure(figsize=(10,1))
# plt.scatter( ?? )
# plt.gca().axes.get_yaxis().set_visible(False)
data = generate_many_numbers(N=1000)
zittered_ypos = np.random.rand(1000)
plt.figure(figsize=(10,1))
plt.scatter(data, zittered_ypos, s = 50, c = 'white', edgecolors='r')
plt.gca().axes.get_yaxis().set_visible(False)
"""
Explanation: We can use transparency as well as empty symbols.
Increase the number of points to 1,000
Set the symbol empty and edgecolor red (a useful query)
End of explanation
"""
movie_df = pd.read_csv('imdb.csv', delimiter='\t')
movie_df.head()
"""
Explanation: Lots and lots of points
Let's use real data. Load the IMDb dataset that we used before.
End of explanation
"""
# TODO: plot 'rating'
rating = movie_df['Rating'].values
plt.figure(figsize=(10,1))
plt.scatter(rating, np.zeros_like(rating), s = 50)
plt.gca().axes.get_yaxis().set_visible(False)
"""
Explanation: Try to plot the 'Rating' information using 1D scatter plot. Does it work?
End of explanation
"""
movie_df['Rating'].hist()
"""
Explanation: Q2 Histogram
There are too many data points! Let's try histogram. Actually pandas supports plotting through matplotlib and you can directly visualize dataframes and series.
End of explanation
"""
# TODO: try different number of bins
movie_df['Rating'].hist(bins = 30)
movie_df['Rating'].hist(bins = 20)
"""
Explanation: Looks good! Can you increase or decrease the number of bins? Find the documentation here.
End of explanation
"""
movie_df['Rating'].plot(kind='box', vert=False)
"""
Explanation: Q3 Boxplot
Now let's try boxplot. We can use pandas' plotting functions. The usages of boxplot is here.
End of explanation
"""
sns.boxplot(movie_df['Rating'])
"""
Explanation: Or try seaborn's boxplot() function:
End of explanation
"""
df = movie_df.sort_values('Year')
df.head()
"""
Explanation: We can also easily draw a series of boxplots grouped by categories. For example, let's do the boxplots of movie ratings for different decades.
End of explanation
"""
print(1874//10)
print(1874//10*10)
decade = (df['Year']//10) * 10
decade.head()
ax = sns.boxplot(x=decade, y=df['Rating'])
ax.figure.set_size_inches(12, 8)
"""
Explanation: One easy way to transform a particular year to the decade (e.g., 1874 -> 1870): divide by 10 and multiply it by 10 again.
In Python 3, the // operator is used for integer division.
End of explanation
"""
# TODO
ax = sns.boxplot(x=decade, y=df['Votes'])
ax.figure.set_size_inches(12, 8)
"""
Explanation: Can you draw boxplots of movie votes for different decade?
End of explanation
"""
log_votes = np.log(df['Votes'])
log_votes.head()
"""
Explanation: What do you see? Can you actually see the "box"? The number of votes span a very wide range, from 1 to more than 1.4 million. One way to deal with this is to make a log-transformation of votes, which can be done with the numpy.log() function.
End of explanation
"""
# TODO
ax = sns.boxplot(x=decade, y = log_votes)
ax.figure.set_size_inches(12, 8)
"""
Explanation: Can you draw boxplots of log-transformed movie votes for different decade?
End of explanation
"""
|
ml6973/Course | assignment/Vaidyanathan.N-Girish/assign-04-girishvat123.ipynb | apache-2.0 | #find out for different iterations to find out the optimal iterations
iter1=10000
iter2=15000
iter3=26000
learningRate = tf.train.exponential_decay(learning_rate=0.0008,
global_step= 1,
decay_steps=trainX.shape[0],
decay_rate= 0.95,
staircase=True)
#Define the placeholder variables
numfeatures=trainX.shape[1]
numlabels=trainY.shape[1]
X=tf.placeholder(tf.float32,shape=[None,numfeatures])
Y=tf.placeholder(tf.float32,shape=[None,numlabels])
#Define the weights and biases
#Define weights and biases as variables since it changes over the iterations
w=tf.Variable(tf.random_normal([numfeatures,numlabels],mean=0,
                               stddev=np.sqrt(6/(numfeatures+
                                                 numlabels+1))))
b=tf.Variable(tf.random_normal([1,numlabels],mean=0,
                               stddev=np.sqrt(6/(numfeatures+
                                                 numlabels+1))))
#Find out the predicted Y value
init=tf.initialize_all_variables()
Y_predicted=tf.nn.sigmoid(tf.add(tf.matmul(X,w),b))
#Define the loss function and optimizer
#We use a squared-error loss function
#tf.nn.l2_loss computes sum(t**2)/2, i.e. half the sum of squared errors (no mean, no square root)
loss=tf.nn.l2_loss(Y_predicted-Y)
optimizer=tf.train.GradientDescentOptimizer(learningRate).minimize(loss)
#Define the session to compute the graph
errors=[]
with tf.Session() as sess:
sess.run(init)
prediction=tf.equal(tf.argmax(Y,1),tf.argmax(Y_predicted,1))
accuracy=tf.reduce_mean(tf.cast(prediction,"float"))
for i in range (iter1):
sess.run(optimizer,feed_dict={X:trainX,Y:trainY})
accuracy_value=accuracy.eval(feed_dict={X:trainX,Y:trainY})
errors.append(1-accuracy_value)
print("The error has been reduced to",errors[-1])
print(sess.run(accuracy,feed_dict={X:trainX,Y:trainY}))
plt.plot([np.mean(errors[i-50:i]) for i in range(len(errors))])
plt.show()
errors=[]
with tf.Session() as sess:
sess.run(init)
prediction=tf.equal(tf.argmax(Y,1),tf.argmax(Y_predicted,1))
accuracy=tf.reduce_mean(tf.cast(prediction,"float"))
for i in range (iter2):
sess.run(optimizer,feed_dict={X:trainX,Y:trainY})
accuracy_value=sess.run(accuracy,feed_dict={X:trainX,Y:trainY})
errors.append(1-accuracy_value)
print("The error has been reduced to",errors[-1])
print(sess.run(accuracy,feed_dict={X:trainX,Y:trainY}))
plt.plot([np.mean(errors[i-50:i]) for i in range(len(errors))])
plt.show()
"""
Explanation: TRYING OUT DIFFERENT ITERATIONS TO FIND THE BEST ONE
End of explanation
"""
errors=[]
with tf.Session() as sess:
sess.run(init)
prediction=tf.equal(tf.argmax(Y,1),tf.argmax(Y_predicted,1))
accuracy=tf.reduce_mean(tf.cast(prediction,"float"))
for i in range (iter3):
sess.run(optimizer,feed_dict={X:trainX,Y:trainY})
accuracy_value=sess.run(accuracy,feed_dict={X:trainX,Y:trainY})
errors.append(1-accuracy_value)
print("The error has been reduced to",errors[-1])
print(sess.run(accuracy,feed_dict={X:trainX,Y:trainY}))
plt.plot([np.mean(errors[i-50:i]) for i in range(len(errors))])
plt.show()
plt.plot([np.mean(errors[i-50:i]) for i in range(len(errors))])
plt.show()
"""
Explanation: FOUND THAT ACCURACY IS BETTER WITH ~26K ITERATIONS
End of explanation
"""
iter4=26000
learningRate = tf.train.exponential_decay(learning_rate=0.0008,
global_step= 1,
decay_steps=trainX.shape[0],
decay_rate= 0.95,
staircase=True)
# define weights/biases for the hidden layer (4 nodes) and the output layer;
# the original cell defining these is not shown, so a simple random/zero
# initialization is used here
n_hidden = 4
w1 = tf.Variable(tf.random_normal([numfeatures, n_hidden], mean=0, stddev=0.1))
b1 = tf.Variable(tf.zeros([1, n_hidden]))
w2 = tf.Variable(tf.random_normal([n_hidden, numlabels], mean=0, stddev=0.1))
b2 = tf.Variable(tf.zeros([1, numlabels]))

init = tf.initialize_all_variables()

h1 = tf.nn.sigmoid(tf.add(tf.matmul(X, w1), b1))
Y_predicted = tf.nn.sigmoid(tf.add(tf.matmul(h1, w2), b2))
#Define the loss function and optimizer
#We use a squared-error loss function
#tf.nn.l2_loss computes sum(t**2)/2, i.e. half the sum of squared errors (no mean, no square root)
loss=tf.nn.l2_loss(Y_predicted-Y)
optimizer=tf.train.GradientDescentOptimizer(learningRate).minimize(loss)
errors=[]
with tf.Session() as sess:
sess.run(init)
prediction=tf.equal(tf.argmax(Y,1),tf.argmax(Y_predicted,1))
accuracy=tf.reduce_mean(tf.cast(prediction,"float"))
for i in range (iter4):
sess.run(optimizer,feed_dict={X:trainX,Y:trainY})
accuracy_value=accuracy.eval(feed_dict={X:trainX,Y:trainY})
errors.append(1-accuracy_value)
print("The error has been reduced to",errors[-1])
print(sess.run(accuracy,feed_dict={X:trainX,Y:trainY}))
"""
Explanation: PART B:
WE ADD A HIDDEN LAYER WITH 4 NODES .
End of explanation
"""
plt.plot([np.mean(errors[i-50:i]) for i in range(len(errors))])
plt.show()
"""
Explanation: WHEN WE ADD A HIDDEN LAYER WITH SAME NUMBER OF ITERATIONS,ACCURACY INCREASES TO 99%
End of explanation
"""
|
amueller/scipy-2017-sklearn | notebooks/06.Supervised_Learning-Regression.ipynb | cc0-1.0 | x = np.linspace(-3, 3, 100)
print(x)
rng = np.random.RandomState(42)
y = np.sin(4 * x) + x + rng.uniform(size=len(x))
plt.plot(x, y, 'o');
"""
Explanation: Supervised Learning Part 2 -- Regression Analysis
In regression we are trying to predict a continuous output variable -- in contrast to the nominal variables we were predicting in the previous classification examples.
Let's start with a simple toy example with one feature dimension (explanatory variable) and one target variable. We will create a dataset out of a sine curve with some noise:
End of explanation
"""
print('Before: ', x.shape)
X = x[:, np.newaxis]
print('After: ', X.shape)
"""
Explanation: Linear Regression
The first model that we will introduce is the so-called simple linear regression. Here, we want to fit a line to the data.
One of the simplest models is a linear one, which simply tries to predict the data as lying on a line. One way to find such a line is LinearRegression (also known as Ordinary Least Squares (OLS) regression).
The interface for LinearRegression is exactly the same as for the classifiers before, only that y now contains float values, instead of classes.
As we remember, the scikit-learn API requires us to provide the target variable (y) as a 1-dimensional array; scikit-learn's API expects the samples (X) in the form of a 2-dimensional array -- even though it may only consist of 1 feature. Thus, let us convert the 1-dimensional x NumPy array into an X array with 2 axes:
End of explanation
"""
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
"""
Explanation: Again, we start by splitting our dataset into a training (75%) and a test set (25%):
End of explanation
"""
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, y_train)
"""
Explanation: Next, we use the learning algorithm implemented in LinearRegression to fit a regression model to the training data:
End of explanation
"""
print('Weight coefficients: ', regressor.coef_)
print('y-axis intercept: ', regressor.intercept_)
"""
Explanation: After fitting to the training data, we obtain a linear regression model parameterized with the following values.
End of explanation
"""
min_pt = X.min() * regressor.coef_[0] + regressor.intercept_
max_pt = X.max() * regressor.coef_[0] + regressor.intercept_
plt.plot([X.min(), X.max()], [min_pt, max_pt])
plt.plot(X_train, y_train, 'o');
"""
Explanation: Since our regression model is a linear one, the relationship between the target variable (y) and the feature variable (x) is defined as
$$y = weight \times x + \text{intercept}$$.
Plugging the min and max values into this equation, we can plot the regression fit to our training data:
End of explanation
"""
y_pred_train = regressor.predict(X_train)
plt.plot(X_train, y_train, 'o', label="data")
plt.plot(X_train, y_pred_train, 'o', label="prediction")
plt.plot([X.min(), X.max()], [min_pt, max_pt], label='fit')
plt.legend(loc='best')
"""
Explanation: Similar to the estimators for classification in the previous notebook, we use the predict method to predict the target variable. And we expect these predicted values to fall onto the line that we plotted previously:
End of explanation
"""
y_pred_test = regressor.predict(X_test)
plt.plot(X_test, y_test, 'o', label="data")
plt.plot(X_test, y_pred_test, 'o', label="prediction")
plt.plot([X.min(), X.max()], [min_pt, max_pt], label='fit')
plt.legend(loc='best');
"""
Explanation: As we can see in the plot above, the line is able to capture the general slope of the data, but not many details.
Next, let's try the test set:
End of explanation
"""
regressor.score(X_test, y_test)
"""
Explanation: Again, scikit-learn provides an easy way to evaluate the prediction quantitatively using the score method. For regression tasks, this is the R<sup>2</sup> score. Another popular way would be the Mean Squared Error (MSE). As its name implies, the MSE is simply the average squared difference between the predicted and actual target values
$$MSE = \frac{1}{n} \sum_{i=1}^{n} (\text{predicted}_i - \text{true}_i)^2$$
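For completeness, the MSE from the formula above is a one-liner with NumPy (scikit-learn's `mean_squared_error` computes the same quantity):

```python
import numpy as np

def mse(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean((y_pred - y_true) ** 2)

print(mse([1.0, 2.0, 3.0], [1.5, 2.0, 2.0]))  # (0.25 + 0 + 1) / 3
```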
End of explanation
"""
# %load solutions/06B_lin_with_sine.py
"""
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
Add a feature containing `sin(4x)` to `X` and redo the fit. Visualize the predictions with this new richer, yet linear, model.
</li>
</ul>
</div>
End of explanation
"""
from sklearn.neighbors import KNeighborsRegressor
kneighbor_regression = KNeighborsRegressor(n_neighbors=1)
kneighbor_regression.fit(X_train, y_train)
"""
Explanation: KNeighborsRegression
As for classification, we can also use a neighbor based method for regression. We can simply take the output of the nearest point, or we could average several nearest points. This method is less popular for regression than for classification, but still a good baseline.
End of explanation
"""
y_pred_train = kneighbor_regression.predict(X_train)
plt.plot(X_train, y_train, 'o', label="data", markersize=10)
plt.plot(X_train, y_pred_train, 's', label="prediction", markersize=4)
plt.legend(loc='best');
"""
Explanation: Again, let us look at the behavior on training and test set:
End of explanation
"""
y_pred_test = kneighbor_regression.predict(X_test)
plt.plot(X_test, y_test, 'o', label="data", markersize=8)
plt.plot(X_test, y_pred_test, 's', label="prediction", markersize=4)
plt.legend(loc='best');
"""
Explanation: On the training set, we do a perfect job: each point is its own nearest neighbor!
End of explanation
"""
kneighbor_regression.score(X_test, y_test)
"""
Explanation: On the test set, we also do a better job of capturing the variation, but our estimates look much messier than before.
Let us look at the R<sup>2</sup> score:
End of explanation
"""
# %load solutions/06A_knn_vs_linreg.py
"""
Explanation: Much better than before! Here, the linear model was not a good fit for our problem; it was lacking in complexity and thus under-fit our data.
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
Compare the KNeighborsRegressor and LinearRegression on the boston housing dataset. You can load the dataset using ``sklearn.datasets.load_boston``. You can learn about the dataset by reading the ``DESCR`` attribute.
</li>
</ul>
</div>
End of explanation
"""
|
sdpython/ensae_teaching_cs | _doc/notebooks/td1a_algo/BJKST_enonce.ipynb | mit | from jyquickhelper import add_notebook_menu
add_notebook_menu()
"""
Explanation: 1A.algo - BJKST - counting the number of distinct elements
How do you count the number of distinct elements in a dataset that is too large to fit in memory? That is what the BJKST algorithm does.
End of explanation
"""
from pyquickhelper.helpgen import NbImage
NbImage("images/bjkst.png", width=600)
"""
Explanation: Exercise 1: first version
The following excerpt is taken from
Counting distinct elements in a data stream. The task is to implement the idea developed in its second paragraph.
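As a warm-up, the core idea can be sketched with a "k minimum values" estimator from the same family (a simplified illustration, not the full BJKST algorithm or its space/accuracy guarantees): hash each element to [0, 1), keep only the t smallest distinct hash values, and if v is the largest kept value, estimate the number of distinct elements as t / v.

```python
import hashlib

def distinct_estimate(stream, t=64):
    kept = set()                    # the t smallest distinct hash values
    for item in stream:
        h = int(hashlib.md5(str(item).encode()).hexdigest(), 16) / 2.0 ** 128
        kept.add(h)
        if len(kept) > t:
            kept.remove(max(kept))  # keep only the t smallest
    if len(kept) < t:
        return len(kept)            # fewer than t distinct items: exact count
    return int(t / max(kept))

print(distinct_estimate(range(10)))                        # exact: 10
print(distinct_estimate(i % 1000 for i in range(100000)))  # roughly 1000
```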
End of explanation
"""
from pyquickhelper.helpgen import NbImage
NbImage("images/bjkst2.png", width=600)
"""
Explanation: Can you turn this text into an algorithm?
Exercise 2: a faster version
The following excerpt is taken from
CS85: Data Stream Algorithms, Lecture Notes (page 17). The task is to implement the idea developed in its second paragraph.
End of explanation
"""
|
dolittle007/dolittle007.github.io | notebooks/lasso_block_update.ipynb | gpl-3.0 | %pylab inline
from matplotlib.pylab import *
from pymc3 import *
import numpy as np
d = np.random.normal(size=(3, 30))
d1 = d[0] + 4
d2 = d[1] + 4
yd = .2*d1 +.3*d2 + d[2]
"""
Explanation: Lasso regression with block updating
Sometimes, it is very useful to update a set of parameters together. For example, variables that are highly correlated are often good to update together. In PyMC 3 block updating is simple, as example will demonstrate.
Here we have a LASSO regression model where the two coefficients are strongly correlated. Normally, we would define the coefficient parameters as a single random variable, but here we define them separately to show how to do block updates.
First we generate some fake data.
End of explanation
"""
lam = 3
with Model() as model:
s = Exponential('s', 1)
tau = Uniform('tau', 0, 1000)
b = lam * tau
m1 = Laplace('m1', 0, b)
m2 = Laplace('m2', 0, b)
p = d1*m1 + d2*m2
y = Normal('y', mu=p, sd=s, observed=yd)
"""
Explanation: Then define the random variables.
End of explanation
"""
with model:
start = find_MAP()
step1 = Metropolis([m1, m2])
step2 = Slice([s, tau])
trace = sample(10000, [step1, step2], start=start)
traceplot(trace);
hexbin(trace[m1],trace[m2], gridsize = 50)
"""
Explanation: For most samplers, including Metropolis and HamiltonianMC, simply pass a list of variables to sample as a block. This works with both scalar and array parameters.
End of explanation
"""
|
omoju/udacityUd120Lessons | Evaluation Metrics.ipynb | gpl-3.0 |
import pickle
import sys
sys.path.append("../tools/")
from feature_format import featureFormat, targetFeatureSplit
data_dict = pickle.load(open("../final_project/final_project_dataset.pkl", "r") )
### first element is our labels, any added elements are predictor
### features. Keep this the same for the mini-project, but you'll
### have a different feature list when you do the final project.
features_list = ["poi", "salary"]
data = featureFormat(data_dict, features_list)
labels, features = targetFeatureSplit(data)
print len(labels), len(features)
"""
Explanation: Lesson 14 - Evaluation Metrics
Task: identify Persons Of Interest (POIs) in the Enron fraud dataset.
End of explanation
"""
from sklearn import tree
from time import time
def submitAcc(features, labels):
return clf.score(features, labels)
clf = tree.DecisionTreeClassifier()
t0 = time()
clf.fit(features, labels)
print("done in %0.3fs" % (time() - t0))
pred = clf.predict(features)
print "Classifier with accuracy %.2f%%" % (100 * submitAcc(features, labels))
"""
Explanation: Create a decision tree classifier (just use the default parameters), train it on all the data. Print out the accuracy.
THIS IS AN OVERFIT TREE, DO NOT TRUST THIS NUMBER! Nonetheless,
- what’s the accuracy?
End of explanation
"""
from sklearn import cross_validation
X_train, X_test, y_train, y_test = cross_validation.train_test_split(features, labels, test_size=0.30, random_state=42)
print len(X_train), len(y_train)
print len(X_test), len(y_test)
clf = tree.DecisionTreeClassifier()
t0 = time()
clf.fit(X_train, y_train)
print("done in %0.3fs" % (time() - t0))
pred = clf.predict(X_test)
print "Classifier with accuracy %.2f%%" % (100 * submitAcc(X_test, y_test))
"""
Explanation: Now you’ll add in training and testing, so that you get a trustworthy accuracy number. Use the train_test_split validation available in sklearn.cross_validation; hold out 30% of the data for testing and set the random_state parameter to 42 (random_state controls which points go into the training set and which are used for testing; setting it to 42 means we know exactly which events are in which set, and can check the results you get).
- What’s your updated accuracy?
End of explanation
"""
numPoiInTestSet = len([p for p in y_test if p == 1.0])
print numPoiInTestSet
"""
Explanation: How many POIs are in the test set for your POI identifier?
(Note that we said test set! We are not looking for the number of POIs in the whole dataset.)
End of explanation
"""
from __future__ import division
1.0 - numPoiInTestSet/29  # 29 == len(y_test)
"""
Explanation: If your identifier predicted 0. (not POI) for everyone in the test set, what would its accuracy be?
End of explanation
"""
from sklearn.metrics import *
precision_score(y_test,clf.predict(X_test))
"""
Explanation: Aaaand the testing data brings us back down to earth after that 99% accuracy.
Concerns with Accuracy
If you have a skewed dataset, as is the case with this dataset
The problem might be of such that it is best to err on the side of guessing innocence
For another case, you may want to err on the side of predicting guilt, with the hopes that the innocent persons will be cleared through the investigation.
Accuracy is not particularly good if any of these cases apply to you. Precision and recall are a better metric for evaluating the performance of the model.
Picking The Most Suitable Metric
As you may now see, having imbalanced classes like we have in the Enron dataset (many more non-POIs than POIs) introduces some special challenges, namely that you can just guess the more common class label for every point, not a very insightful strategy, and still get pretty good accuracy!
Precision and recall can help illuminate your performance better.
- Use the precision_score and recall_score available in sklearn.metrics to compute those quantities.
- What’s the precision?
End of explanation
"""
recall_score(y_test,clf.predict(X_test))
y_true = y_test
y_pred = clf.predict(X_test)
cM = confusion_matrix(y_true, y_pred)  # rows = actual class, columns = predicted class (0: non-POI, 1: POI)
print "{:>72}".format('Actual Class')
print "{:>20}{:>20}{:>20}{:>23}".format('Predicted', '', 'Positive', 'Negative')
print "{:>20}{:>20}{:>20d}{:>23d}".format('', 'Positive', cM[1][1], cM[0][1])
print "{:>20}{:>20}{:>20d}{:>23d}".format('', 'Negative', cM[1][0], cM[0][0])
"""
Explanation: Obviously this isn’t a very optimized machine learning strategy (we haven’t tried any algorithms besides the decision tree, or tuned any parameters, or done any feature selection), and now seeing the precision and recall should make that much more apparent than the accuracy did.
End of explanation
"""
|
qkitgroup/qkit | qkit/doc/notebooks/VirtualAWG_basics.ipynb | gpl-2.0 | testsample = sample.Sample()
testsample.readout_tone_length = 200e-9 # length of the readout tone
testsample.clock = 1e9 # sample rate of your physical awg/pulse generator
testsample.tpi = 100e-9 # duration of a pi-pulse
testsample.tpi2 = 50e-9 # duration of a pi/2-pulse
testsample.iq_frequency = 20e6 # iq_frequency for iq mixing (set to 0 for homodyne measurements)
#testsample.awg = my_awg #<- qkit instrument (your actual awg)
"""
Explanation: Initializing test sample
End of explanation
"""
#example:
pi = ps.Pulse(50e-9, name = "pi-pulse", shape = ps.ShapeLib.gauss, iq_frequency=50e6)
#this creates a 50ns gaussian pulse with name "pi-pulse" at an iq_frequency of 50MHz.
"""
Explanation: Building sequences
Sequences are python objects, encoding the experiment you want to run on your pulse generator.
They are built from pulses (another type of object), wait times, and the readout.
The pulse object
Pulse objects are initialized with:
mypulse = ps.Pulse(length, pulse-shape, name, amplitude, phase, iq_frequency, iq_dc_offset, iq_angle)
<br>
length is the pulse length in seconds <br>
pulse-shape is the shape of the pulse. Currently rect (square pulse, this is the default) and Gaussian shapes are implemented. To use the Gaussian shape, write shape = ps.ShapeLib.gauss<br>
name is the name you want to give your pulse. This is not mandatory and only used for plotting.<br>
amplitude: relative amplitude of your pulse.<br>
phase: relative phase of your pulse in degree.<br>
iq_frequency: If iq_frequency is 0, homodyne mixing is used.<br>
iq_dc_offset, iq_angle are currently not in use, but will be used in the near future to enable a calibration of the mixers.
End of explanation
"""
my_sequence = ps.PulseSequence(testsample) # create sequence object
my_sequence.add(pi) # add pi pulse, as defined in the example above
my_sequence.add_wait(lambda t: t) # add a variable wait time with length t
my_sequence.add_readout() # add the readout
my_sequence.plot() # show SCHEMATIC plot of the pulse sequence
"""
Explanation: Building sequences from pulses:
Sequences are built from pulse objects in an intuitive way: You just start adding pulses to your sequence, which are then appended. In the example below, a simple T1 measurement sequence is built, using our recently defined pi-pulse.
End of explanation
"""
spinecho = sl.spinecho(testsample, n_pi = 2) # spinecho with 2 pi-pulses
spinecho.plot()
"""
Explanation: As you can see, wait times can be added with the add_wait command. In this case the wait time is given by a lambda function. This enables the implementation of variable time steps. It can also be used to add pulses with variable length and amplitude. The variable name t is later used to set values for this wait time. <br>
Of course, there are also pre-built sequences for standard experiments, which you can find in the sequence_library class (here imported as sl).
Currently, this includes sequences for Rabi, T1, Ramsey, spin-echo and spin-locking experiments.
End of explanation
"""
vawg = VirtAWG.VirtualAWG(testsample) # by default, the virtual awg is initialized with a single channel
time = np.arange(0, 500e-9, 50e-9) # time t for the sequence
vawg.set_sequence(my_sequence, t=time) # set_sequence deletes all previously stored sequences in a channel
vawg.add_sequence(spinecho, t=time*2) # add_sequence appends the next sequence to the sequences stored in the channel
# Note, this enables you to run multiple experiments, such as a T1 measurement and a spin echo, in parallel!
vawg.plot()
# In the plot, the time starts at 0 together with the readout.
# The position of the readout is also used as a phase reference for all pulses.
"""
Explanation: Single channel virtual AWG
End of explanation
"""
# If you do not want the experiments to run consecutively, but to interleave them instead:
vawg.set_interleave(True)
# This also works for more than 2 sequences.
vawg.plot()
"""
Explanation: Please note that this plot always displays the amplitude of your signal (not I or Q).
We refrained from displaying the pulses with their iq_frequency to prevent confusion.
Similarly, it is not necessary to initialize the virtual awg with a channel for each I and Q. Whether two physical channels are needed to generate the desired output is determined automatically by the load script (see below).
End of explanation
"""
vawg = VirtAWG.VirtualAWG(testsample, channels=2) # Initialize with two channels (the number is arbitrary)
vawg.set_sequence(my_sequence, channel=1, t=time) # set my_sequence (T1 measurement) on channel 1
vawg.set_sequence(spinecho, channel=2, t=time) # set spinecho on channel 2
vawg.plot()
"""
Explanation: If you are satisfied with the results, load the sequences onto your physical device with:
<br>vawg.load() <br>
Currently, this is only enabled for the Tabor AWG.
Multi-channel virtual AWG
This feature enables the user to run multiple sequences on different channels at the same time. <br>
The readout of each sequence is used to synchronize the channels (i.e. it is expected that the readout happens simultaneously).
End of explanation
"""
|
andrzejkrawczyk/python-course | workshops/Gr1-2018/Zadania.ipynb | apache-2.0 | max_num("9512983", 1) # "9"
max_num("9512983", 3) # "998"
max_num("9512983", 7) # "9512983"
"""
Explanation: <center><h3>1. Write a function that returns the maximum number of the given length from a string, composed of successive digits</h3></center>
End of explanation
"""
POST = {
u"page[1][1]['id']": [u'baloes_bd_8_1'],
u"page[0][1]['text']": [u'Mum, dad! Look, the school email. '],
u"page[1][0]['id']": [u'baloes_bd_9_1'],
u"page[0][1]['id']": [u'baloes_bd_6_1'],
u"page[0][0]['id']": [u'baloes_bd_5_1'],
u'next': [u'/mycontent/5910974510923776'],
u"page[0][0]['text']": [u'Some time later\u2026'],
u'space_id': [u'5910974510923776'],
u"page[1][0]['text']": [u'You open the email, Luana. \u2028It\u2019s for you!'
],
u"page[1][1]['text']": [u'Me too!'],
u'skip_editor': [u'1'],
}
"""
Explanation: <center><h3>2. Given data in the following form, where the first index is the page number and the second is the content number:</h3></center>
End of explanation
"""
[
[u'Some time later\u2026', u'Mum, dad! Look, the school email.'],
[u'You open the email, Luana. \u2028It\u2019s for you!', u'Me too!']
]
"""
Explanation: Create a list of pages that contains the successive 'text' contents.
End of explanation
"""
|
mattilyra/gensim | docs/notebooks/soft_cosine_tutorial.ipynb | lgpl-2.1 | # Initialize logging.
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
"""
Explanation: Finding similar documents with Word2Vec and Soft Cosine Measure
Soft Cosine Measure (SCM) is a promising new tool in machine learning that allows us to submit a query and return the most relevant documents. In part 1, we will show how you can compute SCM between two documents using softcossim. In part 2, we will use SoftCosineSimilarity to retrieve documents most similar to a query and compare the performance against other similarity measures.
First, however, we go through the basics of what Soft Cosine Measure is.
Soft Cosine Measure basics
Soft Cosine Measure (SCM) is a method that allows us to assess the similarity between two documents in a meaningful way, even when they have no words in common. It uses a measure of similarity between words, which can be derived [2] using word2vec [3] vector embeddings of words. It has been shown to outperform many of the state-of-the-art methods in the semantic text similarity task in the context of community question answering [2].
SCM is illustrated below for two very similar sentences. The sentences have no words in common, but by modeling synonymy, SCM is able to accurately measure the similarity between the two sentences. The method also uses the bag-of-words vector representation of the documents (simply put, the word's frequencies in the documents). The intuition behind the method is that we compute standard cosine similarity assuming that the document vectors are expressed in a non-orthogonal basis, where the angle between two basis vectors is derived from the angle between the word2vec embeddings of the corresponding words.
This method was perhaps first introduced in the article “Soft Measure and Soft Cosine Measure: Measure of Features in Vector Space Model” by Grigori Sidorov, Alexander Gelbukh, Helena Gomez-Adorno, and David Pinto (link to PDF).
In this tutorial, we will learn how to use Gensim's SCM functionality, which consists of the softcossim function for one-off computation, and the SoftCosineSimilarity class for corpus-based similarity queries.
Note:
If you use this software, please consider citing [1] and [2].
Running this notebook
You can download this Jupyter notebook, and run it on your own computer, provided you have installed the gensim, jupyter, sklearn, pyemd, and wmd Python packages.
The notebook was run on an Ubuntu machine with an Intel core i7-6700HQ CPU 3.10GHz (4 cores) and 16 GB memory. Assuming all resources required by the notebook have already been downloaded, running the entire notebook on this machine takes about 30 minutes.
End of explanation
"""
sentence_obama = 'Obama speaks to the media in Illinois'.lower().split()
sentence_president = 'The president greets the press in Chicago'.lower().split()
sentence_orange = 'Oranges are my favorite fruit'.lower().split()
"""
Explanation: Part 1: Computing the Soft Cosine Measure
To use SCM, we need some word embeddings first of all. You could train a word2vec (see tutorial here) model on some corpus, but we will use pre-trained word2vec embeddings.
Let's create some sentences to compare.
End of explanation
"""
# Import and download stopwords from NLTK.
from nltk.corpus import stopwords
from nltk import download
download('stopwords') # Download stopwords list.
# Remove stopwords.
stop_words = stopwords.words('english')
sentence_obama = [w for w in sentence_obama if w not in stop_words]
sentence_president = [w for w in sentence_president if w not in stop_words]
sentence_orange = [w for w in sentence_orange if w not in stop_words]
# Prepare a dictionary and a corpus.
from gensim import corpora
documents = [sentence_obama, sentence_president, sentence_orange]
dictionary = corpora.Dictionary(documents)
corpus = [dictionary.doc2bow(document) for document in documents]
# Convert the sentences into bag-of-words vectors.
sentence_obama = dictionary.doc2bow(sentence_obama)
sentence_president = dictionary.doc2bow(sentence_president)
sentence_orange = dictionary.doc2bow(sentence_orange)
"""
Explanation: The first two sentences have very similar content, and as such the SCM should be large. Before we compute the SCM, we want to remove stopwords ("the", "to", etc.), as these do not contribute a lot to the information in the sentences.
End of explanation
"""
%%time
import gensim.downloader as api
w2v_model = api.load("glove-wiki-gigaword-50")
similarity_matrix = w2v_model.similarity_matrix(dictionary)
"""
Explanation: Now, as we mentioned earlier, we will be using some downloaded pre-trained embeddings. Note that the embeddings we have chosen here require a lot of memory. We will use the embeddings to construct a term similarity matrix that will be used by the softcossim function.
End of explanation
"""
from gensim.matutils import softcossim
similarity = softcossim(sentence_obama, sentence_president, similarity_matrix)
print('similarity = %.4f' % similarity)
"""
Explanation: So let's compute SCM using the softcossim function.
End of explanation
"""
similarity = softcossim(sentence_obama, sentence_orange, similarity_matrix)
print('similarity = %.4f' % similarity)
"""
Explanation: Let's try the same thing with two completely unrelated sentences. Notice that the similarity is smaller.
End of explanation
"""
%%time
from itertools import chain
import json
from re import sub
from os.path import isfile
import gensim.downloader as api
from gensim.utils import simple_preprocess
from nltk.corpus import stopwords
from nltk import download
download("stopwords") # Download stopwords list.
stopwords = set(stopwords.words("english"))
def preprocess(doc):
doc = sub(r'<img[^<>]+(>|$)', " image_token ", doc)
doc = sub(r'<[^<>]+(>|$)', " ", doc)
doc = sub(r'\[img_assist[^]]*?\]', " ", doc)
doc = sub(r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', " url_token ", doc)
return [token for token in simple_preprocess(doc, min_len=0, max_len=float("inf")) if token not in stopwords]
corpus = list(chain(*[
chain(
[preprocess(thread["RelQuestion"]["RelQSubject"]), preprocess(thread["RelQuestion"]["RelQBody"])],
[preprocess(relcomment["RelCText"]) for relcomment in thread["RelComments"]])
for thread in api.load("semeval-2016-2017-task3-subtaskA-unannotated")]))
print("Number of documents: %d" % len(documents))
"""
Explanation: Part 2: Similarity queries using SoftCosineSimilarity
You can use SCM to get the most similar documents to a query, using the SoftCosineSimilarity class. Its interface is similar to what is described in the Similarity Queries Gensim tutorial.
Qatar Living unannotated dataset
Contestants solving the community question answering task in the SemEval 2016 and 2017 competitions had an unannotated dataset of 189,941 questions and 1,894,456 comments from the Qatar Living discussion forums. As our first step, we will use the same dataset to build a corpus.
End of explanation
"""
%%time
from gensim.corpora import Dictionary
from gensim.models import TfidfModel
from gensim.models import Word2Vec
from multiprocessing import cpu_count
dictionary = Dictionary(corpus)
tfidf = TfidfModel(dictionary=dictionary)
w2v_model = Word2Vec(corpus, workers=cpu_count(), min_count=5, size=300, seed=12345)
similarity_matrix = w2v_model.wv.similarity_matrix(dictionary, tfidf, nonzero_limit=100)
print("Number of unique words: %d" % len(dictionary))
"""
Explanation: Using the corpus we have just build, we will now construct a dictionary, a TF-IDF model, a word2vec model, and a term similarity matrix.
End of explanation
"""
datasets = api.load("semeval-2016-2017-task3-subtaskBC")
"""
Explanation: Evaluation
Next, we will load the validation and test datasets that were used by the SemEval 2016 and 2017 contestants. The datasets contain 208 original questions posted by the forum members. For each question, there is a list of 10 threads with a human annotation denoting whether or not the thread is relevant to the original question. Our task will be to order the threads so that relevant threads rank above irrelevant threads.
End of explanation
"""
from math import isnan
from time import time
from gensim.similarities import MatrixSimilarity, WmdSimilarity, SoftCosineSimilarity
import numpy as np
from sklearn.model_selection import KFold
from wmd import WMD
def produce_test_data(dataset):
for orgquestion in datasets[dataset]:
query = preprocess(orgquestion["OrgQSubject"]) + preprocess(orgquestion["OrgQBody"])
documents = [
preprocess(thread["RelQuestion"]["RelQSubject"]) + preprocess(thread["RelQuestion"]["RelQBody"])
for thread in orgquestion["Threads"]]
relevance = [
thread["RelQuestion"]["RELQ_RELEVANCE2ORGQ"] in ("PerfectMatch", "Relevant")
for thread in orgquestion["Threads"]]
yield query, documents, relevance
def cossim(query, documents):
# Compute cosine similarity between the query and the documents.
query = tfidf[dictionary.doc2bow(query)]
index = MatrixSimilarity(
tfidf[[dictionary.doc2bow(document) for document in documents]],
num_features=len(dictionary))
similarities = index[query]
return similarities
def softcossim(query, documents):
# Compute Soft Cosine Measure between the query and the documents.
query = tfidf[dictionary.doc2bow(query)]
index = SoftCosineSimilarity(
tfidf[[dictionary.doc2bow(document) for document in documents]],
similarity_matrix)
similarities = index[query]
return similarities
def wmd_gensim(query, documents):
# Compute Word Mover's Distance as implemented in PyEMD by William Mayner
# between the query and the documents.
index = WmdSimilarity(documents, w2v_model)
similarities = index[query]
return similarities
def wmd_relax(query, documents):
# Compute Word Mover's Distance as implemented in WMD by Source{d}
# between the query and the documents.
words = [word for word in set(chain(query, *documents)) if word in w2v_model.wv]
indices, words = zip(*sorted((
(index, word) for (index, _), word in zip(dictionary.doc2bow(words), words))))
query = dict(tfidf[dictionary.doc2bow(query)])
query = [
(new_index, query[dict_index])
for new_index, dict_index in enumerate(indices)
if dict_index in query]
documents = [dict(tfidf[dictionary.doc2bow(document)]) for document in documents]
documents = [[
(new_index, document[dict_index])
for new_index, dict_index in enumerate(indices)
if dict_index in document] for document in documents]
embeddings = np.array([w2v_model.wv[word] for word in words], dtype=np.float32)
nbow = dict(((index, list(chain([None], zip(*document)))) for index, document in enumerate(documents)))
nbow["query"] = (None, *zip(*query))
distances = WMD(embeddings, nbow, vocabulary_min=1).nearest_neighbors("query")
similarities = [-distance for _, distance in sorted(distances)]
return similarities
strategies = {
"cossim" : cossim,
"softcossim": softcossim,
"wmd-gensim": wmd_gensim,
"wmd-relax": wmd_relax}
def evaluate(split, strategy):
# Perform a single round of evaluation.
results = []
start_time = time()
for query, documents, relevance in split:
similarities = strategies[strategy](query, documents)
assert len(similarities) == len(documents)
precision = [
(num_correct + 1) / (num_total + 1) for num_correct, num_total in enumerate(
num_total for num_total, (_, relevant) in enumerate(
sorted(zip(similarities, relevance), reverse=True)) if relevant)]
average_precision = np.mean(precision) if precision else 0.0
results.append(average_precision)
return (np.mean(results) * 100, time() - start_time)
def crossvalidate(args):
# Perform a cross-validation.
dataset, strategy = args
    test_data = np.array(list(produce_test_data(dataset)), dtype=object)  # dtype=object: rows are ragged tuples
kf = KFold(n_splits=10)
samples = []
for _, test_index in kf.split(test_data):
samples.append(evaluate(test_data[test_index], strategy))
return (np.mean(samples, axis=0), np.std(samples, axis=0))
%%time
from multiprocessing import Pool
args_list = [
(dataset, technique)
for dataset in ("2016-test", "2017-test")
for technique in ("softcossim", "wmd-gensim", "wmd-relax", "cossim")]
with Pool() as pool:
results = pool.map(crossvalidate, args_list)
"""
Explanation: Finally, we will perform an evaluation to compare three unsupervised similarity measures – the Soft Cosine Measure, two different implementations of the Word Mover's Distance, and standard cosine similarity. We will use the Mean Average Precision (MAP) as an evaluation measure and 10-fold cross-validation to get an estimate of the variance of MAP for each similarity measure.
End of explanation
"""
from IPython.display import display, Markdown
output = []
baselines = [
(("2016-test", "**Winner (UH-PRHLT-primary)**"), ((76.70, 0), (0, 0))),
(("2016-test", "**Baseline 1 (IR)**"), ((74.75, 0), (0, 0))),
(("2016-test", "**Baseline 2 (random)**"), ((46.98, 0), (0, 0))),
(("2017-test", "**Winner (SimBow-primary)**"), ((47.22, 0), (0, 0))),
(("2017-test", "**Baseline 1 (IR)**"), ((41.85, 0), (0, 0))),
(("2017-test", "**Baseline 2 (random)**"), ((29.81, 0), (0, 0)))]
table_header = ["Dataset | Strategy | MAP score | Elapsed time (sec)", ":---|:---|:---|---:"]
for row, ((dataset, technique), ((mean_map_score, mean_duration), (std_map_score, std_duration))) \
in enumerate(sorted(chain(zip(args_list, results), baselines), key=lambda x: (x[0][0], -x[1][0][0]))):
if row % (len(strategies) + 3) == 0:
output.extend(chain(["\n"], table_header))
map_score = "%.02f ±%.02f" % (mean_map_score, std_map_score)
duration = "%.02f ±%.02f" % (mean_duration, std_duration) if mean_duration else ""
output.append("%s|%s|%s|%s" % (dataset, technique, map_score, duration))
display(Markdown('\n'.join(output)))
"""
Explanation: The table below shows the pointwise estimates of means and standard deviations for MAP scores and elapsed times. Baselines and winners for each year are displayed in bold. We can see that the Soft Cosine Measure gives a strong performance on both the 2016 and the 2017 dataset.
End of explanation
"""
|
quinterojs/US-Bicycle-Commuting-Analysis | Bicycle Commuting and Public Health in the United States.ipynb | mit | # Libraries
import pandas as pd
import numpy as np
from scipy import stats
from sklearn import preprocessing
number = preprocessing.LabelEncoder()
import statsmodels.api as sm
import seaborn as sns
sns.set(style='white', context='talk')
p = sns.color_palette('husl', 8)
import matplotlib.pyplot as plt
plt.rcParams["axes.facecolor"] = "#FFFFFF"
plt.rcParams["figure.facecolor"] = "#FFFFFF"
plt.rcParams["savefig.facecolor"] = "#FFFFFF"
plt.rcParams["savefig.facecolor"] = "#FFFFFF"
plt.rcParams["font.family"] = "sans-serif"
plt.rc('axes',edgecolor='#FFFFFF')
plt.rcParams['figure.dpi'] = 300
fig_size = (9, 6)
%matplotlib inline
import plotly
import plotly.express as px
import plotly.graph_objects as go
from plotly.graph_objs import *
from plotly import figure_factory as FF
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import chart_studio as py
# import colorlover as cl
import plotly.io as pio
pio.renderers.default = "notebook_connected"
"""
Explanation: A Look at Bicycle Commuting and Public Health in the United States
Introduction
The U.S. Department of Transportation recently partnered with the Centers for Disease Control (CDC) and the American Public Health Association (APHA) to release data on transportation and public health indicators for each U.S. state and metropolitan area. The purpose of the data release, available via the Transportation and Health Tool (THT), is to provide practitioners with the data that they need to make informed decisions when planning changes to transportation infrastructure.
The purpose of this study is to perform exploratory data analysis on the DOT's publicly released aggregate cross-sectional dataset; with the overall objective of teasing out relationships between bicycle commuting and public health in the United States.
Motivation
Research has linked active transportation and physical activity with public health outcomes. Using a similar aggregate cross-sectional dataset and bivariate regression analysis, a recent study found that active transportation has a statistically significant negative relationship with diabetes and self-reported levels of obesity. If those associations are true, then it may be important to discover which forms of active transportation are the most effective in combating these negative health outcomes.
In this study, I focus on state level data and develop a statistical model to evaluate the proposition that commuting by bike is a greater indicator of getting at least 10 minutes of physical activity during a commute than traveling by foot. I conjecture that cycling promotes staying active for longer periods of time because of the greater distances that can be covered on a bicycle––incentivizing a longer commute.
Data
<b>Physical Activity from Transportation</b>: Measures the percentage of all trips made by foot or by bicycle that are at least 10 minutes long. Data come from the 2009 National Household Travel Survey (NHTS).
<b>Commute mode shares (by bicycle, automobile, public transportation, and foot)</b>: Measures the percentage of workers aged 16 years and over who commute either by bicycle, by private vehicle (including car, truck, van, taxicab, and motorcycle), by public transportation (including bus, rail, and ferry), or by foot. Data come from the 2012 one-year estimates from the American Community Survey (ACS).
<b>Complete Streets Policy</b>: Categorical variable indicating whether or not a state or the metropolitan planning organization that serves the region or a given metro area has adopted a complete streets policy that requires or encourages a safe, comfortable, integrated transportation network for all users, regardless of age, ability, income, ethnicity, or mode of transportation.
<b>Alcohol-Impared Fatalities (DUI/DWI)</b>: Measures the rate of fatal traffic crashes that involve a driver who is impaired by alcohol, per 100,000 Residents. Data on fatalities come from the 2012 Fatality Analysis Reporting System (FARS). Population data come from the 2012 American Community Survey (ACS) 1-year estimates.
<b>Proximity to major roadways</b>: Estimates the percentage of people who live within 200 meters, or approximately 650 feet, of a high traffic roadway that carries over 125,000 vehicles per day. Data on the location of roads and traffic levels come from the 2011 National Transportation Atlas Database; data on population come from the 2010 Census.
<b>Use of Federal Funds for Bicycle and Pedestrian Efforts</b>: Measures the percentage of federal transportation dollars that go to bicycle and pedestrian infrastructure projects.
A more detailed description of each indicator is available from the DOT.
Methodology
A linear regression model with heteroscedastically robust standard errors is used for parameter estimation. Residuals vs fitted values and QQ plots are assessed for statistical validation of the OLS model.
Results
The results from this study indicate that bicycle commuting is the only transportation mode that is statistically and positively associated with higher levels of physical activity from transportation at the state level (p < 0.05, adjusted R-squared = 0.761). The key insight is that bicycle commuting is more likely to get people moving for at least 10 minutes than commuting by foot. Increases in bicycle commuting at the state level may therefore lead to increases in physical activity in the general population, and by extension also decreases in certain negative health outcomes.
Limitations
Cross-sectional aggregate data essentially provides a snapshot in time. Therefore, the results of this study are correlational and not necessarily causal.
In addition, it could be argued that physical activity from transportation and bike share of commuting are confounding variables because both are, in a sense, a measure of transportation by bicycle. The argument is valid. However, physical activity from transportation is measured as the percentage of all types of trips made by foot or by bicycle that are at least 10 minutes long, whereas commuting by bicycle is measured as the percentage of only work trips––without the time component. More importantly, the physical activity from transportation data was collected from the 2009 National Household Travel Survey (NHTS), whereas the bicycle commute share data comes from the 2012 one-year estimates from the American Community Survey (ACS), so there is also independence in the data samples. These distinctions may be sufficient to argue that the data generating process for each measure is independent.
<hr>
Contents
Data Processing
Data Visualization and Exploratory Data Analysis
Regression Analysis and Diagnostics
<hr>
Please feel free to share, and don't hesitate to reach out with comments or questions!
Enjoy!
Sebastian Quintero<br>
sebastianquintero.co
End of explanation
"""
### Reading the data and assessing data frame structure
df = pd.read_csv('BikeState.csv', header=0)
df.info()
df.head()
### Data Cleaning and Preprocessing
# Dealing with missing values
df = df.replace(' ', np.nan)
df = df.dropna()
df = df.reset_index()
# Making column titles a bit more manageable
df['funds_raw'] = df['Use of Federal Funds for Bicycle and Pedestrian Efforts: Raw Value'].astype(float)
df['funds'] = df['Use of Federal Funds for Bicycle and Pedestrian Efforts: Score'].astype(int)
df['bike_score'] = df['Commute Mode Share - Bicycle: Score'].astype(int)
df['bike_share'] = df['Commute Mode Share - Bicycle: Raw Value'].astype(float)
df['physical_score'] = df['Physical Activity from Transportation: Score'].astype(float)
df['physical_raw'] = df['Physical Activity from Transportation: Raw Value'].astype(float)
df['bike_fatalities_score'] = df['Road Traffic Fatalities per 100,000 Residents - Bicycle: Score'].astype(int)
df['bike_fatalities_exposure'] = df['Road Traffic Fatalities Exposure Rate - Bicycle: Raw Value'].astype(float)
df['walk_score'] = df['Commute Mode Share - Walk: Score'].astype(int)
df['walk_share'] = df['Commute Mode Share - Walk: Raw Value'].astype(float)
df['transit_score'] = df['Commute Mode Share - Transit: Score'].astype(int)
df['transit_share'] = df['Commute Mode Share - Transit: Raw Value'].astype(float)
df['auto_score'] = df['Commute Mode Share - Auto: Score'].astype(int)
df['auto_share'] = df['Commute Mode Share - Auto: Raw Value'].astype(float)
df['proxmajorhwy_score'] = df['Proximity to Major Roadways: Score'].astype(int)
df['proxmajorhwy_raw'] = df['Proximity to Major Roadways: Raw Value'].astype(float)
df['mileswalk_score'] = df['Person Miles of Travel by Walking: Score'].astype(int)
df['mileswalk_raw'] = df['Person Miles of Travel by Walking: Raw Value'].astype(float)
df['duidwi_score'] = df['DUI/DWI Fatalities per 10,000 Residents: Score'].astype(int)
df['duidwi_raw'] = df['DUI/DWI Fatalities per 10,000 Residents: Raw Value'].astype(float)
df['complete_streets_policy'] = df['Complete Streets Policies: Raw Value']
# The Complete Streets Policy column needs to be transformed into a boolean variable.
df['complete_streets_policy'].unique()
# Transforming categorical variables
def convert(data):
number = preprocessing.LabelEncoder()
data['complete_streets_policy'] = number.fit_transform(data.complete_streets_policy)
return data
# Sending data through the converter and visually inspecting results
df = convert(df)
df.head()
df[['Code', 'Complete Streets Policies: Raw Value', 'complete_streets_policy']].tail()
"""
Explanation: 1. Data Processing
You can download a fresh version of the data file from Data.gov, or just download the one I uploaded to the repo.*
* Note: The file in the repo has the state abbreviations added under the column called 'Code'.
* <b>Update 2020</b>: Data.gov resource is no longer available, but the data is still hosted at the Transportation and Health Tool (THT).
End of explanation
"""
for col in df.columns:
df[col] = df[col].astype(str)
# create the text column for the map
df['text'] = '<br>Commute Shares by Type' + '<br>' +\
'Bike: ' + (df['bike_share'].astype(float)*100).astype(str) + '%' + '<br>' +\
'Walk: ' + (df['walk_share'].astype(float)*100).astype(str) + '%' + '<br>' +\
'Transit: ' + (df['transit_share'].astype(float)*100).astype(str) + '%' + '<br>' +\
'Auto: ' + (df['auto_share'].astype(float)*100).astype(str) + '%'
fig = go.Figure(data=go.Choropleth(
colorscale = 'Blues',
autocolorscale = False,
locations = df['Code'],
z = df['physical_raw'].astype(float),
locationmode = 'USA-states',
text = df['text'],
marker_line_color='white',
colorbar_title = "Physical Activity<br>Raw Value"
))
fig.update_layout(
title_text='Plot 1: 2016 US Physical Activity from Transportation and Commute Shares by Type<br>(Hover for commute shares by type breakdown)',
geo = dict(
scope='usa',
projection=go.layout.geo.Projection(type = 'albers usa'),
showlakes=True, # lakes
lakecolor='rgb(255, 255, 255)'),
)
iplot(fig, filename='choropleth-us', validate=False)
"""
Explanation: 2. Data Visualization and Exploratory Data Analysis
Let's first take a look at the variance in physical activity from transportation across US states. An interactive choropleth map could help make this more intuitive. Commute shares by type are added as annotated data for each state, which can be viewed by hovering over the chart.
* Note: Commute values do not add up to 100%, likely because there are other transportation modes not captured in the data release.
End of explanation
"""
x = df['physical_raw'].astype(float)
hist_data = [x]
group_labels = ['Physical Activity Raw Value']
fig = FF.create_distplot(hist_data,
group_labels,
bin_size=.01,
show_rug=False)
# Add title
fig['layout'].update(title = 'Plot 2: Physical Activity From Transportation Raw Value<br>Probability Distribution (Hover for Detail)')
fig.add_vline(x=x.mean(), line_width=2, line_dash="dash", line_color="darkblue")
# Plot!
iplot(fig, filename='Probability Distribution – Physical Activity Raw Value', validate=False)
"""
Explanation: The states with the highest levels of physical activity from transportation are New York, Oregon, and California, whereas those with the lowest levels are Tennessee, Arkansas, Louisiana, and North Dakota.
It's interesting that Oregon and New York both have a really high value of Physical Activity and achieved it by similar means, though New York has a slightly higher walk score and a slightly lower bike score. The trend overall indicates that states with high Physical Activity Scores have higher Bike scores but even higher Walk scores. At this point, the data suggests that we may not be able to reject the null hypothesis, but we must still perform the statistical test.
The public transit value for New York is also remarkably higher than anywhere else in the country, which makes sense.
Across the country there seems to be a consistent rate of auto commuting, between 60 and 95 percent. On the other hand, bike and walk values range between 0 and 6 percent.
While a choropleth map is insightful, a distribution plot may help us better understand what the data looks like.
End of explanation
"""
x1 = df['bike_share'].astype(float)
x2 = df['walk_share'].astype(float)
hist_data = [x1, x2]
group_labels = ['Bike Share', 'Walk Share']
# colors = ['#2BCDC1', '#F66095']
fig = FF.create_distplot(hist_data,
group_labels,
# colors=colors,
bin_size=.01,
show_rug=False)
#Add title
fig['layout'].update(title='Plot 3: Share of Travel by Walking and Biking<br>Probability Distributions (Hover for Detail)')
fig.add_vline(x=x1.mean(), line_width=2, line_dash="dash", line_color="darkblue")
fig.add_vline(x=x2.mean(), line_width=2, line_dash="dash", line_color="darkorange")
iplot(fig, filename='Probability Distribution Share of Total Travel by Commute Mode', validate=False)
"""
Explanation: The bulk of the values are between 0.04 and 0.13, with a mean of about 0.09 (dashed vertical line). Oregon and New York are the outliers at .18 and .2, respectively.
Moving on, we should assess the probability distributions for the commute mode shares. To keep things simple, we'll focus on just the bike and walk values, but I'd encourage you to plot the distributions for public transit and auto commute values yourself.
End of explanation
"""
x = 'bike_share'
y = 'physical_raw'
fig = px.scatter(df,
x=x,
y=y,
trendline="ols",
labels={
"bike_share": "Bike Commute Share",
"physical_raw": "Physical Activity Raw Value",
},
title="Plot 4: Physical Activity from Transportation and Commuting by Bike<br>Scatter Plot and Line of Best Fit")
fig.show()
# results = px.get_trendline_results(fig)
# results.px_fit_results.iloc[0].summary()
x = 'walk_share'
y = 'physical_raw'
fig = px.scatter(df,
x=x,
y=y,
trendline="ols",
labels={
"walk_share": "Walk Commute Share",
"physical_raw": "Physical Activity Raw Value",
},
title="Plot 5: Physical Activity from Transportation and Commuting by Foot<br>Scatter Plot and Line of Best Fit")
fig.show()
# results = px.get_trendline_results(fig)
# results.px_fit_results.iloc[0].summary()
x = 'transit_share'
y = 'physical_raw'
fig = px.scatter(df,
x=x,
y=y,
trendline="ols",
labels={
"transit_share": "Transit Commute Share",
"physical_raw": "Physical Activity Raw Value",
},
title="Plot 6: Physical Activity from Transportation and Commuting by Public Transit<br>Scatter Plot and Line of Best Fit"
)
fig.show()
# results = px.get_trendline_results(fig)
# results.px_fit_results.iloc[0].summary()
x = 'auto_share'
y = 'physical_raw'
fig = px.scatter(df,
x=x,
y=y,
trendline="ols",
labels={
"auto_share": "Car Commute Share",
"physical_raw": "Physical Activity Raw Value",
},
title="Plot 7: Physical Activity from Transportation and Commuting by Car<br>Scatter Plot and Line of Best Fit",
)
fig.show()
# results = px.get_trendline_results(fig)
# results.px_fit_results.iloc[0].summary()
"""
Explanation: The walk distribution has a notable positive skew. The bike share distribution has very little variance and appears to be bimodal, with a high frequency at zero and another peak near 0.01.
Now that we have an idea of the probability distributions for our variables of interest, it's time to build a few simple linear regression plots for pattern recognition, to see if we can identify any correlative relationships between physical activity and the commute covariates.
End of explanation
"""
fig = px.histogram(df,
x="Complete Streets Policies: Raw Value",
title="Plot 8: Complete Streets Policies Histogram"
)
fig.show()
"""
Explanation: The regression plots above indicate that biking, walking, and using public transit are positively correlated with physical activity, and commuting by car is negatively correlated with physical activity.
I'm also interested in seeing how Complete Streets Policies affect physical activity across the US. States either enact these policies or they don't, and the split is roughly even: 52 percent do versus 48 percent do not.
End of explanation
"""
### Linear Regression Model
# Iteration 3 with Robust Standard Errors
y = df['physical_raw']
X = df[['bike_share', 'walk_share', 'auto_share', 'duidwi_raw', 'proxmajorhwy_raw']]
X = sm.add_constant(X)
mod = sm.OLS(y.astype(float), X.astype(float))
res = mod.fit(cov_type='HC3')
# Regression results
print(res.summary())
"""
Explanation: 3. Regression Analysis and Diagnostics
My working hypothesis is that states with higher levels of bicycle commuting will tend to have higher levels of physically active commutes that last 10 minutes or longer. I conjecture that cycling promotes staying active for longer periods of time because it is a more efficient form of transportation––more ground is easily covered. The greater distances that can be covered on a bicycle may incentize a longer active commute.
So with that in mind, let's combine everything we've observed up until this point, and build the linear model,
$$ y = X\beta + \epsilon $$
where $y$ is the response variable Physical Activity from Transportation, $X$ is a matrix of covariates, and $\epsilon$ is the error term.
Heteroscedasticity-robust standard errors, a fitted values vs. residuals plot, and a Q-Q plot are used to validate the OLS model.
End of explanation
"""
# Regression Diagnostics
# Fitted Values vs. Standardized Residuals Plot
fig, ax = plt.subplots(figsize=fig_size)
residuals = res.resid
fitted = res.fittedvalues
ax = sns.residplot(x=fitted, y=residuals, lowess=False, color='black')  # keyword args required in newer seaborn
ax.set(xlabel='Fitted values', ylabel='Residuals')
fig.suptitle('Fitted Values vs. Standardized Residuals Plot', size=16)
# Q-Q Plot
fig, ax = plt.subplots(figsize=fig_size)
fig.suptitle('Q-Q Plot', size=16)
pplot = sm.ProbPlot(residuals, fit=True,)
ax = pplot.qqplot(ax=ax, color='black')
sm.qqline(ax.axes[0], line='45', fmt='blue')
"""
Explanation: The following variables were chosen to evaluate the research hypothesis after an iterative model selection process: commuting by bike, commuting by car, commuting by foot, alcohol-impaired driving fatalities, and proximity to major highways. Together they maximized the adjusted R-squared value while maintaining an approximately normal distribution of residuals, no heteroscedasticity, and no multicollinearity.
The fitted values vs. residuals plot below indicates that there is no obvious heteroscedasticity present in the model, though there are one or two outliers. The Q-Q plot seems reasonable, and no multicollinearity warnings are present in the regression results. With the technical assumptions of the OLS model validated, I can feel confident in the results.
With an adjusted R-squared of 0.761, this model suggests that states with higher levels of physical activity from transportation will have higher levels of bicycle commuting (p < 0.05), lower alcohol-impaired driving fatalities (p < 0.01), and lower levels of commuting by car (p < 0.05). Proximity to major highways and commuting by foot are not statistically correlated with physical activity from transportation at the state level.
End of explanation
"""
|
javierarilos/deep-learning-ud | 2_fullyconnected.ipynb | apache-2.0 | # These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
"""
Explanation: Deep Learning
Assignment 2
Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
End of explanation
"""
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
"""
Explanation: First reload the data we generated in 1_notmnist.ipynb.
End of explanation
"""
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
"""
Explanation: Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
End of explanation
"""
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
  # matrix will be initialized using random values following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
  loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
"""
Explanation: We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this:
* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes in a computation graph. This description is all contained within the block below:
with graph.as_default():
...
Then you can run the operations on this graph as many times as you want by calling session.run(), telling it which outputs to fetch from the graph. This runtime operation is all contained in the block below:
with tf.Session(graph=graph) as session:
...
Let's load all the data into TensorFlow and build the computation graph corresponding to our training:
End of explanation
"""
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.initialize_all_variables().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
"""
Explanation: Let's run this computation and iterate:
End of explanation
"""
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
  loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
"""
Explanation: Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data into a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().
End of explanation
"""
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
"""
Explanation: Let's run it:
End of explanation
"""
##########################################################################
## Turning logistic regression to 1-hidden layer neural net with nn.relu
batch_size = 128
hidden_nodes = 1024
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Layer 1
l1_weights = tf.Variable(tf.truncated_normal([image_size * image_size, hidden_nodes]))
l1_biases = tf.Variable(tf.zeros([hidden_nodes]))
l1_logits = tf.matmul(tf_train_dataset, l1_weights) + l1_biases
l1_output = tf.nn.relu(l1_logits)
# Layer 2
l2_weights = tf.Variable(tf.truncated_normal([hidden_nodes, num_labels]))
l2_biases = tf.Variable(tf.truncated_normal([num_labels]))
l2_logits = tf.matmul(l1_output, l2_weights) + l2_biases
# Loss
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=l2_logits))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(l2_logits)
l1_logits = tf.matmul(tf_valid_dataset, l1_weights) + l1_biases
l1_output = tf.nn.relu(l1_logits)
l2_logits = tf.matmul(l1_output, l2_weights) + l2_biases
valid_prediction = tf.nn.softmax(l2_logits)
l1_logits = tf.matmul(tf_test_dataset, l1_weights) + l1_biases
l1_output = tf.nn.relu(l1_logits)
l2_logits = tf.matmul(l1_output, l2_weights) + l2_biases
test_prediction = tf.nn.softmax(l2_logits)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
print(train_labels.shape[0])
print(batch_data.shape)
print(batch_labels.shape)
"""
Explanation: Problem
Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units nn.relu() and 1024 hidden nodes. This model should improve your validation / test accuracy.
End of explanation
"""
|
karlstroetmann/Formal-Languages | ANTLR4-Python/LR-Parser-Generator/Shift-Reduce-Parser-Pure.ipynb | gpl-2.0 | import re
"""
Explanation: A Shift-Reduce Parser for Arithmetic Expressions
In this notebook we implement a simple shift-reduce parser for arithmetic expressions.
This parser will implement the following grammar:
$$
\begin{eqnarray*}
\mathrm{expr}    & \rightarrow & \mathrm{expr}\;\;\texttt{'+'}\;\;\mathrm{product}   \\
                 & \mid        & \mathrm{expr}\;\;\texttt{'-'}\;\;\mathrm{product}   \\
                 & \mid        & \mathrm{product}                                    \\[0.2cm]
\mathrm{product} & \rightarrow & \mathrm{product}\;\;\texttt{'*'}\;\;\mathrm{factor} \\
                 & \mid        & \mathrm{product}\;\;\texttt{'/'}\;\;\mathrm{factor} \\
                 & \mid        & \mathrm{factor}                                     \\[0.2cm]
\mathrm{factor}  & \rightarrow & \texttt{'('}\;\;\mathrm{expr}\;\;\texttt{')'}       \\
                 & \mid        & \texttt{NUMBER}
\end{eqnarray*}
$$
Implementing a Scanner
End of explanation
"""
def tokenize(s):
'''Transform the string s into a list of tokens. The string s
is supposed to represent an arithmetic expression.
'''
lexSpec = r'''([ \t]+) | # blanks and tabs
([1-9][0-9]*|0) | # number
([()]) | # parentheses
([-+*/]) | # arithmetical operators
(.) # unrecognized character
'''
tokenList = re.findall(lexSpec, s, re.VERBOSE)
result = []
for ws, number, parenthesis, operator, error in tokenList:
if ws: # skip blanks and tabs
continue
elif number:
result += [ 'NUMBER' ]
elif parenthesis:
result += [ parenthesis ]
elif operator:
result += [ operator ]
else:
result += [ f'ERROR({error})']
return result
tokenize('1 + 2 * (3 - 4)')
class ShiftReduceParser():
def __init__(self, actionTable, gotoTable):
self.mActionTable = actionTable
self.mGotoTable = gotoTable
def parse(self, TL):
index = 0 # points to next token
Symbols = [] # stack of symbols
States = ['s0'] # stack of states, s0 is start state
TL += ['$']
while True:
q = States[-1]
t = TL[index]
print('Symbols:', ' '.join(Symbols + ['|'] + TL[index:]).strip())
p = self.mActionTable.get((q, t), 'error')
if p == 'error':
return False
elif p == 'accept':
return True
elif p[0] == 'shift':
s = p[1]
Symbols += [t]
States += [s]
index += 1
elif p[0] == 'reduce':
head, body = p[1]
n = len(body)
Symbols = Symbols[:-n]
States = States [:-n]
Symbols = Symbols + [head]
state = States[-1]
States += [ self.mGotoTable[state, head] ]
ShiftReduceParser.parse = parse
del parse
%run parse-table.py
"""
Explanation: The function tokenize transforms the string s into a list of tokens. See below for an example.
End of explanation
"""
def test(s):
parser = ShiftReduceParser(actionTable, gotoTable)
TL = tokenize(s)
print(f'tokenlist: {TL}\n')
if parser.parse(TL):
print('Parse successful!')
else:
print('Parse failed!')
test('(1 + 2) * 3')
test('1 * 2 + 3 * (4 - 5) / 2')
test('11+22*(33-44)/(5-10*5/(4-3))')
test('1+2*3-')
"""
Explanation: Testing
End of explanation
"""
|
nntisapeh/intro_programming | notebooks/lists_tuples.ipynb | mit | students = ['bernice', 'aaron', 'cody']
for student in students:
print("Hello, " + student.title() + "!")
"""
Explanation: Lists and Tuples
In this notebook, you will learn to store more than one value in a single variable. This by itself is one of the most powerful ideas in programming, and it introduces a number of other central concepts such as loops. If this section makes sense to you, you will be able to start writing some interesting programs, and you can be more confident that you will develop overall competence as a programmer.
Previous: Variables, Strings, and Numbers |
Home |
Next: Introducing Functions
Contents
Lists
Introducing Lists
Example
Naming and defining a list
Accessing one item in a list
Exercises
Lists and Looping
Accessing all elements in a list
Enumerating a list
Exercises
Common List Operations
Modifying elements in a list
Finding an element in a list
Testing whether an element is in a list
Adding items to a list
Creating an empty list
Sorting a list
Finding the length of a list
Exercises
Removing Items from a List
Removing items by position
Removing items by value
Popping items
Exercises
Want to see what functions are?
Slicing a List
Copying a list
Exercises
Numerical Lists
The range() function
The min(), max(), sum() functions
Exercises
List Comprehensions
Numerical comprehensions
Non-numerical comprehensions
Exercises
Strings as Lists
Strings as a list of characters
Slicing strings
Finding substrings
Replacing substrings
Counting substrings
Splitting strings
Other string methods
Exercises
Challenges
Tuples
Defining tuples, and accessing elements
Using tuples to make strings
Exercises
Coding Style: PEP 8
Why have style conventions?
What is a PEP?
Basic Python style guidelines
Exercises
Overall Challenges
Lists
Introducing Lists
Example
A list is a collection of items stored in a single variable. The items should be related in some way, but there are no restrictions on what can be stored in a list. Here is a simple example of a list, and how we can quickly access each item in the list.
End of explanation
"""
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
"""
Explanation: Naming and defining a list
Since lists are collections of objects, it is good practice to give them a plural name. If each item in your list is a car, call the list 'cars'. If each item is a dog, call your list 'dogs'. This gives you a straightforward way to refer to the entire list ('dogs'), and to a single item in the list ('dog').
In Python, square brackets designate a list. To define a list, you give the name of the list, the equals sign, and the values you want to include in your list within square brackets.
End of explanation
"""
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dog = dogs[0]
print(dog.title())
"""
Explanation: Accessing one item in a list
Items in a list are identified by their position in the list, starting with zero. This will almost certainly trip you up at some point. Programmers even joke about how often we all make "off-by-one" errors, so don't feel bad when you make this kind of error.
To access the first element in a list, you give the name of the list followed by the index 0 in square brackets.
End of explanation
"""
###highlight=[4]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dog = dogs[1]
print(dog.title())
"""
Explanation: The number in square brackets is called the index of the item. Because lists start at zero, the index of an item is always one less than its position in the list. So to get the second item in the list, we need to use an index of 1.
End of explanation
"""
###highlight=[4]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dog = dogs[-1]
print(dog.title())
"""
Explanation: Accessing the last items in a list
You can probably see that to get the last item in this list, we would use an index of 2. This works, but it would only work because our list has exactly three items. To get the last item in a list, no matter how long the list is, you can use an index of -1.
End of explanation
"""
###highlight=[4]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dog = dogs[-2]
print(dog.title())
"""
Explanation: This syntax also works for the second to last item, the third to last, and so forth.
End of explanation
"""
###highlight=[4]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dog = dogs[-4]
print(dog.title())
"""
Explanation: You can't use a negative number larger than the length of the list, however.
End of explanation
"""
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
for dog in dogs:
print(dog)
"""
Explanation: top
<a id="Exercises-lists"></a>
Exercises
First List
Store the values 'python', 'c', and 'java' in a list. Print each of these values out, using their position in the list.
First Neat List
Store the values 'python', 'c', and 'java' in a list. Print a statement about each of these values, using their position in the list.
Your statement could simply be, 'A nice programming language is value.'
Your First List
Think of something you can store in a list. Make a list with three or four items, and then print a message that includes at least one item from your list. Your sentence could be as simple as, "One item in my list is a ____."
top
Lists and Looping
Accessing all elements in a list
This is one of the most important concepts related to lists. You can have a list with a million items in it, and in three lines of code you can write a sentence for each of those million items. If you want to understand lists, and become a competent programmer, make sure you take the time to understand this section.
We use a loop to access all the elements in a list. A loop is a block of code that repeats itself until it runs out of items to work with, or until a certain condition is met. In this case, our loop will run once for every item in our list. With a list that is three items long, our loop will run three times.
Let's take a look at how we access all the items in a list, and then try to understand how it works.
End of explanation
"""
###highlight=[5]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
for dog in dogs:
print('I like ' + dog + 's.')
"""
Explanation: We have already seen how to create a list, so we are really just trying to understand how the last two lines work. These last two lines make up a loop, and the language here can help us see what is happening:
for dog in dogs:
The keyword "for" tells Python to get ready to use a loop.
The variable "dog", with no "s" on it, is a temporary placeholder variable. This is the variable that Python will place each item in the list into, one at a time.
The first time through the loop, the value of "dog" will be 'border collie'.
The second time through the loop, the value of "dog" will be 'australian cattle dog'.
The third time through, "dog" will be 'labrador retriever'.
After this, there are no more items in the list, and the loop will end.
The site <a href="http://pythontutor.com/visualize.html#code=dogs+%3D+%5B'border+collie',+'australian+cattle+dog',+'labrador+retriever'%5D%0A%0Afor+dog+in+dogs%3A%0A++++print(dog)&mode=display&cumulative=false&heapPrimitives=false&drawParentPointers=false&textReferences=false&showOnlyOutputs=false&py=3&curInstr=0">pythontutor.com</a> allows you to run Python code one line at a time. As you run the code, there is also a visualization on the screen that shows you how the variable "dog" holds different values as the loop progresses. There is also an arrow that moves around your code, showing you how some lines are run just once, while other lines are run multiple times. If you would like to see this in action, click the Forward button and watch the visualization, and the output as it is printed to the screen. Tools like this are incredibly valuable for seeing what Python is doing with your code.
Doing more with each item
We can do whatever we want with the value of "dog" inside the loop. In this case, we just print the name of the dog.
print(dog)
We are not limited to just printing the word dog. We can do whatever we want with this value, and this action will be carried out for every item in the list. Let's say something about each dog in our list.
End of explanation
"""
###highlight=[6,7,8]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
for dog in dogs:
print('I like ' + dog + 's.')
print('No, I really really like ' + dog +'s!\n')
print("\nThat's just how I feel about dogs.")
"""
Explanation: Visualize this on <a href="http://pythontutor.com/visualize.html#code=dogs+%3D+%5B'border+collie',+'australian+cattle+dog',+'labrador+retriever'%5D%0A%0Afor+dog+in+dogs%3A%0A++++print('I+like+'+%2B+dog+%2B+'s.')&mode=display&cumulative=false&heapPrimitives=false&drawParentPointers=false&textReferences=false&showOnlyOutputs=false&py=3&curInstr=0">pythontutor</a>.
Inside and outside the loop
Python uses indentation to decide what is inside the loop and what is outside the loop. Code that is inside the loop will be run for every item in the list. Code that is not indented, which comes after the loop, will be run once just like regular code.
End of explanation
"""
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
print("Results for the dog show are as follows:\n")
for index, dog in enumerate(dogs):
place = str(index)
print("Place: " + place + " Dog: " + dog.title())
"""
Explanation: Notice that the last line only runs once, after the loop is completed. Also notice the use of newlines ("\n") to make the output easier to read. Run this code on <a href="http://pythontutor.com/visualize.html#code=dogs+%3D+%5B'border+collie',+'australian+cattle+dog',+'labrador+retriever'%5D%0A%0Afor+dog+in+dogs%3A%0A++++print('I+like+'+%2B+dog+%2B+'s.')%0A++++print('No,+I+really+really+like+'+%2B+dog+%2B's!%5Cn')%0A++++%0Aprint(%22%5CnThat's+just+how+I+feel+about+dogs.%22)&mode=display&cumulative=false&heapPrimitives=false&drawParentPointers=false&textReferences=false&showOnlyOutputs=false&py=3&curInstr=0">pythontutor</a>.
top
Enumerating a list
When you are looping through a list, you may want to know the index of the current item. You could always use the list.index(value) syntax, but there is a simpler way. The enumerate() function tracks the index of each item for you, as it loops through the list:
End of explanation
"""
###highlight=[6]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
print("Results for the dog show are as follows:\n")
for index, dog in enumerate(dogs):
place = str(index + 1)
print("Place: " + place + " Dog: " + dog.title())
"""
Explanation: To enumerate a list, you need to add an index variable to hold the current index. So instead of
for dog in dogs:
You have
for index, dog in enumerate(dogs)
The value in the variable index is always an integer. If you want to print it in a string, you have to turn the integer into a string:
str(index)
The index always starts at 0, so in this example the value of place should actually be the current index, plus one:
End of explanation
"""
###highlight=[5]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
for dog in dogs:
print(dogs)
"""
Explanation: A common looping error
One common looping error occurs when instead of using the single variable dog inside the loop, we accidentally use the variable that holds the entire list:
End of explanation
"""
###highlight=[5]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
for dog in dogs:
print('I like ' + dogs + 's.')
"""
Explanation: In this example, instead of printing each dog in the list, we print the entire list every time we go through the loop. Python puts each individual item in the list into the variable dog, but we never use that variable. Sometimes you will just get an error if you try to do this:
End of explanation
"""
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dogs[0] = 'australian shepherd'
print(dogs)
"""
Explanation: <a id="Exercises-loops"></a>
Exercises
First List - Loop
Repeat First List, but this time use a loop to print out each value in the list.
First Neat List - Loop
Repeat First Neat List, but this time use a loop to print out your statements. Make sure you are writing the same sentence for all values in your list. Loops are not effective when you are trying to generate different output for each value in your list.
Your First List - Loop
Repeat Your First List, but this time use a loop to print out your message for each item in your list. Again, if you came up with different messages for each value in your list, decide on one message to repeat for each value in your list.
top
Common List Operations
Modifying elements in a list
You can change the value of any element in a list if you know the position of that item.
End of explanation
"""
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
print(dogs.index('australian cattle dog'))
"""
Explanation: Finding an element in a list
If you want to find out the position of an element in a list, you can use the index() function.
End of explanation
"""
###highlight=[4]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
print(dogs.index('poodle'))
"""
Explanation: This method raises a ValueError if the requested item is not in the list.
End of explanation
"""
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
print('australian cattle dog' in dogs)
print('poodle' in dogs)
"""
Explanation: Testing whether an item is in a list
You can test whether an item is in a list using the "in" keyword. This will become more useful after learning how to use if-else statements.
End of explanation
"""
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dogs.append('poodle')
for dog in dogs:
print(dog.title() + "s are cool.")
"""
Explanation: Adding items to a list
Appending items to the end of a list
We can add an item to a list using the append() method. This method adds the new item to the end of the list.
End of explanation
"""
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
dogs.insert(1, 'poodle')
print(dogs)
"""
Explanation: Inserting items into a list
We can also insert items anywhere we want in a list, using the insert() function. We specify the position we want the item to have, and everything from that point on is shifted one position to the right. In other words, the index of every item after the new item is increased by one.
End of explanation
"""
# Create an empty list to hold our users.
usernames = []
# Add some users.
usernames.append('bernice')
usernames.append('cody')
usernames.append('aaron')
# Greet all of our users.
for username in usernames:
print("Welcome, " + username.title() + '!')
"""
Explanation: Note that you have to give the position of the new item first, and then the value of the new item. If you do it in the reverse order, you will get an error.
Creating an empty list
Now that we know how to add items to a list after it is created, we can use lists more dynamically. We are no longer stuck defining our entire list at once.
A common approach with lists is to define an empty list, and then let your program add items to the list as necessary. This approach works, for example, when starting to build an interactive web site. Your list of users might start out empty, and then as people register for the site it will grow. This is a simplified approach to how web sites actually work, but the idea is realistic.
Here is a brief example of how to start with an empty list, start to fill it up, and work with the items in the list. The only new thing here is the way we define an empty list, which is just an empty set of square brackets.
End of explanation
"""
###highlight=[10,11,12]
# Create an empty list to hold our users.
usernames = []
# Add some users.
usernames.append('bernice')
usernames.append('cody')
usernames.append('aaron')
# Greet all of our users.
for username in usernames:
print("Welcome, " + username.title() + '!')
# Recognize our first user, and welcome our newest user.
print("\nThank you for being our very first user, " + usernames[0].title() + '!')
print("And a warm welcome to our newest user, " + usernames[-1].title() + '!')
"""
Explanation: If we don't change the order in our list, we can use the list to figure out who our oldest and newest users are.
End of explanation
"""
students = ['bernice', 'aaron', 'cody']
# Put students in alphabetical order.
students.sort()
# Display the list in its current order.
print("Our students are currently in alphabetical order.")
for student in students:
print(student.title())
#Put students in reverse alphabetical order.
students.sort(reverse=True)
# Display the list in its current order.
print("\nOur students are now in reverse alphabetical order.")
for student in students:
print(student.title())
"""
Explanation: Note that the code welcoming our newest user will always work, because we have used the index -1. If we had used the index 2 we would always get the third user, even as our list of users grows and grows.
Sorting a List
We can sort a list alphabetically, in either order.
End of explanation
"""
students = ['bernice', 'aaron', 'cody']
# Display students in alphabetical order, but keep the original order.
print("Here is the list in alphabetical order:")
for student in sorted(students):
print(student.title())
# Display students in reverse alphabetical order, but keep the original order.
print("\nHere is the list in reverse alphabetical order:")
for student in sorted(students, reverse=True):
print(student.title())
print("\nHere is the list in its original order:")
# Show that the list is still in its original order.
for student in students:
print(student.title())
"""
Explanation: sorted() vs. sort()
Whenever you consider sorting a list with sort(), keep in mind that you cannot recover the original order afterwards. If you want to display a list in sorted order, but preserve the original order, you can use the sorted() function. The sorted() function also accepts the optional reverse=True argument.
End of explanation
"""
students = ['bernice', 'aaron', 'cody']
students.reverse()
print(students)
"""
Explanation: Reversing a list
We have seen three possible orders for a list:
- The original order in which the list was created
- Alphabetical order
- Reverse alphabetical order
There is one more order we can use, and that is the reverse of the original order of the list. The reverse() function gives us this order.
End of explanation
"""
numbers = [1, 3, 4, 2]
# sort() puts numbers in increasing order.
numbers.sort()
print(numbers)
# sort(reverse=True) puts numbers in decreasing order.
numbers.sort(reverse=True)
print(numbers)
numbers = [1, 3, 4, 2]
# sorted() preserves the original order of the list:
print(sorted(numbers))
print(numbers)
numbers = [1, 3, 4, 2]
# The reverse() function also works for numerical lists.
numbers.reverse()
print(numbers)
"""
Explanation: Note that reversing is permanent, although you could follow up with another call to reverse() and get back the original order of the list.
Sorting a numerical list
All of the sorting functions work for numerical lists as well.
End of explanation
"""
usernames = ['bernice', 'cody', 'aaron']
user_count = len(usernames)
print(user_count)
"""
Explanation: Finding the length of a list
You can find the length of a list using the len() function.
End of explanation
"""
# Create an empty list to hold our users.
usernames = []
# Add some users, and report on how many users we have.
usernames.append('bernice')
user_count = len(usernames)
print("We have " + str(user_count) + " user!")
usernames.append('cody')
usernames.append('aaron')
user_count = len(usernames)
print("We have " + str(user_count) + " users!")
"""
Explanation: There are many situations where you might want to know how many items are in a list. If you have a list that stores your users, you can find the length of your list at any time, and know how many users you have.
End of explanation
"""
usernames = ['bernice', 'cody', 'aaron']
user_count = len(usernames)
print("This will cause an error: " + user_count)
###highlight=[5]
usernames = ['bernice', 'cody', 'aaron']
user_count = len(usernames)
print("This will work: " + str(user_count))
"""
Explanation: On a technical note, the len() function returns an integer, which can't be concatenated directly with strings. We use the str() function to turn the integer into a string so that it prints nicely:
End of explanation
"""
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
# Remove the first dog from the list.
del dogs[0]
print(dogs)
"""
Explanation: <a id="Exercises-operations"></a>
Exercises
Working List
Make a list that includes four careers, such as 'programmer' and 'truck driver'.
Use the list.index() function to find the index of one career in your list.
Use the in function to show that this career is in your list.
Use the append() function to add a new career to your list.
Use the insert() function to add a new career at the beginning of the list.
Use a loop to show all the careers in your list.
Starting From Empty
Create the list you ended up with in Working List, but this time start your file with an empty list and fill it up using append() statements.
Print a statement that tells us what the first career you thought of was.
Print a statement that tells us what the last career you thought of was.
Ordered Working List
Start with the list you created in Working List.
You are going to print out the list in a number of different orders.
Each time you print the list, use a for loop rather than printing the raw list.
Print a message each time telling us what order we should see the list in.
Print the list in its original order.
Print the list in alphabetical order.
Print the list in its original order.
Print the list in reverse alphabetical order.
Print the list in its original order.
Print the list in the reverse order from what it started.
Print the list in its original order
Permanently sort the list in alphabetical order, and then print it out.
Permanently sort the list in reverse alphabetical order, and then print it out.
Ordered Numbers
Make a list of 5 numbers, in a random order.
You are going to print out the list in a number of different orders.
Each time you print the list, use a for loop rather than printing the raw list.
Print a message each time telling us what order we should see the list in.
Print the numbers in the original order.
Print the numbers in increasing order.
Print the numbers in the original order.
Print the numbers in decreasing order.
Print the numbers in their original order.
Print the numbers in the reverse order from how they started.
Print the numbers in the original order.
Permanently sort the numbers in increasing order, and then print them out.
Permanently sort the numbers in decreasing order, and then print them out.
List Lengths
Copy two or three of the lists you made from the previous exercises, or make up two or three new lists.
Print out a series of statements that tell us how long each list is.
top
Removing Items from a List
Hopefully you can see by now that lists are a dynamic structure. We can define an empty list and then fill it up as information comes into our program. To become really dynamic, we need some ways to remove items from a list when we no longer need them. You can remove items from a list through their position, or through their value.
Removing items by position
If you know the position of an item in a list, you can remove that item using the del command. To use this approach, give the command del and the name of your list, with the index of the item you want to remove in square brackets:
End of explanation
"""
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
# Remove australian cattle dog from the list.
dogs.remove('australian cattle dog')
print(dogs)
"""
Explanation: Removing items by value
You can also remove an item from a list if you know its value. To do this, we use the remove() function. Give the name of the list, followed by the word remove with the value of the item you want to remove in parentheses. Python looks through your list, finds the first item with this value, and removes it.
End of explanation
"""
letters = ['a', 'b', 'c', 'a', 'b', 'c']
# Remove the letter a from the list.
letters.remove('a')
print(letters)
"""
Explanation: Be careful to note, however, that only the first item with this value is removed. If you have multiple items with the same value, you will have some items with this value left in your list.
End of explanation
"""
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
last_dog = dogs.pop()
print(last_dog)
print(dogs)
"""
Explanation: Popping items from a list
There is a cool concept in programming called "popping" items from a collection. Every programming language has some sort of data structure similar to Python's lists. All of these structures can be used as stacks or queues, and there are various ways of processing the items they hold.
One simple approach is to start with an empty list, and then add items to that list. When you want to work with the items in the list, you always take the last item from the list, do something with it, and then remove that item. The pop() function makes this easy. It removes the last item from the list, and gives it to us so we can work with it. This is easier to show with an example:
End of explanation
"""
###highlight=[3]
dogs = ['border collie', 'australian cattle dog', 'labrador retriever']
first_dog = dogs.pop(0)
print(first_dog)
print(dogs)
"""
Explanation: This is an example of a first-in, last-out approach. The first item in the list would be the last item processed if you kept using this approach. We will see a full implementation of this approach later on, when we learn about while loops.
You can actually pop any item you want from a list, by giving the index of the item you want to pop. So we could do a first-in, first-out approach by popping the first item in the list:
End of explanation
"""
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Grab the first three users in the list.
first_batch = usernames[0:3]
for user in first_batch:
print(user.title())
"""
Explanation: <a id="Exercises-removing"></a>
Exercises
Famous People
Make a list that includes the names of four famous people.
Remove each person from the list, one at a time, using each of the four methods we have just seen:
Pop the last item from the list, and pop any item except the last item.
Remove one item by its position, and one item by its value.
Print out a message that there are no famous people left in your list, and print your list to prove that it is empty.
top
Want to see what functions are?
At this point, you might have noticed we have a fair bit of repetitive code in some of our examples. This repetition will disappear once we learn how to use functions. If this repetition is bothering you already, you might want to go look at Introducing Functions before you do any more exercises in this section.
Slicing a List
Since a list is a collection of items, we should be able to get any subset of those items. For example, if we want to get just the first three items from the list, we should be able to do so easily. The same should be true for any three items in the middle of the list, or the last three items, or any x items from anywhere in the list. These subsets of a list are called slices.
To get a subset of a list, we give the position of the first item we want, and the position of the first item we do not want to include in the subset. So the slice list[0:3] will return a list containing items 0, 1, and 2, but not item 3. Here is how you get a batch containing the first three items.
End of explanation
"""
###highlight=[5]
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Grab the first three users in the list.
first_batch = usernames[:3]
for user in first_batch:
print(user.title())
"""
Explanation: If you want to grab everything up to a certain position in the list, you can also leave the first index blank:
End of explanation
"""
###highlight=[7,8,9]
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Grab the first three users in the list.
first_batch = usernames[0:3]
# The original list is unaffected.
for user in usernames:
print(user.title())
"""
Explanation: When we grab a slice from a list, the original list is not affected:
End of explanation
"""
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Grab a batch from the middle of the list.
middle_batch = usernames[1:4]
for user in middle_batch:
print(user.title())
"""
Explanation: We can get any segment of a list we want, using the slice method:
End of explanation
"""
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Grab all users from the third to the end.
end_batch = usernames[2:]
for user in end_batch:
print(user.title())
"""
Explanation: To get all items from one position in the list to the end of the list, we can leave off the second index:
End of explanation
"""
usernames = ['bernice', 'cody', 'aaron', 'ever', 'dalia']
# Make a copy of the list.
copied_usernames = usernames[:]
print("The full copied list:\n\t", copied_usernames)
# Remove the first two users from the copied list.
del copied_usernames[0]
del copied_usernames[0]
print("\nTwo users removed from copied list:\n\t", copied_usernames)
# The original list is unaffected.
print("\nThe original list:\n\t", usernames)
"""
Explanation: Copying a list
You can use the slice notation to make a copy of a list, by leaving out both the starting and the ending index. This causes the slice to consist of everything from the first item to the last, which is the entire list.
End of explanation
"""
# Print out the first ten numbers.
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
for number in numbers:
print(number)
"""
Explanation: <a id="Exercises-slicing"></a>
Exercises
Alphabet Slices
Store the first ten letters of the alphabet in a list.
Use a slice to print out the first three letters of the alphabet.
Use a slice to print out any three letters from the middle of your list.
Use a slice to print out the letters from any point in the middle of your list, to the end.
Protected List
Your goal in this exercise is to prove that copying a list protects the original list.
Make a list with three people's names in it.
Use a slice to make a copy of the entire list.
Add at least two new names to the new copy of the list.
Make a loop that prints out all of the names in the original list, along with a message that this is the original list.
Make a loop that prints out all of the names in the copied list, along with a message that this is the copied list.
top
Numerical Lists
There is nothing special about lists of numbers, but there are some functions you can use to make working with numerical lists more efficient. Let's make a list of the first ten numbers, and start working with it to see how we can use numbers in a list.
End of explanation
"""
# Print the first ten numbers.
for number in range(1,11):
print(number)
"""
Explanation: The range() function
This works, but it is not very efficient if we want to work with a large set of numbers. The range() function helps us generate long lists of numbers. Here are two ways to do the same thing, using the range function.
End of explanation
"""
# Print the first ten odd numbers.
for number in range(1,21,2):
print(number)
"""
Explanation: The range() function takes a starting number and an end number. You get all integers up to, but not including, the end number. You can also add a step value, which tells range() how big a step to take between numbers:
End of explanation
"""
# Create a list of the first ten numbers.
numbers = list(range(1,11))
print(numbers)
"""
Explanation: If we want to store these numbers in a list, we can use the list() function. This function takes in a range, and turns it into a list:
End of explanation
"""
# Store the first million numbers in a list.
numbers = list(range(1,1000001))
# Show the length of the list:
print("The list 'numbers' has " + str(len(numbers)) + " numbers in it.")
# Show the last ten numbers:
print("\nThe last ten numbers in the list are:")
for number in numbers[-10:]:
print(number)
"""
Explanation: This is incredibly powerful; we can now create a list of the first million numbers, just as easily as we made a list of the first ten numbers. It doesn't really make sense to print the million numbers here, but we can show that the list really does have one million items in it, and we can print the last ten items to show that the list is correct.
End of explanation
"""
ages = [23, 16, 14, 28, 19, 11, 38]
youngest = min(ages)
oldest = max(ages)
total_years = sum(ages)
print("Our youngest reader is " + str(youngest) + " years old.")
print("Our oldest reader is " + str(oldest) + " years old.")
print("Together, we have " + str(total_years) + " years worth of life experience.")
"""
Explanation: There are two things here that might be a little unclear. The expression
str(len(numbers))
takes the length of the numbers list, and turns it into a string that can be printed.
The expression
numbers[-10:]
gives us a slice of the list. The index -1 is the last item in the list, and the index -10 is the item ten places from the end of the list. So the slice numbers[-10:] gives us everything from that item to the end of the list.
The min(), max(), and sum() functions
There are three functions you can easily use with numerical lists. As you might expect, the min() function returns the smallest number in the list, the max() function returns the largest number in the list, and the sum() function returns the total of all numbers in the list.
End of explanation
"""
# Store the first ten square numbers in a list.
# Make an empty list that will hold our square numbers.
squares = []
# Go through the first ten numbers, square them, and add them to our list.
for number in range(1,11):
new_square = number**2
squares.append(new_square)
# Show that our list is correct.
for square in squares:
print(square)
"""
Explanation: <a id="Exercises-numerical"></a>
Exercises
First Twenty
Use the range() function to store the first twenty numbers (1-20) in a list, and print them out.
Larger Sets
Take the first_twenty.py program you just wrote. Change your end number to a much larger number. How long does it take your computer to print out the first million numbers? (Most people will never see a million numbers scroll before their eyes. You can now see this!)
Five Wallets
Imagine five wallets with different amounts of cash in them. Store these five values in a list, and print out the following sentences:
"The fattest wallet has $ value in it."
"The skinniest wallet has $ value in it."
"All together, these wallets have $ value in them."
top
List Comprehensions
I thought carefully before including this section. If you are brand new to programming, list comprehensions may look confusing at first. They are a shorthand way of creating and working with lists. It is good to be aware of list comprehensions, because you will see them in other people's code, and they are really useful when you understand how to use them. That said, if they don't make sense to you yet, don't worry about using them right away. When you have worked with enough lists, you will want to use comprehensions. For now, it is good enough to know they exist, and to recognize them when you see them. If you like them, go ahead and start trying to use them now.
Numerical Comprehensions
Let's consider how we might make a list of the first ten square numbers. We could do it like this:
End of explanation
"""
###highlight=[8]
# Store the first ten square numbers in a list.
# Make an empty list that will hold our square numbers.
squares = []
# Go through the first ten numbers, square them, and add them to our list.
for number in range(1,11):
squares.append(number**2)
# Show that our list is correct.
for square in squares:
print(square)
"""
Explanation: This should make sense at this point. If it doesn't, go over the code with these thoughts in mind:
- We make an empty list called squares that will hold the values we are interested in.
- Using the range() function, we start a loop that will go through the numbers 1-10.
- Each time we pass through the loop, we find the square of the current number by raising it to the second power.
- We add this new value to our list squares.
- We go through our newly-defined list and print out each square.
Now let's make this code more efficient. We don't really need to store the new square in its own variable new_square; we can just add it directly to the list of squares. The line
new_square = number**2
is taken out, and the next line takes care of the squaring:
End of explanation
"""
###highlight=[2,3]
# Store the first ten square numbers in a list.
squares = [number**2 for number in range(1,11)]
# Show that our list is correct.
for square in squares:
print(square)
"""
Explanation: List comprehensions allow us to collapse the first three lines of code into one line. Here's what it looks like:
End of explanation
"""
# Make an empty list that will hold the even numbers.
evens = []
# Loop through the numbers 1-10, double each one, and add it to our list.
for number in range(1,11):
evens.append(number*2)
# Show that our list is correct:
for even in evens:
print(even)
"""
Explanation: It should be pretty clear that this code is more efficient than our previous approach, but it may not be clear what is happening. Let's take a look at everything that is happening in that first line:
We define a list called squares.
Look at the second part of what's in square brackets:
for number in range(1,11)
This sets up a loop that goes through the numbers 1-10, storing each value in the variable number. Now we can see what happens to each number in the loop:
number**2
Each number is raised to the second power, and this is the value that is stored in the list we defined. We might read this line in the following way:
squares = [raise number to the second power, for each number in the range 1-10]
Another example
It is probably helpful to see a few more examples of how comprehensions can be used. Let's try to make the first ten even numbers, the longer way:
End of explanation
"""
###highlight=[2,3]
# Make a list of the first ten even numbers.
evens = [number*2 for number in range(1,11)]
for even in evens:
print(even)
"""
Explanation: Here's how we might think of doing the same thing, using a list comprehension:
evens = [multiply each number by 2, for each number in the range 1-10]
Here is the same line in code:
End of explanation
"""
# Consider some students.
students = ['bernice', 'aaron', 'cody']
# Let's turn them into great students.
great_students = []
for student in students:
great_students.append(student.title() + " the great!")
# Let's greet each great student.
for great_student in great_students:
print("Hello, " + great_student)
"""
Explanation: Non-numerical comprehensions
We can use comprehensions with non-numerical lists as well. In this case, we will create an initial list, and then use a comprehension to make a second list from the first one. Here is a simple example, without using comprehensions:
End of explanation
"""
###highlight=[5,6]
# Consider some students.
students = ['bernice', 'aaron', 'cody']
# Let's turn them into great students.
great_students = [student.title() + " the great!" for student in students]
# Let's greet each great student.
for great_student in great_students:
print("Hello, " + great_student)
"""
Explanation: To use a comprehension in this code, we want to write something like this:
great_students = [add 'the great' to each student, for each student in the list of students]
Here's what it looks like:
End of explanation
"""
message = "Hello!"
for letter in message:
print(letter)
"""
Explanation: <a id="Exercises-comprehensions"></a>
Exercises
If these examples are making sense, go ahead and try to do the following exercises using comprehensions. If not, try the exercises without comprehensions. You may figure out how to use comprehensions after you have solved each exercise the longer way.
Multiples of Ten
Make a list of the first ten multiples of ten (10, 20, 30... 90, 100). There are a number of ways to do this, but try to do it using a list comprehension. Print out your list.
Cubes
We saw how to make a list of the first ten squares. Make a list of the first ten cubes (1, 8, 27... 1000) using a list comprehension, and print them out.
Awesomeness
Store five names in a list. Make a second list that adds the phrase "is awesome!" to each name, using a list comprehension. Print out the awesome version of the names.
Working Backwards
Write out the following code without using a list comprehension:
plus_thirteen = [number + 13 for number in range(1,11)]
top
Strings as Lists
Now that you have some familiarity with lists, we can take a second look at strings. A string is really a sequence of characters, so many of the concepts from working with lists behave the same way with strings.
Strings as a list of characters
We can loop through a string using a for loop, just like we loop through a list:
End of explanation
"""
message = "Hello world!"
message_list = list(message)
print(message_list)
"""
Explanation: We can create a list from a string. The list will have one element for each character in the string:
End of explanation
"""
message = "Hello World!"
first_char = message[0]
last_char = message[-1]
print(first_char, last_char)
"""
Explanation: Slicing strings
We can access any character in a string by its position, just as we access individual items in a list:
End of explanation
"""
message = "Hello World!"
first_three = message[:3]
last_three = message[-3:]
print(first_three, last_three)
"""
Explanation: We can extend this to take slices of a string:
End of explanation
"""
message = "I like cats and dogs."
dog_present = 'dog' in message
print(dog_present)
"""
Explanation: Finding substrings
Now that you have seen what indexes mean for strings, we can search for substrings. A substring is a series of characters that appears in a string.
You can use the in keyword to find out whether a particular substring appears in a string:
End of explanation
"""
message = "I like cats and dogs."
dog_index = message.find('dog')
print(dog_index)
"""
Explanation: If you want to know where a substring appears in a string, you can use the find() method. The find() method tells you the index at which the substring begins.
End of explanation
"""
###highlight=[2]
message = "I like cats and dogs, but I'd much rather own a dog."
dog_index = message.find('dog')
print(dog_index)
"""
Explanation: Note, however, that this function only returns the index of the first appearance of the substring you are looking for. If the substring appears more than once, you will miss the other substrings.
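One more detail worth knowing, shown in a small sketch: find() returns -1 when the substring is absent, so it is worth checking the result before using it as an index:

```python
message = "I like cats and dogs."
bird_index = message.find('bird')
print(bird_index)  # -1 means 'bird' does not appear anywhere in the string
```

A -1 used directly as an index would silently give you the last character of the string, which can hide bugs.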
End of explanation
"""
###highlight=[3,4]
message = "I like cats and dogs, but I'd much rather own a dog."
last_dog_index = message.rfind('dog')
print(last_dog_index)
"""
Explanation: If you want to find the last appearance of a substring, you can use the rfind() function:
End of explanation
"""
message = "I like cats and dogs, but I'd much rather own a dog."
message = message.replace('dog', 'snake')
print(message)
"""
Explanation: Replacing substrings
You can use the replace() function to replace any substring with another substring. To use the replace() function, give the substring you want to replace, and then the substring you want to replace it with. You also need to store the new string, either in the same string variable or in a new variable.
End of explanation
"""
message = "I like cats and dogs, but I'd much rather own a dog."
number_dogs = message.count('dog')
print(number_dogs)
"""
Explanation: Counting substrings
If you want to know how many times a substring appears within a string, you can use the count() method.
End of explanation
"""
message = "I like cats and dogs, but I'd much rather own a dog."
words = message.split(' ')
print(words)
"""
Explanation: Splitting strings
Strings can be split into a list of substrings wherever a separator character appears. If a string consists of a simple sentence, the string can be split based on spaces. The split() function returns a list of substrings, and it takes one argument: the character that separates the parts of the string.
End of explanation
"""
animals = "dog, cat, tiger, mouse, liger, bear"
# Rewrite the string as a list, and store it in the same variable
animals = animals.split(',')
print(animals)
"""
Explanation: Notice that the punctuation is left in the substrings.
It is more common to split strings that are really lists of items, separated by something like a comma. The split() function gives you an easy way to turn comma-separated strings, which are awkward to work with directly, into lists. Once you have your data in a list, you can work with it in much more powerful ways.
End of explanation
"""
colors = ('red', 'green', 'blue')
print("The first color is: " + colors[0])
print("\nThe available colors are:")
for color in colors:
print("- " + color)
"""
Explanation: Notice that in this case, the leading spaces are kept in the substrings, because split(',') only removes the commas. You can split on ', ' (a comma followed by a space), or call strip() on each item, to get rid of them. It is a good idea to test the output of the split() function and make sure it is doing what you want with the data you are interested in.
One use of this is to work with spreadsheet data in your Python programs. Most spreadsheet applications allow you to dump your data into a comma-separated text file. You can read this file into your Python program, or even copy and paste from the text file into your program file, and then turn the data into a list. You can then process your spreadsheet data using a for loop.
Other string methods
There are a number of other string methods that we won't go into right here, but you might want to take a look at them. Most of these methods should make sense to you at this point. You might not have use for any of them right now, but it is good to know what you can do with strings. This way you will have a sense of how to solve certain problems, even if it means referring back to the list of methods to remind yourself how to write the correct syntax when you need it.
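A few of those methods in a quick illustrative sketch:

```python
name = "  eric  "
print(name.strip())                      # 'eric', whitespace removed from both ends
print(name.strip().upper())              # 'ERIC'
print("hello.py".endswith(".py"))        # True
print("hello.py".startswith("goodbye"))  # False
```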
<a id="Exercises-strings-as-lists"></a>
Exercises
Listing a Sentence
Store a single sentence in a variable. Use a for loop to print each character from your sentence on a separate line.
Sentence List
Store a single sentence in a variable. Create a list from your sentence. Print your raw list (don't use a loop, just print the list).
Sentence Slices
Store a sentence in a variable. Using slices, print out the first five characters, any five consecutive characters from the middle of the sentence, and the last five characters of the sentence.
Finding Python
Store a sentence in a variable, making sure you use the word Python at least twice in the sentence.
Use the in keyword to prove that the word Python is actually in the sentence.
Use the find() function to show where the word Python first appears in the sentence.
Use the rfind() function to show the last place Python appears in the sentence.
Use the count() function to show how many times the word Python appears in your sentence.
Use the split() function to break your sentence into a list of words. Print the raw list, and use a loop to print each word on its own line.
Use the replace() function to change Python to Ruby in your sentence.
<a id="Challenges-strings-as-lists"></a>
Challenges
Counting DNA Nucleotides
Project Rosalind is a problem set based on biotechnology concepts. It is meant to show how programming skills can help solve problems in genetics and biology.
If you have understood this section on strings, you have enough information to solve the first problem in Project Rosalind, Counting DNA Nucleotides. Give the sample problem a try.
If you get the sample problem correct, log in and try the full version of the problem!
Transcribing DNA into RNA
You also have enough information to try the second problem, Transcribing DNA into RNA. Solve the sample problem.
If you solved the sample problem, log in and try the full version!
Complementing a Strand of DNA
You guessed it, you can now try the third problem as well: Complementing a Strand of DNA. Try the sample problem, and then try the full version if you are successful.
top
Tuples
Tuples are basically lists that can never be changed. Lists are quite dynamic; they can grow as you append and insert items, and they can shrink as you remove items. You can modify any element you want to in a list. Sometimes we like this behavior, but other times we may want to ensure that no user or no part of a program can change a list. That's what tuples are for.
Technically, lists are mutable objects and tuples are immutable objects. Mutable objects can change (think of mutations), and immutable objects cannot change.
Defining tuples, and accessing elements
You define a tuple just like you define a list, except you use parentheses instead of square brackets. Once you have a tuple, you can access individual elements just like you can with a list, and you can loop through the tuple with a for loop:
End of explanation
"""
colors = ('red', 'green', 'blue')
colors.append('purple')
"""
Explanation: If you try to add something to a tuple, you will get an error:
End of explanation
"""
animal = 'dog'
print("I have a " + animal + ".")
"""
Explanation: The same kind of thing happens when you try to remove something from a tuple, or modify one of its elements. Once you define a tuple, you can be confident that its values will not change.
Using tuples to make strings
We have seen that it is pretty useful to be able to mix raw English strings with values that are stored in variables, as in the following:
End of explanation
"""
animals = ['dog', 'cat', 'bear']
for animal in animals:
print("I have a " + animal + ".")
"""
Explanation: This was especially useful when we had a series of similar statements to make:
End of explanation
"""
animal = 'dog'
print("I have a %s." % animal)
"""
Explanation: I like this approach of using the plus sign to build strings because it is fairly intuitive. We can see that we are adding several smaller strings together to make one longer string. This is intuitive, but it is a lot of typing. There is a shorter way to do this, using placeholders.
Python ignores most of the characters we put inside of strings. There are a few characters that Python pays attention to, as we saw with strings such as "\t" and "\n". Python also pays attention to "%s" and "%d". These are placeholders. When Python sees the "%s" placeholder, it looks ahead and pulls in the first argument after the % sign:
End of explanation
"""
animals = ['dog', 'cat', 'bear']
for animal in animals:
print("I have a %s." % animal)
"""
Explanation: This is a much cleaner way of generating strings that include values. We compose our sentence all in one string, and then tell Python what values to pull into the string, in the appropriate places.
This is called string formatting, and it looks the same when you use a list:
End of explanation
"""
animals = ['dog', 'cat', 'bear']
print("I have a %s, a %s, and a %s." % (animals[0], animals[1], animals[2]))
"""
Explanation: If you have more than one value to put into the string you are composing, you have to pack the values into a tuple:
End of explanation
"""
number = 23
print("My favorite number is " + number + ".")
"""
Explanation: String formatting with numbers
If you recall, printing a number with a string can cause an error:
End of explanation
"""
###highlight=[3]
number = 23
print("My favorite number is " + str(number) + ".")
"""
Explanation: Python knows that you could be talking about the value 23, or the characters '23'. So it throws an error, forcing us to clarify that we want Python to treat the number as a string. We do this by casting the number into a string using the str() function:
End of explanation
"""
###highlight=[3]
number = 23
print("My favorite number is %d." % number)
"""
Explanation: The format string "%d" takes care of this for us. Watch how clean this code is:
End of explanation
"""
numbers = [7, 23, 42]
print("My favorite numbers are %d, %d, and %d." % (numbers[0], numbers[1], numbers[2]))
"""
Explanation: If you want to use a series of numbers, you pack them into a tuple just like we saw with strings:
End of explanation
"""
###highlight=[3]
numbers = [7, 23, 42]
print("My favorite numbers are " + str(numbers[0]) + ", " + str(numbers[1]) + ", and " + str(numbers[2]) + ".")
"""
Explanation: Just for clarification, look at how much longer the code is if you use concatenation instead of string formatting:
End of explanation
"""
names = ['eric', 'ever']
numbers = [23, 2]
print("%s's favorite number is %d, and %s's favorite number is %d." % (names[0].title(), numbers[0], names[1].title(), numbers[1]))
"""
Explanation: You can mix string and numerical placeholders in any order you want.
End of explanation
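The %s and %d placeholders shown in this section still work, but newer Python code usually prefers the format() method or f-strings (Python 3.6 and later). A brief sketch for comparison:

```python
name = 'eric'
number = 23
old_style = "%s's favorite number is %d." % (name.title(), number)
format_style = "{}'s favorite number is {}.".format(name.title(), number)
fstring_style = f"{name.title()}'s favorite number is {number}."
print(old_style)      # Eric's favorite number is 23.
print(format_style)   # same output
print(fstring_style)  # same output
```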
"""
import numpy as np #importing numpy
import pandas as pd #importing pandas
from bs4 import BeautifulSoup #importing Beautiful Soup
import requests
import html5lib #importing html5lib, as per Pandas read_html request
import re
"""
Explanation: Investigating the Relationship Between Birth Month and Business Success
By Alex Freedman
Malcolm Gladwell in his famous book, Outliers, postulated that, for youth hockey players in Canada, being born earlier in a given year provided an enormous step up compared to those youths born later in the year. In order to make teaching children efficient, children must be grouped into arbitrary sets so as to create a limit on the number of children per set. Typically, children are grouped into grades with a cutoff point of January 1st. These grades are then used to determine extracurricular groupings including sporting teams, summer camps, etc. This leaves those children born in the earlier months of a given year with distinct mental and physical growth advantages, as well as significantly larger experiential banks, over their peers born in later months. Gladwell found evidence that these advantages lead to higher rates of success in youth hockey for those youths born earlier in a year than for their counterparts born in the middle and end of that same year.
I am setting out to continue developing Mr. Gladwell's theory by determining whether an earlier birth month in the year can indicate an individual's future success in business. Unlike youth hockey, where much of early success can be due to the sheer physical advantages that an older child may have over a slightly younger one, success in business should not be determined by physical growth advantages. Additionally, for the purposes of this analysis, where "success in business" will be defined by a dataset of prominent Fortune 500 CEOs, success has occurred far into adulthood, when any developmental head start would not be as critical a factor. Essentially, the question is: do developmental head starts, relative to your arbitrarily established peer group, snowball into a lifetime of relative advantage, and therefore a predisposition for success in business?
End of explanation
"""
path_to_bday_frequencies = '/Users/alexfreedman/Desktop/Stern/Data_Bootcamp/bday_frequencies 2.csv'
bday_densities = pd.read_csv(path_to_bday_frequencies)
bday_densities.sort_values('month_num')
"""
Explanation: The following data was gathered from http://www.statisticbrain.com/birth-month-statistics/ and averages out each month's number of births in the United States over the last 30 years. The csv was copied and imported as a pandas DataFrame. Later, we will discuss the results graphically.
End of explanation
"""
ceo = requests.get("http://en.wikipedia.org/wiki/List_of_chief_executive_officers") #use the
# requests package 'get' function to pull the html of the wikipedia page of CEO's
ceo_soup = BeautifulSoup(ceo.content, 'html.parser')  # name a parser explicitly to avoid a warning
ceo_table = ceo_soup.find('table', attrs={'class':'wikitable sortable'})
#wikitable sortable is the beginning of the table in the html (viewed with view source of chrome)
dfceo = pd.read_html(str(ceo_table),skiprows=0,encoding="utf-8")[0]
#the pandas read_html function reads the ceo_table html and turns it into a dataframe
dfceo.columns = dfceo.iloc[0] #make the first row the columns of the df
dfceo = dfceo.reindex(dfceo.index.drop(0)) #reindex the df to start at line 1
"""
Explanation: With the help of my data scientist friend, I was able to scrape a Wikipedia table of prominent CEOs, which will serve as my dataset for this project. The process was as follows:
Use the requests.get function to pull the html of the relevant Wikipedia Table
Turn this into a BeautifulSoup object.
Using Chrome's inspect source feature, we can determine that we want the table html tag corresponding to the attribute class=wikitable sortable.
Use pandas's handy read_html function to convert the html table into a DataFrame.
Fix the first row as the column headers, and remove that row. Reindex the df from the new first row.
End of explanation
"""
def find_link(s, prestr = 'https://wikipedia.org'): #grab each wiki link from wiki table rows
return [prestr + ceo_soup.find('a', text=elem)['href'] for elem in s]
def remove_footnote(s): #use regex to remove footnotes (from links, but can be used elsewhere)
    return [re.sub(r'\[.*?\]', '', elem) for elem in s]  # raw string avoids invalid-escape warnings
def find_birthday(wl): #span with class=bday
#relevancy is determined from Chrome's View Source
    bdays = [BeautifulSoup(requests.get(elem).content, 'html.parser')
             .find('span', attrs={'class':'bday'}) for elem in wl]
bday_stage = ["" if elem is None else elem.text for elem in bdays]
bday_format = pd.to_datetime(bday_stage, format='%Y-%m-%d', errors='coerce') #format
#the string birthday's into pandas datetime objects. errors='coerce' is required
#to not throw errors when trying to read empty ("") birthdays, and instead convert to NaN
return bday_format
"""
Explanation: The next three functions will do the heavy lifting associated with:
- Finding the http link associated with each CEO row in the table
- Cleaning the links to remove unwanted footnotes with a regex. (Otherwise an error will be thrown.)
- Finding the birthday - if it is available - from the Wikipedia page corresponding to the standardized link.
Each function uses a list comprehension to return an array. They will be used in a pandas assign call below to append the scraped data to the CEO dataframe created above.
The original BeautifulSoup call is reused for these purposes.
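To see what the footnote-stripping regex in remove_footnote() does, here is a self-contained illustration; the name below is a hypothetical example, not an entry from the actual table:

```python
import re

raw_name = "Tim Cook[1]"  # hypothetical table entry with a wiki-style footnote marker
clean_name = re.sub(r'\[.*?\]', '', raw_name)  # non-greedy match removes each [...] block
print(clean_name)  # Tim Cook
```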
End of explanation
"""
dfceo_final = (dfceo.assign(wiki_link = lambda x: find_link(s=remove_footnote(x.Executive)))
.assign(birth_date = lambda x: find_birthday(x.wiki_link))
.assign(birth_year = lambda x: x.birth_date.dt.year,
birth_quarter = lambda x: x.birth_date.dt.quarter,
birth_month = lambda x: x.birth_date.dt.month,
birth_day = lambda x: x.birth_date.dt.day))
dfceo_final
"""
Explanation: Using the functions defined above, create new (final) df called dfceo_final. find_birthday() converts the string birthday elements into pandas datetime objects in birth_date. Once they are formatted as such, creating birth_year, birth_quarter, birth_month, and birth_day variables are straightforward.
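The .dt accessor pattern used in this cell can be tried in isolation; the birthdays below are made up for illustration, not taken from the scraped data:

```python
import pandas as pd

# hypothetical birthdays as a pandas Series of datetimes
bdays = pd.Series(pd.to_datetime(["1955-02-24", "1964-08-30"]))
print(bdays.dt.month.tolist())    # [2, 8]
print(bdays.dt.quarter.tolist())  # [1, 3]
```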
End of explanation
"""
%matplotlib inline
from ggplot import *
import seaborn as sns
import matplotlib.pyplot as plt
dfceo_final.columns
"""
Explanation: Per the advice of my data scientist friend, I used the ggplot package to plot the first two graphs instead of matplotlib. I reviewed the following websites to get to know ggplot:
http://ggplot.yhathq.com/
http://docs.ggplot2.org/current/geom_density.html
http://docs.ggplot2.org/current/geom_histogram.html
For the last graph, I used the seaborn package to create a violin plot.
End of explanation
"""
plot1_cols = ['month_num', 'freq']
df_plot1 = (bday_densities[plot1_cols]
.dropna())
plot1_breaks = (df_plot1
.month_num
.unique()
.astype(int)
.tolist())
ggplot(aes(x='month_num', y='freq'),
data=df_plot1) +\
geom_histogram(stat='identity',
fill="blue",
alpha=.8) +\
xlab('Birthday') + ylab('Count') + ggtitle('Histogram of USA Birthdays By Month')
"""
Explanation: The below graph shows that there is a relatively uniform distribution of births throughout a given year. If anything, the graph suggests a slightly higher likelihood of births occurring in the third quarter of a given year. This data was included to show that there is not a natural propensity for first-quarter births.
End of explanation
"""
plot2_cols = ['birth_quarter']
df_plot2 = (dfceo_final[plot2_cols]
.dropna())
plot2_breaks = [1,2,3,4]
plot2_labels = ['Q1','Q2','Q3','Q4']
ggplot(aes(x='birth_quarter'),
data=df_plot2) +\
geom_density(colour="darkblue",
size=2,
fill="purple",
alpha=.2) +\
scale_x_continuous(breaks = plot2_breaks,
labels = plot2_labels) +\
xlab('Birthday') + ylab('Density') + ggtitle('Density Plot of CEO Birthday Quarters')
"""
Explanation: Below is a density plot for CEO birthdays by quarter using the scraped data from the wikipedia table. It is interesting to note that this dataset shows a disproportionate number of Q1 and Q2 births compared to the population plotted above. Further, Q3, which had the most births above, has the least births in the CEO dataset.
End of explanation
"""
plot3_cols = ['birth_quarter', 'birth_year']
df_plot3 = (dfceo_final[plot3_cols]
.dropna())
ax = sns.violinplot(x="birth_quarter", y="birth_year", data=df_plot3)
ax.set(xlabel='Birth Quarter', ylabel='Birth Year')
ax.set(xticklabels=['Q1','Q2','Q3','Q4'])
"""
Explanation: The violin plot below shows the distribution of CEO birth years across the quarters in which those CEO's were born. Q4 CEO's appear to be slightly younger, on average. Q2 has outliers, both young and old, because Q2 births make up the majority of the dataset.
End of explanation
"""
# print hh:mm:ss
hh = 1
mm = 2
ss = 3
# simple
print(hh,':',mm,':',ss, sep='')
# smarter
print(hh,mm,ss, sep=':')
# format method
print('{}:{}:{}'.format(hh,mm,ss))
# format method length=2 leading-0
print('{:02}:{:02}:{:02}'.format(hh,mm,ss))
# Print colxrow=col*row
col = 2
row = 3
# positional indexes let you choose which argument fills each placeholder
print('{1}x{0}={2:2}'.format(row, col, row*col))
"""
Explanation: Tips of print()
To print more than one value with a single print() call, the following methods are available:
sep=xxx and end=xxx option
format method of string object
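One more option worth knowing: argument unpacking with * combines naturally with the sep= option, so a whole list can be printed in one call. A small sketch:

```python
parts = [1, 2, 3]
print(*parts, sep=':')              # 1:2:3 -- same as print(1, 2, 3, sep=':')
joined = ':'.join(map(str, parts))  # equivalent join-based form
print(joined)                       # 1:2:3
```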
End of explanation
"""
# Prepare 10000 random data
import random
rand_data = [random.randrange(1000) for i in range(10000)]
# print 1 by 1
def pr1(data):
outfile = open('/dev/null', 'w')
for x in data:
print(x, file=outfile)
outfile.close()
# print all in 1 print() function
def pr2(data):
outfile = open('/dev/null', 'w')
print('\n'.join(map(str,data)), file=outfile)
print(file=outfile) # for last new-line
outfile.close()
# measure execution time
%timeit pr1(rand_data)
%timeit pr2(rand_data)
"""
Explanation: I/O overhead
I/O system calls are slow and can become a bottleneck if print() is called many times.
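A related, illustrative technique is to accumulate output in an in-memory buffer and issue one real write at the end; this keeps the per-item code readable while paying the I/O cost only once (a sketch, not benchmarked here):

```python
import io
import sys

def pr3(data, out=sys.stdout):
    buf = io.StringIO()
    for x in data:
        buf.write('%s\n' % x)    # cheap in-memory writes
    out.write(buf.getvalue())    # a single real write at the end

pr3([1, 2, 3])
```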
End of explanation
"""
# use comma as a separator, last element does not have tailing comma
print(','.join(('abc', '123', 'xyz'))) # 1 tuple argument
# print 1-10 without trailing space
print(' '.join(map(str, range(1,11))))
"""
Explanation: Printing multiple elements with a separator on one line
The join() method of a string object is a good choice. It takes an iterable of strings as a parameter.
End of explanation
"""
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install tensorflow==2.1 --user
"""
Explanation: Explore and create ML datasets
In this notebook, we will explore data corresponding to taxi rides in New York City to build a Machine Learning model in support of a fare-estimation tool. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected.
Learning Objectives
Access and explore a public BigQuery dataset on NYC Taxi Cab rides
Visualize your dataset using the Seaborn library
Inspect and clean-up the dataset for future ML model training
Create a benchmark to judge future ML model performance off of
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Let's start off with the Python imports that we need.
End of explanation
"""
from google.cloud import bigquery
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import shutil
"""
Explanation: Please ignore any compatibility warnings and errors
Make sure to <b>restart</b> your kernel to ensure this change has taken place.
End of explanation
"""
%%bigquery
SELECT
FORMAT_TIMESTAMP("%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude, pickup_latitude, dropoff_longitude,
dropoff_latitude, passenger_count, trip_distance, tolls_amount,
fare_amount, total_amount
FROM `nyc-tlc.yellow.trips`
LIMIT 10
"""
Explanation: <h3> Extract sample data from BigQuery </h3>
The dataset that we will use is <a href="https://console.cloud.google.com/bigquery?p=nyc-tlc&d=yellow&t=trips&page=table">a BigQuery public dataset</a>. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is one billion, and then switch to the Preview tab to look at a few rows.
Let's write a SQL query to pick up interesting fields from the dataset. It's a good idea to get the timestamp in a predictable format.
End of explanation
"""
%%bigquery trips
SELECT
FORMAT_TIMESTAMP("%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude, pickup_latitude,
dropoff_longitude, dropoff_latitude,
passenger_count,
trip_distance,
tolls_amount,
fare_amount,
total_amount
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
print(len(trips))
# We can slice Pandas dataframes as if they were arrays
trips[:10]
"""
Explanation: Let's increase the number of records so that we can do some neat graphs. There is no guarantee about the order in which records are returned, and so no guarantee about which records get returned if we simply increase the LIMIT. To properly sample the dataset, let's use the HASH of the pickup time and return 1 in 100,000 records -- because there are 1 billion records in the data, we should get back approximately 10,000 records if we do this.
We will also store the BigQuery result in a Pandas dataframe named "trips"
End of explanation
"""
ax = sns.regplot(x="trip_distance", y="fare_amount", fit_reg=False, ci=None, truncate=True, data=trips)
ax.figure.set_size_inches(10, 8)
"""
Explanation: <h3> Exploring data </h3>
Let's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering.
End of explanation
"""
%%bigquery trips
SELECT
FORMAT_TIMESTAMP("%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
pickup_longitude, pickup_latitude,
dropoff_longitude, dropoff_latitude,
passenger_count,
trip_distance,
tolls_amount,
fare_amount,
total_amount
FROM
`nyc-tlc.yellow.trips`
WHERE
ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
AND trip_distance > 0 AND fare_amount >= 2.5
print(len(trips))
ax = sns.regplot(x="trip_distance", y="fare_amount", fit_reg=False, ci=None, truncate=True, data=trips)
ax.figure.set_size_inches(10, 8)
"""
Explanation: Hmm ... do you see something wrong with the data that needs addressing?
It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50).
Note the extra WHERE clauses.
End of explanation
"""
tollrides = trips[trips['tolls_amount'] > 0]
tollrides[tollrides['pickup_datetime'] == '2012-02-27 09:19:10 UTC']
"""
Explanation: What's up with the streaks around 45 dollars and 50 dollars? Those are fixed-amount rides from JFK and La Guardia airports into anywhere in Manhattan, i.e. to be expected. Let's list the data to make sure the values look reasonable.
Let's also examine whether the toll amount is captured in the total amount.
End of explanation
"""
trips.describe()
"""
Explanation: Looking a few samples above, it should be clear that the total amount reflects fare amount, toll and tip somewhat arbitrarily -- this is because when customers pay cash, the tip is not known. So, we'll use the sum of fare_amount + tolls_amount as what needs to be predicted. Tips are discretionary and do not have to be included in our fare estimation tool.
Let's also look at the distribution of values within the columns.
End of explanation
"""
def showrides(df, numlines):
lats = []
lons = []
for iter, row in df[:numlines].iterrows():
lons.append(row['pickup_longitude'])
lons.append(row['dropoff_longitude'])
lons.append(None)
lats.append(row['pickup_latitude'])
lats.append(row['dropoff_latitude'])
lats.append(None)
sns.set_style("darkgrid")
plt.figure(figsize=(10,8))
plt.plot(lons, lats)
showrides(trips, 10)
showrides(tollrides, 10)
"""
Explanation: Hmm ... The min, max of longitude look strange.
Finally, let's actually look at the start and end of a few of the trips.
End of explanation
"""
def preprocess(trips_in):
trips = trips_in.copy(deep=True)
trips.fare_amount = trips.fare_amount + trips.tolls_amount
del trips['tolls_amount']
del trips['total_amount']
del trips['trip_distance'] # we won't know this in advance!
qc = np.all([\
trips['pickup_longitude'] > -78, \
trips['pickup_longitude'] < -70, \
trips['dropoff_longitude'] > -78, \
trips['dropoff_longitude'] < -70, \
trips['pickup_latitude'] > 37, \
trips['pickup_latitude'] < 45, \
trips['dropoff_latitude'] > 37, \
trips['dropoff_latitude'] < 45, \
trips['passenger_count'] > 0,
], axis=0)
return trips[qc]
tripsqc = preprocess(trips)
tripsqc.describe()
"""
Explanation: As you'd expect, rides that involve a toll are longer than the typical ride.
<h3> Quality control and other preprocessing </h3>
We need to do some clean-up of the data:
<ol>
<li>New York city longitudes are around -74 and latitudes are around 41.</li>
<li>We shouldn't have zero passengers.</li>
<li>Clean up the total_amount column to reflect only fare_amount and tolls_amount, and then remove those two columns.</li>
<li>Before the ride starts, we'll know the pickup and dropoff locations, but not the trip distance (that depends on the route taken), so remove it from the ML dataset</li>
<li>Discard the timestamp</li>
</ol>
We could do preprocessing in BigQuery, similar to how we removed the zero-distance rides, but just to show you another option, let's do this in Python. In production, we'll have to carry out the same preprocessing on the real-time input data.
This sort of preprocessing of input data is quite common in ML, especially if the quality-control is dynamic.
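The qc mask in preprocess() stacks several boolean arrays and collapses them with np.all(..., axis=0); the same pattern in miniature:

```python
import numpy as np

values = np.array([1, 2, 3, 4, 5])
mask = np.all([values > 1, values < 5], axis=0)  # element-wise AND of both conditions
print(values[mask].tolist())  # [2, 3, 4]
```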
End of explanation
"""
shuffled = tripsqc.sample(frac=1)
trainsize = int(len(shuffled['fare_amount']) * 0.70)
validsize = int(len(shuffled['fare_amount']) * 0.15)
df_train = shuffled.iloc[:trainsize, :]
df_valid = shuffled.iloc[trainsize:(trainsize+validsize), :]
df_test = shuffled.iloc[(trainsize+validsize):, :]
df_train.head(n=1)
df_train.describe()
df_valid.describe()
df_test.describe()
"""
Explanation: The quality control has removed about 300 rows (11400 - 11101) or about 3% of the data. This seems reasonable.
Let's move on to creating the ML datasets.
<h3> Create ML datasets </h3>
Let's split the QCed data randomly into training, validation and test sets.
Note that this is not the entire data. We have 1 billion taxicab rides. This is just splitting the 10,000 rides to show you how it's done on smaller datasets. In reality, we'll have to do it on all 1 billion rides and this won't scale.
End of explanation
"""
def to_csv(df, filename):
outdf = df.copy(deep=False)
outdf.loc[:, 'key'] = np.arange(0, len(outdf)) # rownumber as key
# reorder columns so that target is first column
cols = outdf.columns.tolist()
cols.remove('fare_amount')
cols.insert(0, 'fare_amount')
print (cols) # new order of columns
outdf = outdf[cols]
outdf.to_csv(filename, header=False, index_label=False, index=False)
to_csv(df_train, 'taxi-train.csv')
to_csv(df_valid, 'taxi-valid.csv')
to_csv(df_test, 'taxi-test.csv')
!head -10 taxi-valid.csv
"""
Explanation: Let's write out the three dataframes to appropriately named csv files. We can use these csv files for local training (recall that these files represent only 1/100,000 of the full dataset) just to verify our code works, before we run it on all the data.
End of explanation
"""
!ls -l *.csv
"""
Explanation: <h3> Verify that datasets exist </h3>
End of explanation
"""
%%bash
head taxi-train.csv
"""
Explanation: We have 3 .csv files corresponding to train, valid, test. The ratio of file-sizes correspond to our split of the data.
End of explanation
"""
def distance_between(lat1, lon1, lat2, lon2):
  # spherical law of cosines to compute distance "as the crow flies". Taxis can't fly, of course.
  # degrees * 60 = nautical miles; * 1.1515 = statute miles; * 1.609344 = km
  dist = np.degrees(np.arccos(np.minimum(1, np.sin(np.radians(lat1)) * np.sin(np.radians(lat2)) + np.cos(np.radians(lat1)) * np.cos(np.radians(lat2)) * np.cos(np.radians(lon2 - lon1))))) * 60 * 1.1515 * 1.609344
  return dist

def estimate_distance(df):
  return distance_between(df['pickuplat'], df['pickuplon'], df['dropofflat'], df['dropofflon'])

def compute_rmse(actual, predicted):
  return np.sqrt(np.mean((actual - predicted)**2))

def print_rmse(df, rate, name):
  print ("{1} RMSE = {0}".format(compute_rmse(df['fare_amount'], rate * estimate_distance(df)), name))
FEATURES = ['pickuplon','pickuplat','dropofflon','dropofflat','passengers']
TARGET = 'fare_amount'
columns = list([TARGET])
columns.append('pickup_datetime')
columns.extend(FEATURES) # in the CSV, the target comes first, followed by pickup_datetime and the features
columns.append('key')
df_train = pd.read_csv('taxi-train.csv', header=None, names=columns)
df_valid = pd.read_csv('taxi-valid.csv', header=None, names=columns)
df_test = pd.read_csv('taxi-test.csv', header=None, names=columns)
rate = df_train['fare_amount'].mean() / estimate_distance(df_train).mean()
print ("Rate = ${0}/km".format(rate))
print_rmse(df_train, rate, 'Train')
print_rmse(df_valid, rate, 'Valid')
print_rmse(df_test, rate, 'Test')
"""
Explanation: Looks good! We now have our ML datasets and are ready to train ML models, validate them and evaluate them.
<h3> Benchmark </h3>
Before we start building complex ML models, it is a good idea to come up with a very simple model and use that as a benchmark.
My model is going to be to simply divide the mean fare_amount by the mean trip_distance to come up with a rate and use that to predict. Let's compute the RMSE of such a model.
End of explanation
"""
validation_query = """
SELECT
(tolls_amount + fare_amount) AS fare_amount,
pickup_datetime,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
'unused' AS key
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2
AND
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
client = bigquery.Client()
df_valid = client.query(validation_query).to_dataframe()
print_rmse(df_valid, 2.59988, 'Final Validation Set')
"""
Explanation: <h2>Benchmark on same dataset</h2>
The RMSE depends on the dataset, and for comparison, we have to evaluate on the same dataset each time. We'll use this query in later labs:
End of explanation
"""
|
davidthomas5412/PanglossNotebooks | MassLuminosityProject/IntegralsAndSamples_2017_03_11.ipynb | mit | % load_ext autoreload
% autoreload 2
% matplotlib inline
from bigmali.grid import Grid
from bigmali.likelihood import BiasedLikelihood
from bigmali.prior import TinkerPrior
from bigmali.hyperparameter import get
import pandas as pd
data = pd.read_csv('/Users/user/Code/PanglossNotebooks/MassLuminosityProject/mock_data.csv')
grid = Grid()
prior = TinkerPrior(grid)
lum_obs = data.lum_obs[:10 ** 4]
z = data.z[:10 ** 4]
bl = BiasedLikelihood(grid, prior, lum_obs, z)
"""
Explanation: Biased Likelihood Integrals and Samples
David Thomas 2017/03/10
Contents:
- Background
- Sample Size Sensitivity
- Single Integral Convergence
- Individual Biased Samples
- Large Contributors
- Changes at Higher Posterior Points
- Conclusion
Background
We have observed strange posterior sample behavior from our likelihood calculation and use this notebook to examine the individual integrals. We begin by fixing a set of hyperparameters and generating the distribution of integrals in the log likelihood calculation.
End of explanation
"""
hypers = get()
%time vals100 = map(lambda lum_obs,z: bl.single_integral(*(hypers + [lum_obs, z]), nsamples=100), lum_obs, z)
%time vals1000 = map(lambda lum_obs,z: bl.single_integral(*(hypers + [lum_obs, z]), nsamples=1000), lum_obs, z)
%time vals10000 = map(lambda lum_obs,z: bl.single_integral(*(hypers + [lum_obs, z]), nsamples=10000), lum_obs, z)
import matplotlib.pyplot as plt
font = {'family' : 'normal',
'weight' : 'normal',
'size' : 16}
plt.rc('font', **font)
plt.hist(vals100, alpha=0.5, label='nsamples=100', bins=20)
plt.hist(vals1000, alpha=0.5, label='nsamples=1000', bins=20)
plt.hist(vals10000, alpha=0.5, label='nsamples=10000', bins=20)
plt.title('Distribution of Integral Values At Different Sample Sizes')
plt.ylabel('Density')
plt.xlabel('Value')
plt.gcf().set_size_inches((10,6))
plt.legend(loc=2);
"""
Explanation: Sample Size Sensitivity
Below we collect the values for the integrals corresponding to a subset of $10^4$ galaxies (with corresponding lum_obs and z). We collect these values when 100, 1,000, and 10,000 draws from the biased distribution are used in order to compare how sensitive the integrals are to the number of biased distribution samples. We also can examine the runtime and see that from 1,000 to 10,000 samples the runtime grows close to linearly.
End of explanation
"""
single_lum_obs0 = lum_obs[0]
single_z0 = z[0]
single_lum_obs1 = lum_obs[100]
single_z1 = z[100]
single_lum_obs2 = lum_obs[5000]
single_z2 = z[5000]
space = np.linspace(1, 1000, 1000)
single_vals = np.zeros((1000,3))
for i, nsamples in enumerate(space):
    single_vals[i,0] = bl.single_integral(*(hypers + [single_lum_obs0, single_z0]), nsamples=int(nsamples))
    single_vals[i,1] = bl.single_integral(*(hypers + [single_lum_obs1, single_z1]), nsamples=int(nsamples))
    single_vals[i,2] = bl.single_integral(*(hypers + [single_lum_obs2, single_z2]), nsamples=int(nsamples))
plt.subplot(311)
plt.plot(space, single_vals[:,0])
plt.title('Integral 0')
plt.xlabel('Biased Distribution Samples')
plt.ylabel('Log-Likelihood')
plt.subplot(312)
plt.plot(space, single_vals[:,1])
plt.title('Integral 1')
plt.xlabel('Biased Distribution Samples')
plt.ylabel('Log-Likelihood')
plt.subplot(313)
plt.plot(space, single_vals[:,2])
plt.title('Integral 2')
plt.xlabel('Biased Distribution Samples')
plt.ylabel('Log-Likelihood')
plt.gcf().set_size_inches((10,6))
plt.tight_layout()
"""
Explanation: Single Integral Convergence
This result is a bit surprising. It suggests we may not need many samples to characterize an integral. To examine this further, let's isolate a single integral and see how its value changes as we increase the number of samples. We do this for three different galaxies to limit the chance that we are drawing conclusions from outliers.
End of explanation
"""
single_lum_obs0 = lum_obs[0]
single_z0 = z[0]
internals = bl.single_integral_samples_and_weights(*(hypers + [single_lum_obs0, single_z0]))
print single_lum_obs0
print single_z0
internals
import seaborn as sns
cols = ['v1','v2','v3','v4','v5']
plt.title('Correlation Between Various Weights')
sns.heatmap(internals[cols].corr(), xticklabels=cols, yticklabels=cols);
"""
Explanation: Individual Biased Samples
A few things are concerning about these results. First, they bobble within a window and never converge. Second, even one sample provides a reasonable approximation. We will need to start examining the individual weights of the biased samples. Below we get the internal weights used in a single integral. The keys of the dataframe correspond to the distributions here:
\begin{align}
v1 &= \ln P(L_{obs}|L^s, \sigma_L)\\
v2 &= \ln P(L^s|M^s, z, \alpha, S)\\
v3 &= \ln P(M^s|z)\\
v4 &= \ln Q(L^s|L_{obs}, \sigma_L)\\
v5 &= \ln Q(M^s|L^s, z, \alpha, S_M)\\
out &= \mathrm{logsumexp}(v1 + v2 + v3 - v4 - v5) - \log(nsamples)
\end{align}
v1 values pass sanity check
v2 values pass sanity check
v3 values pass sanity check
v4 values pass sanity check
v5 values pass sanity check
End of explanation
"""
font = {'family' : 'normal',
'weight' : 'normal',
'size' : 16}
z_z1 = z[z < 1]
z_z2 = z[z > 1]
lum_obs_z1 = lum_obs[z < 1]
lum_obs_z2 = lum_obs[z > 1]
vals100_z1 = np.array(vals100)[np.where(z < 1)]
vals100_z2 = np.array(vals100)[np.where(z > 1)]
plt.subplot(121)
plt.hist(z, bins=100, alpha=0.6)
plt.xlabel('Value')
plt.ylabel('Count')
plt.title('z Histogram')
plt.subplot(122)
plt.scatter(lum_obs_z1, vals100_z1, label='z < 1', alpha=0.1)
plt.scatter(lum_obs_z2, vals100_z2, label='z > 1', alpha=0.1, color='green')
plt.legend()
plt.ylabel('Log-Likelihood')
plt.xlabel('Luminosity')
plt.title('Log-Likelihood vs Luminosity')
plt.gcf().set_size_inches((14,5))
plt.tight_layout()
"""
Explanation: Large Contributors
End of explanation
"""
hypers2 = list(hypers)  # copy, so we do not mutate the original hyperparameters
hypers2[-1] = 1
internals2 = bl.single_integral_samples_and_weights(*(hypers2 + [single_lum_obs0, single_z0]))
plt.scatter(np.log(internals['mass_samples']), np.log(internals['lum_samples']), color='blue', label='S = 0.155')
plt.scatter(np.log(internals2['mass_samples']), np.log(internals2['lum_samples']), color='green', label='S = 1')
plt.scatter(np.log(data['mass'][:100]), np.log(data['lum'][:100]), color='red', label='True')
plt.gca().axhline(np.log(single_lum_obs0), color='k', label='lum_obs')
plt.title('Scatter Plots for S=0.155, S=1')
plt.ylabel('Log-Luminosity')
plt.xlabel('Log-Mass')
plt.legend();
hypers3 = list(hypers)  # copy, so we do not mutate the original hyperparameters
hypers3[-1] = 10
internals3 = bl.single_integral_samples_and_weights(*(hypers3 + [single_lum_obs0, single_z0]))
plt.scatter(np.log(internals['mass_samples']), np.log(internals['lum_samples']), color='blue', label='S = 0.155')
plt.scatter(np.log(internals3['mass_samples']), np.log(internals3['lum_samples']), color='green', label='S = 10')
plt.scatter(np.log(data['mass'][:100]), np.log(data['lum'][:100]), color='red', label='True')
plt.gca().axhline(np.log(single_lum_obs0), color='k', label='lum_obs')
plt.title('Scatter Plots for S=0.155, S=10')
plt.ylabel('Log-Luminosity')
plt.xlabel('Log-Mass')
plt.legend();
"""
Explanation: Hmmm ... have to think about this a bit.
Changes At Higher Posterior Points
The next question I want to explore is: How do our biased distributions change as we move towards more probable posterior samples?
End of explanation
"""
font = {'family' : 'normal',
'weight' : 'normal',
'size' : 16}
vals100_s10 = map(lambda lum_obs,z: bl.single_integral(*(hypers3 + [lum_obs, z]), nsamples=100), lum_obs, z)
print np.sum(vals100)
print np.sum(vals100_s10)
z_z1 = z[z < 1]
z_z2 = z[z > 1]
lum_obs_z1 = lum_obs[z < 1]
lum_obs_z2 = lum_obs[z > 1]
vals100_s10_z1 = np.array(vals100_s10)[np.where(z < 1)]
vals100_s10_z2 = np.array(vals100_s10)[np.where(z > 1)]
plt.subplot(121)
plt.hist(z, bins=100, alpha=0.6)
plt.xlabel('Value')
plt.ylabel('Count')
plt.title('z Histogram')
plt.subplot(122)
plt.scatter(lum_obs_z1, vals100_s10_z1, label='S=10, z < 1', alpha=0.2, s=1)
plt.scatter(lum_obs_z2, vals100_s10_z2, label='S=10, z > 1', alpha=0.2, color='green', s=2)
plt.scatter(lum_obs_z1, vals100_z1, label='S=0.155, z < 1', alpha=0.2, color='red', s=2)
plt.scatter(lum_obs_z2, vals100_z2, label='S=0.155, z > 1', alpha=0.2, color='orange', s=2)
plt.legend()
plt.ylabel('Log-Likelihood')
plt.xlabel('Luminosity')
plt.title('Log-Likelihood vs Luminosity')
plt.gcf().set_size_inches((14,5))
plt.tight_layout()
"""
Explanation: Need to think about handling out of bounds of prior more gracefully. Look forward to discussing with Phil ...
End of explanation
"""
internals.describe()
internals['v1'].mean() + internals['v2'].mean() + internals['v3'].mean() - internals['v4'].mean() - internals['v5'].mean()
from scipy.misc import logsumexp
logsumexp(internals['v1'] + internals['v2'] + internals['v3'] - internals['v4'] - internals['v5'])
internals3.describe()
internals3['v1'].mean() + internals3['v2'].mean() + internals3['v3'].mean() - internals3['v4'].mean() - internals3['v5'].mean()
from scipy.misc import logsumexp
logsumexp(internals3['v1'] + internals3['v2'] + internals3['v3'] - internals3['v4'] - internals3['v5'])
internals['arg'] = internals['v1'] + internals['v2'] + internals['v3'] - internals['v4'] - internals['v5']
internals3['arg'] = internals3['v1'] + internals3['v2'] + internals3['v3'] - internals3['v4'] - internals3['v5']
plt.title('The Value of logexpsum Arg')
plt.xlabel('Value')
plt.ylabel('Count')
plt.hist(internals['arg'], bins=20, alpha=0.5)
plt.hist(internals3['arg'], bins=20, alpha=0.5);
internals.describe()
internals3
"""
Explanation: Our posterior is chasing after low mass!?
End of explanation
"""
plt.hist(data['z'][:10000], normed=True, alpha=0.6)
plt.title('True Mass PDF')
plt.hist(np.log(data['mass'][:10000]) / np.log(10), normed=True, alpha=0.6)
plt.gca().set_yscale("log")
plt.xlabel('Log Mass')
plt.ylabel('Density')
plt.title('Prior Mass PDF Evaluations')
space = np.linspace(24, 30, 100)
vals = prior.pdf(np.exp(space), 1.0)
plt.plot(space, vals)
plt.gca().set_yscale("log")
plt.xlabel('Log Mass')
plt.ylabel('Density');
"""
Explanation: FOUND THE ISSUE!
Conclusion
Our current Tinker10 mass prior favors the lower mass points so heavily that it outweighs other components of the likelihood and dominates the posterior probability. In order to get meaningful results we will have to devise a way to get around this dilemma.
End of explanation
"""
|
DB2-Samples/db2jupyter | Db2 Using Prepared Statements.ipynb | apache-2.0 | %run db2.ipynb
"""
Explanation: Db2 Jupyter: Using Prepared Statements
Normally, the %sql magic command is used to execute SQL commands immediately and return a result. If a statement needs to be executed multiple times with different variables, this process is inefficient since the SQL statement must be recompiled every time.
The PREPARE and EXECUTE commands allow the user to optimize the statement once, and then re-execute it using different parameters.
In addition, the commit scope can be modified so that not every statement gets committed immediately. By managing the commit scope, overhead in the database engine can be avoided.
End of explanation
"""
%sql -sampledata
employees = %sql -r select * from employee
"""
Explanation: Autocommit and Commit Scope
By default, any SQL statements executed with the %sql magic command are immediately committed. This means that the log file has the transaction details and the results are committed to disk. In other words, you can't change your mind after the statement finishes execution.
This behavior is often referred to as AUTOCOMMIT and adds a level of overhead to statement execution because at the end of every statement the results must be "hardened". On the other hand, autocommit means you don't have to worry about explicitly committing work or causing potential locking issues because you are holding up resources. When a record is updated, no other user will be able to view it (unless using "dirty read") until you commit. Holding the resource in a lock means that other workloads may come to a halt while they wait for you to commit your work.
Here is a classic example of wanting a commit scope that is based on a series of statements:
withdrawal = 100
%sql update checking set balance = balance - :withdrawal
%sql update savings set balance = balance + :withdrawal
If autocommit is ON, you could have a problem with the transaction if the system failed after the first update statement. You would have taken money out of the checking account, but have not updated the savings account. To make sure that this transaction is run successfully:
%sql autocommit off
withdrawal = 100
%sql update checking set balance = balance - :withdrawal
%sql update savings set balance = balance + :withdrawal
%sql commit work
If the transaction fails before the COMMIT WORK, all changes to the database will be rolled back to its original state, thus protecting the integrity of the two tables.
AUTOCOMMIT
Autocommit can be turned on or off using the following syntax:
%sql AUTOCOMMIT ON | OFF
If you turn AUTOCOMMIT OFF then you need to make sure that you COMMIT your work at the end of your code. If you don't, it is possible to lose your work if the connection to Db2 is lost.
COMMIT, ROLLBACK
To COMMIT all changes to the database you must use the following syntax:
%sql COMMIT [WORK | HOLD]
The commands COMMIT and COMMIT WORK are identical and will commit all work to the database. Issuing a COMMIT command also closes all open cursors or statements. If you had created a prepared statement (see the section below) then the compiled statement will no longer be valid. By issuing a COMMIT you are releasing all of the resources and locks that your application may be holding.
COMMIT HOLD will allow you to commit your work to disk, but keeps all of the resources open for further execution. This is useful for situations where you are inserting or updating 1000's of records and do not want to tie up log space waiting for a commit to occur. The following pseudocode gives you an example how this would be used:
%sql autocommit off
for i = 1 to 1000
    %sql insert into x values i
    if (i % 100 == 0)
        print i "Records inserted"
        %sql commit work
    end if
end for
%sql commit work
%sql autocommit on
You should always remember to turn AUTOCOMMIT ON at the end of any code block or you will have to issue COMMIT at the end of any SQL command to commit it to the database.
PREPARE and EXECUTE
The PREPARE and EXECUTE commands are useful in situations where you want to repeat an SQL statement multiple times while just changing the parameter values. There isn't any benefit from using these statements for simple tasks that may only run occasionally. The benefit of PREPARE/EXECUTE is more evident when dealing with a large number of transactions that are the same.
The PREPARE statement can be used against many types of SQL, but in this implementation, only the following SQL statements are supported:
* SELECT
* INSERT
* UPDATE
* DELETE
* MERGE
To prepare a statement, you must use the following syntax:
Python
stmt = %sql PREPARE sql ....
The PREPARE statement always returns a statement handle. You must assign the results of the PREPARE statement to a variable since it will be required when you EXECUTE the statement.
The SQL statement must have any variables replaced with a question mark ?. For instance, if you wanted to insert a single value into a table you would use the following syntax:
Python
stmt = %sql PREPARE insert into x values (?)
One important note about parameter markers: if you require the parameter to have a specific data type (say, INTEGER) then you may want to place a CAST statement around it to force the proper conversion. Strings, integers, decimals, etc. usually convert fine when using this syntax, but occasionally you may run across a data type issue. For the previous example we could modify it to:
Python
stmt = %sql PREPARE insert into x values (CAST(? AS INTEGER))
Once you have prepared a statement, you can execute it using the following syntax:
Python
%sql EXECUTE :stmt USING :v1,:v2,:v3,....
You must provide the variable names with a colon : in front of them and separate each one with a comma. This allows the SQL parser to differentiate between a host variable and a column or SQL keyword. You can also use constants as part of the EXECUTE statement:
Python
%sql EXECUTE :stmt USING 3,'asdsa',24.5
Using variables is more convenient when dealing with strings that may contain single and double quotes.
Using Arrays and Multiple Parameters
When using the PREPARE statement, it can become cumbersome when dealing with many parameter markers. For instance, in order to insert 10 columns into a table the code would look similar to this:
stmt = %sql PREPARE INSERT INTO X VALUES (?,?,?,?,?,?,?,?,?,?)
The %sql command allows you to use the shortform ?*# where # is an integer representing the number of columns you want in the list. The above statement could be written as:
stmt = %sql PREPARE INSERT INTO X VALUES (?*10)
The syntax can also be used to create groups of parameter markers:
stmt = %sql PREPARE INSERT INTO X VALUES (?*3,?*7)
While this may seem a strange way of providing parameters, this becomes more useful when we use the EXECUTE command.
The EXECUTE command can use Python arrays (lists) as input arguments. For the previous example with 10 parameters you could issue the following command:
%sql EXECUTE :stmt USING :v1,:v2,:v3,:v4,:v5,:v6,:v7,:v8,:v9,:v10
If you placed all of these values into an array, you could also do the following:
%sql EXECUTE :stmt USING :v[0],:v[1],:v[2],:v[3],:v[4],:v[5],:v[6],:v[7],:v[8],:v[9]
That isn't much simpler but shows that you could use items within an array (one dimensional only). The easiest syntax is:
%sql EXECUTE :stmt USING :v
The only requirement is that the array v has exactly the number of values required to satisfy the parameter list required by the prepared statement.
When you split the argument list into groups, you can use multiple arrays to contain the data. Given the following prepare statement:
stmt = %sql PREPARE INSERT INTO X VALUES (?*3,?*7)
You could execute the statement using two arrays:
%sql EXECUTE :stmt USING :name, :details
This would work as long as the total number of parameters supplied by the name array and details array is equal to 10.
Performance Comparisons
The following examples will show the use of AUTOCOMMIT and PREPARE/EXECUTE when running SQL statements.
This first SQL statement will load the EMPLOYEE and DEPARTMENT tables (if they don't already exist) and then return an array of all of the employees in the company using a SELECT statement.
End of explanation
"""
print(employees[1])
"""
Explanation: The employees variable contains all of the employee data as a Python array. The next statement will retrieve the contents of the first data row only (remember that row 0 contains the names of the columns).
End of explanation
"""
%%sql -q
DROP TABLE EMPLOYEE2;
CREATE TABLE EMPLOYEE2 AS (SELECT * FROM EMPLOYEE) DEFINITION ONLY;
"""
Explanation: We now will create another EMPLOYEE table that is an exact duplicate of what we already have.
End of explanation
"""
%sql -q DELETE FROM EMPLOYEE2
print("Starting Insert")
start_time = time.time()
i = 0
for k in range(0,50):
    for record in employees[1:]:
        i += 1
        empno,firstnme,midinit,lastname,workdept,phoneno,hiredate,job,edlevel,sex,birthdate,salary,bonus,comm = record
        %sql -q INSERT INTO EMPLOYEE2 VALUES ( \
            :empno,:firstnme,:midinit, \
            :lastname,:workdept,:phoneno, \
            :hiredate,:job,:edlevel, \
            :sex,:birthdate,:salary, \
            :bonus,:comm)
end_time = time.time()
print('Total load time for {:d} records is {:.2f} seconds'.format(i,end_time-start_time))
time_insert = end_time-start_time
"""
Explanation: Loop with INSERT Statements
One could always use SQL to insert into this table, but we will use a loop to execute insert statements. The loop will be timed so we can get a sense of the cost of running this code. In order to make the loop run longer, the insert block is run 50 times.
End of explanation
"""
%sql -q DELETE FROM EMPLOYEE2
print("Starting Insert")
start_time = time.time()
i = 0
prep = %sql prepare INSERT INTO EMPLOYEE2 VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?)
for k in range(0,50):
    for record in employees[1:]:
        i += 1
        empno,firstnme,midinit,lastname,workdept,phoneno,hiredate,job,edlevel,sex,birthdate,salary,bonus,comm = record
        %sql execute :prep using :empno,:firstnme,:midinit,:lastname,:workdept,:phoneno,:hiredate,:job,:edlevel,:sex,:birthdate,:salary,:bonus,:comm
end_time = time.time()
print('Total load time for {:d} records is {:.2f} seconds'.format(i,end_time-start_time))
time_prepare = end_time-start_time
"""
Explanation: Loop with PREPARE Statement
An alternative method would be to use a prepared statement that allows us to compile the statement once in Db2 and then reuse the statement in Db2's memory. This method uses the individual column values as input into the EXECUTE statement.
End of explanation
"""
%sql -q DELETE FROM EMPLOYEE2
print("Starting Insert")
start_time = time.time()
i = 0
prep = %sql prepare INSERT INTO EMPLOYEE2 VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?)
for k in range(0,50):
    for record in employees[1:]:
        i += 1
        %sql execute :prep using :record
end_time = time.time()
print('Total load time for {:d} records is {:.2f} seconds'.format(i,end_time-start_time))
time_array = end_time-start_time
"""
Explanation: Loop with PREPARE Statement and Arrays
You will notice that it is a bit tedious to write out all of the columns that are required as part of an INSERT statement. A simpler option is to use a Python list or array to and assign it directly in the EXECUTE statement. So rather than:
%sql execute :prep using :empno, :firstnme, ...
We can just use the array variable generated as part of the for loop:
%sql execute :prep using :record
The following SQL demonstrates this approach.
End of explanation
"""
%sql -q DELETE FROM EMPLOYEE2
%sql autocommit off
print("Starting Insert")
start_time = time.time()
i = 0
prep = %sql prepare INSERT INTO EMPLOYEE2 VALUES (?*14)
for k in range(0,50):
    for record in employees[1:]:
        i += 1
        %sql execute :prep using :record
%sql commit work
end_time = time.time()
print('Total load time for {:d} records is {:.2f} seconds'.format(i,end_time-start_time))
%sql autocommit on
time_commit = end_time-start_time
"""
Explanation: Loop with PREPARE Statement, Arrays and AUTOCOMMIT OFF
Finally, we can turn AUTOCOMMIT off and then commit the work at the end of the block to improve the total time required to insert the data. Note the use of the parameter shortform ?*14 in the code.
End of explanation
"""
%%sql -pb
WITH RESULT(RUN,ELAPSED) AS (
VALUES
('INSERT',CAST(:time_insert AS DEC(5,2))),
('PREPARE',CAST(:time_prepare AS DEC(5,2))),
('ARRAY ',CAST(:time_array AS DEC(5,2))),
('COMMIT ',CAST(:time_commit AS DEC(5,2)))
)
SELECT RUN, ELAPSED FROM RESULT
ORDER BY ELAPSED DESC
"""
Explanation: Performance Comparison
You may have noticed that the last method is substantially faster than the other examples. The primary reason for this is the COMMIT occurring only at the end of the code.
End of explanation
"""
|
alexandrnikitin/algorithm-sandbox | courses/DAT256x/Module03/03-02-Vector Multiplication.ipynb | mit | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import math
v = np.array([2,1])
w = 2 * v
print(w)
# Plot w
origin = [0], [0]
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, *w, scale=10)
plt.show()
"""
Explanation: Vector Multiplication
Vector multiplication can be performed in three ways:
Scalar Multiplication
Dot Product Multiplication
Cross Product Multiplication
Scalar Multiplication
Let's start with scalar multiplication - in other words, multiplying a vector by a single numeric value.
Suppose I want to multiply my vector by 2, which I could write like this:
\begin{equation} \vec{w} = 2\vec{v}\end{equation}
Note that the result of this calculation is a new vector named w. So how would we calculate this?
Recall that v is defined like this:
\begin{equation}\vec{v} = \begin{bmatrix}2 \\ 1 \end{bmatrix}\end{equation}
To calculate 2v, we simply need to apply the operation to each dimension value in the vector matrix, like this:
\begin{equation}\vec{w} = \begin{bmatrix}2 \cdot 2 \\ 2 \cdot 1 \end{bmatrix}\end{equation}
Which gives us the following result:
\begin{equation}\vec{w} = \begin{bmatrix}2 \cdot 2 \\ 2 \cdot 1 \end{bmatrix} = \begin{bmatrix}4 \\ 2 \end{bmatrix}\end{equation}
In Python, you can apply these sort of matrix operations directly to numpy arrays, so we can simply calculate w like this:
End of explanation
"""
b = v / 2
print(b)
# Plot b
origin = [0], [0]
plt.axis('equal')
plt.grid()
plt.ticklabel_format(style='sci', axis='both', scilimits=(0,0))
plt.quiver(*origin, *b, scale=10)
plt.show()
"""
Explanation: The same approach is taken for scalar division.
Try it for yourself - use the cell below to calculate a new vector named b based on the following definition:
\begin{equation}\vec{b} = \frac{\vec{v}}{2}\end{equation}
End of explanation
"""
import numpy as np
v = np.array([2,1])
s = np.array([-3,2])
d = np.dot(v,s)
print (d)
"""
Explanation: Dot Product Multiplication
So we've seen how to multiply a vector by a scalar. How about multiplying two vectors together? There are actually two ways to do this depending on whether you want the result to be a scalar product (in other words, a number) or a vector product (a vector).
To get a scalar product, we calculate the dot product. This takes a similar approach to multiplying a vector by a scalar, except that it multiplies each component pair of the vectors and sums the results. To indicate that we are performing a dot product operation, we use the • operator:
\begin{equation} \vec{v} \cdot \vec{s} = (v_{1} \cdot s_{1}) + (v_{2} \cdot s_{2}) + \cdots + (v_{n} \cdot s_{n})\end{equation}
So for our vectors v (2,1) and s (-3,2), our calculation looks like this:
\begin{equation} \vec{v} \cdot \vec{s} = (2 \cdot -3) + (1 \cdot 2) = -6 + 2 = -4\end{equation}
So the dot product, or scalar product, of v • s is -4.
In Python, you can use the **numpy.dot** function to calculate the dot product of two vector arrays:
End of explanation
"""
import numpy as np
v = np.array([2,1])
s = np.array([-3,2])
d = v @ s
print (d)
"""
Explanation: In Python 3.5 and later, you can also use the @ operator to calculate the dot product:
End of explanation
"""
import math
import numpy as np
# define our vectors
v = np.array([2,1])
s = np.array([-3,2])
# get the magnitudes
vMag = np.linalg.norm(v)
sMag = np.linalg.norm(s)
# calculate the cosine of theta
cos = (v @ s) / (vMag * sMag)
# so theta (in degrees) is:
theta = math.degrees(math.acos(cos))
print(theta)
"""
Explanation: The Cosine Rule
A useful property of vector dot product multiplication is that we can use it to calculate the cosine of the angle between two vectors. We can write the dot product as:
$$ \vec{v} \cdot \vec{s} = \|\vec{v} \|\|\vec{s}\| \cos (\theta) $$
Which we can rearrange as:
$$ \cos(\theta) = \frac{\vec{v} \cdot \vec{s}}{\|\vec{v} \|\|\vec{s}\|} $$
So for our vectors v (2,1) and s (-3,2), our calculation looks like this:
$$ \cos(\theta) = \frac{(2 \cdot -3) + (1 \cdot 2)}{\sqrt{2^{2} + 1^{2}} \times \sqrt{(-3)^{2} + 2^{2}}} $$
So:
$$\cos(\theta) = \frac{-4}{8.0622577483}$$
Which calculates to:
$$\cos(\theta) = -0.496138938357 $$
So:
$$\theta \approx 119.74 $$
Here's that calculation in Python:
End of explanation
"""
import numpy as np
p = np.array([2,3,1])
q = np.array([1,2,-2])
r = np.cross(p,q)
print (r)
"""
Explanation: Cross Product Multiplication
To get the vector product of multipying two vectors together, you must calculate the cross product. The result of this is a new vector that is at right angles to both the other vectors in 3D Euclidean space. This means that the cross-product only really makes sense when working with vectors that contain three components.
For example, let's suppose we have the following vectors:
\begin{equation}\vec{p} = \begin{bmatrix}2 \\ 3 \\ 1 \end{bmatrix}\;\; \vec{q} = \begin{bmatrix}1 \\ 2 \\ -2 \end{bmatrix}\end{equation}
To calculate the cross product of these vectors, written as p x q, we need to create a new vector (let's call it r) with three components (r<sub>1</sub>, r<sub>2</sub>, and r<sub>3</sub>). The values for these components are calculated like this:
\begin{equation}r_{1} = p_{2}q_{3} - p_{3}q_{2}\end{equation}
\begin{equation}r_{2} = p_{3}q_{1} - p_{1}q_{3}\end{equation}
\begin{equation}r_{3} = p_{1}q_{2} - p_{2}q_{1}\end{equation}
So in our case:
\begin{equation}\vec{r} = \vec{p} \times \vec{q} = \begin{bmatrix}(3 \cdot -2) - (1 \cdot 2) \\ (1 \cdot 1) - (2 \cdot -2) \\ (2 \cdot 2) - (3 \cdot 1) \end{bmatrix} = \begin{bmatrix}-6 - 2 \\ 1 - (-4) \\ 4 - 3 \end{bmatrix} = \begin{bmatrix}-8 \\ 5 \\ 1 \end{bmatrix}\end{equation}
In Python, you can use the **numpy.cross** function to calculate the cross product of two vector arrays:
End of explanation
"""
|
mayankjohri/LetsExplorePython | Section 1 - Core Python/Chapter 05 - Data Types Part - 2/5.2 Advance Data Types.ipynb | gpl-3.0 | import collections
# from collections import ChainMap
a = {'a': 'A', 'c': 'C'}
b = {'b': 'B', 'c': 'D'}
m = collections.ChainMap(a, b)
print('Individual Values')
print('a = {}'.format(m['a']))
print('b = {}'.format(m['b']))
print('c = {}'.format(m['c']))
print("-"*20)
print(type(m.keys()))
print('Keys = {}'.format(list(m.keys())))
print('Values = {}'.format(list(m.values())))
print("-"*20)
print('Items:')
for k, v in m.items():
print('{} = {}'.format(k, v))
print("-"*20)
print('"d" in m: {}'.format(('d' in m)))
a = {'a': 'A', 'c': 'C'}
b = {'b': 'B', 'c': 'D'}
m = collections.ChainMap(a, b)
lst = []
for v in m.keys():
lst.append(v)
for v in m.values():
lst.append(v)
print(lst)
"""
Explanation: Advance Data Types
This section will cover the following advanced topics in data types
Collections
Collections
The collections module is a treasure trove: a built-in module that implements specialized container datatypes providing alternatives to Python’s general purpose built-in containers, dict, list, set, and tuple.
| Name | Description|
|:-------------:|---------------|
|namedtuple() | factory function for creating tuple subclasses with named fields|
|deque |list-like container with fast appends and pops on either end|
|ChainMap |dict-like class for creating a single view of multiple mappings|
|Counter | dict subclass for counting hashable objects|
|OrderedDict | dict subclass that remembers the order entries were added|
|defaultdict | dict subclass that calls a factory function to supply missing values|
|UserDict | wrapper around dictionary objects for easier dict subclassing|
|UserList |wrapper around list objects for easier list subclassing|
|UserString | wrapper around string objects for easier string subclassing|
ChainMap — Search Multiple Dictionaries
The ChainMap class manages a list of dictionaries, and can be used to search through them in the order they are added to find values for associated keys.
It makes a good "context" container, as it can be visualised as a stack: changes take effect as soon as the stack grows, and are discarded again as soon as the stack shrinks.
Think of it like a database view: the actual values are still stored in their respective tables, and all the usual operations can still be performed on them.
Accessing Values
The ChainMap supports the same API as a regular dictionary for accessing existing values.
End of explanation
"""
import collections
a = {'a': '1', 'c': '3'}
b = {'b': '2', 'c': '33'}
cm = collections.ChainMap(a, b)
print(cm.maps)
print('c = {}\n'.format(cm['c']))
# reverse the list
cm.maps = list(reversed(cm.maps)) # m = collections.ChainMap(b, a)
print(cm.maps)
print('c = {}'.format(cm['c']))
"""
Explanation: The child mappings are searched in the order they are passed to the constructor, so the value reported for the key 'c' comes from the a dictionary.
Reordering
The ChainMap stores the list of mappings over which it searches in a list in its maps attribute. This list is mutable, so it is possible to add new mappings directly or to change the order of the elements to control lookup and update behavior.
End of explanation
"""
import collections
a = {'a': '1', 'c': '3'}
b = {'b': '2', 'c': '33'}
m = collections.ChainMap(a, b)
print('Before: {}'.format(m['c']))
a['c'] = '3.3'
print('After : {}'.format(m['c']))
import collections
a = {'a': '1', 'c': '3'}
b = {'b': '2', 'c': '33'}
cm = collections.ChainMap(b, a)
print(cm.maps)
print('Before: {}'.format(cm['c']))
a['c'] = '3.3'
print('After : {}'.format(cm['c']))
"""
Explanation: When the list of mappings is reversed, the value associated with 'c' changes.
Updating Values
A ChainMap does not cache the values in the child mappings. Thus, if their contents are modified, the results are reflected when the ChainMap is accessed.
End of explanation
"""
import collections
a = {'a': '1', 'c': '3'}
b = {'b': '2', 'c': '33'}
cm = collections.ChainMap(a, b)
print('Before: {}'.format(cm['c']))
cm['c'] = '3.3'
print('After : {}'.format(cm['c']))
print(a['c'])
print(b['c'])
import collections
a = {'a': '1', 'c': '3'}
b = {'b': '2', 'c': '33'}
cm = collections.ChainMap(b, a)
print('Before: {}'.format(cm['c']))
cm['c'] = '3.3'
print('After : {}'.format(cm['c']))
print(a['c'])
print(b['c'])
import collections
a = {'a': '1', 'c': '3'}
b = {'b': '2', 'c': '33'}
cm = collections.ChainMap(a, b)
print('Before: {}'.format(cm['c']))
cm['d'] = '3.3'
print('After : {}'.format(cm['c']))
print(cm.maps)
print(a)
print(b)
"""
Explanation: Changing the values associated with existing keys and adding new elements works the same way.
It is also possible to set values through the ChainMap directly, although only the first mapping in the chain is actually modified.
End of explanation
"""
import collections
a = {'a': '1', 'c': '3'}
b = {'b': '2', 'c': '33'}
m1 = collections.ChainMap(a, b)
m2 = m1.new_child()
print('m1 before:', m1)
print('m2 before:', m2)
m2['c'] = '3.3'
print('m1 after:', m1)
print('m2 after:', m2)
"""
Explanation: When the new value is stored using cm, the a mapping is updated.
ChainMap provides a convenience method for creating a new instance with one extra mapping at the front of the maps list to make it easy to avoid modifying the existing underlying data structures.
This stacking behavior is what makes it convenient to use ChainMap instances as template or application contexts. Specifically, it is easy to add or update values in one iteration, then discard the changes for the next iteration.
End of explanation
"""
import collections
a = {'a': '1', 'c': '3'}
b = {'b': '2', 'c': '33'}
c = {'c': '333'}
m1 = collections.ChainMap(a, b)
m2 = m1.new_child(c)
print('m1["c"] = {}'.format(m1['c']))
print('m2["c"] = {}'.format(m2['c']))
print(m2)
#This is the equivalent of
m2_1 = collections.ChainMap(c, *m1.maps)
print(m2_1)
"""
Explanation: For situations where the new context is known or built in advance, it is also possible to pass a mapping to new_child().
End of explanation
"""
# Tally occurrences of words in a list
from collections import Counter
cnt = Counter()
for word in ['red', 'blue', 'red', 'green', 'blue', 'blue']:
cnt[word] += 1
print(cnt)  # Counter({'blue': 3, 'red': 2, 'green': 1})
# Find the ten most common words in Hamlet
import re
words = re.findall(r'\w+', open('hamlet.txt').read().lower())  # requires a local copy of hamlet.txt
Counter(words).most_common(10)
"""
Explanation: Counter
Counter is a dict subclass which helps count the hashable objects. It stores elements as dictionary keys and the counts of the objects as value. In other words , It is a container that keeps track of how many times equivalent values are present.
For example:
End of explanation
"""
l = [1 ,23 , 23, 44, 4, 44, 55, 555, 44, 32, 23, 44, 56, 64, 2, 1]
lstCounter = Counter(l)
print(lstCounter)
print(lstCounter.most_common(4))
"""
Explanation: Here are some ways Counter can be used:
Counter() with lists
End of explanation
"""
sentence = "The collections module is a treasure trove of a built-in module that implements " + \
           "specialized container datatypes providing alternatives to Python’s general purpose " + \
           "built-in containers."
wordList = sentence.split(" ")
Counter(wordList).most_common(3)
"""
Explanation: Counter with Strings
End of explanation
"""
# find the most common words
# Methods with Counter()
c = Counter(wordList)
print(c.most_common(4))
print(c.items())
"""
Explanation: Counter methods
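Besides most_common() and items(), Counter also supports update(), subtract(), elements(), and multiset arithmetic; a brief sketch using illustrative strings:

```python
from collections import Counter

c1 = Counter('vande')
c2 = Counter('mataram')

c1.update('mataram')       # add counts from another iterable
print(c1['a'])             # 1 from 'vande' + 3 from 'mataram' = 4

diff = c1 - c2             # multiset difference (drops non-positive counts)
print(sorted(diff.elements()))

c1.subtract(c2)            # in-place subtraction; counts may go negative
print(c1['m'])             # 2 - 2 = 0
```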
End of explanation
"""
d = {"a": 1, "b": 2}
print(d)
print(d['a'])
print(d['d'])  # raises KeyError: 'd' -- plain dicts have no default
from collections import defaultdict
dd = defaultdict(object)
print(dd)
print(dd['one'])
print(dd)
dd['Two'] = 2
print(dd)
for d in dd:
print(d)
print(dd[d])
help(defaultdict)
# Initializing with a default value: the factory must be a callable,
# so wrap the plain value in a lambda (defaultdict(1) raises TypeError)
dd = defaultdict(lambda: 1)
print(dd)
print(dd['one'])
print(dd)
dd['Two'] = 2
print(dd)
for d in dd:
print(d)
print(dd[d])
# Using factory function
import collections
def default_factory():
return 'default value'
d = collections.defaultdict(default_factory, india='new delhi')
print('d:', d)
print('india =>', d['india'])
print('bar =>', d['bar'])
print(d)
# Using factory function
import collections
def default_factory():
return 'Bhopal'
d = collections.defaultdict(default_factory,
{"india": 'new delhi',
"karnataka":"Bangaluru"})
print('d:', d)
print('india =>', d['india'])
print('MP =>', d['MP'])
print(d)
# Using factory function
# ---------------------------------------------------
# Note: a value can be passed to the default factory by
# wrapping it in a lambda or functools.partial,
# e.g. collections.defaultdict(lambda: 'bar')
# ---------------------------------------------------
import collections
def default_factory():
return 'default value'
d = collections.defaultdict(default_factory, foo='bar')
print('d:', d)
print('foo =>', d['foo'])
print('bar =>', d['bar'])
# Using list as the default_factory, it is easy to group a sequence of key-value pairs into a dictionary of lists:
from collections import defaultdict
countryList = [("India", "New Delhi"), ("Iceland", "Reykjavik"),
("Indonesia", "Jakarta"), ("Ireland", "Dublin"),
("Israel", "Jerusalem"), ("Italy", "Rome")]
d = defaultdict(list)
for country, capital in countryList:
d[country].append(capital)
print(d.items())
# Setting the default_factory to int makes the defaultdict useful for counting
quote = 'Vande Mataram'
dd = defaultdict(int)
print(dd)
for chars in quote:
dd[chars] += 1
print(dd.items())
print(dd['T'])
"""
Explanation: Default dict
The standard dictionary includes the method setdefault() for retrieving a value and establishing a default if the value does not exist. By contrast, defaultdict lets the caller specify the default up front when the container is initialized.
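Note that the default factory must be a zero-argument callable, so a plain value has to be wrapped; a minimal sketch using a lambda and functools.partial (the names here are illustrative):

```python
import collections
import functools

# A plain value is not callable -- defaultdict(1) raises TypeError --
# so wrap the value in a lambda ...
dd1 = collections.defaultdict(lambda: 1)
print(dd1['missing'])      # 1

# ... or parameterize a factory with functools.partial
def make_default(value):
    return value

dd2 = collections.defaultdict(functools.partial(make_default, 'Bhopal'))
print(dd2['MP'])           # 'Bhopal'
```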
End of explanation
"""
import collections
d = collections.deque('Vande Mataram')
print('Deque:', d)
print('Length:', len(d))
print('Left end:', d[0])
print('Right end:', d[-1])
d.remove('e')
print('remove(e):', d)
"""
Explanation: deque — Double-Ended Queue
A double-ended queue, or deque, supports adding and removing elements from either end of the queue. The more commonly used stacks and queues are degenerate forms of deques, where the inputs and outputs are restricted to a single end.
End of explanation
"""
import collections
# Add to the right
d1 = collections.deque()
d1.extend('Vande')
print('extend :', d1)
for a in " Mataram":
d1.append(a)
d1.extend(" !!!")
print('append :', d1)
d1.extendleft(" #!* ")
print('append :', d1)
# Add to the left
d2 = collections.deque()
d2.extendleft(range(6))
print('extendleft:', d2)
d2.appendleft(6)
print('appendleft:', d2)
"""
Explanation: Adding
End of explanation
"""
fruitsCount = {}
fruitsCount["apple"] = 10
fruitsCount["grapes"] = 120
fruitsCount["mango"] = 200
fruitsCount["kiwi"] = 2000
fruitsCount["leeche"] = 20
print(fruitsCount)
for fruit in fruitsCount:
print(fruit)
# Now lets try this with OrderedDict
from collections import OrderedDict as OD
fruitsCount = OD()
fruitsCount["apple"] = 10
fruitsCount["grapes"] = 120
fruitsCount["mango"] = 200
fruitsCount["kiwi"] = 2000
fruitsCount["leeche"] = 20
print(fruitsCount)
for fruit in fruitsCount:
print(fruit)
"""
Explanation: Consuming
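Elements can be consumed from either end of a deque with pop() and popleft(), and rotate() shifts items between the ends; a minimal sketch:

```python
import collections

d = collections.deque('Vande')
print(d.pop())       # consume from the right: 'e'
print(d.popleft())   # consume from the left: 'V'
d.rotate(1)          # rotate the remaining items one step to the right
print(''.join(d))    # 'dan'
```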
OrderedDict
It is a dictionary subclass that remembers the order in which its contents are added.
Let's start with a normal dictionary:
End of explanation
"""
from collections import namedtuple
Point = namedtuple("Point", ['x', 'y', 'z'])  # Defining the namedtuple (typename matches the variable)
p = Point(10, y=20, z=30)  # Creating an object
print(p)
print(p.x + p.y + p.z)
p[0] + p[1] # Accessing the values in normal way
x, y, z = p # Unpacking the tuple
print(x)
print(y)
"""
Explanation: namedtuple
Named tuples give meaning to each position in a tuple, allowing for more readable, self-documenting code. You can use them anywhere you are using tuples. In the example we will create a namedtuple to hold information for points.
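namedtuple instances also provide helper methods such as _asdict() and _replace(); a brief sketch (the field values here are illustrative):

```python
from collections import namedtuple

Point = namedtuple('Point', ['x', 'y', 'z'])
p = Point(10, 20, 30)

print(p._asdict())        # mapping view of the fields
p2 = p._replace(z=300)    # tuples are immutable; _replace returns a new one
print(p2)                 # Point(x=10, y=20, z=300)
print(p.z)                # original unchanged: 30
```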
End of explanation
"""
|
GoogleCloudPlatform/cloudml-samples | notebooks/scikit-learn/custom-prediction-routine-scikit-learn.ipynb | apache-2.0 | PROJECT_ID = "<your-project-id>" #@param {type:"string"}
! gcloud config set project $PROJECT_ID
"""
Explanation: Creating a custom prediction routine with scikit-learn
<table align="left">
<td>
<a href="https://cloud.google.com/ml-engine/docs/scikit/custom-prediction-routine-scikit-learn">
<img src="https://cloud.google.com/_static/images/cloud/icons/favicons/onecloud/super_cloud.png"
alt="Google Cloud logo" width="32px"> Read on cloud.google.com
</a>
</td>
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/cloudml-samples/blob/main/notebooks/scikit-learn/custom-prediction-routine-scikit-learn.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/cloudml-samples/blob/main/notebooks/scikit-learn/custom-prediction-routine-scikit-learn.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
Beta
This is a beta release of custom prediction routines. This feature might be changed in backward-incompatible ways and is not subject to any SLA or deprecation policy.
Overview
This tutorial shows how to deploy a trained scikit-learn model to AI Platform and serve predictions using a custom prediction routine. This lets you customize how AI Platform responds to each prediction request.
In this example, you will use a custom prediction routine to preprocess
prediction input by scaling it, and to postprocess prediction output by converting class numbers to label strings.
The tutorial walks through several steps:
Training a simple scikit-learn model locally (in this notebook)
Creating and deploying a custom prediction routine to AI Platform
Serving prediction requests from that deployment
Dataset
This tutorial uses R.A. Fisher's Iris dataset, a small dataset that is popular for trying out machine learning techniques. Each instance has four numerical features, which are different measurements of a flower, and a target label that
marks it as one of three types of iris: Iris setosa, Iris versicolour, or Iris virginica.
This tutorial uses the copy of the Iris dataset included in the
scikit-learn library.
Objective
The goal is to train a model that uses a flower's measurements as input to predict what type of iris it is.
This tutorial focuses more on using this model with AI Platform than on
the design of the model itself.
Costs
This tutorial uses billable components of Google Cloud Platform (GCP):
AI Platform
Cloud Storage
Learn about AI Platform
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Before you begin
You must do several things before you can train and deploy a model in AI Platform:
Set up your local development environment.
Set up a GCP project with billing and the necessary
APIs enabled.
Authenticate your GCP account in this notebook.
Create a Cloud Storage bucket to store your training package and your
trained model.
Set up your local development environment
If you are using Colab or AI Platform Notebooks, your environment already
meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3.
Activate that environment and run pip install jupyter in a shell to install
Jupyter.
Run jupyter notebook in a shell to launch Jupyter.
Open this notebook in the Jupyter Notebook Dashboard.
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project.
Make sure that billing is enabled for your project.
Enable the AI Platform ("Cloud Machine Learning Engine") and Compute Engine
APIs.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
"""
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS '<path-to-your-service-account-key.json>'
"""
Explanation: Authenticate your GCP account
If you are using AI Platform Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the GCP Console, go to the Create service account key
page.
From the Service account drop-down list, select New service account.
In the Service account name field, enter a name.
From the Role drop-down list, select
Machine Learning Engine > AI Platform Admin and
Storage > Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_NAME = "<your-bucket-name>" #@param {type:"string"}
REGION = "us-central1" #@param {type:"string"}
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
To deploy a custom prediction routine, you must upload your trained model
artifacts and your custom code to Cloud Storage.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Cloud
AI Platform services are
available. You may
not use a Multi-Regional Storage bucket for training with AI Platform.
End of explanation
"""
! gsutil mb -l $REGION gs://$BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al gs://$BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
! pip install 'numpy>=1.16.0' scikit-learn==0.20.2  # quoted so the shell does not treat >= as a redirection
"""
Explanation: Building and training a scikit-learn model
Often, you can't use your data in its raw form to train a machine learning model. Even when you can, preprocessing the data before using it for training can sometimes improve your model.
Assuming that you expect the input for prediction to have the same format as your training data, you must apply identical preprocessing during training and prediction to ensure that your model makes consistent predictions.
In this section, create a preprocessing module and use it as part of training. Then export a preprocessor with characteristics learned during training to use later in your custom prediction routine.
Install dependencies for local training
Training locally (in the notebook) requires several dependencies:
End of explanation
"""
%%writefile preprocess.py
import numpy as np
class MySimpleScaler(object):
def __init__(self):
self._means = None
self._stds = None
def preprocess(self, data):
if self._means is None: # during training only
self._means = np.mean(data, axis=0)
if self._stds is None: # during training only
self._stds = np.std(data, axis=0)
if not self._stds.all():
raise ValueError('At least one column has standard deviation of 0.')
return (data - self._means) / self._stds
"""
Explanation: Write your preprocessor
Scaling training data so each numerical feature column has a mean of 0 and a standard deviation of 1 can improve your model.
Create preprocess.py, which contains a class to do this scaling:
End of explanation
"""
import pickle
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.externals import joblib
from preprocess import MySimpleScaler
iris = load_iris()
scaler = MySimpleScaler()
X = scaler.preprocess(iris.data)
y = iris.target
model = RandomForestClassifier()
model.fit(X, y)
joblib.dump(model, 'model.joblib')
with open('preprocessor.pkl', 'wb') as f:
pickle.dump(scaler, f)
"""
Explanation: Notice that an instance of MySimpleScaler saves the means and standard deviations of each feature column on first use. Then it uses these summary statistics to scale data it encounters afterward.
This lets you store characteristics of the training distribution and use them for identical preprocessing at prediction time.
Train your model
Next, use preprocess.MySimpleScaler to preprocess the iris data, then train a model using scikit-learn.
At the end, export your trained model as a joblib (.joblib) file and export your MySimpleScaler instance as a pickle (.pkl) file:
End of explanation
"""
%%writefile predictor.py
import os
import pickle
import numpy as np
from sklearn.datasets import load_iris
from sklearn.externals import joblib
class MyPredictor(object):
def __init__(self, model, preprocessor):
self._model = model
self._preprocessor = preprocessor
self._class_names = load_iris().target_names
def predict(self, instances, **kwargs):
inputs = np.asarray(instances)
preprocessed_inputs = self._preprocessor.preprocess(inputs)
if kwargs.get('probabilities'):
probabilities = self._model.predict_proba(preprocessed_inputs)
return probabilities.tolist()
else:
outputs = self._model.predict(preprocessed_inputs)
return [self._class_names[class_num] for class_num in outputs]
@classmethod
def from_path(cls, model_dir):
model_path = os.path.join(model_dir, 'model.joblib')
model = joblib.load(model_path)
preprocessor_path = os.path.join(model_dir, 'preprocessor.pkl')
with open(preprocessor_path, 'rb') as f:
preprocessor = pickle.load(f)
return cls(model, preprocessor)
"""
Explanation: Deploying a custom prediction routine
To deploy a custom prediction routine to serve predictions from your trained model, do the following:
Create a custom predictor to handle requests
Package your predictor and your preprocessing module
Upload your model artifacts and your custom code to Cloud Storage
Deploy your custom prediction routine to AI Platform
Create a custom predictor
To deploy a custom prediction routine, you must create a class that implements
the Predictor interface. This tells AI Platform how to load your model and how to handle prediction requests.
Write the following code to predictor.py:
End of explanation
"""
%%writefile setup.py
from setuptools import setup
setup(
name='my_custom_code',
version='0.1',
scripts=['predictor.py', 'preprocess.py'])
"""
Explanation: Notice that, in addition to using the preprocessor that you defined during training, this predictor performs a postprocessing step that converts the prediction output from class indexes (0, 1, or 2) into label strings (the name of the flower type).
However, if the predictor receives a probabilities keyword argument with the value True, it returns a probability array instead, denoting the probability that each of the three classes is the correct label (according to the model). The last part of this tutorial shows how to provide a keyword argument during prediction.
Package your custom code
You must package predictor.py and preprocess.py as a .tar.gz source distribution package and provide the package to AI Platform so it can use your custom code to serve predictions.
Write the following setup.py to define your package:
End of explanation
"""
! python setup.py sdist --formats=gztar
"""
Explanation: Then run the following command to create dist/my_custom_code-0.1.tar.gz:
End of explanation
"""
! gsutil cp ./dist/my_custom_code-0.1.tar.gz gs://$BUCKET_NAME/custom_prediction_routine_tutorial/my_custom_code-0.1.tar.gz
! gsutil cp model.joblib preprocessor.pkl gs://$BUCKET_NAME/custom_prediction_routine_tutorial/model/
"""
Explanation: Upload model artifacts and custom code to Cloud Storage
Before you can deploy your model for serving, AI Platform needs access to the following files in Cloud Storage:
model.joblib (model artifact)
preprocessor.pkl (model artifact)
my_custom_code-0.1.tar.gz (custom code)
Model artifacts must be stored together in a model directory, which your
Predictor can access as the model_dir argument in its from_path class
method. The custom
code does not need to be in the same directory. Run the following commands to
upload your files:
End of explanation
"""
MODEL_NAME = 'IrisPredictor'
VERSION_NAME = 'v1'
"""
Explanation: Deploy your custom prediction routine
Create a model resource and a version resource to deploy your custom prediction routine. First define variables with your resource names:
End of explanation
"""
! gcloud ai-platform models create $MODEL_NAME \
--regions $REGION
"""
Explanation: Then create your model:
End of explanation
"""
# --quiet automatically installs the beta component if it isn't already installed
! gcloud --quiet beta ai-platform versions create $VERSION_NAME \
--model $MODEL_NAME \
--runtime-version 1.13 \
--python-version 3.5 \
--origin gs://$BUCKET_NAME/custom_prediction_routine_tutorial/model/ \
--package-uris gs://$BUCKET_NAME/custom_prediction_routine_tutorial/my_custom_code-0.1.tar.gz \
--prediction-class predictor.MyPredictor
"""
Explanation: Next, create a version. In this step, provide paths to the artifacts and custom code you uploaded to Cloud Storage:
End of explanation
"""
! pip install --upgrade google-api-python-client
"""
Explanation: Learn more about the options you must specify when you deploy a custom prediction routine.
Serving online predictions
Try out your deployment by sending an online prediction request. First, install the Google APIs Client Library for Python:
End of explanation
"""
import googleapiclient.discovery
instances = [
[6.7, 3.1, 4.7, 1.5],
[4.6, 3.1, 1.5, 0.2],
]
service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT_ID, MODEL_NAME, VERSION_NAME)
response = service.projects().predict(
name=name,
body={'instances': instances}
).execute()
if 'error' in response:
raise RuntimeError(response['error'])
else:
print(response['predictions'])
"""
Explanation: Then send two instances of iris data to your deployed version:
End of explanation
"""
response = service.projects().predict(
name=name,
body={'instances': instances, 'probabilities': True}
).execute()
if 'error' in response:
raise RuntimeError(response['error'])
else:
print(response['predictions'])
"""
Explanation: Note: This code uses the credentials you set up during the authentication step to make the online prediction request.
Sending keyword arguments
When you send a prediction request to a custom prediction routine, you can provide additional fields on your request body. The Predictor's predict method receives these as fields of the **kwargs dictionary.
The following code sends the same request as before, but this time it adds a probabilities field to the request body:
End of explanation
"""
# Delete version resource
! gcloud ai-platform versions delete $VERSION_NAME --quiet --model $MODEL_NAME
# Delete model resource
! gcloud ai-platform models delete $MODEL_NAME --quiet
# Delete Cloud Storage objects that were created
! gsutil -m rm -r gs://$BUCKET_NAME/custom_prediction_routine_tutorial
"""
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP project you used for the tutorial.
Alternatively, you can clean up individual resources by running the following commands:
End of explanation
"""
|
sbg/Mitty | docs/alignment-accuracy-mq-plots.ipynb | apache-2.0 | # From SO: https://stackoverflow.com/a/28073228/2512851
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to show/hide raw code."></form>''')
import json
import matplotlib.pyplot as plt
import bokeh
import pandas as pd
import numpy as np
from bokeh.plotting import figure, show
from bokeh.io import output_notebook, gridplot
from bokeh.models import ColumnDataSource, HoverTool
output_notebook()
%matplotlib inline
d = json.load(open('inputs.json'))
fname = d['csvA']
df = pd.read_csv(fname)  # pd.DataFrame.from_csv is deprecated in favor of read_csv
"""
Explanation: Alignment report
End of explanation
"""
def aa_mq(df):
correct_0 = df[df['derr']=='d = 0'][['MQ', 'count']].groupby('MQ').sum()
correct_0.columns = ['correct_0']
correct_50 = df[(df['derr']=='0 < d <= 50') | (df['derr']=='d = 0')][['MQ', 'count']].groupby('MQ').sum()
correct_50.columns = ['correct_50']
total = df[['MQ', 'count']].groupby('MQ').sum()
total.columns = ['total']
data = pd.concat((correct_0, correct_50, total), axis=1)
data['perr_0'] = 1 - data['correct_0'] / data['total']
data['perr_50'] = 1 - data['correct_50'] / data['total']
data['perr_ideal'] = 10 ** (-data.index / 10)
return data
def plot_aa_mq(data):
max_y = 10 ** np.ceil(np.log10(data['perr_0'].max()))
min_y = 10 ** np.floor(np.log10(data['perr_50'].min()))
source = ColumnDataSource(data)
hover = HoverTool(tooltips=[
('perr 0', '@perr_0'),
('perr 50', '@perr_50'),
('perr ideal', '@perr_ideal'),
('Reads', '@total')
])
p = figure(plot_height=200, plot_width=500,
x_axis_label='MQ',
y_axis_label='p_err',
tools=[hover, 'reset'],
y_axis_type="log", y_range=[min_y, max_y])
    h1 = p.circle(x='MQ', y='perr_0', size=2, source=source)
    h2 = p.circle(x='MQ', y='perr_50', size=10, alpha=0.5, color='yellow', source=source)
    h3 = p.line(x='MQ', y='perr_ideal', source=source)
return p
def plot_read_hist(data):
max_y = 10 ** np.ceil(np.log10(data['total'].max()))
min_y = 10 ** np.floor(np.log10(data['total'].min()))
source = ColumnDataSource(data)
hover = HoverTool(tooltips=[
('Reads', '@total')
])
p = figure(plot_height=200, plot_width=500,
x_axis_label='MQ',
y_axis_label='Reads',
tools=[hover, 'reset'],
y_axis_type="log", y_range=[min_y, max_y])
h1 = p.vbar(x='MQ', bottom=min_y, top='total', width=0.7, source=source)
return p
data = aa_mq(df)
s = [
[plot_aa_mq(data)],
[plot_read_hist(data)]
]
p = gridplot(s)
show(p)
# read_count_mq = df.groupby('MQ').sum()
# max_y = 10 ** np.ceil(np.log10(read_count_mq['count'].max()))
# min_y = 10 ** np.floor(np.log10(read_count_mq['count'].min()))
# source = ColumnDataSource(read_count_mq)
# tools = ["reset"]
# p = figure(plot_height=200, plot_width=500,
# x_axis_label='MQ',
# y_axis_label='Reads',
# tools=tools,
# y_axis_type="log", y_range=[min_y, max_y])
# h1 = p.vbar(x='MQ', bottom=min_y, top='count', width=0.7, source=source)
# p.add_tools(HoverTool(renderers=[h1], tooltips=[("reads", "@count")]))
# show(p)
"""
Explanation: Read distribution by MQ
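MQ is a Phred-scaled mapping quality, which is where the perr_ideal column in aa_mq() above comes from: a read reported with mapping quality q carries a claimed error probability of 10^(-q/10). A small sketch of that relationship (the MQ values here are illustrative):

```python
import numpy as np

# MQ is Phred-scaled: p_err = 10 ** (-MQ / 10)
mq = np.array([0, 10, 20, 30, 60])
perr_ideal = 10.0 ** (-mq / 10)   # same formula as perr_ideal in aa_mq()
for q, p in zip(mq, perr_ideal):
    print(q, p)
```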
End of explanation
"""
read_count_by_fate = df.groupby('derr').sum()
read_count_by_fate['y'] = read_count_by_fate.index
max_x = 10 ** np.ceil(np.log10(read_count_by_fate['count'].max()))
min_x = 10 ** np.floor(np.log10(read_count_by_fate['count'].min()))
source = ColumnDataSource(read_count_by_fate)
tools = ["reset"]
p = figure(plot_height=200, plot_width=500,
x_axis_label='Reads',
y_axis_label='Read fate',
tools=tools,
y_range=read_count_by_fate.index.tolist(),
x_axis_type="log",
x_range=[min_x, max_x])
h1 = p.hbar(y='y', left=min_x, right='count', height=0.7, source=source)
p.add_tools(HoverTool(renderers=[h1], tooltips=[("reads", "@count")]))
show(p)
"""
Explanation: Read distribution by alignment fate
End of explanation
"""
# Matplotlib version of the plots
def heng_li_plot(df, category, ax):
sub_df = df[df['category']==category]
#correct = sub_df[(sub_df['derr']=='0 < d <= 50') | (sub_df['derr']=='d = 0')][['MQ', 'count']].groupby('MQ').sum()
correct = sub_df[sub_df['derr']=='d = 0'][['MQ', 'count']].groupby('MQ').sum()
correct.columns = ['correct']
mapped = sub_df[sub_df['derr']!='unmapped'][['MQ', 'count']].groupby('MQ').sum()
mapped.columns = ['mapped']
total = sub_df[['MQ', 'count']].groupby('MQ').sum()
total.columns = ['total']
data = pd.concat((correct, mapped, total), axis=1)
x = np.zeros(61, dtype=float)
y = np.zeros(61, dtype=float)
for mq in range(61):
data_sub = data.iloc[mq:]
x[mq] = 100 * data_sub['mapped'].sum() / data['total'].sum()
y[mq] = 100 * data_sub['correct'].sum() / data_sub['mapped'].sum()
ax.plot(x, y)
ax.plot(x[0], y[0], 'ko')
plt.setp(ax,
xlim=(95, 101), xticks=range(96,101),
ylim=(79, 101), yticks=range(80,101, 5),
title=category)
def plot_heng_li_panels(df):
fig = plt.figure(figsize=(10, 20))
for n, cat in enumerate(['Ref', 'SNP', 'Multi',
'INS <= 10', 'INS 11-50', 'INS > 50',
'DEL <= 10', 'DEL 11-50', 'DEL > 50']):
ax = plt.subplot(4, 3, n + 1)
heng_li_plot(df, cat, ax)
#plt.setp(ax, yscale='log')
if n != 6:
plt.setp(ax, xticklabels=[], yticklabels=[])
else:
plt.setp(ax, xlabel='% Mapped', ylabel='% Correct')
#plot_heng_li_panels(df)
def heng_li_plot(df, category):
sub_df = df[df['category']==category]
#correct = sub_df[(sub_df['derr']=='0 < d <= 50') | (sub_df['derr']=='d = 0')][['MQ', 'count']].groupby('MQ').sum()
correct = sub_df[sub_df['derr']=='d = 0'][['MQ', 'count']].groupby('MQ').sum()
correct.columns = ['correct']
mapped = sub_df[sub_df['derr']!='unmapped'][['MQ', 'count']].groupby('MQ').sum()
mapped.columns = ['mapped']
total = sub_df[['MQ', 'count']].groupby('MQ').sum()
total.columns = ['total']
data = pd.concat((correct, mapped, total), axis=1)
x = np.zeros(61, dtype=float)
y = np.zeros(61, dtype=float)
for mq in range(61):
data_sub = data.iloc[mq:]
x[mq] = 100 * data_sub['mapped'].sum() / data['total'].sum()
y[mq] = 100 * data_sub['correct'].sum() / data_sub['mapped'].sum()
source = ColumnDataSource(data=dict(
mapped=x,
correct=y,
mq=range(61)
))
hover = HoverTool(tooltips=[
('MQ', '≥ @mq'),
('Map', '@mapped %'),
('Correct', '@correct %')
])
s1 = figure(width=250, plot_height=250,
tools=[hover, 'pan', 'reset'],
title=category)
s1.circle(x[0], y[0], size=10)
s1.line('mapped', 'correct', source=source)
return s1
def plot_heng_li_panels(df):
s = []
row_cnt = 3
row_s = []
for n, cat in enumerate(['Ref', 'SNP', 'Multi',
'INS <= 10', 'INS 11-50', 'INS > 50',
'DEL <= 10', 'DEL 11-50', 'DEL > 50']):
if n % row_cnt == 0:
if len(row_s):
s.append(row_s)
row_s = []
row_s.append(heng_li_plot(df, cat))
if len(row_s):
s.append(row_s)
p = gridplot(s)
show(p)
plot_heng_li_panels(df)
"""
Explanation: Mapped rate and Alignment accuracy parametrized by MQ
End of explanation
"""
|
antoniomezzacapo/qiskit-tutorial | community/games/Hello_Qiskit.ipynb | apache-2.0 | print("Hello! I'm a code cell")
"""
Explanation: <img src="../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
Hello Qiskit
Click here to run this notebook in your browser using Binder.
The latest version of this notebook is available on https://github.com/qiskit/qiskit-tutorial.
Contributors
James R. Wootton, IBM Research
Chapter 1: Beginning with bits
This is a Jupyter notebook, in which you will run some puzzles and learn about quantum computing. Don't worry if you've never used a Jupyter notebook before. It just means you'll see lots of grey boxes with code in, like the one below. These are known as cells.
End of explanation
"""
print('Set up started...')
%matplotlib notebook
import sys
sys.path.append('game_engines')
import hello_quantum
print('Set up complete!')
"""
Explanation: You'll need to run the cells to use this tutorial. To run a cell, do the following.
For laptops and desktops, click on the cell and press Shift-Enter.
For mobile devices, tap on the icon that appears to the left of a cell.
Get started by doing this for the cell below (it will take a second or two to run).
End of explanation
"""
initialize = []
success_condition = {}
allowed_gates = {'0': {'NOT': 3}, '1': {}, 'both': {}}
vi = [[1], False, False]
qubit_names = {'0':'the only bit', '1':None}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: The rest of the cells in this notebook contain code that sets up puzzles for you to solve. To get the puzzles, just run the cells. To restart a puzzle, just rerun it.
If you want to know what the code is doing, or to create your own puzzles, see the guide here.
Level 1
Intro
Normal computers are made of bits, and quantum computers are made of qubits.
Qubits are basically an upgraded version of bits, so let's make sure we understand basic bit-based programming first.
The defining feature of a bit is that it has two possible output values.
These are often called 1 and 0, though we'll also be thinking of them as on and off.
We use bits to represent and process information, but we typically need lots of them to do this.
To help you understand how to manipulate bits, we'll give you one to play with.
The simplest thing you can do to a bit (other than leave it alone) is the NOT operation.
Try it out and see what happens.
Exercise
Use the NOT command 3 times.
End of explanation
"""
initialize = []
success_condition = {}
allowed_gates = {'0': {}, '1': {'NOT': 0}, 'both': {}}
vi = [[], False, False]
qubit_names = {'0':'the bit on the left', '1':'the bit on the right'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Outro
Here our bit was depicted by a circle that was either on (white) or off (black).
The effect of the NOT command was to turn it on and off.
This flips our bit between 0 and 1.
Level 2
Intro
Now let's do the same thing to a different bit.
This will look the same as before. But because it is a different bit, it'll be in a different place.
Exercise
Turn the other bit on.
End of explanation
"""
initialize = [['x', '0']]
success_condition = {'IZ': -1.0}
allowed_gates = {'0': {'CNOT': 0}, '1': {'CNOT': 0}, 'both': {}}
vi = [[], False, False]
qubit_names = {'0':'the bit on the left', '1':'the bit on the right'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Outro
You've now mastered the NOT command: the most basic building block of computing.
Level 3
Intro
To really process information stored in bits, computers need more than just NOTs.
We need commands that let us manipulate some bits in a way that depends on other bits.
The simplest example is the controlled-NOT, or CNOT.
For this you need to choose one bit to be the target, and the other to be the control.
The CNOT then does a NOT on the target bit, but only if the control bit is on.
Exercise
Use the CNOT to turn on the bit on the right.
Note: The CNOT acts on both bits, but you still need to choose which will be the target bit.
End of explanation
"""
initialize = [['x', '0']]
success_condition = {'ZI': 1.0, 'IZ': -1.0}
allowed_gates = {'0': {'CNOT': 0}, '1': {'CNOT': 0}, 'both': {}}
vi = [[], False, False]
qubit_names = {'0':'the bit on the left', '1':'the bit on the right'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Level 4
Intro
Now let's make it so you need a couple of CNOTs.
Exercise
Use some CNOTs to turn the left bit off and the right bit on.
End of explanation
"""
initialize = [['h', '0']]
success_condition = {'IZ': 0.0}
allowed_gates = {'0': {'CNOT': 0}, '1': {'CNOT': 0}, 'both': {}}
vi = [[], False, False]
qubit_names = {'0':'the bit on the left', '1':'the bit on the right'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Outro
Well done!
These kinds of manipulations are what all computing compiles down to.
With more bits and a controlled-controlled-NOT, you can do everything from Tetris to self-driving cars.
Level 5
Intro
Qubits have some similarities to random bit values, so let's take a few exercises to think about randomness.
For a bit that will give us a 0 or 1 with equal probability, we'll use a grey circle.
Exercise
Make the bit on the right random using a CNOT.
End of explanation
"""
initialize = [['h', '0']]
success_condition = {'ZZ': -1.0}
allowed_gates = {'0': {'NOT': 0, 'CNOT': 0}, '1': {'NOT': 0, 'CNOT': 0}, 'both': {}}
vi = [[], False, True]
qubit_names = {'0':'the bit on the left', '1':'the bit on the right'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Outro
Well done!
If the left bit was off, the right bit stayed off. If the left bit was on, the right bit got switched on too.
So the random value of the left bit was 'copied' over to the right by the CNOT.
This means, despite the randomness, we know that both bits will always output the same result.
This is not the only way of having two random bits. We could also create cases where they are independently random, or always have different results.
How can we keep track of how our random bits are correlated?
Level 6
Intro
To keep track of correlations, we'll add another circle.
This doesn't represent a new bit. It simply tells us whether our two bits will agree or not.
It will be off when they agree, on when they don't, and grey when they aren't correlated (so agreements and disagreements are random).
Exercise
Make the two bits always disagree (that means making the middle circle white).
End of explanation
"""
initialize = [['h', '1']]
success_condition = {'IZ': -1.0}
allowed_gates = {'0': {'NOT': 0, 'CNOT': 0}, '1': {'NOT': 0, 'CNOT': 0}, 'both': {}}
vi = [[], False, True]
qubit_names = {'0':'the bit on the left', '1':'the bit on the right'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Level 7
Intro
Now you know pretty much all you need to know about bits.
Let's have one more exercise before we move on.
Exercise
Turn on the bit on the right.
End of explanation
"""
initialize = [ ["x","0"] ]
success_condition = {"ZI":1.0}
allowed_gates = { "0":{"x":3}, "1":{}, "both":{} }
vi = [[1],True,True]
qubit_names = {'0':'the only qubit', '1':None}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Outro
Time to move on to qubits!
Chapter 2: Basic single qubit operations
Level 1
Intro
There are many types of variables that you can have in a computer program.
Here we introduce you to a new one: the qubit.
Before we explain what they are, we'll give you one to play with.
Exercise
Use the x command 3 times, and see what happens
End of explanation
"""
initialize = [['x', '0']]
success_condition = {'ZI': 1.0}
allowed_gates = {'0': {'x': 0}, '1': {}, 'both': {}}
vi = [[1], True, True]
qubit_names = {'0':'the only qubit', '1':None}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Level 2
Intro
You have now mastered turning a circle on and off :)
But what does that actually mean? What is it useful for? To answer that, you need to know something about qubits.
Basically, they are quantum objects from which we can extract a simple bit value: 0 or 1.
There are many ways we can do this, and the result we get depends on the method we use.
The two circles in the last puzzle represent two different ways we could get a bit out of the same qubit.
They are usually called $Z$ measurements and $X$ measurements, but we'll simply call them the top and bottom outputs in this tutorial.
A black circle means that the corresponding output would give the value 0. A white one means we'd get a 1.
The x command has the effect of a NOT that acts only on the bottom output.
Exercise
Turn off the bottom circle.
End of explanation
"""
initialize = [['x', '1']]
success_condition = {'IZ': 1.0}
allowed_gates = {'0': {}, '1': {'x': 0}, 'both': {}}
vi = [[0], True, True]
qubit_names = {'0':None, '1':'the other qubit'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Outro
The process of extracting a bit from a qubit is called 'measurement'. The measure command in Qiskit always extracts the bit from the bottom output.
The top output is acquired indirectly, using the h command that you will learn about soon.
Level 3
Intro
Now let's look at another qubit.
This will also have its inner workings represented by two circles.
But because it's a different qubit, these circles are in a different place.
Exercise
Turn the bottom circle of the other qubit off.
End of explanation
"""
initialize = []
success_condition = {'ZI': 0.0}
allowed_gates = {'0': {'h': 3}, '1': {}, 'both': {}}
vi = [[1], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Outro
Just above you should see 'Your quantum program so far'.
This is the record of the moves you've made, written as a quantum program with Qiskit.
From now on, we'll call the qubits by the same names that the Qiskit program does.
The one on the left will be qubit 0, and the one on the right will be qubit 1.
Level 4
Intro
Now it's time to try a new command: the h command.
This swaps the two circles of the qubit that it's applied to.
If you want to see this in a nice animated form, check out the Hello Quantum app.
But while you are here, test it out with the old trick of repeating three times.
Exercise
Use the h command 3 times.
End of explanation
"""
initialize = [['h', '1']]
success_condition = {'IZ': 1.0}
allowed_gates = {'0': {}, '1': {'x': 3, 'h': 0}, 'both': {}}
vi = [[0], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Outro
This can be used to make our bottom circle become grey.
Grey circles represent an output that randomly gives a 0 or a 1.
Level 5
Intro
We know what x does to a circle that's fully off (it turns it on) or fully on (it turns it off).
But what does it do to one of these random grey ones?
By solving this exercise, you'll find out.
Exercise
Get the bottom circle fully off. You can use as many h commands as you like, but use x exactly 3 times.
End of explanation
"""
initialize = [['h', '0'], ['z', '0']]
success_condition = {'XI': 1.0}
allowed_gates = {'0': {'z': 0, 'h': 0}, '1': {}, 'both': {}}
vi = [[1], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Outro
It turns out that a random result is just a random result, even if you flip it.
Level 6
Intro
Another important command is called z.
This works similar to x, except that it acts as a NOT on the top circle instead of the bottom.
Exercise
Turn the top circle off.
End of explanation
"""
initialize = []
success_condition = {'ZI': -1.0}
allowed_gates = {'0': {'z': 0, 'h': 0}, '1': {}, 'both': {}}
vi = [[1], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Level 7
Intro
The z command, when combined with h, can be used to do the job of an x.
Exercise
Turn on the bottom circle without using the x command.
End of explanation
"""
initialize = [['h', '0']]
success_condition = {'IX': 1.0}
allowed_gates = {'0': {}, '1': {'z': 0, 'h': 0}, 'both': {}}
vi = [[0], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Level 8
Intro
You might notice that the top circles are always random when the bottom circles are fully on or off.
This is because qubits can never be simultaneously certain about each kind of output.
If they are certain about one, the other must be random.
Quantum computing is all about making sure that your certainty and your randomness are in the right place.
Exercise
Move the off to the top and the randomness to the bottom.
End of explanation
"""
initialize = [['ry(pi/4)', '1']]
success_condition = {'IZ': -0.7071, 'IX': -0.7071}
allowed_gates = {'0': {}, '1': {'z': 0, 'h': 0}, 'both': {}}
vi = [[0], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Level 9
Intro
We can also share the limited certainty of our qubits between the two outputs.
Then both can be mostly certain about the output they give, even if they aren't fully certain.
In this exercise, the two circles for qubit 1 will start off dark grey. This means that both outputs would be highly likely, but not certain, to output a 0.
Exercise
Make the two circles for qubit 1 both light grey. This means that they'd be highly likely, but not certain, to output a 1.
End of explanation
"""
initialize = [['x', '1']]
success_condition = {'ZI': 0.0, 'IZ': 0.0}
allowed_gates = {'0': {'x': 0, 'z': 0, 'h': 0}, '1': {'x': 0, 'z': 0, 'h': 0}, 'both': {}}
vi = [[], True, False]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Level 10
Intro
Now you know the basic tools, you can tackle both qubits at once.
Exercise
Make both bottom outputs random.
End of explanation
"""
initialize = [['h','0'],['h','1']]
success_condition = {'ZZ': -1.0}
allowed_gates = {'0': {'x': 0, 'z': 0, 'h': 0}, '1': {'x': 0, 'z': 0, 'h': 0}, 'both': {}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Outro
Each bottom output here would randomly output a 0 or a 1.
But will their outputs be correlated? Anti-correlated? Completely unrelated?
Just as we did with bits, we'll keep track of this information with some extra circles.
Level 11
Intro
In this puzzle you'll see four new circles.
If you've done the tutorial on bits, you've already been introduced to the lower one.
It does the same job here, keeping track of whether the bottom output of one qubit will agree with the bottom output of the other.
If the two bottom outputs would definitely agree, this circle is off (black). If they'd disagree, it's on (white).
Exercise
Make the bottom outputs certain to disagree.
End of explanation
"""
initialize = [['x','0']]
success_condition = {'XX': 1.0}
allowed_gates = {'0': {'x': 0, 'z': 0, 'h': 0}, '1': {'x': 0, 'z': 0, 'h': 0}, 'both': {}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Level 12
Intro
The new circle at the very top has a similar job.
It keeps track of whether the top output from one qubit would agree with the top output from the other.
Exercise
Make the top outputs certain to agree.
End of explanation
"""
initialize = []
success_condition = {'XZ': -1.0}
allowed_gates = {'0': {'x': 0, 'z': 0, 'h': 0}, '1': {'x': 0, 'z': 0, 'h': 0}, 'both': {}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Level 13
Intro
The other new circles tell us whether the bottom output for one qubit would agree with the top one from the other.
Exercise
Make the top output for qubit 0 certain to disagree with the bottom output for qubit 1.
End of explanation
"""
initialize = [['ry(-pi/4)', '1'], ['ry(-pi/4)','0']]
success_condition = {'ZI': -0.7071, 'IZ': -0.7071}
allowed_gates = {'0': {'x': 0}, '1': {'x': 0}, 'both': {}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Level 14
Intro
Now the x commands don't just flip a single circle between black and white, but a whole column full of them.
Exercise
Turn the two bottom outputs on as much as you can.
End of explanation
"""
initialize = [['x', '1'], ['x','0']]
success_condition = {'XI':1, 'IX':1}
allowed_gates = {'0': {'z': 0, 'h': 0}, '1': {'z': 0, 'h': 0}, 'both': {}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Level 15
Intro
The z commands affect the top columns in a similar way.
The h command swaps the bottom and top columns.
Exercise
Turn off the top circles.
End of explanation
"""
initialize = [['x', '0']]
success_condition = {'ZI': 1.0, 'IZ': -1.0}
allowed_gates = {'0': {'cx': 0}, '1': {'cx': 0}, 'both': {}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Chapter 3: Two qubit operations
Level 1
Introduction
In the exercises on bits, we used the CNOT operation.
This can be used on qubits too!
Since the x operation serves as our quantum NOT, we will use the cx command to do a CNOT on qubits.
Again, these have a control and a target: The bottom circle of the control qubit decides whether an x is applied to the target qubit.
When you apply this gate, the qubit you choose will serve as the target. The other qubit will then be the control.
Exercise
Use a cx or two to turn on the bottom circle of qubit 1, and turn off the bottom circle of qubit 0.
End of explanation
"""
initialize = [['h', '0'],['x', '1']]
success_condition = {'XI': -1.0, 'IZ': 1.0}
allowed_gates = {'0': {'h': 0, 'cz': 0}, '1': {'cx': 0}, 'both': {}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Level 2
Introduction
As well as a cx operation, we can also do a cz with qubits.
This applies a z to the target when the bottom circle of the control is on.
Exercise
Turn on the top circle of qubit 0, and turn off the bottom circle of qubit 1.
End of explanation
"""
initialize = [['h', '0'],['x', '1'],['h', '1']]
success_condition = { }
allowed_gates = {'0':{'cz': 2}, '1':{'cz': 2}, 'both': {}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Level 3
Introduction
There's another way that we can explain what a cz is doing.
It swaps the top circle of qubit 0 with the neighbouring circle above it.
It does the same with the top circle of qubit 1.
It also does something weird with the circle at the top of the grid, but we needn't think too much about that.
Exercise
Do the cz twice with each qubit as control, and see what happens.
End of explanation
"""
initialize = [['x', '0']]
success_condition = {'IZ': -1.0}
allowed_gates = {'0': {'h':0}, '1': {'h':0}, 'both': {'cz': 0}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Outro
As you might have noticed, it doesn't matter which qubit you choose as control: the cz does the same thing in either case.
So from now on, choosing the control qubit for the cz is not required.
Level 4
Introduction
In a previous exercise, you've built an x from a z and some hs.
In the same way, it's possible to build a cx from a cz and some hs.
Exercise
Turn on the bottom circle of qubit 1.
End of explanation
"""
initialize = [['h', '0'],['h', '1']]
success_condition = {'XI': -1.0, 'IX': -1.0}
allowed_gates = {'0': {}, '1': {'z':0,'cx': 0}, 'both': {}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Outro
Unlike the cz, the cx is not symmetric.
To make one with the target on qubit 0, you would have to do the hs on qubit 0 instead.
Level 5
Introduction
Because the cx isn't symmetric, it is fun to interpret it 'backwards'.
So instead of thinking of it as doing an x on the target depending on what the bottom circle of the control is doing...
... we can think of it as doing a z to the control depending on what the top circle of the target is doing.
In case you don't believe me, here's an exercise to test out this very property.
Exercise
Turn on the top circle of qubit 0.
End of explanation
"""
initialize = []
success_condition = {'IZ': -1.0}
allowed_gates = {'0': {'x':0,'h':0,'cx':0}, '1': {'h':0}, 'both': {}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Outro
So there's two different stories about how the cx works. Though they may seem to be contradictory, they are equally good descriptions.
This is a great example of the weird and wonderful nature of quantum operations.
Level 6
Introduction
These two interpretations of a cx can help us do something pretty useful: turning one around.
Suppose you need a cx with qubit 1 as target, but you can only do one with qubit 0 as target.
Can we somehow get the effect we need?
Exercise
Turn on the bottom circle of qubit 1.
End of explanation
"""
initialize = [['ry(-pi/4)','0'],['ry(-pi/4)','0'],['ry(-pi/4)','0'],['x','0'],['x','1']]
success_condition = {'ZI': -1.0,'XI':0,'IZ':0.7071,'IX':-0.7071}
allowed_gates = {'0': {'h':0}, '1': {'h':0}, 'both': {'cz': 0}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Outro
If you remember anything from these exercises, it should probably be this.
It is common for real qubit devices to limit which way around you can do the cx.
So the ability to turn them around comes in very handy.
Level 7
Introduction
The cz and cx operations can also be used to make a swap.
This does exactly what the name suggests: it swaps the states of two qubits.
Exercise
Swap the two qubits:
Make the bottom circle white and the top circle grey for qubit 0;
Make the bottom circle dark grey and the top circle light grey for qubit 1.
End of explanation
"""
initialize = [['x','0'],['h','1']]
success_condition = {'IX':1,'ZI':-1}
allowed_gates = {'0': {'h':0}, '1': {'h':0}, 'both': {'cz':3}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names,shots=2000)
"""
Explanation: Outro
Your solution to this puzzle might not have been a general purpose swap.
Compare your solution to those for the next few puzzles, which also implement swaps.
Level 8
Intro
Another puzzle based on the swap.
Exercise
Swap the two qubits:
Make the top circle black for qubit 0.
Make the bottom circle white for qubit 1.
And do it with 3 cz operations.
End of explanation
"""
initialize = [['x','1']]
success_condition = {'IZ':1.0,'ZI':-1.0}
allowed_gates = {'0': {'h':0}, '1': {'h':0}, 'both': {'cz':0}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names,shots=2000)
"""
Explanation: Level 9
Intro
Another puzzle based on the swap.
Exercise
Swap the two qubits:
Turn off the bottom circle for qubit 0.
Turn on the bottom circle of qubit 1.
End of explanation
"""
initialize = []
success_condition = {}
allowed_gates = {'0': {'ry(pi/4)': 4}, '1': {}, 'both': {}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names)
"""
Explanation: Chapter 4: Beyond Clifford Operations
Level 1a
Introduction
The operations you've seen so far are called the 'Clifford operations'.
They are very important for moving and manipulating information in quantum computers.
But to create algorithms that will outperform standard computers, we need more operations.
This puzzle has one for you to try. Simply do it a few times, and see if you can work out what it does.
Exercise
Apply ry(pi/4) four times to qubit 0.
End of explanation
"""
initialize = [['x','0']]
success_condition = {'XX': 1.0}
allowed_gates = {'0': {'x': 0, 'z': 0, 'h': 0}, '1': {'x': 0, 'z': 0, 'h': 0}, 'both': {}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names, mode='line')
"""
Explanation: Outro
If you were able to work it out, you are some sort of genius!
For those of us who are mere mortals, let's try something new to help us figure it out.
Level 1b
Introduction
To understand this operation, we need to slightly change the way we visualize the qubits.
From now on, an output that is certain to give 0 will be represented by a white line rather than a white circle.
And an output certain to give 1 will be a black line instead of a black circle.
For a random output, you'll see a line that's part white and part black instead of a grey circle.
Here's an old exercise to help you get used to this new visualization.
Exercise
Make the top outputs certain to agree.
End of explanation
"""
initialize = []
success_condition = {'ZI': -1.0}
allowed_gates = {'0': {'bloch':1, 'ry(pi/4)': 0}, '1':{}, 'both': {'unbloch':0}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names, mode='line')
"""
Explanation: Level 1c
Introduction
Now you'll see a new command: bloch.
This doesn't actually do anything to the qubits. It just draws the two lines for each qubit on top of each other.
It also puts a point where their levels intersect.
Using bloch, you should hopefully be able to figure out how ry(pi/4) works.
Exercise
Turn the bottom line of qubit 0 fully on, and use the bloch command.
End of explanation
"""
initialize = [['h','0'],['h','1']]
success_condition = {'ZI': -1.0,'IZ': -1.0}
allowed_gates = {'0': {'bloch':0, 'ry(pi/4)': 0, 'ry(-pi/4)': 0}, '1': {'bloch':0, 'ry(pi/4)': 0, 'ry(-pi/4)': 0}, 'both': {'unbloch':0}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names, mode='line')
"""
Explanation: Outro
As you probably noticed, this doesn't just combine the two lines for each qubit. It combines their whole columns.
If we follow the points, the effect of ry(pi/4) is to rotate them.
Each application moves them an eighth of the way around the circle, and moves the levels of the lines along with it.
The ry(-pi/4) command is the same, except the rotation is in the other direction.
Level 2
Introduction
Now let's use these commands on the other qubit too.
Exercise
Turn the bottom lines fully on.
End of explanation
"""
initialize = [['h','0']]
success_condition = {'ZZ': 1.0}
allowed_gates = {'0': {}, '1': {'bloch':0, 'ry(pi/4)': 0, 'ry(-pi/4)': 0}, 'both': {'unbloch':0,'cz':0}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names, mode='line')
"""
Explanation: Level 3
Introduction
Here's a puzzle you could solve with a simple cx, or a cz and some hs.
Unfortunately you have neither cx nor h.
So you'll need to work out how cz and rys can do the job.
Exercise
Make the bottom outputs agree.
End of explanation
"""
initialize = [['ry(pi/4)','0'],['ry(pi/4)','1']]
success_condition = {'ZI': 1.0,'IZ': 1.0}
allowed_gates = {'0': {'bloch':0, 'z':0, 'ry(pi/4)': 1}, '1': {'bloch':0, 'x':0, 'ry(pi/4)': 1}, 'both': {'unbloch':0}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names, mode='line')
"""
Explanation: Level 4
Introduction
Using xs or zs you can effectively reflect an ry, to make it move in the opposite direction.
Exercise
Turn the bottom outputs fully off with just one ry(pi/4) on each.
End of explanation
"""
initialize = [['x','0'],['h','1']]
success_condition = {'IZ': 1.0}
allowed_gates = {'0': {}, '1': {'bloch':0, 'cx':0, 'ry(pi/4)': 1, 'ry(-pi/4)': 1}, 'both': {'unbloch':0}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names, mode='line')
"""
Explanation: Level 5
Introduction
With the rys, we can make conditional operations that are more interesting than just cz and cx.
For example, we can make a controlled-h.
Exercise
Turn off the bottom output for qubit 1 using exactly one ry(pi/4) and ry(-pi/4) on that qubit.
End of explanation
"""
initialize = []
success_condition = {'IZ': 1.0,'IX': 1.0}
allowed_gates = {'0': {'bloch':0, 'x':0, 'z':0, 'h':0, 'cx':0, 'ry(pi/4)': 0, 'ry(-pi/4)': 0}, '1': {'bloch':0, 'x':0, 'z':0, 'h':0, 'cx':0, 'ry(pi/4)': 0, 'ry(-pi/4)': 0}, 'both': {'cz':0, 'unbloch':0}}
vi = [[], True, True]
qubit_names = {'0':'qubit 0', '1':'qubit 1'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names, mode='line')
"""
Explanation: Sandbox
You now know enough basic quantum operations to build fully powerful quantum programs. You'll get a taste of this in the next chapter. Until then, here is a grid with all the operations enabled, so you can have a play around.
End of explanation
"""
import random
def setup_variables ():
### Replace this section with anything you want ###
r = random.random()
A = r*(2/3)
B = r*(1/3)
### End of section ###
return A, B
"""
Explanation: Chapter 5: Proving the Uniqueness of Quantum Variables
Bell test for classical variables
Here we'll investigate how quantum variables (based on qubits) differ from standard ones (based on bits).
We'll do this by creating a pair of variables, which we will call A and B. We aren't going to put any conditions on what these can be, or how they are initialized. So there are a lot of possibilities:
They could be any kind of variable, such as
int
list
dict
...
They could be initialized by any kind of process, such as
left empty
filled with a given set of values
generated by a given random process
independently applied to A and B
applied to A and B together, allowing for correlations between their randomness
If the variables are initialized by a random process, it means they'll have different values every time we run our program. This is perfectly fine. The only rule we need to obey is that the process of generating the randomness is the same for every run.
We'll use the function below to set up these variables. This currently has A and B defined to be partially correlated random floating point numbers. But you can change it to whatever you want.
End of explanation
"""
def hash2bit ( variable, hash ):
### Replace this section with anything you want ###
if hash=='V':
bit = (variable<0.5)
elif hash=='H':
bit = (variable<0.25)
bit = str(int(bit))
### End of section ###
return bit
"""
Explanation: Our next job is to define a hashing function. This simply needs to take one of the variables as input, and then give a bit value as an output.
This function must also be capable of performing two different types of hash. So it needs to be able to chew on a variable and spit out a bit in two different ways. Another input to the function is therefore the kind of hash we want to use.
To be consistent with the rest of the program, the two possible hash types should be called 'H' and 'V'. Also, the output must be in the form of a single value bit string: either '0' or '1'.
As an example, I created bits by comparing A and B to a certain value. The output is '1' if they are under that value, and '0' otherwise. The type of hash determines the value used.
End of explanation
"""
shots = 8192
def calculate_P ( ):
P = {}
for hashes in ['VV','VH','HV','HH']:
# calculate each P[hashes] by sampling over `shots` samples
P[hashes] = 0
for shot in range(shots):
A, B = setup_variables()
a = hash2bit ( A, hashes[0] ) # hash type for variable `A` is the first character of `hashes`
b = hash2bit ( B, hashes[1] ) # hash type for variable `B` is the second character of `hashes`
P[hashes] += (a!=b) / shots
return P
"""
Explanation: Once these are defined, there are four quantities we wish to calculate: P['HH'], P['HV'], P['VH'] and P['VV'].
Let's focus on P['HV'] as an example. This is the probability that the bit value derived from an 'H' type hash on A differs from that derived from a 'V' type hash on B. We will estimate this probability by sampling many times and determining the fraction of samples for which the corresponding bit values disagree.
The other probabilities are defined similarly: P['HH'] compares an 'H' type hash on both A and B, P['VV'] compares a 'V' type hash on both, and P['VH'] compares a 'V' type hash on A with an 'H' type hash on B.
These probabilities are calculated in the following function, which returns all the values of P in a dictionary. The parameter shots is the number of samples we'll use.
End of explanation
"""
P = calculate_P()
print(P)
"""
Explanation: Now let's actually calculate these values for the method we have chosen to set up and hash the variables.
End of explanation
"""
def bell_test (P):
sum_P = sum(P.values())
for hashes in P:
bound = sum_P - P[hashes]
print("The upper bound for P['"+hashes+"'] is "+str(bound))
print("The value of P['"+hashes+"'] is "+str(P[hashes]))
if P[hashes]<=bound:
print("The upper bound is obeyed :)\n")
else:
if P[hashes]-bound < 0.1:
print("This seems to have gone over the upper bound, but only by a little bit :S\nProbably just rounding errors or statistical noise.\n")
else:
print("!!!!! This has gone well over the upper bound :O !!!!!\n")
bell_test(P)
"""
Explanation: These values will vary slightly from one run to the next due to the fact that we only use a finite number of shots. To change them significantly, we need to change the way the variables are initialized, and/or the way the hash functions are defined.
No matter how these functions are defined, there are certain restrictions that the values of P will always obey.
For example, consider the case that P['HV'], P['VH'] and P['VV'] are all 0.0. The only way that this can be possible is for P['HH'] to also be 0.0.
To see why, we start by noting that P['HV']=0.0 is telling us that hash2bit ( A, H ) and hash2bit ( B, V ) were never different in any of the runs. So this means we can always expect them to be equal.
hash2bit ( A, H ) = hash2bit ( B, V ) (1)
From P['VV']=0.0 and P['VH']=0.0 we can similarly get
hash2bit ( A, V ) = hash2bit ( B, V ) (2)
hash2bit ( A, V ) = hash2bit ( B, H ) (3)
Putting (1) and (2) together implies that
hash2bit ( A, H ) = hash2bit ( A, V ) (4)
Combining this with (3) gives
hash2bit ( A, H ) = hash2bit ( B, H ) (5)
And if these values are always equal, we'll never see a run in which they are different. This is exactly what we set out to prove: P['HH']=0.0.
More generally, we can use the values of P['HV'], P['VH'] and P['VV'] to set an upper limit on what P['HH'] can be. By adapting the CHSH inequality we find that
$\,\,\,\,\,\,\,$ P['HH'] $\, \leq \,$ P['HV'] + P['VH'] + P['VV']
This is not just a special property of P['HH']. It's also true for all the others: each of these probabilities cannot be greater than the sum of the others.
To test whether this logic holds, we'll see how well the probabilities obey these inequalities. Note that we might get slight violations because the P values aren't exact; they are estimates made using a limited number of samples.
End of explanation
"""
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
def initialize_program ():
qubit = QuantumRegister(2)
A = qubit[0]
B = qubit[1]
bit = ClassicalRegister(2)
a = bit[0]
b = bit[1]
qc = QuantumCircuit(qubit, bit)
return A, B, a, b, qc
"""
Explanation: With the initialization and hash functions provided in this notebook, the value of P['HV'] should be pretty much the same as the upper bound. Since the numbers are estimated statistically, and therefore are slightly approximate due to statistical noise, you might even see it go a tiny bit over. But you'll never see it significantly surpass the bound.
If you don't believe me, try it for yourself. Change the way the variables are initialized, and how the hashes are calculated, and try to get one of the bounds to be significantly broken.
Bell test for quantum variables
Now we are going to do the same thing all over again, except our variables A and B will be quantum variables. Specifically, they'll be the simplest kind of quantum variable: qubits.
When writing quantum programs, we have to set up our qubits and bits before we can use them. This is done by the function below. It defines a register of two qubits, and assigns them as our variables A and B. It then sets up a register of two bits to receive the outputs, and assigns them as a and b.
Finally it uses these registers to set up an empty quantum program. This is called qc.
End of explanation
"""
def hash2bit ( variable, hash, bit, qc ):
if hash=='H':
qc.h( variable )
qc.measure( variable, bit )
"""
Explanation: Before we start writing the quantum program to set up our variables, let's think about what needs to happen at the end of the program. This will be where we define the different hash functions, which turn our qubits into bits.
The simplest way to extract a bit from a qubit is through the measure command. This corresponds to the bottom circle of a qubit in the visualization we've been using. Let's use this as our V type hash.
For the output that corresponds to the top circle, there is no direct means of access. However, we can do it indirectly by first doing an h to swap the top and bottom circles, and then using the measure command. This will be our H type hash.
Note that this function has more inputs than its classical counterpart. We have to tell it the bit on which to write the result, and the quantum program, qc, on which we write the commands.
End of explanation
"""
initialize = []
success_condition = {'ZZ':-0.7071,'ZX':-0.7071,'XZ':-0.7071,'XX':-0.7071}
allowed_gates = {'0': {'bloch':0, 'x':0, 'z':0, 'h':0, 'cx':0, 'ry(pi/4)': 0, 'ry(-pi/4)': 0}, '1': {'bloch':0, 'x':0, 'z':0, 'h':0, 'cx':0, 'ry(pi/4)': 0, 'ry(-pi/4)': 0}, 'both': {'cz':0, 'unbloch':0}}
vi = [[], True, True]
qubit_names = {'0':'A', '1':'B'}
puzzle = hello_quantum.run_game(initialize, success_condition, allowed_gates, vi, qubit_names, mode='line')
"""
Explanation: Now it's time to set up the variables A and B. To write this program, you can use the grid below. You can either follow the suggested exercise, or do whatever you like. Once you are ready, just move on. The cell containing the setup_variables() function will then use the program you wrote with the grid.
Note that our choice of hashes means that the probabilities P['HH'], P['HV'], P['VH'] and P['VV'] will explicitly correspond to circles on our grid. For example, the circle at the very top tells us how likely the two top outputs would be to disagree. If this is white, then P['HH']=1; if it is black, then P['HH']=0.
Exercise
Make it so that outputs from the bottom circles of both qubits are most likely to disagree, whereas all other combinations of outputs are most likely to agree.
End of explanation
"""
import numpy as np
def setup_variables ( A, B, qc ):
for line in puzzle.program:
eval(line)
"""
Explanation: Now the program as written above will be used to set up the quantum variables.
End of explanation
"""
shots = 8192
from qiskit import execute
def calculate_P ( backend ):
P = {}
program = {}
for hashes in ['VV','VH','HV','HH']:
A, B, a, b, program[hashes] = initialize_program ()
setup_variables( A, B, program[hashes] )
hash2bit ( A, hashes[0], a, program[hashes])
hash2bit ( B, hashes[1], b, program[hashes])
# submit jobs
job = execute( list(program.values()), backend, shots=shots )
# get the results
for hashes in ['VV','VH','HV','HH']:
stats = job.result().get_counts(program[hashes])
P[hashes] = 0
for string in stats.keys():
a = string[-1]
b = string[-2]
if a!=b:
P[hashes] += stats[string] / shots
return P
"""
Explanation: The values of P are calculated in the function below. This is done by sending jobs to IBM and getting results which tell us how many of the samples gave each possible result. The results are given as a bit string, string, which IBM numbers from right to left. This means that the value of a, which corresponds to bit[0], is the first from the right
a = string[-1]
and the value of b is right next to it at the second from the right
b = string[-2]
The number of samples for this bit string is provided by the dictionary of results, stats, as stats[string].
End of explanation
"""
device = 'qasm_simulator_py'
from qiskit import Aer, IBMQ
try:
IBMQ.load_accounts()
except:
pass
try:
backend = Aer.get_backend(device)
except:
backend = IBMQ.get_backend(device)
print(backend.status())
P = calculate_P( backend )
print(P)
bell_test( P )
"""
Explanation: Now it's time to choose and set up the actual device we are going to use. By default, we'll use a simulator. But if you want to use the real device, just replace the first line below with device='ibmq_5_tenerife' or device='ibmq_16_melbourne'.
End of explanation
"""
fdmazzone/Ecuaciones_Diferenciales | examenes/.ipynb_checkpoints/GruposLie-checkpoint.ipynb | gpl-2.0 | from sympy import *
init_printing()  # display nicer symbols
R=lambda n,d: Rational(n,d)
"""
Explanation: <h2> Assorted exercises related to Lie groups </h2>
End of explanation
"""
x,y,a,b,c,d,e,f=symbols('x,y,a,b,c,d,e,f',real=True)
# load the function
F=x*y**4/3-R(2,3)*y/x+R(1,3)/x**3/y**2
F
"""
Explanation: Exercise (1st midterm exam, 2018): Solve $\frac{dy}{dx}=\frac{x y^{4}}{3} - \frac{2 y}{3 x} + \frac{1}{3 x^{3} y^{2}}$.
We will try the heuristic $$\xi=ax+cy+e$$ and $$\eta=bx+dy+f$$ to find the symmetries
End of explanation
"""
xi=a*x+c*y+e
eta=b*x+d*y+f
xi, eta
"""
Explanation: We set $\xi=ax+cy+e$ and $\eta=bx+dy+f$
End of explanation
"""
Q=eta-xi*F
CondSim=Q.diff(x)+F*Q.diff(y)-F.diff(y)*Q
CondSim
CondSim=CondSim.factor()
CondSim
CondSim1, _denom = fraction(CondSim)  # keep the numerator; the denominator is not needed
CondSim1
e1=CondSim1.coeff(x**7).coeff(y**7)
e1
"""
Explanation: Linearized symmetry condition
End of explanation
"""
CondSim2=CondSim1.subs(f,0)
CondSim2
e2=CondSim2.coeff(x**7).coeff(y**8)
e2
"""
Explanation: We must have $f=0$
End of explanation
"""
CondSim3=CondSim2.subs(d,-2*a/3)
CondSim3
"""
Explanation: We see that $d=-2a/3$.
End of explanation
"""
CondSim4=CondSim3.subs(c,0)
CondSim4
e3=CondSim4.coeff(x**8).coeff(y**7)
e3
"""
Explanation: We must have $c=0$.
End of explanation
"""
CondSim5=CondSim4.subs(b,0)
CondSim5
"""
Explanation: We see that $b=0$.
End of explanation
"""
xi=xi.subs({c:0,f:0,e:0,a:1,b:0,d:-R(2,3)})
eta=eta.subs({c:0,f:0,e:0,a:1,b:0,d:-R(2,3)})
xi,eta
"""
Explanation: This is satisfied if $e=0$.
End of explanation
"""
f=Function('f')(x)
xi2=xi.subs(y,f)
eta2=eta.subs(y,f)
dsolve(Eq(f.diff(x),eta2/xi2),f)
"""
Explanation: Invariant points: $(0,0)$. Canonical coordinates will not exist there.
To find the invariant coordinate we solve
$$y'=\frac{\eta}{\xi}.$$
End of explanation
"""
r=x**2*y**3
r
"""
Explanation: This tells us that $r=x^{\frac{2}{3}}y$ is a solution. Since $H(r)$ also works for any $H$ with $H'\neq 0$, choosing $H(r)=r^3$ allows us to take $r= x^2y^3$.
End of explanation
"""
s=integrate(xi2**(-1),x)
s
"""
Explanation: To find $s$ we solve
$$s=\int\frac{1}{\xi}dx.$$
End of explanation
"""
s=log(x)
r, s
"""
Explanation: SymPy does not integrate the logarithm well here, so we set it by hand
End of explanation
"""
r1,s1=symbols('r1,s1')
( (s.diff(x)+s.diff(y)*F)/(r.diff(x)+r.diff(y)*F)).subs({x:exp(s1),y:r1**R(1,3)*exp(-R(2,3)*s1)}) .simplify()
"""
Explanation: Let us substitute into the change-of-variables formula
$$\frac{ds}{dr}=\left.\frac{s_x+s_y F}{r_x+r_y F}\right|_{x=e^s,y=r^{1/3}e^{-2/3s}}.$$
End of explanation
"""
yuhao0531/dmc | notebooks/week-2/03 - Introduction to Python - Functions and Objects.ipynb | apache-2.0 | def addFunction(inputNumber):
result = inputNumber + 2
return result
"""
Explanation: So far, we have seen how we can use variables in Python to store different kinds of data, and how we can use 'flow control' structures such as conditionals and loops to change the order or the way in which lines of code get executed. With only these tools we can already start to express some pretty complex logic. Even so, any sufficiently complex script would start to get very long, since every time we wanted to do a certain process we would have to rewrite all of its code. This is where functions and classes come in. Functions allow us to encapsulate lines of code to create custom processes that can be reused anywhere throughout the script. Objects take this encapsulation one step further and wrap up not only a single process, but several related processes, as well as local variables that can keep track of that object's state.
4. Functions
We have already seen and used some functions such as type(), str(), .append(), .keys(), and range(). But what are functions really?
As in math, a function is a basic structure that can accept inputs, perform some processing on those inputs, and give back a result. Let's create a basic function that will add two to a given number and give us back the result:
End of explanation
"""
print addFunction(2)
"""
Explanation: On its own, this code will only define what the function does, but will not actually run any code. To execute the code inside the function you have to call it somewhere within the script and pass it the proper inputs:
End of explanation
"""
var = 2
print addFunction(var)
"""
Explanation: A function's definition begins with the keyword 'def'. After this is the function's name, which follows the same naming conventions as variables. Inside the parenthesis after the function name you can place any number of input variables, which will be passed to the function when it is called, and are available within the body of the function. When you call a function, you can either directly pass values or pass variables that have values stored inside of them. For example, this code will call the function in the same way:
End of explanation
"""
def addFunction(inputNumber):
if inputNumber < 0:
return 'Number must be positive!'
result = inputNumber + 2
return result
print addFunction(-2)
print addFunction(2)
"""
Explanation: Here the value of the 'var' variable, which in this case is 2, is being passed to the 'addFunction' function, and is then available within that function through the 'inputNumber' variable. Notice that the names of the two variables 'var' and 'inputNumber' don't have to match. When a value gets passed to a function it forms a direct connection between the two sets of parenthesis which carries the data. In this case 'var' is a global variable that stores the value '2' in the main script, while 'inputNumber' is a local variable which stores that value only for the duration of that function. In this way functions 'wrap up' specific tasks and all the data that is necessary to execute that task to limit the number of global variables necessary in the main function.
The first line declaring the function and its inputs ends with a colon, which should be familiar by now, with the rest of the function body inset from the first line. Optionally, if you want to return a value from the function back to the main script, you can end the function with the keyword 'return', followed by the value or variable you want the function to return to the user. Once the function hits on a return statement, it will skip over the rest of the body and return the associated value. This can be used to create more complex behavior within the function:
End of explanation
"""
def addTwoNumbers(inputNumber1, inputNumber2):
result = inputNumber1 + inputNumber2
return result
print addTwoNumbers(2, 3)
"""
Explanation: You can see that in this case, if the input is less than zero the conditional will be met, which causes the first return statement to run, skipping the rest of the code in the function.
You can pass any number of inputs into a function, but the number of inputs must always match between what is defined in the function, and what is passed into it when the function is called. For example, we can expand our simple addition function to accept two numbers to be added:
End of explanation
"""
def twoNumbers(inputNumber1, inputNumber2):
addition = inputNumber1 + inputNumber2
multiplication = inputNumber1 * inputNumber2
return [addition, multiplication]
result = twoNumbers(2, 3)
print 'addition: ', result[0]
print 'multiplication: ', result[1]
"""
Explanation: You can also return multiple values by building them into a list, and then extracting them from the returned list. Let's expand our function to return both the addition and multiplication of two numbers:
End of explanation
"""
add, mult = twoNumbers(2, 3)
print 'addition: ', str(add)
print 'multiplication: ', str(mult)
"""
Explanation: If you don't want to use a list you can also ask for the results as an ordered set of new variables separated by a comma:
End of explanation
"""
class CounterClass:
count = 0
def addToCounter(self, inputValue):
self.count += inputValue
def getCount(self):
return self.count
"""
Explanation: These kinds of functions are extremely useful for creating efficient and readable code. By wrapping up certain functionalities into custom modules, they allow you (and possibly others) to reuse code in a very efficient way, and also force you to be explicit about the various sets of operations happening in your code. You can see that the basic definition of functions is quite simple, however you can quickly start to define more advanced logics, where functions call each other and pass around inputs and returns in highly complex ways (you can even pass a function as an input into another function!). This kind of programming, which uses functions to encapsulate discrete logics in a program is called functional programming.
5. Classes
A step beyond functional programming is object-oriented programming or OOP. In OOP, programs are defined not as a list of procedures to be executed one at a time, but as a collection of interacting objects. In the traditional procedural approach, a program is executed and finishes once all the procedures are run. With OOP, the program is running continuously, with objects interacting and triggering different behaviors based on events occurring in real time.
Although we will not get too deep into OOP within the confines of this course, many of the technologies we build on will be inherently based on OOP. So it is important to at least get familiar with what objects are, and how we can use them in a very basic sense. An object in Python is called a class, but the two words are often used interchangeably. You can think of a class as a structure that encapsulates a set of related functions (functions belonging to specific objects are often called that object's 'methods') with a set of local variables that keep track of that class' state. Together, these variables and methods define the 'behavior' of the object, and dictate how it interacts with other objects in the programming 'environment'.
Let's think about this in everyday terms. For an animal, an example of a method might be 'running'. Lots of things can run, so the definition of running as a function would be general, and would not necessarily relate to who is doing the running. On the other hand, an example of a class might be 'dog', which would have an instance of the 'running' method, as well as other methods related to being a dog such as 'eating' and 'barking'. It would also have a set of variable for storing information about a given dog, such as its age, breed or weight. Another class might be 'human', which would store different variables, and would have it's own particular version of methods such as 'running' and 'eating' (but hopefully not 'barking').
Let's define a very basic class to see how it works. We will use an example of a counter, which will store a value, and increment that value based on user requests:
End of explanation
"""
myCounter = CounterClass()
"""
Explanation: Notice we are again using the '+=' shorthand to increment the value of the object's count variable by the input value. To use this class, we first need to create an instance of it, which we will store in a variable just like any other piece of data:
End of explanation
"""
myCounter.addToCounter(2)
print myCounter.getCount()
"""
Explanation: Once we create an instance of a class (this is called 'instantiation'), we can run that instance's methods, and query its variables. Note that the general class definition is only a construct. All variables within the class only apply to a particular instance, and the methods can only be run as they relate to that instance. For example:
End of explanation
"""
myCounter.count
"""
Explanation: Right away, you will notice a few differences between how we define functions and classes. First of all, no variables are passed on the first line of the definition since the 'class' keyword only defines the overall structure of the class. After the first line you will find a list of variables that are the local variables of that class, and keep track of data for individual instances. After this you will have a collection of local methods (remember 'methods' are simply functions that belong to a particular class) that define the class functionality. These methods are defined the same way as before, except you see that the first input is always the keyword 'self'. This represents the object instance, and is always passed as the first input into each method in a class. This allows you to query the local variables of the instance, as you can see us doing with the 'count' variable.
To call a method within a class, you use the name of the variable that is storing the instance, and use the dot '.' notation to call the method. The dot is basically your way into the instance and all of its data and functionality. We have seen the dot before, for example when we called the .append() function on a list. This is because a list is actually a class itself! When you define a list you are actually creating an instance of the list class, which inherits all of the functionalities of that class (crazy right?). Actually there is only a small collection of primitive data types in Python (ints, floats, booleans, and a few others), with everything else defined as classes in the OOP framework. Even strings are special classes which store a collection of characters.
By the way, it is also possible to use the '.' syntax to query the local variables of the class instance. For example, if we want to find the value of myCounter's count variable, we can just ask it by typing:
End of explanation
"""
class CounterClass:
count = 0
def __init__(self, inputValue):
self.count = inputValue
def addToCounter(self, inputValue):
self.count += inputValue
def getCount(self):
return self.count
"""
Explanation: However, this is discouraged because it reveals the true name of the local variables to the end user. In a production environment this would pose severe security risks, but it is considered bad practice even in private uses. Instead, you are encouraged to create special 'accessor' methods to pull variable values from the instance, as we have done with the 'getCount()' method in our example. Another advantage of this practice (which is called encapsulation) is that the code is easier to maintain. You are free to make any changes within the class definition, including changing the names of the local variables and what they do. As long as you maintain the accessor functions and they return the expected result, you do not have to update anything in the main code.
As far as naming classes goes, you can follow the same rule as naming variables or functions, however the standard practice is to capitalize every word, including the first one.
Finally, in the example above every instance we make of the CounterClass will start the counter at 0. However, what if we want to specify what this count should be when we make an instance of the class? For this we can implement the __init__() method (those are two underscores on each side of 'init'):
End of explanation
"""
myNewCounter = CounterClass(10)
myNewCounter.addToCounter(2)
#this should now return 12
print myNewCounter.getCount()
"""
Explanation: Now we can create a new instance of the counter, but this time pass in a starting value for the count.
End of explanation
"""
kriete/cie5703_notebooks | week_7_spatial_variograms.ipynb | mit | from rpy2.robjects.packages import importr
from rpy2.robjects import r
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: This is a Python / R implementation for spatial analysis of radar rainfall fields. All credit for the R code implementation goes to Marc Schleiss
Notes before running:
o) make sure R is installed properly
o) install the python library rpy2
linux/mac users should be fine with just pip install rpy2
for windows, consider using anaconda to install rpy2
o) install the R libraries sp, gstat and intamap inside the R environment (best as sudo/administrator):
install.packages("sp")
install.packages("gstat")
install.packages("intamap")
Import python / R interface packages
End of explanation
"""
sp = importr('sp')
gstat = importr('gstat')
intamap = importr('intamap')
"""
Explanation: Inside the R environment import these geospatial packages
End of explanation
"""
r('jet.colors <- c("#00007F","blue","#007FFF","cyan","#7FFF7F","yellow","#FF7F00","red","#7F0000")')
r('col.palette <- colorRampPalette(jet.colors)')
"""
Explanation: Set colors within the R environment
End of explanation
"""
coords = pd.read_csv('./radar_xy.csv', header=None)
coords.columns = ['x', 'y']
coords.head()
"""
Explanation: Read the radar grid coordinates into a pandas DataFrame
End of explanation
"""
rainfall = pd.read_csv('./radar_sent/radar_snap_24h_2011_08_05-00_00.csv', header=None)
rainfall = pd.DataFrame(rainfall.iloc[0,5::])
rainfall.index = np.arange(0,len(rainfall),1)
rainfall.columns = ['R']
rainfall['x'] = coords['x']
rainfall['y'] = coords['y']
rainfall.head()
"""
Explanation: Read the 24h dataset and (re)arrange the pandas DataFrame
End of explanation
"""
from rpy2.robjects import pandas2ri
pandas2ri.activate()
"""
Explanation: Activate the pandas to R conversion interface
End of explanation
"""
mask = rainfall.R>0
rainfall = rainfall[mask]
r_df = pandas2ri.py2ri(rainfall)
r.assign('mydata', r_df)
"""
Explanation: Select only gridpoints > 0mm rain (wet mask) and assign it in the R environment
End of explanation
"""
r('''
mydata <- data.frame(mydata)
coordinates(mydata) <- ~x+y
''')
"""
Explanation: Transform the dataset in R into a geospatial dataset
End of explanation
"""
r('''
RAD24 <- read.table("./radar_sent/radar_snap_24h_2011_08_05-00_00.csv",sep=",",colClasses="numeric")
RAD24 <- as.numeric(as.vector(RAD24))
RAD24 <- RAD24[6:length(RAD24)]
png("map_24h.png",height=900,width=900)
ncuts <- 20
cuts <- seq(min(RAD24,na.rm=TRUE),max(RAD24,na.rm=TRUE),length=ncuts)
print(spplot(mydata["R"],xlab="East [m]",ylab="North [m]",key.space="right",cuts=cuts,region=TRUE,col.regions=col.palette(ncuts),main="Rainfall [mm]",scales=list(draw=TRUE)))
dev.off()
''')
"""
Explanation: Plot the map directly in R (bypassing matplotlib) and save it as PNG
End of explanation
"""
p_myiso = r('myiso <- variogram(R~1,mydata,width=2,cutoff=100)')
p_myiso.head()
plt.plot(p_myiso['dist'], p_myiso['gamma'], '-o')
"""
Explanation: Generate an isotropic variogram (2 km lag width, 100 km cutoff)
End of explanation
"""
p_myiso_map = r('myisomap <- variogram(R~1,mydata,width=2,cutoff=50,map=TRUE)')
r('''
png("myvariogram_map_24h.png",height=600,width=600)
print(plot(myisomap))
dev.off()
''')
"""
Explanation: Generate and save the 2D variogram map
End of explanation
"""
rain_sorted = rainfall.sort_values('R', ascending=False)
rain_sorted = rain_sorted.iloc[0:1499]
rs_df = pandas2ri.py2ri(rain_sorted)
r.assign('data_sorted', rs_df)
r('''
data_sorted <- data.frame(data_sorted)
coordinates(data_sorted) <- ~x+y
''')
"""
Explanation: Investigate the (an)isotropy of the dataset
This is only possible with up to 1499 values, so we sort the rainfall values in descending order, keep the largest 1499, and assign the sorted dataset to the R environment.
End of explanation
"""
r('''
hat.anis <- estimateAnisotropy(data_sorted,"R")
anis <- c(90-hat.anis$direction,1/hat.anis$ratio)
''')
"""
Explanation: Returns the direction of minimum variability clockwise from North and the anisotropy ratio
End of explanation
"""
dir_var = r('directional_variograms <- variogram(R~1,mydata,width=2,cutoff=100,alpha=c(99.9,189.9),tol.hor=5)')
dir_1 = dir_var['dir.hor']==99.9
plt.figure()
plt.subplot(121)
plt.plot(dir_var.dist[dir_1], dir_var.gamma[dir_1])
plt.subplot(122)
plt.plot(dir_var.dist[~dir_1], dir_var.gamma[~dir_1])
"""
Explanation: Compute directional variograms with anisotropy direction
End of explanation
"""
r('initial_vario_sph <- vgm(psill=500,model="Sph",range=40,nugget=0)')
sph_fitted = r('fitted_vario_sph <- fit.variogram(myiso,initial_vario_sph)')
print(sph_fitted)
"""
Explanation: Fit initial spherical variogram to isotropic variogram
End of explanation
"""
r('''
png("fitted_isotropic_variogram_sph_24h.png",height=600,width=900)
print(plot(myiso,fitted_vario_sph))
dev.off()
''')
"""
Explanation: Save image of fitted variogram
End of explanation
"""
r('range <- fitted_vario_sph$range[2]')
"""
Explanation: Fitted range
End of explanation
"""
r('nugget <- fitted_vario_sph$psill[1]')
"""
Explanation: Fitted nugget
End of explanation
"""
r('sill <- sum(fitted_vario_sph$psill)')
"""
Explanation: Fitted sill
End of explanation
"""
r('SSErr_sph <- attributes(fitted_vario_sph)$SSErr')
"""
Explanation: Sum of squared errors
End of explanation
"""
r('initial_vario_exp <- vgm(psill=500,model="Exp",range=40/3,nugget=0)')
exp_fitted = r('fitted_vario_exp <- fit.variogram(myiso,initial_vario_exp)')
"""
Explanation: Fit exponential model
End of explanation
"""
r('SSErr_exp <- attributes(fitted_vario_exp)$SSErr')
"""
Explanation: Sum of squared errors
End of explanation
"""
r('''
png("fitted_isotropic_variogram_exp_24h.png",height=600,width=900)
print(plot(myiso,fitted_vario_exp))
dev.off()
''')
"""
Explanation: Save image
End of explanation
"""
rain_15_min_1 = pd.read_csv('./radar_sent/radar_snap_2011_08_01-00_00.csv', header=None)
rain_15_min_1 = pd.DataFrame(rain_15_min_1.iloc[0,5::])
rain_15_min_1.index = np.arange(0,len(rain_15_min_1),1)
rain_15_min_1.columns = ['R']
rain_15_min_1['x'] = coords['x']
rain_15_min_1['y'] = coords['y']
rain_15_min_2 = pd.read_csv('./radar_sent/radar_snap_2011_08_05-16_00.csv', header=None)
rain_15_min_2 = pd.DataFrame(rain_15_min_2.iloc[0,5::])
rain_15_min_2.index = np.arange(0,len(rain_15_min_2),1)
rain_15_min_2.columns = ['R']
rain_15_min_2['x'] = coords['x']
rain_15_min_2['y'] = coords['y']
rain_3h = pd.read_csv('./radar_sent/radar_snap_3h_2011_08_05-15_00.csv', header=None)
rain_3h = pd.DataFrame(rain_3h.iloc[0,5::])
rain_3h.index = np.arange(0,len(rain_3h),1)
rain_3h.columns = ['R']
rain_3h['x'] = coords['x']
rain_3h['y'] = coords['y']
mask = rain_15_min_1.R>0
rain_15_min_1 = rain_15_min_1[mask]
r_df_15_1 = pandas2ri.py2ri(rain_15_min_1)
r.assign('mydata_15_1', r_df_15_1)
mask = rain_15_min_2.R>0
rain_15_min_2 = rain_15_min_2[mask]
r_df_15_2 = pandas2ri.py2ri(rain_15_min_2)
r.assign('mydata_15_2', r_df_15_2)
mask = rain_3h.R>0
rain_3h = rain_3h[mask]
r_df_3h = pandas2ri.py2ri(rain_3h)
r.assign('mydata_3h', r_df_3h)
r('''
mydata_15_1 <- data.frame(mydata_15_1)
coordinates(mydata_15_1) <- ~x+y
''')
p_myiso_1 = r('myiso_15_1 <- variogram(R~1,mydata_15_1,width=2,cutoff=100)')
r('''
mydata_15_2 <- data.frame(mydata_15_2)
coordinates(mydata_15_2) <- ~x+y
''')
p_myiso_2 = r('myiso_15_2 <- variogram(R~1,mydata_15_2,width=2,cutoff=100)')
r('''
mydata_3h <- data.frame(mydata_3h)
coordinates(mydata_3h) <- ~x+y
''')
p_myiso_3 = r('myiso_3h <- variogram(R~1,mydata_3h,width=2,cutoff=100)')
print(p_myiso_1.head())
print(p_myiso_2.head())
print(p_myiso_3.head())
"""
Explanation: Repeat everything with the 15-minute and 3 h fields
End of explanation
"""
r('''
RAD24 <- read.table("./radar_sent/radar_snap_2011_08_01-00_00.csv",sep=",",colClasses="numeric")
RAD24 <- as.numeric(as.vector(RAD24))
RAD24 <- RAD24[6:length(RAD24)]
png("map_15_1.png",height=900,width=900)
ncuts <- 20
cuts <- seq(min(RAD24,na.rm=TRUE),max(RAD24,na.rm=TRUE),length=ncuts)
print(spplot(mydata_15_1["R"],xlab="East [m]",ylab="North [m]",key.space="right",cuts=cuts,region=TRUE,col.regions=col.palette(ncuts),main="Rainfall [mm]",scales=list(draw=TRUE)))
dev.off()
RAD24 <- read.table("./radar_sent/radar_snap_2011_08_05-16_00.csv",sep=",",colClasses="numeric")
RAD24 <- as.numeric(as.vector(RAD24))
RAD24 <- RAD24[6:length(RAD24)]
png("map_15_2.png",height=900,width=900)
ncuts <- 20
cuts <- seq(min(RAD24,na.rm=TRUE),max(RAD24,na.rm=TRUE),length=ncuts)
print(spplot(mydata_15_2["R"],xlab="East [m]",ylab="North [m]",key.space="right",cuts=cuts,region=TRUE,col.regions=col.palette(ncuts),main="Rainfall [mm]",scales=list(draw=TRUE)))
dev.off()
RAD24 <- read.table("./radar_sent/radar_snap_3h_2011_08_05-15_00.csv",sep=",",colClasses="numeric")
RAD24 <- as.numeric(as.vector(RAD24))
RAD24 <- RAD24[6:length(RAD24)]
png("map_3h.png",height=900,width=900)
ncuts <- 20
cuts <- seq(min(RAD24,na.rm=TRUE),max(RAD24,na.rm=TRUE),length=ncuts)
print(spplot(mydata_3h["R"],xlab="East [m]",ylab="North [m]",key.space="right",cuts=cuts,region=TRUE,col.regions=col.palette(ncuts),main="Rainfall [mm]",scales=list(draw=TRUE)))
dev.off()
''')
"""
Explanation: Plot the maps directly in R (bypassing matplotlib) and save them as PNG
End of explanation
"""
plt.plot(p_myiso_1['dist'], p_myiso_1['gamma'], '-o')
plt.plot(p_myiso_2['dist'], p_myiso_2['gamma'], '-o')
plt.legend(['08-01 00:00', '08-05 16:00'])
plt.figure()
plt.plot(p_myiso_3['dist'], p_myiso_3['gamma'], '-o')
plt.legend(['08-05 3h accum'])
"""
Explanation: Isotropic variograms
End of explanation
"""
r('''
myiso_15_1_map <- variogram(R~1,mydata_15_1,width=2,cutoff=50,map=TRUE)
png("myvariogram_map_15_1.png",height=600,width=600)
print(plot(myiso_15_1_map))
dev.off()
myiso_15_2_map <- variogram(R~1,mydata_15_2,width=2,cutoff=50,map=TRUE)
png("myvariogram_map_15_2.png",height=600,width=600)
print(plot(myiso_15_2_map))
dev.off()
myiso_3h_map <- variogram(R~1,mydata_3h,width=2,cutoff=50,map=TRUE)
png("myvariogram_map_3h.png",height=600,width=600)
print(plot(myiso_3h_map))
dev.off()
''')
"""
Explanation: Plot R variogram map
End of explanation
"""
rain_sorted_15_1 = rain_15_min_1.sort_values('R', ascending=False)
rain_sorted_15_1 = rain_sorted_15_1.iloc[0:1499]
rs_df_15_1 = pandas2ri.py2ri(rain_sorted_15_1)
r.assign('data_sorted_15_1', rs_df_15_1)
rain_sorted_15_2 = rain_15_min_2.sort_values('R', ascending=False)
rain_sorted_15_2 = rain_sorted_15_2.iloc[0:1499]
rs_df_15_2 = pandas2ri.py2ri(rain_sorted_15_2)
r.assign('data_sorted_15_2', rs_df_15_2)
rain_sorted_3h = rain_3h.sort_values('R', ascending=False)
rain_sorted_3h = rain_sorted_3h.iloc[0:1499]
rs_df_3h = pandas2ri.py2ri(rain_sorted_3h)
r.assign('data_sorted_3h', rs_df_3h)
r('''
data_sorted_15_1 <- data.frame(data_sorted_15_1)
coordinates(data_sorted_15_1) <- ~x+y
data_sorted_15_2 <- data.frame(data_sorted_15_2)
coordinates(data_sorted_15_2) <- ~x+y
data_sorted_3h <- data.frame(data_sorted_3h)
coordinates(data_sorted_3h) <- ~x+y
''')
r('hat.anis <- estimateAnisotropy(data_sorted_15_1,"R")')
anis_15_1 = r('anis <- c(90-hat.anis$direction,1/hat.anis$ratio)')
print(anis_15_1)
r('hat.anis <- estimateAnisotropy(data_sorted_15_2,"R")')
anis_15_2 = r('anis <- c(90-hat.anis$direction,1/hat.anis$ratio)')
print(anis_15_2)
r('hat.anis <- estimateAnisotropy(data_sorted_3h,"R")')
anis_3h = r('anis <- c(90-hat.anis$direction,1/hat.anis$ratio)')
print(anis_3h)
dir_var_15_1 = r('directional_variograms_15_1 <- variogram(R~1,mydata_15_1,width=2,cutoff=100,alpha=c(83.6,173.6),tol.hor=5)')
dir_var_15_2 = r('directional_variograms_15_2 <- variogram(R~1,mydata_15_2,width=2,cutoff=100,alpha=c(119.45,209.45),tol.hor=5)')
dir_var_3h = r('directional_variograms_3h <- variogram(R~1,mydata_3h,width=2,cutoff=100,alpha=c(130.27,220.27),tol.hor=5)')
dir_var_15_1['dir.hor'] = np.around(dir_var_15_1['dir.hor'], decimals=2)
dir_15_1 = dir_var_15_1['dir.hor']==83.6
plt.figure()
plt.subplot(121)
plt.plot(dir_var_15_1.dist[dir_15_1], dir_var_15_1.gamma[dir_15_1], '-o')
plt.subplot(122)
plt.plot(dir_var_15_1.dist[~dir_15_1], dir_var_15_1.gamma[~dir_15_1], '-o')
dir_var_15_2['dir.hor'] = np.around(dir_var_15_2['dir.hor'], decimals=2)
dir_15_2 = dir_var_15_2['dir.hor']==119.45
plt.figure()
plt.subplot(121)
plt.plot(dir_var_15_2.dist[dir_15_2], dir_var_15_2.gamma[dir_15_2], '-o')
plt.subplot(122)
plt.plot(dir_var_15_2.dist[~dir_15_2], dir_var_15_2.gamma[~dir_15_2], '-o')
dir_var_3h['dir.hor'] = np.around(dir_var_3h['dir.hor'], decimals=2)
dir_3h = dir_var_3h['dir.hor']==130.27
plt.figure()
plt.subplot(121)
plt.plot(dir_var_3h.dist[dir_3h], dir_var_3h.gamma[dir_3h], '-o')
plt.subplot(122)
plt.plot(dir_var_3h.dist[~dir_3h], dir_var_3h.gamma[~dir_3h], '-o')
"""
Explanation: (An)isotropy and directional variogram
Sort and convert to R spatial dataframe
End of explanation
"""
r('''
initial_vario_sph_15_1 <- vgm(psill=30,model="Sph",range=60,nugget=0)
initial_vario_sph_15_2 <- vgm(psill=17,model="Sph",range=30,nugget=0)
initial_vario_sph_3h <- vgm(psill=500,model="Sph",range=40,nugget=0)
''')
fitted_vario_sph_15_1 = r('fitted_vario_sph_15_1 <- fit.variogram(myiso_15_1,initial_vario_sph_15_1)')
sph_15_1_sserr = r('SSErr_sph <- attributes(fitted_vario_sph_15_1)$SSErr')
fitted_vario_sph_15_2 = r('fitted_vario_sph_15_2 <- fit.variogram(myiso_15_2,initial_vario_sph_15_2)')
sph_15_2_sserr = r('SSErr_sph <- attributes(fitted_vario_sph_15_2)$SSErr')
fitted_vario_sph_3h = r('fitted_vario_sph_3h <- fit.variogram(myiso_3h,initial_vario_sph_3h)')
sph_3h_sserr = r('SSErr_sph <- attributes(fitted_vario_sph_3h)$SSErr')
r('''
png("fitted_isotropic_variogram_sph_15_1.png",height=600,width=900)
print(plot(myiso_15_1,fitted_vario_sph_15_1))
dev.off()
png("fitted_isotropic_variogram_sph_15_2.png",height=600,width=900)
print(plot(myiso_15_2,fitted_vario_sph_15_2))
dev.off()
png("fitted_isotropic_variogram_sph_3h.png",height=600,width=900)
print(plot(myiso_3h,fitted_vario_sph_3h))
dev.off()
''')
print('15_1:')
print(fitted_vario_sph_15_1.range[2])
print(fitted_vario_sph_15_1.psill[1])
print(sum(fitted_vario_sph_15_1.psill))
print(sph_15_1_sserr)
print('----------')
print('15_2:')
print(fitted_vario_sph_15_2.range[2])
print(fitted_vario_sph_15_2.psill[1])
print(sum(fitted_vario_sph_15_2.psill))
print(sph_15_2_sserr)
print('----------')
print('3h:')
print(fitted_vario_sph_3h.range[2])
print(fitted_vario_sph_3h.psill[1])
print(sum(fitted_vario_sph_3h.psill))
print(sph_3h_sserr)
"""
Explanation: Spherical
End of explanation
"""
r('''
initial_vario_exp_15_1 <- vgm(psill=30,model="Exp",range=60/3,nugget=0)
initial_vario_exp_15_2 <- vgm(psill=17,model="Exp",range=30/3,nugget=0)
initial_vario_exp_3h <- vgm(psill=500,model="Exp",range=40/3,nugget=0)
''')
fitted_vario_exp_15_1 = r('fitted_vario_exp_15_1 <- fit.variogram(myiso_15_1,initial_vario_exp_15_1)')
exp_15_1_sserr = r('SSErr_exp <- attributes(fitted_vario_exp_15_1)$SSErr')
fitted_vario_exp_15_2 = r('fitted_vario_exp_15_2 <- fit.variogram(myiso_15_2,initial_vario_exp_15_2)')
exp_15_2_sserr = r('SSErr_exp <- attributes(fitted_vario_exp_15_2)$SSErr')
fitted_vario_exp_3h = r('fitted_vario_exp_3h <- fit.variogram(myiso_3h,initial_vario_exp_3h)')
exp_3h_sserr = r('SSErr_exp <- attributes(fitted_vario_exp_3h)$SSErr')
r('''
png("fitted_isotropic_variogram_exp_15_1.png",height=600,width=900)
print(plot(myiso_15_1,fitted_vario_exp_15_1))
dev.off()
png("fitted_isotropic_variogram_exp_15_2.png",height=600,width=900)
print(plot(myiso_15_2,fitted_vario_exp_15_2))
dev.off()
png("fitted_isotropic_variogram_exp_3h.png",height=600,width=900)
print(plot(myiso_3h,fitted_vario_exp_3h))
dev.off()
''')
print('15_1:')
print(fitted_vario_exp_15_1.range[2])
print(fitted_vario_exp_15_1.psill[1])
print(sum(fitted_vario_exp_15_1.psill))
print(exp_15_1_sserr)
print('----------')
print('15_2:')
print(fitted_vario_exp_15_2.range[2])
print(fitted_vario_exp_15_2.psill[1])
print(sum(fitted_vario_exp_15_2.psill))
print(exp_15_2_sserr)
print('----------')
print('3h:')
print(fitted_vario_exp_3h.range[2])
print(fitted_vario_exp_3h.psill[1])
print(sum(fitted_vario_exp_3h.psill))
print(exp_3h_sserr)
"""
Explanation: Exponential
End of explanation
"""
coords_gauges = pd.read_csv('./gauge_xy.csv', header=None)
coords_gauges.columns = ['x', 'y']
coords_gauges.head()
from scipy.spatial.distance import pdist
gauge_distances = pdist(coords_gauges)
gauge_hist = plt.hist(gauge_distances,bins=100)
plt.ylabel('N')
plt.xlabel('km')
"""
Explanation: Gauge distances
End of explanation
"""
|
manipopopo/tensorflow | tensorflow/contrib/eager/python/examples/pix2pix/pix2pix_eager.ipynb | apache-2.0 | # Import TensorFlow >= 1.10 and enable eager execution
import tensorflow as tf
tf.enable_eager_execution()
import os
import time
import numpy as np
import matplotlib.pyplot as plt
import PIL
from IPython.display import clear_output
"""
Explanation: Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
Pix2Pix: An example with tf.keras and eager
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/pix2pix/pix2pix_eager.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/pix2pix/pix2pix_eager.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a></td></table>
This notebook demonstrates image-to-image translation using conditional GANs, as described in Image-to-Image Translation with Conditional Adversarial Networks. Using this technique we can colorize black-and-white photos, convert Google Maps to Google Earth imagery, etc. Here, we convert building facades to real buildings. We use tf.keras and eager execution to achieve this.
In this example, we will use the CMP Facade Database, helpfully provided by the Center for Machine Perception at the Czech Technical University in Prague. To keep the example short, we will use a preprocessed copy of this dataset, created by the authors of the paper above.
Each epoch takes around 58 seconds on a single P100 GPU.
Below is the output generated after training the model for 200 epochs.
Import TensorFlow and enable eager execution
End of explanation
"""
path_to_zip = tf.keras.utils.get_file('facades.tar.gz',
cache_subdir=os.path.abspath('.'),
origin='https://people.eecs.berkeley.edu/~tinghuiz/projects/pix2pix/datasets/facades.tar.gz',
extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'facades/')
BUFFER_SIZE = 400
BATCH_SIZE = 1
IMG_WIDTH = 256
IMG_HEIGHT = 256
def load_image(image_file, is_train):
image = tf.read_file(image_file)
image = tf.image.decode_jpeg(image)
w = tf.shape(image)[1]
w = w // 2
real_image = image[:, :w, :]
input_image = image[:, w:, :]
input_image = tf.cast(input_image, tf.float32)
real_image = tf.cast(real_image, tf.float32)
if is_train:
# random jittering
# resizing to 286 x 286 x 3
# method = 2 indicates using "ResizeMethod.NEAREST_NEIGHBOR"
input_image = tf.image.resize_images(input_image, [286, 286],
align_corners=True, method=2)
real_image = tf.image.resize_images(real_image, [286, 286],
align_corners=True, method=2)
# randomly cropping to 256 x 256 x 3
stacked_image = tf.stack([input_image, real_image], axis=0)
cropped_image = tf.random_crop(stacked_image, size=[2, IMG_HEIGHT, IMG_WIDTH, 3])
input_image, real_image = cropped_image[0], cropped_image[1]
if np.random.random() > 0.5:
# random mirroring
input_image = tf.image.flip_left_right(input_image)
real_image = tf.image.flip_left_right(real_image)
else:
input_image = tf.image.resize_images(input_image, size=[IMG_HEIGHT, IMG_WIDTH],
align_corners=True, method=2)
real_image = tf.image.resize_images(real_image, size=[IMG_HEIGHT, IMG_WIDTH],
align_corners=True, method=2)
# normalizing the images to [-1, 1]
input_image = (input_image / 127.5) - 1
real_image = (real_image / 127.5) - 1
return input_image, real_image
"""
Explanation: Load the dataset
You can download this dataset and similar datasets from here. As mentioned in the paper we apply random jittering and mirroring to the training dataset.
* In random jittering, the image is resized to 286 x 286 and then randomly cropped to 256 x 256
* In random mirroring, the image is randomly flipped horizontally, i.e., left to right.
End of explanation
"""
train_dataset = tf.data.Dataset.list_files(PATH+'train/*.jpg')
train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.map(lambda x: load_image(x, True))
train_dataset = train_dataset.batch(1)
test_dataset = tf.data.Dataset.list_files(PATH+'test/*.jpg')
test_dataset = test_dataset.map(lambda x: load_image(x, False))
test_dataset = test_dataset.batch(1)
"""
Explanation: Use tf.data to create batches, map (apply preprocessing), and shuffle the dataset
End of explanation
"""
OUTPUT_CHANNELS = 3
class Downsample(tf.keras.Model):
def __init__(self, filters, size, apply_batchnorm=True):
super(Downsample, self).__init__()
self.apply_batchnorm = apply_batchnorm
initializer = tf.random_normal_initializer(0., 0.02)
self.conv1 = tf.keras.layers.Conv2D(filters,
(size, size),
strides=2,
padding='same',
kernel_initializer=initializer,
use_bias=False)
if self.apply_batchnorm:
self.batchnorm = tf.keras.layers.BatchNormalization()
def call(self, x, training):
x = self.conv1(x)
if self.apply_batchnorm:
x = self.batchnorm(x, training=training)
x = tf.nn.leaky_relu(x)
return x
class Upsample(tf.keras.Model):
def __init__(self, filters, size, apply_dropout=False):
super(Upsample, self).__init__()
self.apply_dropout = apply_dropout
initializer = tf.random_normal_initializer(0., 0.02)
self.up_conv = tf.keras.layers.Conv2DTranspose(filters,
(size, size),
strides=2,
padding='same',
kernel_initializer=initializer,
use_bias=False)
self.batchnorm = tf.keras.layers.BatchNormalization()
if self.apply_dropout:
self.dropout = tf.keras.layers.Dropout(0.5)
def call(self, x1, x2, training):
x = self.up_conv(x1)
x = self.batchnorm(x, training=training)
if self.apply_dropout:
x = self.dropout(x, training=training)
x = tf.nn.relu(x)
x = tf.concat([x, x2], axis=-1)
return x
class Generator(tf.keras.Model):
def __init__(self):
super(Generator, self).__init__()
initializer = tf.random_normal_initializer(0., 0.02)
self.down1 = Downsample(64, 4, apply_batchnorm=False)
self.down2 = Downsample(128, 4)
self.down3 = Downsample(256, 4)
self.down4 = Downsample(512, 4)
self.down5 = Downsample(512, 4)
self.down6 = Downsample(512, 4)
self.down7 = Downsample(512, 4)
self.down8 = Downsample(512, 4)
self.up1 = Upsample(512, 4, apply_dropout=True)
self.up2 = Upsample(512, 4, apply_dropout=True)
self.up3 = Upsample(512, 4, apply_dropout=True)
self.up4 = Upsample(512, 4)
self.up5 = Upsample(256, 4)
self.up6 = Upsample(128, 4)
self.up7 = Upsample(64, 4)
self.last = tf.keras.layers.Conv2DTranspose(OUTPUT_CHANNELS,
(4, 4),
strides=2,
padding='same',
kernel_initializer=initializer)
@tf.contrib.eager.defun
def call(self, x, training):
# x shape == (bs, 256, 256, 3)
x1 = self.down1(x, training=training) # (bs, 128, 128, 64)
x2 = self.down2(x1, training=training) # (bs, 64, 64, 128)
x3 = self.down3(x2, training=training) # (bs, 32, 32, 256)
x4 = self.down4(x3, training=training) # (bs, 16, 16, 512)
x5 = self.down5(x4, training=training) # (bs, 8, 8, 512)
x6 = self.down6(x5, training=training) # (bs, 4, 4, 512)
x7 = self.down7(x6, training=training) # (bs, 2, 2, 512)
x8 = self.down8(x7, training=training) # (bs, 1, 1, 512)
x9 = self.up1(x8, x7, training=training) # (bs, 2, 2, 1024)
x10 = self.up2(x9, x6, training=training) # (bs, 4, 4, 1024)
x11 = self.up3(x10, x5, training=training) # (bs, 8, 8, 1024)
x12 = self.up4(x11, x4, training=training) # (bs, 16, 16, 1024)
x13 = self.up5(x12, x3, training=training) # (bs, 32, 32, 512)
x14 = self.up6(x13, x2, training=training) # (bs, 64, 64, 256)
x15 = self.up7(x14, x1, training=training) # (bs, 128, 128, 128)
x16 = self.last(x15) # (bs, 256, 256, 3)
x16 = tf.nn.tanh(x16)
return x16
class DiscDownsample(tf.keras.Model):
def __init__(self, filters, size, apply_batchnorm=True):
super(DiscDownsample, self).__init__()
self.apply_batchnorm = apply_batchnorm
initializer = tf.random_normal_initializer(0., 0.02)
self.conv1 = tf.keras.layers.Conv2D(filters,
(size, size),
strides=2,
padding='same',
kernel_initializer=initializer,
use_bias=False)
if self.apply_batchnorm:
self.batchnorm = tf.keras.layers.BatchNormalization()
def call(self, x, training):
x = self.conv1(x)
if self.apply_batchnorm:
x = self.batchnorm(x, training=training)
x = tf.nn.leaky_relu(x)
return x
class Discriminator(tf.keras.Model):
def __init__(self):
super(Discriminator, self).__init__()
initializer = tf.random_normal_initializer(0., 0.02)
self.down1 = DiscDownsample(64, 4, False)
self.down2 = DiscDownsample(128, 4)
self.down3 = DiscDownsample(256, 4)
# we are zero padding here with 1 because we need our shape to
# go from (batch_size, 32, 32, 256) to (batch_size, 31, 31, 512)
self.zero_pad1 = tf.keras.layers.ZeroPadding2D()
self.conv = tf.keras.layers.Conv2D(512,
(4, 4),
strides=1,
kernel_initializer=initializer,
use_bias=False)
self.batchnorm1 = tf.keras.layers.BatchNormalization()
# shape change from (batch_size, 31, 31, 512) to (batch_size, 30, 30, 1)
self.zero_pad2 = tf.keras.layers.ZeroPadding2D()
self.last = tf.keras.layers.Conv2D(1,
(4, 4),
strides=1,
kernel_initializer=initializer)
@tf.contrib.eager.defun
def call(self, inp, tar, training):
# concatenating the input and the target
x = tf.concat([inp, tar], axis=-1) # (bs, 256, 256, channels*2)
x = self.down1(x, training=training) # (bs, 128, 128, 64)
x = self.down2(x, training=training) # (bs, 64, 64, 128)
x = self.down3(x, training=training) # (bs, 32, 32, 256)
x = self.zero_pad1(x) # (bs, 34, 34, 256)
x = self.conv(x) # (bs, 31, 31, 512)
x = self.batchnorm1(x, training=training)
x = tf.nn.leaky_relu(x)
x = self.zero_pad2(x) # (bs, 33, 33, 512)
# don't add a sigmoid activation here since
# the loss function expects raw logits.
x = self.last(x) # (bs, 30, 30, 1)
return x
# The call function of Generator and Discriminator have been decorated
# with tf.contrib.eager.defun()
# We get a performance speedup if defun is used (~25 seconds per epoch)
generator = Generator()
discriminator = Discriminator()
"""
Explanation: Write the generator and discriminator models
Generator
The architecture of generator is a modified U-Net.
Each block in the encoder is (Conv -> Batchnorm -> Leaky ReLU)
Each block in the decoder is (Transposed Conv -> Batchnorm -> Dropout (applied to the first 3 blocks) -> ReLU)
There are skip connections between the encoder and decoder (as in U-Net).
Discriminator
The Discriminator is a PatchGAN.
Each block in the discriminator is (Conv -> BatchNorm -> Leaky ReLU)
The shape of the output after the last layer is (batch_size, 30, 30, 1)
Each 30x30 patch of the output classifies a 70x70 portion of the input image (such an architecture is called a PatchGAN).
Discriminator receives 2 inputs.
Input image and the target image, which it should classify as real.
Input image and the generated image (output of generator), which it should classify as fake.
We concatenate these 2 inputs together in the code (tf.concat([inp, tar], axis=-1))
The shape of the input as it travels through the generator and the discriminator is noted in the code comments.
To learn more about the architecture and the hyperparameters, you can refer to the paper.
End of explanation
"""
LAMBDA = 100
def discriminator_loss(disc_real_output, disc_generated_output):
real_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels = tf.ones_like(disc_real_output),
logits = disc_real_output)
generated_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels = tf.zeros_like(disc_generated_output),
logits = disc_generated_output)
total_disc_loss = real_loss + generated_loss
return total_disc_loss
def generator_loss(disc_generated_output, gen_output, target):
gan_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels = tf.ones_like(disc_generated_output),
logits = disc_generated_output)
# mean absolute error
l1_loss = tf.reduce_mean(tf.abs(target - gen_output))
total_gen_loss = gan_loss + (LAMBDA * l1_loss)
return total_gen_loss
generator_optimizer = tf.train.AdamOptimizer(2e-4, beta1=0.5)
discriminator_optimizer = tf.train.AdamOptimizer(2e-4, beta1=0.5)
"""
Explanation: Define the loss functions and the optimizer
Discriminator loss
The discriminator loss function takes 2 inputs; real images, generated images
real_loss is a sigmoid cross entropy loss of the real images and an array of ones (since these are the real images)
generated_loss is a sigmoid cross entropy loss of the generated images and an array of zeros (since these are the fake images)
Then the total_loss is the sum of real_loss and the generated_loss
Generator loss
It is a sigmoid cross entropy loss of the generated images and an array of ones.
The paper also includes L1 loss which is MAE (mean absolute error) between the generated image and the target image.
This allows the generated image to become structurally similar to the target image.
The total generator loss is gan_loss + LAMBDA * l1_loss, where LAMBDA = 100; this value was chosen by the authors of the paper.
End of explanation
"""
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,
discriminator_optimizer=discriminator_optimizer,
generator=generator,
discriminator=discriminator)
"""
Explanation: Checkpoints (Object-based saving)
End of explanation
"""
EPOCHS = 200
def generate_images(model, test_input, tar):
# the training=True is intentional here since
# we want the batch statistics while running the model
# on the test dataset. If we use training=False, we will get
# the accumulated statistics learned from the training dataset
# (which we don't want)
prediction = model(test_input, training=True)
plt.figure(figsize=(15,15))
display_list = [test_input[0], tar[0], prediction[0]]
title = ['Input Image', 'Ground Truth', 'Predicted Image']
for i in range(3):
plt.subplot(1, 3, i+1)
plt.title(title[i])
# getting the pixel values between [0, 1] to plot it.
plt.imshow(display_list[i] * 0.5 + 0.5)
plt.axis('off')
plt.show()
def train(dataset, epochs):
for epoch in range(epochs):
start = time.time()
for input_image, target in dataset:
with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
gen_output = generator(input_image, training=True)
disc_real_output = discriminator(input_image, target, training=True)
disc_generated_output = discriminator(input_image, gen_output, training=True)
gen_loss = generator_loss(disc_generated_output, gen_output, target)
disc_loss = discriminator_loss(disc_real_output, disc_generated_output)
generator_gradients = gen_tape.gradient(gen_loss,
generator.variables)
discriminator_gradients = disc_tape.gradient(disc_loss,
discriminator.variables)
generator_optimizer.apply_gradients(zip(generator_gradients,
generator.variables))
discriminator_optimizer.apply_gradients(zip(discriminator_gradients,
discriminator.variables))
if epoch % 1 == 0:
clear_output(wait=True)
for inp, tar in test_dataset.take(1):
generate_images(generator, inp, tar)
# saving (checkpoint) the model every 20 epochs
if epoch % 20 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print ('Time taken for epoch {} is {} sec\n'.format(epoch + 1,
time.time()-start))
train(train_dataset, EPOCHS)
"""
Explanation: Training
We start by iterating over the dataset
The generator gets the input image and we get a generated output.
The discriminator receives the input_image and the generated image as the first input. The second input is the input_image and the target_image.
Next, we calculate the generator and the discriminator loss.
Then, we calculate the gradients of the losses with respect to both the generator and the discriminator variables (inputs) and apply those to the optimizer.
Generate Images
After training, it's time to generate some images!
We pass images from the test dataset to the generator.
The generator will then translate the input image into the output we expect.
Last step is to plot the predictions and voila!
End of explanation
"""
# restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
"""
Explanation: Restore the latest checkpoint and test
End of explanation
"""
# Run the trained model on the entire test dataset
for inp, tar in test_dataset:
generate_images(generator, inp, tar)
"""
Explanation: Testing on the entire test dataset
End of explanation
"""
|
fantasycheng/udacity-deep-learning-project | image-classification/dlnd_image_classification.ipynb | mit | """
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tests.test_folder_path(cifar10_dataset_folder_path)
"""
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll apply what you learned to build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
"""
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains labels and images, each belonging to one of the following classes:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
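A quick way to start answering the "Are the labels in order or random?" question is to tally the labels with the standard library's Counter. This is a minimal sketch over a hypothetical label list; in the notebook you would pull the real labels out of the batch files that helper loads:

```python
from collections import Counter

# Hypothetical labels standing in for one CIFAR-10 batch's labels;
# the real ones come from the batch files helper loads above.
labels = [6, 9, 9, 4, 1, 1, 2, 7, 8, 3]

counts = Counter(labels)
print(sorted(counts))        # the distinct labels seen
print(counts.most_common())  # frequency of each label
```

If the most common labels dominate heavily, the batch is imbalanced; for CIFAR-10 the full dataset is balanced across the ten classes.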
End of explanation
"""
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
    : return: Numpy array of normalized data
"""
# TODO: Implement Function
normalized_x = x / 255.
return normalized_x
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
"""
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
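Since the pixel values are 8-bit integers in [0, 255], dividing by 255 is one straightforward way to satisfy the requirement; a small numpy sketch:

```python
import numpy as np

# Pixel values 0-255 scaled into [0, 1]; the output keeps the input's shape.
sample = np.array([[0, 128, 255]], dtype=np.float64)
normalized = sample / 255.0
print(normalized.min(), normalized.max(), normalized.shape)
```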
End of explanation
"""
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
# TODO: Implement Function
nb_classes = 10
return np.eye(nb_classes)[x]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
"""
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
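One wheel you don't need to reinvent: indexing the rows of an identity matrix with the label array yields the one-hot vectors directly, and the encoding is identical on every call:

```python
import numpy as np

# np.eye(10) is the 10x10 identity; row i is the one-hot vector for label i.
labels = np.array([0, 3, 9])
one_hot = np.eye(10)[labels]
print(one_hot.shape)  # (3, 10)
```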
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
"""
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
    Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=[None, *image_shape], name='x')
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, shape=[None, n_classes], name='y')
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
# TODO: Implement Function
return tf.placeholder(tf.float32, name='keep_prob')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
"""
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
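The shape lists themselves are plain Python; star-unpacking prepends the dynamic batch dimension to the fixed image dimensions:

```python
# Building the placeholder shape: batch dimension None, then the image dims.
image_shape = (32, 32, 3)
x_shape = [None, *image_shape]
print(x_shape)  # [None, 32, 32, 3]
```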
End of explanation
"""
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
# TODO: Implement Function
weights = tf.Variable(tf.truncated_normal([*conv_ksize, x_tensor.get_shape().as_list()[3], conv_num_outputs],
stddev = 0.1))
bias = tf.Variable(tf.zeros(conv_num_outputs))
padding = 'SAME'
    conv_layer = tf.nn.bias_add(tf.nn.conv2d(x_tensor, weights, [1, *conv_strides, 1], padding), bias)
    conv_layer = tf.nn.relu(conv_layer)
    max_pool_layer = tf.nn.max_pool(conv_layer, [1, *pool_ksize, 1], [1, *pool_strides, 1], padding)
    return max_pool_layer
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
"""
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
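With 'SAME' padding the spatial output size depends only on the stride, out = ceil(in / stride), which is a handy way to sanity-check layer sizes before wiring up the tensors:

```python
import math

def same_padding_out(size, stride):
    # 'SAME' padding output size: ceil(input_size / stride)
    return math.ceil(size / stride)

# 32x32 input: a stride-1 conv keeps 32, a 2x2 max pool with stride 2 halves it.
after_conv = same_padding_out(32, 1)
after_pool = same_padding_out(after_conv, 2)
print(after_conv, after_pool)  # 32 16
```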
End of explanation
"""
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# TODO: Implement Function
shape = x_tensor.get_shape().as_list()
dim = np.prod(shape[1:])
flatten_x_tensor = tf.reshape(x_tensor, [-1, dim])
return flatten_x_tensor
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
"""
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
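Without the shortcut packages, a flatten is just a reshape whose second dimension is the product of the non-batch dimensions; the numpy analogue of the solution above:

```python
import numpy as np

# A batch of 2 "images" of shape 4x4x3 flattened to (2, 48);
# -1 lets the batch dimension be inferred.
batch = np.zeros((2, 4, 4, 3))
flat = batch.reshape(-1, int(np.prod(batch.shape[1:])))
print(flat.shape)  # (2, 48)
```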
End of explanation
"""
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
input_dim = x_tensor.get_shape().as_list()[1]
weight = tf.Variable(tf.truncated_normal([input_dim, num_outputs], stddev = 0.1))
bias = tf.Variable(tf.zeros(num_outputs))
fully_conn_layer = tf.add(tf.matmul(x_tensor, weight), bias)
fully_conn_layer = tf.nn.relu(fully_conn_layer)
return fully_conn_layer
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
"""
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
"""
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of outputs that the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
input_dim = x_tensor.get_shape().as_list()[1]
weight = tf.Variable(tf.truncated_normal([input_dim, num_outputs], stddev = 0.1))
bias = tf.Variable(tf.zeros(num_outputs))
output_layer = tf.add(tf.matmul(x_tensor, weight), bias)
return output_layer
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
"""
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
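An output layer is only the affine map x @ W + b; softmax is folded into the loss later for numerical stability. A numpy sketch of the shapes involved:

```python
import numpy as np

# Batch of 2 samples with 4 features mapped to logits over 10 classes.
x = np.ones((2, 4))
W = np.full((4, 10), 0.5)
b = np.zeros(10)
logits = x @ W + b
print(logits.shape)  # (2, 10)
```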
End of explanation
"""
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_ksize = (5, 5)
conv_stride = (1, 1)
pool_ksize = (2, 2)
pool_strides = (2, 2)
conv2d_maxpool_layer = conv2d_maxpool(x, 64, conv_ksize, conv_stride, pool_ksize, pool_strides)
conv2d_maxpool_layer = conv2d_maxpool(conv2d_maxpool_layer, 128,
conv_ksize, conv_stride, pool_ksize, pool_strides)
conv2d_maxpool_layer = conv2d_maxpool(conv2d_maxpool_layer, 256,
conv_ksize, conv_stride, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flatten_layer = flatten(conv2d_maxpool_layer)
# flatten_layer = tf.nn.dropout(flatten_layer, keep_prob)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fully_conn_layer = fully_conn(flatten_layer, 1024)
fully_conn_layer = tf.nn.dropout(fully_conn_layer, keep_prob)
# fully_conn_layer = fully_conn(flatten_layer, 128)
# fully_conn_layer = tf.nn.dropout(fully_conn_layer, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
output_layer = output(fully_conn_layer, 10)
# TODO: return output
return output_layer
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
"""
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
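With the kernel and stride choices used in the solution above (stride-1 convs, 2x2/stride-2 pools, all 'SAME'-padded), the spatial size and flattened width can be traced by hand:

```python
import math

# Spatial size through three conv + max-pool blocks ('SAME': out = ceil(in / stride)).
size = 32
for _ in range(3):
    size = math.ceil(size / 1)  # conv, stride 1
    size = math.ceil(size / 2)  # 2x2 max pool, stride 2
flattened = size * size * 256   # 256 filters in the last block
print(size, flattened)  # 4 4096
```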
End of explanation
"""
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
# TODO: Implement Function
feed_dict = {x: feature_batch, y: label_batch, keep_prob: keep_probability}
session.run(optimizer, feed_dict)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
"""
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
"""
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
# TODO: Implement Function
loss = session.run(cost, feed_dict = {x: feature_batch,
y: label_batch,
keep_prob: 1.0})
valid_acc = session.run(accuracy, feed_dict = {x: valid_features,
y: valid_labels,
keep_prob: 1.0})
print('Loss: {:>10.4f} Validation Accuracy: {:.6f}'.format(loss, valid_acc))
"""
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
"""
# TODO: Tune Parameters
epochs = 10
batch_size = 128
keep_probability = 0.75
"""
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people use common power-of-two sizes:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
"""
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
"""
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
"""
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
"""
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import os
import time
import numpy as np
import pandas as pd
from typing import List, Tuple, Dict
from sklearn.model_selection import train_test_split
from keras import layers
from keras.models import Model
from keras.preprocessing.text import Tokenizer
from keras.utils.np_utils import to_categorical
from keras.preprocessing.sequence import pad_sequences
# prevent scientific notations
pd.set_option('display.float_format', lambda x: '%.3f' % x)
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,sklearn,keras
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Leveraging-Pre-trained-Word-Embedding-for-Text-Classification" data-toc-modified-id="Leveraging-Pre-trained-Word-Embedding-for-Text-Classification-1"><span class="toc-item-num">1 </span>Leveraging Pre-trained Word Embedding for Text Classification</a></span><ul class="toc-item"><li><span><a href="#Data-Preparation" data-toc-modified-id="Data-Preparation-1.1"><span class="toc-item-num">1.1 </span>Data Preparation</a></span></li><li><span><a href="#Glove" data-toc-modified-id="Glove-1.2"><span class="toc-item-num">1.2 </span>Glove</a></span></li><li><span><a href="#Model" data-toc-modified-id="Model-1.3"><span class="toc-item-num">1.3 </span>Model</a></span><ul class="toc-item"><li><span><a href="#Model-with-Pretrained-Embedding" data-toc-modified-id="Model-with-Pretrained-Embedding-1.3.1"><span class="toc-item-num">1.3.1 </span>Model with Pretrained Embedding</a></span></li><li><span><a href="#Model-without-Pretrained-Embedding" data-toc-modified-id="Model-without-Pretrained-Embedding-1.3.2"><span class="toc-item-num">1.3.2 </span>Model without Pretrained Embedding</a></span></li></ul></li><li><span><a href="#Submission" data-toc-modified-id="Submission-1.4"><span class="toc-item-num">1.4 </span>Submission</a></span></li><li><span><a href="#Summary" data-toc-modified-id="Summary-1.5"><span class="toc-item-num">1.5 </span>Summary</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
"""
data_dir = 'data'
submission_dir = 'submission'
input_path = os.path.join(data_dir, 'word2vec-nlp-tutorial', 'labeledTrainData.tsv')
df = pd.read_csv(input_path, delimiter='\t')
print(df.shape)
df.head()
raw_text = df['review'].iloc[0]
raw_text
import re
def clean_str(string: str) -> str:
    """Strip backslashes and quote characters, then lowercase and trim whitespace."""
    string = re.sub(r"\\", "", string)
    string = re.sub(r"\'", "", string)
    string = re.sub(r"\"", "", string)
    return string.strip().lower()
from bs4 import BeautifulSoup
def clean_text(df: pd.DataFrame,
text_col: str,
label_col: str) -> Tuple[List[str], List[int]]:
texts = []
labels = []
for raw_text, label in zip(df[text_col], df[label_col]):
        text = BeautifulSoup(raw_text, 'html.parser').get_text()
cleaned_text = clean_str(text)
texts.append(cleaned_text)
labels.append(label)
return texts, labels
text_col = 'review'
label_col = 'sentiment'
texts, labels = clean_text(df, text_col, label_col)
print('sample text: ', texts[0])
print('corresponding label:', labels[0])
random_state = 1234
val_split = 0.2
labels = to_categorical(labels)
texts_train, texts_val, y_train, y_val = train_test_split(
texts, labels,
test_size=val_split,
random_state=random_state)
print('labels shape:', labels.shape)
print('train size: ', len(texts_train))
print('validation size: ', len(texts_val))
max_num_words = 20000
tokenizer = Tokenizer(num_words=max_num_words, oov_token='<unk>')
tokenizer.fit_on_texts(texts_train)
print('Found %s unique tokens.' % len(tokenizer.word_index))
max_sequence_len = 1000
sequences_train = tokenizer.texts_to_sequences(texts_train)
x_train = pad_sequences(sequences_train, maxlen=max_sequence_len)
sequences_val = tokenizer.texts_to_sequences(texts_val)
x_val = pad_sequences(sequences_val, maxlen=max_sequence_len)
sequences_train[0][:5]
"""
Explanation: Leveraging Pre-trained Word Embedding for Text Classification
There are two main ways to obtain word embeddings:
Learn it from scratch: We specify a neural network architecture and learn the word embeddings jointly with the main task at our hand (e.g. sentiment classification). i.e. we would start off with some random word embeddings, and it would update itself along with the word embeddings.
Transfer Learning: The whole idea behind transfer learning is to avoid reinventing the wheel as much as possible. It gives us the capability to transfer knowledge that was gained/learned in some other task and use it to improve the learning of another related task. In practice, one way to do this is for the embedding part of the neural network architecture, we load some other embeddings that were trained on a different machine learning task than the one we are trying to solve and use that to bootstrap the process.
One area that transfer learning shines is when we have little training data available and using our data alone might not be enough to learn an appropriate task specific embedding/features for our vocabulary. In this case, leveraging a word embedding that captures generic aspect of the language can prove to be beneficial from both a performance and time perspective (i.e. we won't have to spend hours/days training a model from scratch to achieve a similar performance). Keep in mind that, as with all machine learning application, everything is still all about trial and error. What makes a embedding good depends heavily on the task at hand: The word embedding for a movie review sentiment classification model may look very different from a legal document classification model as the semantic of the corpus varies between these two tasks.
Data Preparation
We'll use the movie review sentiment analysis dataset from Kaggle for this example. It's a binary classification problem with AUC as the ultimate evaluation metric. The next few code chunk performs the usual text preprocessing, build up the word vocabulary and performing a train/test split.
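A toy sketch of what Tokenizer plus pad_sequences accomplish in the cells below: words are mapped to integer ids (0 reserved for padding) and sequences are left-padded to a fixed length, which is pad_sequences' default behavior. The vocabulary here is hypothetical:

```python
# Hypothetical vocabulary; keras' Tokenizer builds this from the training corpus.
vocab = {'<pad>': 0, 'the': 1, 'movie': 2, 'was': 3, 'great': 4}

def encode(text):
    # Map each whitespace-separated word to its integer id.
    return [vocab[word] for word in text.split()]

def pad(seq, maxlen):
    # Left-pad with 0 (or truncate from the left) to exactly maxlen.
    return [0] * (maxlen - len(seq)) + seq[-maxlen:]

padded = pad(encode('the movie was great'), maxlen=6)
print(padded)  # [0, 0, 1, 2, 3, 4]
```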
End of explanation
"""
import requests
from tqdm import tqdm
def download_glove(embedding_type: str='glove.6B.zip'):
"""
download GloVe word vector representations, this step may take a while
Parameters
----------
embedding_type : str, default 'glove.6B.zip'
Specifying different glove embeddings to download if not already there.
{'glove.6B.zip', 'glove.42B.300d.zip', 'glove.840B.300d.zip', 'glove.twitter.27B.zip'}
Be wary of the size. e.g. 'glove.6B.zip' is a 822 MB zipped, 2GB unzipped
"""
base_url = 'http://nlp.stanford.edu/data/'
if not os.path.isfile(embedding_type):
url = base_url + embedding_type
# the following section is a pretty generic http get request for
# saving large files, provides progress bars for checking progress
response = requests.get(url, stream=True)
response.raise_for_status()
content_len = response.headers.get('Content-Length')
total = int(content_len) if content_len is not None else 0
with tqdm(unit='B', total=total) as pbar, open(embedding_type, 'wb') as f:
for chunk in response.iter_content(chunk_size=1024):
if chunk:
pbar.update(len(chunk))
f.write(chunk)
if response.headers.get('Content-Type') == 'application/zip':
from zipfile import ZipFile
with ZipFile(embedding_type, 'r') as f:
            f.extractall(embedding_type[:-len('.zip')])  # slice off the suffix; str.strip would drop characters, not '.zip'
download_glove()
"""
Explanation: Glove
There are many different pretrained word embeddings online. The one we'll be using is from Glove. Others include but not limited to FastText, bpemb.
If we look at the project's wiki page, we can find any different pretrained embeddings available for us to experiment.
<img src="img/pretrained_weights.png" width="100%" height="100%">
End of explanation
"""
def get_embedding_lookup(embedding_path) -> Dict[str, np.ndarray]:
embedding_lookup = {}
with open(embedding_path) as f:
for line in f:
values = line.split()
word = values[0]
coef = np.array(values[1:], dtype=np.float32)
embedding_lookup[word] = coef
return embedding_lookup
def get_pretrained_embedding(embedding_path: str,
index2word: Dict[int, str],
max_features: int) -> np.ndarray:
embedding_lookup = get_embedding_lookup(embedding_path)
pretrained_embedding = np.stack(list(embedding_lookup.values()))
embedding_dim = pretrained_embedding.shape[1]
embeddings = np.random.normal(pretrained_embedding.mean(),
pretrained_embedding.std(),
(max_features, embedding_dim)).astype(np.float32)
# we track how many tokens in our vocabulary exists in the pre-trained embedding,
# i.e. how many tokens has a pre-trained embedding from this particular file
n_found = 0
# the loop starts from 1 due to keras' Tokenizer reserves 0 for padding index
for i in range(1, max_features):
word = index2word[i]
embedding_vector = embedding_lookup.get(word)
if embedding_vector is not None:
embeddings[i] = embedding_vector
n_found += 1
print('number of words found:', n_found)
return embeddings
glove_path = os.path.join('glove.6B', 'glove.6B.100d.txt')
max_features = max_num_words + 1
pretrained_embedding = get_pretrained_embedding(glove_path, tokenizer.index_word, max_features)
pretrained_embedding.shape
"""
Explanation: The way we'll leverage the pretrained embedding is to first read it in as a dictionary lookup, where the key is the word and the value is the corresponding word embedding. Then for each token in our vocabulary, we'll lookup this dictionary to see if there's a pretrained embedding available, if there is, we'll use the pretrained embedding, if there isn't, we'll leave the embedding for this word in its original randomly initialized form.
The format for this particular pretrained embedding is for every line, we have a space delimited values, where the first token is the word, and the rest are its corresponding embedding values. e.g. the first line from the line looks like:
the -0.038194 -0.24487 0.72812 -0.39961 0.083172 0.043953 -0.39141 0.3344 -0.57545 0.087459 0.28787 -0.06731 0.30906 -0.26384 -0.13231 -0.20757 0.33395 -0.33848 -0.31743 -0.48336 0.1464 -0.37304 0.34577 0.052041 0.44946 -0.46971 0.02628 -0.54155 -0.15518 -0.14107 -0.039722 0.28277 0.14393 0.23464 -0.31021 0.086173 0.20397 0.52624 0.17164 -0.082378 -0.71787 -0.41531 0.20335 -0.12763 0.41367 0.55187 0.57908 -0.33477 -0.36559 -0.54857 -0.062892 0.26584 0.30205 0.99775 -0.80481 -3.0243 0.01254 -0.36942 2.2167 0.72201 -0.24978 0.92136 0.034514 0.46745 1.1079 -0.19358 -0.074575 0.23353 -0.052062 -0.22044 0.057162 -0.15806 -0.30798 -0.41625 0.37972 0.15006 -0.53212 -0.2055 -1.2526 0.071624 0.70565 0.49744 -0.42063 0.26148 -1.538 -0.30223 -0.073438 -0.28312 0.37104 -0.25217 0.016215 -0.017099 -0.38984 0.87424 -0.72569 -0.51058 -0.52028 -0.1459 0.8278 0.27062
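Parsing one such line is a split: the first token is the word, the remainder is the vector (the line here is a truncated, illustrative version of the one above):

```python
import numpy as np

# First token is the word, the rest are its embedding values.
line = 'the -0.038194 -0.24487 0.72812'
values = line.split()
word, vector = values[0], np.array(values[1:], dtype=np.float32)
print(word, vector.shape)  # the (3,)
```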
End of explanation
"""
def simple_text_cnn(max_sequence_len: int,
max_features: int,
num_classes: int,
optimizer: str='adam',
metrics: List[str]=['acc'],
pretrained_embedding: np.ndarray=None) -> Model:
sequence_input = layers.Input(shape=(max_sequence_len,), dtype='int32')
if pretrained_embedding is None:
embedded_sequences = layers.Embedding(max_features, 100,
name='embedding')(sequence_input)
else:
embedded_sequences = layers.Embedding(max_features, pretrained_embedding.shape[1],
weights=[pretrained_embedding],
name='embedding',
trainable=False)(sequence_input)
conv1 = layers.Conv1D(128, 5, activation='relu')(embedded_sequences)
pool1 = layers.MaxPooling1D(5)(conv1)
conv2 = layers.Conv1D(128, 5, activation='relu')(pool1)
pool2 = layers.MaxPooling1D(5)(conv2)
conv3 = layers.Conv1D(128, 5, activation='relu')(pool2)
pool3 = layers.MaxPooling1D(35)(conv3)
flatten = layers.Flatten()(pool3)
dense = layers.Dense(128, activation='relu')(flatten)
preds = layers.Dense(num_classes, activation='softmax')(dense)
model = Model(sequence_input, preds)
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=metrics)
return model
"""
Explanation: Model
To train our text classifier, we specify a 1D convolutional network. Our embedding layer can either be initialized randomly or loaded from a pre-trained embedding. Note that for the pre-trained embedding case, apart from loading the weights, we also "freeze" the embedding layer, i.e. we set its trainable attribute to False. This idea is often used in transfer learning: when parts of a model are pre-trained (in our case, only the Embedding layer) and parts of it are randomly initialized, the pre-trained part should ideally not be trained together with the randomly initialized part, since large gradient updates triggered by the randomly initialized layers would be very disruptive to the pre-trained weights.
Once we have trained the randomly initialized weights for a few iterations, we can then un-freeze the layers that were loaded with pre-trained weights and update the weights of the entire model. The keras documentation also provides an example of how to do this; although the example is for image models, the same idea applies here and is worth experimenting with.
End of explanation
"""
num_classes = 2
model1 = simple_text_cnn(max_sequence_len, max_features, num_classes,
pretrained_embedding=pretrained_embedding)
model1.summary()
"""
Explanation: Model with Pretrained Embedding
End of explanation
"""
df_model_layers = pd.DataFrame(
[(layer.name, layer.trainable, layer.count_params()) for layer in model1.layers],
columns=['layer', 'trainable', 'n_params']
)
df_model_layers
# time : 70 secs
# test performance : auc 0.93212
start = time.time()
history1 = model1.fit(x_train, y_train,
validation_data=(x_val, y_val),
batch_size=128,
epochs=8)
end = time.time()
elapse1 = end - start
elapse1
"""
Explanation: We can confirm whether our embedding layer is trainable by looping through each layer and checking the trainable attribute.
End of explanation
"""
num_classes = 2
model2 = simple_text_cnn(max_sequence_len, max_features, num_classes)
model2.summary()
# time : 86 secs
# test performance : auc 0.92310
start = time.time()
history2 = model2.fit(x_train, y_train,
validation_data=(x_val, y_val),
batch_size=128,
epochs=8)
end = time.time()
elapse2 = end - start
elapse2
"""
Explanation: Model without Pretrained Embedding
End of explanation
"""
input_path = os.path.join(data_dir, 'word2vec-nlp-tutorial', 'testData.tsv')
df_test = pd.read_csv(input_path, delimiter='\t')
print(df_test.shape)
df_test.head()
def clean_text_without_label(df: pd.DataFrame, text_col: str) -> List[str]:
texts = []
for raw_text in df[text_col]:
text = BeautifulSoup(raw_text).get_text()
cleaned_text = clean_str(text)
texts.append(cleaned_text)
return texts
texts_test = clean_text_without_label(df_test, text_col)
sequences_test = tokenizer.texts_to_sequences(texts_test)
x_test = pad_sequences(sequences_test, maxlen=max_sequence_len)
len(x_test)
def create_submission(ids, predictions, ids_col, prediction_col, submission_path) -> pd.DataFrame:
df_submission = pd.DataFrame({
ids_col: ids,
prediction_col: predictions
}, columns=[ids_col, prediction_col])
if submission_path is not None:
# create the directory if need be, e.g. if the submission_path = submission/submission.csv
# we'll create the submission directory first if it doesn't exist
directory = os.path.split(submission_path)[0]
if directory not in ('', '.') and not os.path.isdir(directory):
os.makedirs(directory, exist_ok=True)
df_submission.to_csv(submission_path, index=False, header=True)
return df_submission
ids_col = 'id'
prediction_col = 'sentiment'
ids = df_test[ids_col]
models = {
'pretrained_embedding': model1,
'without_pretrained_embedding': model2
}
for model_name, model in models.items():
print('generating submission for: ', model_name)
submission_path = os.path.join(submission_dir, '{}_submission.csv'.format(model_name))
predictions = model.predict(x_test, verbose=1)[:, 1]
df_submission = create_submission(ids, predictions, ids_col, prediction_col, submission_path)
# sanity check to make sure the size and the output of the submission makes sense
print(df_submission.shape)
df_submission.head()
"""
Explanation: Submission
For the submission section, we read in and preprocess the test data provided by the competition, then generate the predicted probability column for both the model that uses pretrained embedding and one that doesn't to compare their performance.
End of explanation
"""
|
d-meiser/cold-atoms | examples/Doppler Cooling.ipynb | gpl-3.0 | class GaussianBeam(object):
"""A laser beam with a Gaussian intensity profile."""
def __init__(self, S0, x0, k, sigma):
"""Construct a Gaussian laser beam from position, direction, and width.
S0 -- Peak intensity (in units of the saturation intensity).
x0 -- A location on the center of the beam.
k -- Propagation direction of the beam (need not be normalized).
sigma -- 1/e width of the beam."""
self.S0 = S0
self.x0 = np.copy(x0)
self.k_hat = k / np.linalg.norm(k)
self.sigma = sigma
def intensities(self, x):
xp = x - self.x0
xperp = xp - np.outer(xp.dot(self.k_hat[:, np.newaxis]), self.k_hat)
return self.S0 * np.exp(-np.linalg.norm(xperp, axis=1)**2/self.sigma**2)
"""
Explanation: Introduction
Doppler cooling is one of the most important experimental techniques in cold atom science. Perhaps it's indicative of the impact of this technology that at least five of the names associated with the early days of laser cooling (Wineland, Hänsch, Chu, Cohen-Tannoudji, and Phillips) went on to earn Nobel Prizes in Physics, most of them directly for laser cooling and the others for their lifetime contributions to cold atom science. William D. Phillips' Nobel lecture gives a good overview of some of the ways in which laser cooling has had an impact in low energy physics and other areas of physics research.
In this notebook we explore how Doppler cooling, the simplest form of laser cooling, can be modeled in the coldatoms library.
Central to laser cooling is the radiation pressure force generated by photons resonantly scattering off of atoms. For a two level atom the scattering rate is given by
$$
\dot{n} = S\frac{\gamma_0}{2\pi}\frac{\left(\gamma_0/2\right)^2}{(\gamma_0/2)^2(1+2S)+\Delta^2(\mathbf{v})}
$$
In this equation, $S$ is the intensity of the laser in units of the saturation intensity, $2\pi/\gamma_0$ is the excited state lifetime, and $\Delta$ is the detuning of the laser frequency from the atomic resonance frequency.
Each time the atom absorbs and reemits a photon it receives a recoil kick. Every absorbed photon comes out of the beam and therefore has a well defined momentum ($\hbar \mathbf{k}$, where $\mathbf{k}$ is the wavevector of the laser). The emitted photons travel in more or less random directions and the force due to them therefore averages approximately to zero. The net result is a force on the atom along the direction of propagation of the laser beam, plus a fluctuating force that is more or less isotropic.
Now comes the important part that allows us to use this force for cooling: the detuning $\Delta$ of the laser from the atomic transition is velocity dependent due to the Doppler shift. In free space we have
$$
\Delta(\mathbf{v}) = \omega_L - \omega_A - \mathbf{k}\cdot\mathbf{v}
$$
If we then "red detune" the laser i.e. we choose a laser frequency such that $\omega_L<\omega_A$, it is easy to see that atoms moving towards the laser with $\mathbf{k}\cdot\mathbf{v}<0$, will scatter more photons than atoms moving away from the laser. They will hence experience a decelerating force if they're moving towards the laser beam.
"Wait a second" you say. "That's all good and fine. The atoms are slowed down if they're moving in the opposite direction as the laser's propagation direction. But what if they move in the direction of propagation of the laser. Won't they get accelerated in that case?"
You are correct! One laser beam can only slow down the atoms if they're going one way. To slow them down going either direction we need a second laser that is propagating in the opposite direction. By combining six such lasers, a pair for each Cartesian direction, we can achieve cooling of the atoms' motion in all three spatial directions.
We have neglected the fluctuating force due to the emitted photons so far. Unfortunately these recoil kicks never cancel completely because they are uncorrelated with one another. The recoil kicks make the atoms undergo a random walk in momentum space; they are a heating mechanism.
The balance between the cooling rate due to the coherent friction force from absorption and the heating due to the incoherent emission events determines the lowest temperature that can be achieved by Doppler cooling. This temperature is called the Doppler temperature. We will determine it computationally later in this notebook.
It may be worth mentioning that many laser cooling schemes exist that are able to achieve temperatures lower than the Doppler limit. We will not consider these so called sub-Doppler schemes here.
Doppler cooling in coldatoms
So how do we simulate Doppler cooling using the coldatoms library? The answer is we use the RadiationPressure force. This force mimics the momentum recoil due to absorption and emission of photons in resonance fluorescence. The RadiationPressure force is completely determined by the excited state decay rate $\gamma_0$, the driving laser's wavevector $\mathbf{k}$, the laser intensity, and the detuning of the laser from the atomic transition frequency. The intensity is a function of the atomic position because the laser intensity varies throughout space. The detuning depends on the atomic position and velocity. It depends on position because external fields may lead to shifts of the atomic transition frequency (e.g. magnetic fields leading to Zeeman shifts) and it depends on velocity via the Doppler shift.
In this notebook we consider a well collimated Gaussian laser beam. It has an intensity profile given by
$$
S(\mathbf{x})=S_0e^{-x_\perp^2/\sigma^2}
$$
where $S_0$ is the peak intensity of the beam, $\sigma$ is the width of the beam, and $x_\perp=\left|(\mathbf{x}-\mathbf{x}_0) - \left((\mathbf{x}-\mathbf{x}_0)\cdot \hat{\mathbf{k}}\right)\hat{\mathbf{k}}\right|$, with $\hat{\mathbf{k}}=\mathbf{k}/k$, is the distance of the atom from the center of the beam. We represent the intensity by the following Python class:
End of explanation
"""
beam = GaussianBeam(1.0, np.array([1.0,1.0,0.0]),np.array([1.0, 1.0, 0.0]), 1.0)
num_pts = 10
x_min = -3
x_max = 3
y_min = -3
y_max = 3
x = np.linspace(x_min, x_max, num_pts)
y = np.linspace(y_min, y_max, num_pts)
pts = np.array([[x[i], y[j], 0] for i in range(num_pts) for j in range(num_pts)])
intensities = beam.intensities(pts).reshape(num_pts, num_pts)
xx, yy = np.meshgrid(x, y)
plt.contour(xx, yy, intensities)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
"""
Explanation: The following figure shows a contour plot of such a beam originating at $\mathbf{x}_0=(1,1,0)^T$ and propagating in the $\mathbf{k}=(1, 1, 0)^T$ direction.
End of explanation
"""
class DopplerDetuning(object):
def __init__(self, Delta0, k):
self.Delta0 = Delta0
self.k = np.copy(k)
def detunings(self, x, v):
return self.Delta0 - np.inner(self.k, v)
detuning = DopplerDetuning(-0.5, (1, 0, 0))
"""
Explanation: Besides the intensity we also need to tell RadiationPressure what the laser-atom detuning is. Here we assume that we only need to account for Doppler shifts and thus we have
$$
\Delta(\mathbf{x}, \mathbf{v}) = \Delta_0-\mathbf{k}\cdot\mathbf{v}.
$$
The frequency $\Delta_0$ is the detuning between laser and atomic transition when the atom is at rest. Here is a class to represent this detuning:
End of explanation
"""
ensemble = coldatoms.Ensemble(num_ptcls=1)
ensemble.ensemble_properties['mass'] = 87*1.67e-27
wavelength = 780.0e-9
k = 2.0 * np.pi / wavelength
gamma = 2.0 * np.pi * 6.1e6
hbar = 1.0e-34
sigma = 1.0e-3
"""
Explanation: One dimensional laser cooling
As a first example we consider a single atom being laser cooled along the $x$ dimension. We take ${}^{87}\rm{Rb}$:
End of explanation
"""
one_d_mot = [
coldatoms.RadiationPressure(gamma, np.array([hbar * k, 0.0, 0.0]),
GaussianBeam(S0=0.1, x0=np.array([0.0, 0.0, 0.0]), k=np.array([k, 0.0, 0.0]), sigma=sigma),
DopplerDetuning(-0.5 * gamma, np.array([k, 0.0, 0.0]))),
coldatoms.RadiationPressure(gamma, np.array([-hbar * k, 0.0, 0.0]),
GaussianBeam(S0=0.1, x0=np.array([0.0, 0.0, 0.0]), k=np.array([-k, 0.0, 0.0]), sigma=sigma),
DopplerDetuning(-0.5 * gamma, np.array([-k, 0.0, 0.0])))]
"""
Explanation: To represent the 1D MOT we need to add two radiation pressure forces: one for the laser propagating along the $+x$ direction and one for the laser propagating along the $-x$ direction. We use a peak intensity of $S_0=0.1$ (in units of the saturation intensity) and a beam width of $\sigma = 1\,{\rm mm}$:
End of explanation
"""
velocities = []
times = []
# Loop over time step sizes.
for i in range(3):
# Reset positions and velocities to initial conditions.
ensemble.x *= 0.0
ensemble.v *= 0.0
ensemble.v[0, 0] = 5.0e0
# The time step size.
dt = 1.0e-5 / 2**i
v = []
t = []
# Now do the time integration and record velocities and times.
for n in range(2000 * 2**i):
v.append(ensemble.v[0, 0])
t.append(n * dt)
coldatoms.drift_kick(dt=dt, ensemble=ensemble, forces=one_d_mot)
velocities.append(v)
times.append(t)
"""
Explanation: We have the atom start with a velocity of $v_x = 5\rm{m}/\rm{s}$ and we simulate its evolution with three different time step sizes to check if our simulation is converged:
End of explanation
"""
plt.figure()
for i in range(3):
plt.plot(1.0e3 * np.array(times[i]), velocities[i])
plt.xlabel(r'$t/\rm{ms}$')
plt.ylabel(r'$v_x/(\rm{m}/\rm{s})$')
"""
Explanation: The following plot shows the evolution of the particle's velocity.
End of explanation
"""
three_d_mot = [
coldatoms.RadiationPressure(gamma, hbar * kp,
GaussianBeam(S0=0.1, x0=np.array([0.0, 0.0, 0.0]), k=kp, sigma=1.0e-3),
DopplerDetuning(-0.5 * gamma, kp)) for kp in [
np.array([k, 0.0, 0.0]),
np.array([-k, 0.0, 0.0]),
np.array([0.0, k, 0.0]),
np.array([0.0, -k, 0.0]),
np.array([0.0, 0.0, k]),
np.array([0.0, 0.0, -k]),
]]
"""
Explanation: After about $5\rm{ms}$ the particle has been brought to an almost complete stop. The non-exponential shape of the velocity evolution is due to the finite capture range of the cooling lasers. When the atomic velocity is too large the laser is too far detuned from the atomic transition. The atom then doesn't scatter any photons and hence does not get decelerated.
The three simulation traces agree well with one another indicating that the numerical integration is converged.
3D optical molasses
Next we consider fully three dimensional laser cooling as is often used in magneto-optical traps. To obtain cooling in all three dimensions we need three pairs of lasers:
End of explanation
"""
velocities_3d = []
times_3d = []
# Loop over time step sizes.
for i in range(3):
# Reset positions and velocities to initial conditions.
ensemble.x *= 0.0
ensemble.v *= 0.0
ensemble.v[0, 0] = 5.0e0
# The time step size.
dt = 1.0e-5 / 2**i
v = []
t = []
# Now do the time integration and record velocities and times.
for n in range(3000 * 2**i):
v.append(ensemble.v[0, 0])
t.append(n * dt)
coldatoms.drift_kick(dt=dt, ensemble=ensemble, forces=three_d_mot)
velocities_3d.append(v)
times_3d.append(t)
"""
Explanation: The integration of the atoms' equations of motion is virtually unchanged. We merely have to use the forces due to the three dimensional MOT:
End of explanation
"""
plt.figure()
for i in range(3):
plt.plot(1.0e3 * np.array(times_3d[i]), velocities_3d[i])
plt.xlabel(r'$t/\rm{ms}$')
plt.ylabel(r'$v_x/(\rm{m}/\rm{s})$')
"""
Explanation: When we now consider the $x$ component of the atomic velocity we see that the residual velocity fluctuations are larger. This is because noise from two additional pairs of lasers now feeds into the $x$ component:
End of explanation
"""
tmin = [500, 1000, 2000]
for i in range(3):
print(np.std(np.array(velocities_3d[i][tmin[i]:])))
"""
Explanation: The residual velocity fluctuations correspond to the so-called Doppler temperature. They are caused by the atoms absorbing a photon from the "wrong" beam. We find for the standard deviation of the $x$-component of the velocity:
End of explanation
"""
np.sqrt(hbar * gamma / ensemble.ensemble_properties['mass'] / 2.0 / 3.0)
"""
Explanation: We can compare that with the value that theory predicts based on the Doppler temperature $\sqrt{\langle v_x^2\rangle}$:
End of explanation
"""
|
tmadlener/phys_utils | python/docs/cov_signal_from_mixed.ipynb | gpl-3.0 | import sympy as sp
sp.init_printing()
C_S = sp.Symbol('C_S')
C_B = sp.Symbol('C_B')
C = sp.Symbol('C')
p = sp.Symbol('p', real=True)
mu = sp.Symbol('mu', commutative=False)
muT = sp.Symbol('mu^T', commutative=False)
mu_B = sp.Symbol('mu_B', commutative=False)
mu_BT = sp.Symbol('mu_B^T', commutative=False)
"""
Explanation: Problem
The problem at hand is to calculate the covarince resp. the correlation matrix of a signal sample, which has been mixed with a background sample and the true knowledge of which events belong to signal and which belong to background is not present. If it is possible to determine the fraction of signal in the mixed sample as well as the means and the covariance of the background sample the presented way allows to estimate the covariance (and thus the correlation) of the signal sample from the mixed sample.
A problem where all these statements apply is one where a discriminating variable can be used to split the mixed sample into one or more regions of pure background and into a region where signal and background are mixed. Problems of this sort arise in physics, where the mass is often used as the discriminating variable, which allows one to define signal regions and mass sidebands. In many such problems it is possible to describe the shape of the background as a function of the mass, which allows some sort of background subtraction. Two possible ways of doing such a background subtraction are:
To get the signal distribution of a given variable the distribution of the variable in the signal sample is determined and from this distribution the distribution of the background (determined in the mass sidebands) is subtracted with the appropriate weight, which is in this case the fraction of background in the signal region.
To get the signal distribution of a given variable all events are used, but the events in the mass sidebands are weighted negatively, where the weight again has to be chosen appropriately, taking into account how many background events are expected in the signal region.
Both procedures are in principle equivalent and can be used to determine any (joint) distribution of any variable(s) of the signal. However, there are some caveats when it comes to determining correlations between the variables. The main one is that both require a binning of the variables under consideration, which can lead to only sparsely populated bins if the number of available events is low. Furthermore it is possible that some bins have negative entries, so the distribution is no longer a proper (non-negative) one; this can be mitigated by manually setting these bins to 0, as their number should be low (given that the weights have been computed correctly).
The method presented in the following allows to calculate the correlation of the variables in the signal using only the knowledge of the fraction of signal in the mixed sample and the covariance and means of the background sample. It circumvents the problem of having to use negative weights in the calculation of a weighted covariance, which is probably not well defined.
Symbolic calculations
Definition of symbols
Define some symbols, denote the $\mu$ vectors as non-commutative for the correct behaviour. $C_S$ and $C_B$ are symmetric matrices, thus they need not be declared as non-commutative ($A = A^T$ for symmetric matrices).
In principle sympy should also support some symbolic computations using matrices, but as far as I understood the documentation that only works with fixed dimensions (not really a problem here) and the support for simplification, etc. is not as far along as it is for "normal" symbols.
The defined symbols are:
- $p = \frac{n_S}{n}$, the fraction of signal events, with $(1-p) = \frac{n_B}{n}$ the fraction of background events in the mixed sample
- $\mu_{S},\mu_B,\mu$, the vector of the means of the signal, the background, resp. the mixed sample
- $C_{S}, C_B$, the covariance matrices of the signal, resp. background sample
The $\mu$ vectors have to be defined twice, to have a transposed version as well.
End of explanation
"""
mu_S = 1/p * mu - (1-p)/p * mu_B
mu_ST = 1/p * muT - (1-p)/p * mu_BT
"""
Explanation: $\mu_S$ can be calculated from $\mu$ and $\mu_B$ since $\mu:=E[x]=p\cdot\mu_S + (1-p)\cdot\mu_B \rightarrow \mu_S = \frac{1}{p}(\mu - (1-p)\mu_B)$
End of explanation
"""
cov = p * (C_S + mu_S*mu_ST) + (1-p)*(C_B + mu_B*mu_BT) - (p * mu_S + (1 - p) * mu_B) * (p * mu_ST + (1 - p) * mu_BT)
cov
arguments = (C_S, C_B, mu*muT, mu_B*mu_BT, mu * mu_BT, mu_B * muT) # define list of arguments to collect
"""
Explanation: Covariance matrix of the mixed sample
$$C = p\cdot(C_S + \mu_S\mu_S^T) + (1-p)\cdot(C_B + \mu_B\mu_B^T) - (p\mu_S + (1-p)\mu_B)(p\mu_S^T + (1-p)\mu_B^T)$$
End of explanation
"""
C_coll = sp.collect(sp.expand(cov), arguments)
C_coll
"""
Explanation: After doing some expanding and collecting of terms, the result looks like this:
End of explanation
"""
T1 = sp.collect(sp.expand(p**2 * mu_S * mu_ST), arguments)
T2 = sp.collect(sp.expand(p * mu_S * mu_ST), arguments)
T3 = sp.collect(sp.expand(p * mu_S * (1 - p) * mu_BT), arguments)
T4 = sp.collect(sp.expand((1 - p)*p * mu_B * mu_ST), arguments)
T1
T2
T3
T4
FT = p*C_S + T2 + (1-p) * C_B + (1-p) * mu_B*mu_BT - T1 - T3 - T4 - (1-p)**2 * mu_B * mu_BT
sp.collect(sp.expand(FT), arguments)
"""
Explanation: Cross check calculation
To check if sympy does the calculation correctly, it is repeated here in a step by step fashion, that is easier to follow along manually. First all terms containing $\mu_S$ are expanded and then simplified as far as possible. After that a manually expanded $C$ is used into which these terms are inserted.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
"""
Explanation: This is the same result as above, with the only difference that the intermediate terms have been manually checked.
Covariance of the signal
Unfortunately sympy (or more probably me in combination with sympy) can't solve for $C_S$, so that exercise has to be done manually, which is not all that hard luckily:
$$C_S = \frac{1}{p}[C - (1-p)C_B - \frac{1-p}{p}\mu\mu^T - \frac{1-p}{p}\mu_B\mu_B^T - \frac{p-1}{p}\mu\mu_B^T - \frac{p-1}{p}\mu_B\mu^T] $$
Toy Test
To check if the calculations from above make any sense a quick toy test is performed. Two samples are randomly drawn from a multivariate normal distribution, with different means, and covariance matrices.
End of explanation
"""
mu_sig = [0, 2]
c_sig = [[3,-0.5],[-0.5, 2]]
n_sig = 12000
mu_bkg = [2, 3]
c_bkg = [[2, 0.5], [0.5, 1]]
n_bkg = 25000
p = float(n_sig) / (n_sig + n_bkg)
ssig = np.random.multivariate_normal(mu_sig, c_sig, (n_sig))
sbkg = np.random.multivariate_normal(mu_bkg, c_bkg, (n_bkg))
"""
Explanation: Parameters of the two samples
End of explanation
"""
sample = np.append(ssig, sbkg, axis=0)
fig = plt.figure(1, figsize=(15,5))
plt.subplot(131)
_,_,_,h2d_sig = plt.hist2d(ssig[:,0], ssig[:,1], bins=25)
plt.title('signal')
plt.subplot(132)
_,_,_,h2d_bkg = plt.hist2d(sbkg[:,0], sbkg[:,1], bins=25)
plt.title('background')
plt.subplot(133)
plt.title('mixed')
_,_,_,h2d_smp = plt.hist2d(sample[:,0], sample[:,1], bins=25)
"""
Explanation: Create mixed sample by appending the two samples from above to each other
End of explanation
"""
from utils.statistics import sig_cov, sig_corr
"""
Explanation: Define function that calculates the signal covariance matrix from the mixed sample using only information that can be inferred from the mixed sample and the knowledge of the means and the covariance matrix of the background sample and $p$.
End of explanation
"""
cov_sig_sample = sig_cov(sample.T, np.average(sbkg.T, axis=1), np.cov(sbkg.T), p)
"""
Explanation: Calculate covariance matrix of signal sample from mixed sample and the necessary information of the background sample
End of explanation
"""
cov_sig_sample - np.cov(ssig.T)
np.cov(ssig.T)
cov_sig_sample
"""
Explanation: Check the deviation from the covariance matrix of the signal sample
End of explanation
"""
np.corrcoef(ssig.T)
sig_corr(sample.T, np.average(sbkg.T, axis=1), np.cov(sbkg.T), p)
"""
Explanation: Correlation coefficients
The correlation coefficients are calculated from the covariance matrix via
$$ \text{corr}(X) = \text{diag}(C)^{-1/2}C\text{diag}(C)^{-1/2} $$
where $\text{diag}(C)$ is the matrix of the diagonal elements of $C$ and $C$ is the covariance matrix
End of explanation
"""
|
jseabold/statsmodels | examples/notebooks/tsa_arma_1.ipynb | bsd-3-clause | %matplotlib inline
import numpy as np
import pandas as pd
from statsmodels.graphics.tsaplots import plot_predict
from statsmodels.tsa.arima_process import arma_generate_sample
from statsmodels.tsa.arima.model import ARIMA
np.random.seed(12345)
"""
Explanation: Autoregressive Moving Average (ARMA): Artificial data
End of explanation
"""
arparams = np.array([.75, -.25])
maparams = np.array([.65, .35])
"""
Explanation: Generate some data from an ARMA process:
End of explanation
"""
arparams = np.r_[1, -arparams]
maparams = np.r_[1, maparams]
nobs = 250
y = arma_generate_sample(arparams, maparams, nobs)
"""
Explanation: The conventions of the arma_generate function require that we specify a 1 for the zero-lag of the AR and MA parameters and that the AR parameters be negated.
End of explanation
"""
dates = pd.date_range('1980-1-1', freq="M", periods=nobs)
y = pd.Series(y, index=dates)
arma_mod = ARIMA(y, order=(2, 0, 2), trend='n')
arma_res = arma_mod.fit()
print(arma_res.summary())
y.tail()
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(10,8))
fig = plot_predict(arma_res, start='1999-06-30', end='2001-05-31', ax=ax)
legend = ax.legend(loc='upper left')
"""
Explanation: Now, optionally, we can add some dates information. For this example, we'll use a pandas time series.
End of explanation
"""
|
tclaudioe/Scientific-Computing | SC1/07_Polynomial_Interpolation_1D.ipynb | bsd-3-clause | import numpy as np
import matplotlib.pyplot as plt
import sympy as sp
from functools import reduce
import matplotlib as mpl
mpl.rcParams['font.size'] = 14
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['xtick.labelsize'] = 14
mpl.rcParams['ytick.labelsize'] = 14
%matplotlib inline
from ipywidgets import interact, fixed, IntSlider
"""
Explanation: <center>
<h1> ILI285 - Computación Científica I / INF285 - Computación Científica </h1>
<h2> Polynomial Interpolation: Vandermonde, Lagrange, Newton, Chebyshev </h2>
<h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2>
<h2> Version: 1.27</h2>
</center>
Table of Contents
Introduction
Vandermonde Matrix
Lagrange Interpolation
Runge Phenomenon
Newton's Divided Difference
Interpolation Error
Chebyshev Interpolation
Python Modules and Functions
Acknowledgements
End of explanation
"""
def Y(D, xi):
# Function that evaluates the xi's points in the polynomial
if D['M']=='Vandermonde':
P = lambda i: i**np.arange(len(D['P']))
elif D['M']=='Lagrange':
P = lambda i: [np.prod(i - np.delete(D['x'],j)) for j in range(len(D['x']))]
elif D['M']=='Newton':
P = lambda i: np.append([1],[np.prod(i-D['x'][:j]) for j in range(1,len(D['P']))])
return [np.dot(D['P'], P(i)) for i in xi]
def Interpolation_Plot(D,ylim=None):
# Function that shows the data points and the function that interpolates them.
xi = np.linspace(min(D['x']),max(D['x']),1000)
yi = Y(D,xi)
plt.figure(figsize=(8,8))
plt.plot(D['x'],D['y'],'ro',label='Interpolation points')
plt.plot(xi,yi,'b-',label='$P(x)$')
plt.xlim(min(xi)-0.5, max(xi)+0.5)
if ylim:
plt.ylim(ylim[0], ylim[1])
else:
plt.ylim(min(yi)-0.5, max(yi)+0.5)
plt.grid(True)
plt.legend(loc='best')
plt.xlabel('$x$')
#plt.ylabel('$P(x)$')
plt.show()
"""
Explanation: <div id='intro' />
Introduction
Hello! In this notebook we will learn how to interpolate 1D data with polynomials. A polynomial interpolation consists of finding a polynomial that fits a discrete set of known data points, allowing us to construct new data points within the range of the data. Formally, a polynomial $P(x)$ interpolates the data $(x_1,y_1),...,(x_n,y_n)$ if $P(x_i)=y_i$ for all $i$ in $1,...,n$.
End of explanation
"""
def Vandermonde(x, y, show=False):
# We construct the matrix and solve the system of linear equations
A = np.array([xi**np.arange(len(x)) for xi in x])
b = y
xsol = np.linalg.solve(A,b)
# The function shows the data if the flag is true
if show:
print('Data Points: '); print([(x[i],y[i]) for i in range(len(x))])
print('A = '); print(np.array_str(A, precision=2, suppress_small=True))
print("cond(A) = "+str(np.linalg.cond(A)))
print('b = '); print(np.array_str(b, precision=2, suppress_small=True))
print('x = '); print(np.array_str(xsol, precision=2, suppress_small=True))
xS = sp.Symbol('x')
F = np.dot(xS**np.arange(len(x)),xsol)
print('Interpolation Function: ')
print('F(x) = ')
print(F)
# Finally, we return a data structure with our interpolating polynomial
D = {'M':'Vandermonde',
'P':xsol,
'x':x,
'y':y}
return D
def show_time_V(epsilon=0):
x = np.array([1.0,2.0,3.0+epsilon,5.0,6.5])
y = np.array([2.0,5.0,4.0,6.0,2.0])
D = Vandermonde(x,y,True)
Interpolation_Plot(D,[-4,10])
interact(show_time_V,epsilon=(-1,2,0.1))
"""
Explanation: <div id='vander' />
Vandermonde Matrix
First, we are going to learn the Vandermonde matrix method. This is an $m \times m$ matrix (with $m$ being the number of known data points) with the terms of a geometric progression in each row. It allows us to construct a system of linear equations with the objective of finding the coefficients of the polynomial function that interpolates our data.
Example:
Given the set of known data points: $(x_1,y_1),(x_2,y_2),(x_3,y_3)$
Our system of linear equations will be:
$$ \begin{bmatrix}
1 & x_1 & x_1^2 \\[0.3em]
1 & x_2 & x_2^2 \\[0.3em]
1 & x_3 & x_3^2 \end{bmatrix}
\begin{bmatrix}
a_1 \\[0.3em]
a_2 \\[0.3em]
a_3 \end{bmatrix} =
\begin{bmatrix}
y_1 \\[0.3em]
y_2 \\[0.3em]
y_3 \end{bmatrix}$$
And solving it we will find the coefficients $a_1,a_2,a_3$ that we need to construct the polynomial $P(x)=a_1+a_2x+a_3x^2$ that interpolates our data.
End of explanation
"""
def Lagrange(x, y, show=False):
# We calculate the li's
p = np.array([y[i]/np.prod(x[i] - np.delete(x,i)) for i in range(len(x))])
# The function shows the data if the flag is true
if show:
print('Data Points: '); print([(x[i],y[i]) for i in range(len(x))])
xS = sp.Symbol('x')
L = np.dot(np.array([np.prod(xS - np.delete(x,i))/np.prod(x[i] - np.delete(x,i)) for i in range(len(x))]),y)
print('Interpolation Function: ');
print(L)
# Finally, we return a data structure with our interpolating polynomial
D = {'M':'Lagrange',
'P':p,
'x':x,
'y':y}
return D
def show_time_L(epsilon=0):
x = np.array([1.0,2.0,3.0+epsilon,4.0,5.0,7.0,6.0])
y = np.array([2.0,5.0,4.0,6.0,7.0,3.0,8.0])
D = Lagrange(x,y,True)
Interpolation_Plot(D,[0,10])
interact(show_time_L,epsilon=(-1,1,0.1))
def show_time_Li(i=0, N=7):
x = np.arange(N+1)
y = np.zeros(N+1)
y[i]=1
D = Lagrange(x,y,True)
Interpolation_Plot(D,[-1,2])
i_widget = IntSlider(min=0, max=7, step=1, value=0)
N_widget = IntSlider(min=5, max=20, step=1, value=7)
def update_i_range(*args):
i_widget.max = N_widget.value
N_widget.observe(update_i_range, 'value')
interact(show_time_Li,i=i_widget,N=N_widget)
"""
Explanation: <div id='lagrange' />
Lagrange Interpolation
With this method, we can interpolate data thanks to the Lagrange basis polynomials. Given a set of $n$ data points $(x_1,y_1),...,(x_n,y_n)$, the Lagrange interpolation polynomial is the following:
$$ P(x) = \sum^n_{i=1} y_i\,L_i(x),$$
where $L_i(x)$ are the Lagrange basis polynomials:
$$ L_i(x) = \prod^n_{j=1,j \neq i} \frac{x-x_j}{x_i-x_j} = \frac{x-x_1}{x_i-x_1} \cdot ... \cdot \frac{x-x_{i-1}}{x_i-x_{i-1}} \cdot \frac{x-x_{i+1}}{x_i-x_{i+1}} \cdot ... \cdot \frac{x-x_n}{x_i-x_n}$$
or simply $L_i(x)=\dfrac{l_i(x)}{l_i(x_i)}$, where $l_i(x)=\displaystyle{\prod^n_{j=1,j \neq i} (x-x_j)}$.
The most important property of these basis polynomials is:
$$ L_{j \neq i}(x_i) = 0 $$
$$ L_i(x_i) = 1 $$
So, we ensure that $P(x_i) = y_i$, i.e. the polynomial indeed interpolates the data.
End of explanation
"""
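The defining property $L_{j \neq i}(x_i) = 0$, $L_i(x_i) = 1$ is easy to verify numerically; a small sketch independent of the notebook's `Lagrange` helper:

```python
import numpy as np

def lagrange_basis(i, xq, x):
    """Evaluate the basis polynomial L_i at xq, given the nodes x."""
    others = np.delete(x, i)
    return np.prod(xq - others) / np.prod(x[i] - others)

x = np.array([1.0, 2.0, 3.0, 5.0])
# B[i, j] = L_i(x_j); it should be the identity matrix
B = np.array([[lagrange_basis(i, xj, x) for xj in x] for i in range(len(x))])
```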
def Divided_Differences(x, y):
dd = np.array([y])
for i in range(len(x)-1):
ddi = []
for a in range(len(x)-i-1):
ddi.append((dd[i][a+1]-dd[i][a])/(x[a+i+1]-x[a]))
ddi = np.append(ddi,np.full((len(x)-len(ddi),),0.0))
dd = np.append(dd,[ddi],axis=0)
return np.array(dd)
def Newton(x, y, show=False):
# We calculate the divided differences and store them in a data structure
dd = Divided_Differences(x,y)
# The function shows the data if the flag is true
if show:
print('Data Points: '); print([(x[i],y[i]) for i in range(len(x))])
xS = sp.Symbol('x')
N = np.dot(dd[:,0],np.append([1],[np.prod(xS-x[:i]) for i in range(1,len(dd))]))
print('Interpolation Function: ');
print(N)
# Finally, we return a data structure with our interpolating polynomial
D = {'M':'Newton',
'P':dd[:,0],
'x':x,
'y':y}
return D
def show_time_N(epsilon=0):
x = np.array([0.0,2.0,3.0+epsilon,4.0,5.0,6.0])
y = np.array([1.0,3.0,0.0,6.0,8.0,4.0])
D = Newton(x,y,True)
Interpolation_Plot(D)
interact(show_time_N,epsilon=(-1,1,0.1))
"""
Explanation: Here are some questions about Lagrange interpolation:
- Explain what happens to the interpolating polynomial when you add a new point to the set of points to interpolate. Answer: we need to recalculate the polynomial from scratch.
- Why is it not a good idea to use Lagrange interpolation for a set of points that is constantly changing? A: because we need to recompute the whole interpolation each time.
- What is the operation count of obtaining the interpolating polynomial using Lagrange? What happens with the error?
<div id='DDN' />
Newton's Divided Difference
In this interpolation method we will use divided differences to calculate the coefficients of our interpolation polynomial. Given a set of $n$ data points $(x_1,y_1),...,(x_n,y_n)$, the Newton polynomial is:
$$ P(x) = \sum^n_{i=1} \left(f[x_1 ... x_i] \cdot \prod^{i-1}_{j=1} (x-x_j)\right),$$
where the empty product $ \prod^{0}_{j=1} (x-x_j) = 1 $, and:
$$ f[x_i] = y_i $$
$$ f[x_j...x_i] = \frac{f[x_{j+1}...x_i]-f[x_j...x_{i-1}]}{x_i-x_j}$$
End of explanation
"""
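The divided-difference table and the nested (Horner-like) evaluation of the Newton form can be sketched independently of the notebook's helpers:

```python
import numpy as np

def newton_coefficients(x, y):
    """Compute f[x_1], f[x_1 x_2], ... (top diagonal of the DD table) in place."""
    c = y.astype(float).copy()
    for k in range(1, len(x)):
        # new c[i] = (old c[i] - old c[i-1]) / (x[i] - x[i-k])
        c[k:] = (c[k:] - c[k-1:-1]) / (x[k:] - x[:-k])
    return c

def newton_eval(c, x, t):
    """Evaluate the Newton form with nested multiplication."""
    result = c[-1]
    for k in range(len(c) - 2, -1, -1):
        result = result * (t - x[k]) + c[k]
    return result

x = np.array([0.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.0, 0.0, 6.0])
c = newton_coefficients(x, y)
```

Adding a new point only appends one node and one new coefficient; the existing ones are unchanged, which is the practical advantage over Lagrange.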
def Error(f, n, xmin, xmax, method=Lagrange, points=np.linspace, plot_flag=True):
# This function plots f(x), the interpolating polynomial, and the associated error
    # points can be np.linspace for equidistant points or Chebyshev_points for Chebyshev nodes
x = points(xmin,xmax,n)
y = f(x)
xe = np.linspace(xmin,xmax,100)
ye = f(xe)
D = method(x,y)
yi = Y(D, xe)
if plot_flag:
plt.figure(figsize=(5,10))
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(15,5), sharey = False)
ax1.plot(xe, ye,'r-', label='f(x)')
ax1.plot(x, y,'ro', label='Interpolation points')
ax1.plot(xe, yi,'b-', label='Interpolation')
ax1.set_xlim(xmin-0.5,xmax+0.5)
ax1.set_ylim(min(yi)-0.5,max(yi)+0.5)
ax1.set_title('Interpolation')
ax1.grid(True)
ax1.set_xlabel('$x$')
ax1.legend(loc='best')
ax2.semilogy(xe, abs(ye-yi),'b-', label='Absolute Error')
ax2.set_xlim(xmin-0.5,xmax+0.5)
ax2.set_title('Absolute Error')
ax2.set_xlabel('$x$')
ax2.grid(True)
#ax2.legend(loc='best')
plt.show()
return max(abs(ye-yi))
def test_error_Newton(n=5):
#me = Error(lambda x: np.sin(x)**3, n, 1, 7, Newton)
me = Error(lambda x: (1/(1+12*x**2)), n, -1, 1, Newton)
print("Max Error:", me)
interact(test_error_Newton,n=(5,25))
"""
Explanation: Questions about Newton's DD:
- What is the main problem when using this method (and Lagrange)? How can you fix it? A: with equispaced data, polynomial interpolation suffers from the Runge phenomenon, which can be handled by using Chebyshev points
- What do we do when a new point is added? A: a pro of Newton's form is that it is not necessary to recompute the whole polynomial, only to add one new divided-difference term
<div id='Error' />
Polynomial Interpolation Error
The interpolation error is given by:
$$ f(x)-P(x) = \frac{(x-x_1) \cdot (x-x_2) \cdot ... \cdot (x-x_n)}{n!} \cdot f^{(n)}(c) ,$$
where $c$ is within the interval between the minimum value of $x$ and the maximum one.
End of explanation
"""
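The bound can be checked numerically. A sketch for $f(x)=\sin(x)$, whose derivatives are all bounded by 1, using `np.polyfit` as a stand-in for any of the interpolation methods above:

```python
import numpy as np
from math import factorial

f = np.sin
n = 5
x = np.linspace(0, 1, n)        # interpolation nodes
xe = np.linspace(0, 1, 1001)    # fine evaluation grid

# Interpolating polynomial of degree n-1 through the n nodes
p = np.polyfit(x, f(x), n - 1)
actual_error = np.max(np.abs(f(xe) - np.polyval(p, xe)))

# |f - P| <= max|(x-x_1)...(x-x_n)| / n! * max|f^(n)|, and |sin^(n)| <= 1
nodal = np.max(np.abs(np.prod(xe[:, None] - x[None, :], axis=1)))
bound = nodal / factorial(n)
```

The measured error stays below the theoretical bound, as the formula predicts.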
def Runge(n=9):
x = np.linspace(0,1,n)
y = np.zeros(n)
y[int((n-1.0)/2.)]=1
D = Newton(x,y,False)
Interpolation_Plot(D)
interact(Runge,n=(5,25,2))
"""
Explanation: <div id='runge' />
Runge's Phenomenon: the oscillation of high-degree interpolating polynomials at the edges of the interval.
We are interpolating data that is 0 almost everywhere and 1 at the middle point. Notice that as $n$ increases the oscillations grow, and the red dots seem to sit at 0 everywhere, but that is just an artifact: there must be a 1 at the middle. The oscillations you see at the ends of the interval are the Runge phenomenon.
End of explanation
"""
def Chebyshev(xmin,xmax,n=5):
# This function calculates the n Chebyshev points and plots or returns them depending on ax
ns = np.arange(1,n+1)
x = np.cos((2*ns-1)*np.pi/(2*n))
y = np.sin((2*ns-1)*np.pi/(2*n))
plt.figure(figsize=(10,5))
plt.ylim(-0.1,1.1)
plt.xlim(-1.1,1.1)
plt.plot(np.cos(np.linspace(0,np.pi)),np.sin(np.linspace(0,np.pi)),'k-')
plt.plot([-2,2],[0,0],'k-')
plt.plot([0,0],[-1,2],'k-')
for i in range(len(y)):
plt.plot([x[i],x[i]],[0,y[i]],'r-')
plt.plot([0,x[i]],[0,y[i]],'r-')
plt.plot(x,[0]*len(x),'bo',label='Chebyshev points')
plt.plot(x,y,'ro')
plt.xlabel('$x$')
plt.title('n = '+str(n))
plt.grid(True)
plt.legend(loc='best')
plt.show()
def Chebyshev_points(xmin,xmax,n):
ns = np.arange(1,n+1)
x = np.cos((2*ns-1)*np.pi/(2*n))
#y = np.sin((2*ns-1)*np.pi/(2*n))
return (xmin+xmax)/2 + (xmax-xmin)*x/2
def Chebyshev_points_histogram(n=50,nbins=20):
xCheb=Chebyshev_points(-1,1,n)
plt.figure()
plt.hist(xCheb,bins=nbins,density=True)
plt.grid(True)
plt.show()
interact(Chebyshev,xmin=fixed(-1),xmax=fixed(1),n=(2,50))
interact(Chebyshev_points_histogram,n=(20,10000),nbins=(20,200))
"""
Explanation: <div id='cheby' />
Chebyshev Interpolation
With the objective of reducing the error of the polynomial interpolation, we need to find the values of $x_1,x_2,...,x_n$ that minimize the maximum absolute value of $(x-x_1) \cdot (x-x_2) \cdot ... \cdot (x-x_n)$ over the interval.
To choose these values $-1 \leq x_1,x_2,...,x_n \leq 1$ (for another interval we just need to do a change of variables) that minimize the error, we will use the roots of the Chebyshev polynomials, also called Chebyshev nodes (of the first kind), which are defined by:
$$ x_i = \cos\left(\frac{(2i-1)\pi}{2n}\right), i = 1,...,n $$
End of explanation
"""
def T(n,x):
# Recursive function that returns the n-th Chebyshev polynomial evaluated at x
if n == 0:
return x**0
elif n == 1:
return x
else:
return 2*x*T(n-1,x)-T(n-2,x)
def Chebyshev_Polynomials(n=2, Flag_All_Tn=False):
# This function plots the first n Chebyshev polynomials
x = np.linspace(-1,1,1000)
plt.figure(figsize=(10,5))
plt.xlim(-1, 1)
plt.ylim(-1.1, 1.1)
if Flag_All_Tn:
for i in np.arange(n+1):
y = T(i,x)
plt.plot(x,y,label='$T_{'+str(i)+'}(x)$')
else:
y = T(n,x)
plt.plot(x,y,label='$T_{'+str(n)+'}(x)$')
# plt.title('$T_${:}$(x)$'.format(n))
plt.legend(loc='right')
plt.grid(True)
plt.xlabel('$x$')
plt.show()
interact(Chebyshev_Polynomials,n=(0,12),Flag_All_Tn=True)
n=9
xmin=1
xmax=9
mee = Error(lambda x: np.sin(x)**3, n, xmin, xmax, method=Lagrange)
mec = Error(lambda x: np.sin(x)**3, n, xmin, xmax, method=Lagrange, points=Chebyshev_points)
print("Max error (equidistant points):", mee)
print("Max error (Chebyshev nodes):", mec)
def test_error_chebyshev(n=5):
mee = Error(lambda x: (1/(1+12*x**2)), n, -1, 1, Lagrange)
mec = Error(lambda x: (1/(1+12*x**2)), n, -1, 1, method=Lagrange, points=Chebyshev_points)
    print("Max error (equidistant points):", mee)
print("Max error (Chebyshev nodes):", mec)
interact(test_error_chebyshev,n=(5,100,2))
"""
Explanation: By using these points, we minimize the maximum absolute value of the numerator of the interpolation error formula, since:
$$ (x-x_1) \cdot (x-x_2) \cdot ... \cdot (x-x_n) = \dfrac{1}{2^{n-1}} \cdot T_n(x), $$
where $T_n(x) = \cos (n \cdot \arccos (x))$ is the $n$-th Chebyshev polynomial.
$$ T_0(x) = 1 $$
$$ T_1(x) = x $$
$$ T_2(x) = 2x^2 -1 $$
$$...$$
$$ T_{n+1}(x) = 2 \cdot x \cdot T_n(x) - T_{n-1}(x) $$
End of explanation
"""
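The recurrence above and the closed form $T_n(x)=\cos(n\arccos x)$ agree on $[-1,1]$, which is easy to check numerically (a sketch independent of the notebook's recursive `T`):

```python
import numpy as np

def cheb_recurrence(n, x):
    """Evaluate T_n at x using T_{n+1} = 2 x T_n - T_{n-1}."""
    t_prev, t = np.ones_like(x), x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

x = np.linspace(-1, 1, 201)
# Closed form, valid on [-1, 1]
closed = lambda n: np.cos(n * np.arccos(x))
```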
n=50
shift=2
my_functions={0:lambda x: (x)**10,
1:lambda x: np.abs((x)**3),
2:lambda x: np.exp(-((x)**-2)),
3:lambda x: 1/(1+x**2),
4:lambda x: np.sin(x)**3}
labels = {0: "x^{10}",
1: "|x^3|",
2: "\exp(-x^{-2})",
3: "1/(1+x^2)",
4: "\sin^3(x)"}
n_points=np.arange(shift,n)
for k in np.arange(5):
max_error=np.zeros(n-shift)
max_error_es=np.zeros(n-shift)
for i in n_points:
max_error[i-shift] = Error(my_functions[k], i, -1, 1, Newton, Chebyshev_points, plot_flag=False)
max_error_es[i-shift] = Error(my_functions[k], i, -1, 1, Newton, points=np.linspace, plot_flag=False)
axis=plt.figure()
plt.semilogy(n_points,max_error,'kd',label='Chebyshev points')
    plt.semilogy(n_points,max_error_es,'k.',label='Equally spaced points')
plt.ylim(10**-16,10**4)
plt.grid(True)
plt.title('Interpolation Error of $f(x)='+str(labels[k])+"$")
plt.xlabel('Number of points used in the interpolation')
plt.ylabel('Max error on domain')
plt.legend(loc='best')
plt.show()
"""
Explanation: Questions about Chebyshev:
- How can you calculate the Chebyshev points in the interval [a,b] instead of [-1,1]? A: Using a change of variable
Convergence analysis
End of explanation
"""
|
varnion/sabesPy | G1Errou.ipynb | bsd-3-clause | from sabesPy import getData
import pandas as pd
df = pd.DataFrame([getData('2014-03-14'), getData('2015-03-14')])
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns ## just to give matplotlib seaborn's good-looking style ;)
sns.set_context("talk")
sns.set_style("darkgrid", {"grid.linewidth": .5, "axes.facecolor": ".9"})
#pd.options.display.mpl_style = 'default' ## ggplot style
datasG1 = ['2014-03-14','2015-03-14']
df.loc[datasG1,:].T.plot(kind='bar', rot=0, figsize=(8,4))
plt.show()
"""
Explanation: G1 São Paulo news story
I got suspicious of this infographic. So we have already paid off our "debt" to the dead volume ("volume morto")?
Let's check...
Sabesp publishes data for consultation at this address, but I had no idea how to fetch the data automatically...
Luckily, a kind soul has already built an API that does the job! Thanks to him I could write a rather quick-and-dirty function to capture this data: getData.
End of explanation
"""
datas = ['2014-03-14',
         '2014-05-15', # day before the dead volume
         '2014-05-16', # debut of the "first technical reserve", a.k.a. the dead volume
         '2014-07-12', # date when the "normal reserve" ran out
         '2014-10-23',
         '2014-10-24', # "second technical reserve", or "DEAD VOLUME 2: ELECTRIC BOOGALOO"
         '2015-01-01', # happy new year?
         '2015-03-14']
import numpy as np
df = pd.DataFrame(pd.concat(map(getData, datas), axis=1))
df = df.T
from sabesPy import plotSideBySide
dados = df.loc[['2014-05-15','2014-05-16']], df.loc[['2014-10-23','2014-10-24']]
plotSideBySide(dados, titles=['$1^{st}$ technical reserve', '$2^{nd}$ technical reserve'])
"""
Explanation: OK. Everything checks out. It matches the charts shown by G1, just presented in a different way.
Buuut...
End of explanation
"""
from sabesPy import fixPerc
dFixed = df.copy()
dFixed.Cantareira = ([fixPerc(p, dia) for p, dia in zip(df.Cantareira, df.index)])
dados = dFixed.loc[['2014-05-15','2014-05-16']], dFixed.loc[['2014-10-23','2014-10-24']]
plotSideBySide(dados,
               titles=['1$^{st}$ technical reserve\n(corrected percentages)',
                       '2$^{nd}$ technical reserve\n(corrected percentages)'])
"""
Explanation: the Cantareira has a total capacity of almost 1 trillion liters, according to the G1 story.
So, between May 15 and 16, POOF: 180 billion liters appeared out of thin air!
Then, in October, POOF: another 100 billion appeared.
WHAT SORCERY IS THIS?!?
Sabesp's own site clarifies:
The first technical reserve went into operation on 2014-05-16 and added another 182.5 billion liters to the system - an 18.5% increase;
The second technical reserve went into operation on 2014-10-24 and added another 105.4 billion liters to the system - a 10.7% increase
In other words, the percentage reported by Sabesp is not corrected to discount the dead volume, and therefore comparing March 14 of this year with last year makes no sense.
G1 got it wrong. But by how much? Have we at least paid off our "debt" to the dead volume?
Correcting the Cantareira percentage
As simple as:
$$ Vol_1 = p \times Vol_0 $$
$$ Vol_{corrected} = Vol_1 - Vol_{dead}$$
$$ p_{corrected} = \frac{Vol_{corrected}}{Vol_0} $$
<table>
<tr>
<td> $Vol_1$ </td>
<td> the current volume of the system </td>
</tr> <tr>
<td> $Vol_0$ </td>
<td> the maximum capacity of the reservoir (not counting the dead volume) </td>
</tr> <tr>
<td> $Vol_{corrected}$ </td>
<td> current volume $-$ dead volume </td>
</tr> <tr>
<td> $Vol_{dead}$ </td>
<td> guess! </td>
</tr> <tr>
<td> $p$ </td>
<td> the percentage reported by Sabesp </td>
</tr> <tr>
<td>$p_{corrected}$ </td>
<td> the corrected percentage </td>
</tr>
</table>
this correction is implemented in the sabesPy "module"
End of explanation
"""
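The `fixPerc` helper imported from `sabesPy` is not shown here; a minimal sketch of the correction it presumably applies, assuming the ~980 billion-liter nominal capacity and the reserve volumes quoted above:

```python
# Sketch only: the exact capacity used by sabesPy's fixPerc is an assumption.
MAX_VOLUME = 980.0      # Vol_0, in billions of liters (approximate)
DEAD_VOLUME_1 = 182.5   # first technical reserve (from 2014-05-16)
DEAD_VOLUME_2 = 105.4   # second technical reserve (from 2014-10-24)

def corrected_percentage(p, dead_volume):
    """p_corrected = (p * Vol_0 - Vol_dead) / Vol_0, expressed as a percentage."""
    current = MAX_VOLUME * p / 100.0    # Vol_1 = p * Vol_0
    return round(100.0 * (current - dead_volume) / MAX_VOLUME, 1)
```

With no dead volume the percentage is unchanged; once a reserve is in use, the reported figure is pulled down accordingly.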
dados = df.loc[datasG1], dFixed.loc[datasG1]
plotSideBySide(dados,cm=[None,None],
               titles=["WITHOUT correcting the %'s", "CORRECTED %'s"])
"""
Explanation: There. Now it makes sense, without that absurd jump.
Finally, let's compare the chart built from the data G1 used with the corrected data.
Done.
G1 got it wrong
$-14.8 \neq 14.5$
~~maybe they just mixed up the sign~~
End of explanation
"""
def percFixer2(p,volMax, volumeMorto=0):
volAtual = (volMax + volumeMorto)*(p/100) - volumeMorto
q = 100*volAtual/volMax
#import numpy as np
q = round(q,1)
return q
dFixed2 = df.copy()
dFixed2.Cantareira = [fixPerc(p, dia, fixFunc=percFixer2) for p, dia in zip(df.Cantareira, df.index)]
dados = [[dFixed2.loc[['2014-05-15','2014-05-16']], dFixed.loc[['2014-05-15','2014-05-16']]],
         [dFixed2.loc[['2014-10-23','2014-10-24']], dFixed.loc[['2014-10-23','2014-10-24']]]]
plotSideBySide(dados[0], cm=['Spectral', 'Spectral'],
               titles=['1$^{st}$ technical reserve\n$ Vol_1 = p \\times (Vol_0 + Vol_{dead})$',
                       '1$^{st}$ technical reserve\n$ Vol_1 = p \\times Vol_0$'])
plotSideBySide(dados[1], cm=['coolwarm', 'coolwarm'],
               titles=['2$^{nd}$ technical reserve\n$ Vol_1 = p \\times (Vol_0 + Vol_{dead})$',
                       '2$^{nd}$ technical reserve\n$ Vol_1 = p \\times Vol_0$'])
"""
Explanation: G1 got it catastrophically wrong. ~~badly, rudely wrong.~~
We are very far from last year's level.
(And even if the Cantareira were at 15%, it would still be a critical situation)
PS: The percentage for the Alto Tietê system, which also incorporated a "dead volume", still needs to be corrected.
Sabesp publishes incorrect data
You may have noticed that the calculation used to get the current volume looks odd.
I computed it like this:
$ Vol_1 = p \times Vol_0$
but should have computed it like this:
$ Vol_1 = p \times (Vol_0 + Vol_{dead})$
I actually started out trying to compute it that way, but that method did not fix the jumps that occurred when the dead volume was incorporated.
Let's correct the data with this other formula and compare the charts.
End of explanation
"""
dadosCantareira = pd.DataFrame(pd.concat([df.Cantareira, dFixed2.Cantareira, dFixed.Cantareira], axis=1))
titles = ['$\%$ reported by Sabesp ($p$)',
          '$ Vol_{current} = p \\times (Vol_{max} +Vol_{dead})$',
          '$ Vol_{current} = p \\times Vol_{max}$']
dadosCantareira.columns = titles
#dadosCantareira.index = pd.to_datetime(dadosCantareira.index)
from sabesPy import reverseDate
dadosCantareira.index = list(map(reverseDate, dadosCantareira.index))
sns.set_context("poster")
ax = dadosCantareira.plot(kind='bar', ylim=(-25,30), yticks=range(-25,30,5), rot=20, title='Sistema Cantareira\n')
ax.set_ylabel("$\%$", fontsize=30)
ax.text(2.5, -10, r" $ \% =\frac{Vol_{current} - Vol_{dead}}{Vol_{max}} $", fontsize=30, ha='right')
plt.show()
"""
Explanation: See?
The percentage reported by Sabesp is bogus, because it pretends that the Cantareira's maximum capacity is still the same 980 billion liters it held before the technical reserves were acquired. (The system's current capacity is 1.3 trillion liters!!!)
End of explanation
"""
|
clemaitre58/power-profile | notebook/metrics/example_aerobic_modeling.ipynb | mit | %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from skcycling.data_management import Rider
from skcycling.metrics import aerobic_meta_model
from skcycling.utils.fit import log_linear_model
from skcycling.utils.fit import linear_model
from datetime import date
"""
Explanation: Aerobic metabolic model
Import the necessary libraries
End of explanation
"""
filename = '../../data/rider/user_1.p'
my_rider = Rider.load_from_pickles(filename)
"""
Explanation: Load the data
End of explanation
"""
# Define the starting and ending date from which
# we want to compute the record power-profile
start_date = date(2014,1,1)
end_date = date(2014, 12, 31)
# Compute the record power-profile
my_rider.compute_record_pp((start_date, end_date))
# Compute the amm
pma, t_pma, aei, fit_info_pma_fitting, fit_info_aei_fitting = aerobic_meta_model(my_rider.record_pp_)
print('MAP : {}, time at MAP : {}, aei : {}'.format(pma, t_pma, aei))
print('Fitting information about the MAP: {}'.format(fit_info_pma_fitting))
print('Fitting information about the AEI: {}'.format(fit_info_aei_fitting))
"""
Explanation: Compute the aerobic metabolic model
End of explanation
"""
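The `log_linear_model` imported from `skcycling.utils.fit` is used for both fits below; judging from the plot titles, it is presumably the simple form sketched here (an assumption, not the package's actual code):

```python
import numpy as np

def log_linear_model(t, slope, intercept):
    # slope * log(t) + intercept, the model fitted to the record power-profile
    return slope * np.log(t) + intercept
```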
# Plot the normalized power
plt.figure(figsize=(14, 10))
# Define the time samples to use for the plotting
t = np.array([3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7, 10, 20, 30, 45, 60, 120, 180, 240])
# Plot the log linear model found
plt.semilogx(t, log_linear_model(t, fit_info_pma_fitting['slope'], fit_info_pma_fitting['intercept']),
             label=r'$R^2={0:.2f}$'.format(fit_info_pma_fitting['coeff_det']))
# Plot the confidence
plt.fill_between(t,
log_linear_model(t, fit_info_pma_fitting['slope'],
fit_info_pma_fitting['intercept']) + 2 * fit_info_pma_fitting['std_err'],
log_linear_model(t, fit_info_pma_fitting['slope'],
fit_info_pma_fitting['intercept']) - 2 * fit_info_pma_fitting['std_err'],
alpha=0.2)
# Plot the real data point
plt.semilogx(t, my_rider.record_pp_.resampling_rpp(t), 'go')
# Plot the MAP point
plt.semilogx(t_pma, pma, 'ro', label='t={0:.1f} min / MAP={1:.1f} W'.format(t_pma, pma))
# Plot the legend
plt.xlabel('Time in minutes (min)')
plt.ylabel('Power in watts (W)')
plt.xlim(min(t), max(t))
plt.ylim(0, max(my_rider.record_pp_.resampling_rpp(t)+50))
plt.title(r'Determine MAP with the model ${0:.1f} \times \log(t) + {1:.1f}$'.format(fit_info_pma_fitting['slope'],
fit_info_pma_fitting['intercept']))
plt.legend()
plt.show()
"""
Explanation: Plot the information related to the MAP determination using Pinot et al. approach
End of explanation
"""
# Plot the normalized power
plt.figure(figsize=(14, 10))
# Define the time samples to use for the plotting
t = np.array([3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7, 10, 20, 30, 45, 60, 120, 180, 240])
t = t[np.nonzero(t > t_pma)]
# Plot the log linear model found
plt.semilogx(t, log_linear_model(t, fit_info_aei_fitting['slope'], fit_info_aei_fitting['intercept']),
label=r'$R^2={0:.2f}$'.format(fit_info_aei_fitting['coeff_det']))
# Plot the confidence
plt.fill_between(t,
log_linear_model(t, fit_info_aei_fitting['slope'],
fit_info_aei_fitting['intercept']) + 2 * fit_info_aei_fitting['std_err'],
log_linear_model(t, fit_info_aei_fitting['slope'],
fit_info_aei_fitting['intercept']) - 2 * fit_info_aei_fitting['std_err'],
alpha=0.2)
# Plot the real data point
plt.semilogx(t, my_rider.record_pp_.resampling_rpp(t) / pma * 100., 'go')
# Plot the legend
plt.xlabel('Time in minutes (min)')
plt.ylabel('Power in watts (W)')
plt.xlim(min(t), max(t))
plt.ylim(0, 100)
plt.title(r'Determine AEI with the model ${0:.1f} \times \log(t) + {1:.1f}$'.format(fit_info_aei_fitting['slope'],
fit_info_aei_fitting['intercept']))
plt.legend()
plt.show()
"""
Explanation: Plot the information related to the AEI determination using Pinot et al. approach
End of explanation
"""
|
robertoalotufo/ia898 | src/histogram.ipynb | mit | import numpy as np
def histogram(f):
return np.bincount(f.ravel())
"""
Explanation: Function histogram
Synopse
Image histogram.
h = histogram(f)
f: Input image. Pixel data type must be integer.
h: Output, integer vector.
Description
This function computes the number of occurrences of each pixel value.
The function histogram_eq is implemented to show an implementation based
on the equation of the histogram.
Function Code
End of explanation
"""
def histogram_eq(f):
from numpy import amax, zeros, arange, sum
n = amax(f) + 1
h = zeros((n,),int)
for i in arange(n):
h[i] = sum(i == f)
return h
"""
Explanation: Function Code for brute force implementation
End of explanation
"""
def histogram_eq1(f):
import numpy as np
n = f.size
m = f.max() + 1
haux = np.zeros((m,n),int)
fi = f.ravel()
i = np.arange(n)
haux[fi,i] = 1
h = np.add.reduce(haux,axis=1)
return h
"""
Explanation: Function code for bidimensional matrix implementation
End of explanation
"""
testing = (__name__ == "__main__")
if testing:
! jupyter nbconvert --to python histogram.ipynb
import numpy as np
import sys,os
import matplotlib.image as mpimg
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
"""
Explanation: Examples
End of explanation
"""
if testing:
f = np.array([[0,1,2,3,4],
[4,3,2,1,1]], 'uint8')
h = ia.histogram(f)
print(h.dtype)
print(h)
if testing:
h1 = ia.histogram_eq(f)
print(h1.dtype)
print(h1)
if testing:
h1 = ia.histogram_eq1(f)
print(h1.dtype)
print(h1)
"""
Explanation: Numerical examples
End of explanation
"""
if testing:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
f = mpimg.imread('../data/woodlog.tif')
plt.imshow(f,cmap='gray')
if testing:
h = ia.histogram(f)
plt.plot(h)
"""
Explanation: Example 1
End of explanation
"""
if testing:
import numpy as np
from numpy.random import rand, seed
seed([10])
f = (255 * rand(1000,1000)).astype('uint8')
%timeit h = ia.histogram(f)
%timeit h1 = ia.histogram_eq(f)
%timeit h2 = ia.histogram_eq1(f)
"""
Explanation: Speed performance
End of explanation
"""
if testing:
print(ia.histogram(np.array([3,7,0,0,3,0,10,7,0,7])) == \
np.array([4, 0, 0, 2, 0, 0, 0, 3, 0, 0, 1]))
"""
Explanation: Equation
$$ h(i) = \mathrm{card}\{p \,|\, f(p)=i\} \\
\text{or} \\
h(i) = \sum_p \left\{
\begin{array}{l l}
1 & \quad \text{if} \ f(p) = i\\
0 & \quad \text{otherwise}\\
\end{array} \right.
$$
End of explanation
"""
|
Luke035/dlnd-lessons | gan_mnist/Intro_to_GANs_Exercises.ipynb | mit | %matplotlib inline
import pickle as pkl
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data')
"""
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported in 2014 by Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN
A whole list
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes fake data to pass to the discriminator. The discriminator also sees real data and predicts if the data it's received is real or fake. The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real data. And the discriminator is trained to figure out which data is real and which is fake. What ends up happening is that the generator learns to make data that is indistiguishable from real data to the discriminator.
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector the generator uses to construct its fake images. As the generator learns through training, it figures out how to map these random vectors to recognizable images that can fool the discriminator.
The output of the discriminator is a sigmoid function, where 0 indicates a fake image and 1 indicates a real image. If you're interested only in generating new images, you can throw out the discriminator after training. Now, let's see how we build this thing in TensorFlow.
End of explanation
"""
def model_inputs(real_dim, z_dim):
#(Batch_size, Dim)
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
"""
Explanation: Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks.
Exercise: Finish the model_inputs function below. Create the placeholders for inputs_real and inputs_z using the input sizes real_dim and z_dim respectively.
End of explanation
"""
def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
''' Build the generator network.
Arguments
---------
z : Input tensor for the generator
out_dim : Shape of the generator output
n_units : Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out:
'''
    with tf.variable_scope('generator', reuse=reuse): # Network creation -> reuse set to False
# Hidden layer
#activation_fn is None given that it should be activated through the LRELU layer
h1 = tf.layers.dense(inputs=z, units=n_units,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
activation=None
)
# Leaky ReLU
h1 = tf.maximum(h1 * alpha, h1)
# Logits and tanh output
#Read out layer
logits = tf.layers.dense(inputs=h1, units=out_dim,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
activation=None
)
out = tf.tanh(logits)
return out
"""
Explanation: Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
Variable Scope
Here we need to use tf.variable_scope for two reasons. Firstly, we're going to make sure all the variable names start with generator. Similarly, we'll prepend discriminator to the discriminator variables. This will help out later when we're training the separate networks.
We could just use tf.name_scope to set the names, but we also want to reuse these networks with different inputs. For the generator, we're going to train it, but also sample from it as we're training and after training. The discriminator will need to share variables between the fake and real input images. So, we can use the reuse keyword for tf.variable_scope to tell TensorFlow to reuse the variables instead of creating new ones if we build the graph again.
To use tf.variable_scope, you use a with statement:
python
with tf.variable_scope('scope_name', reuse=False):
# code here
Here's more from the TensorFlow documentation to get another look at using tf.variable_scope.
Leaky ReLU
TensorFlow doesn't provide an operation for leaky ReLUs, so we'll need to make one. For this you can just take the outputs from a linear fully connected layer and pass them to tf.maximum. Typically, a parameter alpha sets the magnitude of the output for negative values. So, the output for negative input (x) values is alpha*x, and the output for positive x is x:
$$
f(x) = max(\alpha * x, x)
$$
Tanh Output
The generator has been found to perform the best with $tanh$ for the generator output. This means that we'll have to rescale the MNIST images to be between -1 and 1, instead of 0 and 1.
Exercise: Implement the generator network in the function below. You'll need to return the tanh output. Make sure to wrap your code in a variable scope, with 'generator' as the scope name, and pass the reuse keyword argument from the function to tf.variable_scope.
End of explanation
"""
def discriminator(x, n_units=128, reuse=False, alpha=0.01):
''' Build the discriminator network.
Arguments
---------
x : Input tensor for the discriminator
n_units: Number of units in hidden layer
reuse : Reuse the variables with tf.variable_scope
alpha : leak parameter for leaky ReLU
Returns
-------
out, logits:
'''
    with tf.variable_scope('discriminator', reuse=reuse): # Network creation -> reuse set to False
# Hidden layer
h1 = tf.layers.dense(inputs=x, units=n_units,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
activation=None
)
# Leaky ReLU
h1 = tf.maximum(h1 * alpha, h1)
#Out dim is 1, it should be simgmoided, return a 0 to 1 prob value after sigmoid
logits = tf.layers.dense(inputs=h1, units=1,
kernel_initializer=tf.truncated_normal_initializer(stddev=0.01),
activation=None
)
out = tf.sigmoid(logits)
return out, logits
"""
Explanation: Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer.
Exercise: Implement the discriminator network in the function below. Same as above, you'll need to return both the logits and the sigmoid output. Make sure to wrap your code in a variable scope, with 'discriminator' as the scope name, and pass the reuse keyword argument from the function arguments to tf.variable_scope.
End of explanation
"""
# Size of input image to discriminator
input_size = 784 # 28x28 MNIST images flattened
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Label smoothing
smooth = 0.1
"""
Explanation: Hyperparameters
End of explanation
"""
tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(real_dim=input_size, z_dim=z_size)
# Generator network here
#Output dim for generator is the same as real input dim
g_model = generator(z=input_z, alpha=alpha, n_units=g_hidden_size, out_dim=input_size)
# g_model is the generator output
# Disriminator network here
d_model_real, d_logits_real = discriminator(x=input_real, alpha=alpha, n_units=d_hidden_size)
#Pass the generator's output through the discriminator, reusing its variables
d_model_fake, d_logits_fake = discriminator(x=g_model, alpha=alpha, n_units=d_hidden_size, reuse=True)
"""
Explanation: Build network
Now we're building the network from the functions defined above.
First, we get our inputs, input_real and input_z, from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Then the discriminators. We'll build two of them, one for real data and one for fake data. Since we want the weights to be the same for both real and fake data, we need to reuse the variables. For the fake data, we're getting it from the generator as g_model. So the real data discriminator is discriminator(input_real) while the fake discriminator is discriminator(g_model, reuse=True).
Exercise: Build the network from the functions you defined earlier.
End of explanation
"""
# Calculate losses
#Cross-entropy between the real logits and labels that are always 1 (these are the real images)
real_labels = tf.ones_like(d_logits_real) * (1 - smooth) #Smoothed labels
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=real_labels))
#Same for the fake images, but with labels of 0
fake_labels = tf.zeros_like(d_logits_fake)
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=fake_labels))
d_loss = d_loss_real + d_loss_fake
#G loss needs flipped labels, and needs all ones for all the generated fake images
#The generator loss starts from the discriminator's output on fake images, not from the generator output itself, and the labels must be flipped!
flipped_fake_labels = tf.ones_like(d_logits_fake)
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=flipped_fake_labels))
"""
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with_logits. We'll also wrap that in tf.reduce_mean to get the mean for all the images in the batch. So the losses will look something like
python
tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
For the real image logits, we'll use d_logits_real which we got from the discriminator in the cell above. For the labels, we want them to be all ones, since these are all real images. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9, for example, using the parameter smooth. This is known as label smoothing, typically used with classifiers to improve performance. In TensorFlow, it looks something like labels = tf.ones_like(tensor) * (1 - smooth)
The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got from passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
Finally, the generator loss uses d_logits_fake, the fake image logits. But now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images.
Exercise: Calculate the losses for the discriminator and the generator. There are two discriminator losses, one for real images and one for fake images. For the real image loss, use the real logits and (smoothed) labels of ones. For the fake image loss, use the fake logits with labels of all zeros. The total discriminator loss is the sum of those two losses. Finally, the generator loss again uses the fake logits from the discriminator, but this time the labels are all ones because the generator wants to fool the discriminator.
End of explanation
"""
for var in tf.trainable_variables():
if 'generator' in var.name:
print(var.name)
# Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
"""
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator optimizer, we only want the generator variables. Our past selves were nice and used a variable scope to start all of our generator variable names with generator. So, we just need to iterate through the list from tf.trainable_variables() and keep variables that start with generator. Each variable object has an attribute name which holds the name of the variable as a string (var.name == 'weights_0' for instance).
We can do something similar with the discriminator. All the variables in the discriminator start with discriminator.
Then, in the optimizer we pass the variable lists to the var_list keyword argument of the minimize method. This tells the optimizer to only update the listed variables. Something like tf.train.AdamOptimizer().minimize(loss, var_list=var_list) will only train the variables in var_list.
Exercise: Below, implement the optimizers for the generator and discriminator. First you'll need to get a list of trainable variables, then split that list into two lists, one for the generator variables and another for the discriminator variables. Finally, using AdamOptimizer, create an optimizer for each network that updates the network variables separately.
End of explanation
"""
batch_size = 100
epochs = 100
samples = []
losses = []
saver = tf.train.Saver(var_list = g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images, reshape and rescale to pass to D
batch_images = batch[0].reshape((batch_size, 784))
batch_images = batch_images*2 - 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
_ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
# At the end of each epoch, get the losses and print them out
train_loss_d = sess.run(d_loss, {input_z: batch_z, input_real: batch_images})
train_loss_g = g_loss.eval({input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
# Sample from generator as we're training for viewing afterwards
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
samples.append(gen_samples)
saver.save(sess, './checkpoints/generator.ckpt')
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
"""
Explanation: Training
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
"""
Explanation: Training loss
Here we'll check out the training losses for the generator and discriminator.
End of explanation
"""
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
return fig, axes
# Load samples from generator taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
"""
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training.
End of explanation
"""
_ = view_samples(-1, samples)
"""
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 5, 7, 3, 0, 9. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
"""
rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
"""
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion!
End of explanation
"""
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
feed_dict={input_z: sample_z})
view_samples(0, [gen_samples])
"""
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise. Looks like 1, 9, and 8 show up first. Then, it learns 5 and 3.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation
"""
|
fluxcapacitor/source.ml | jupyterhub.ml/notebooks/train_deploy/zz_under_construction/tensorflow/optimize/03a_Train_Model_GPU.ipynb | apache-2.0 | import tensorflow as tf
from tensorflow.python.client import timeline
import pylab
import numpy as np
import os
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
tf.logging.set_verbosity(tf.logging.INFO)
"""
Explanation: Train Model with GPU (and CPU*)
CPU is still used to store variables that we are learning (W and b). This allows the GPU to focus on compute vs. storage.
End of explanation
"""
tf.reset_default_graph()
"""
Explanation: Reset TensorFlow Graph
Useful in Jupyter Notebooks
End of explanation
"""
config = tf.ConfigProto(
log_device_placement=True,
)
config.gpu_options.allow_growth=True
config.gpu_options.per_process_gpu_memory_fraction = 0.4
print(config)
sess = tf.Session(config=config)
print(sess)
"""
Explanation: Create TensorFlow Session
End of explanation
"""
from datetime import datetime
version = int(datetime.now().strftime("%s"))
"""
Explanation: Generate Model Version (current timestamp)
End of explanation
"""
num_samples = 100000
x_train = np.random.rand(num_samples).astype(np.float32)
print(x_train)
noise = np.random.normal(scale=0.01, size=len(x_train))
y_train = x_train * 0.1 + 0.3 + noise
print(y_train)
pylab.plot(x_train, y_train, '.')
x_test = np.random.rand(len(x_train)).astype(np.float32)
print(x_test)
noise = np.random.normal(scale=.01, size=len(x_train))
y_test = x_test * 0.1 + 0.3 + noise
print(y_test)
pylab.plot(x_test, y_test, '.')
with tf.device("/cpu:0"):
W = tf.get_variable(shape=[], name='weights')
print(W)
b = tf.get_variable(shape=[], name='bias')
print(b)
with tf.device("/gpu:0"):
x_observed = tf.placeholder(shape=[None],
dtype=tf.float32,
name='x_observed')
print(x_observed)
y_pred = W * x_observed + b
print(y_pred)
learning_rate = 0.025
with tf.device("/gpu:0"):
y_observed = tf.placeholder(shape=[None], dtype=tf.float32, name='y_observed')
print(y_observed)
loss_op = tf.reduce_mean(tf.square(y_pred - y_observed))
optimizer_op = tf.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer_op.minimize(loss_op)
print("Loss Scalar: ", loss_op)
print("Optimizer Op: ", optimizer_op)
print("Train Op: ", train_op)
"""
Explanation: Load Model Training and Test/Validation Data
End of explanation
"""
with tf.device("/cpu:0"):
init_op = tf.global_variables_initializer()
print(init_op)
sess.run(init_op)
print("Initial random W: %f" % sess.run(W))
print("Initial random b: %f" % sess.run(b))
"""
Explanation: Randomly Initialize Variables (Weights and Bias)
The goal is to learn more accurate Weights and Bias during training.
End of explanation
"""
def test(x, y):
return sess.run(loss_op, feed_dict={x_observed: x, y_observed: y})
test(x_train, y_train)
"""
Explanation: View Accuracy of Pre-Training, Initial Random Variables
We want this to be close to 0, but it's relatively far away. This is why we train!
End of explanation
"""
loss_summary_scalar_op = tf.summary.scalar('loss', loss_op)
loss_summary_merge_all_op = tf.summary.merge_all()
train_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/gpu/%s/train' % version,
graph=tf.get_default_graph())
test_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/gpu/%s/test' % version,
graph=tf.get_default_graph())
"""
Explanation: Setup Loss Summary Operations for Tensorboard
End of explanation
"""
%%time
with tf.device("/gpu:0"):
run_metadata = tf.RunMetadata()
max_steps = 401
for step in range(max_steps):
if (step < max_steps - 1):
test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})
train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train})
else:
test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})
train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train},
options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE),
run_metadata=run_metadata)
trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline-gpu.json', 'w') as trace_file:
trace_file.write(trace.generate_chrome_trace_format(show_memory=True))
if step % 10 == 0:
print(step, sess.run([W, b]))
train_summary_writer.add_summary(train_summary_log, step)
train_summary_writer.flush()
test_summary_writer.add_summary(test_summary_log, step)
test_summary_writer.flush()
pylab.plot(x_train, y_train, '.', label="target")
pylab.plot(x_train, sess.run(y_pred,
feed_dict={x_observed: x_train,
y_observed: y_train}),
".",
label="predicted")
pylab.legend()
pylab.ylim(0, 1.0)
"""
Explanation: Train Model
End of explanation
"""
import os
optimize_me_parent_path = '/root/models/optimize_me/linear/gpu'
saver = tf.train.Saver()
os.system('rm -rf %s' % optimize_me_parent_path)
os.makedirs(optimize_me_parent_path)
unoptimized_model_graph_path = '%s/unoptimized_gpu.pb' % optimize_me_parent_path
print(unoptimized_model_graph_path)
tf.train.write_graph(sess.graph_def,
'.',
unoptimized_model_graph_path,
as_text=False)
model_checkpoint_path = '%s/model.ckpt' % optimize_me_parent_path
saver.save(sess,
save_path=model_checkpoint_path)
print(model_checkpoint_path)
print(optimize_me_parent_path)
os.listdir(optimize_me_parent_path)
sess.close()
"""
Explanation: View Loss Summaries in Tensorboard
Navigate to the Scalars and Graphs tab at this URL:
http://[ip-address]:6006
Save Graph For Optimization
We will use this later.
End of explanation
"""
%%bash
summarize_graph --in_graph=/root/models/optimize_me/linear/gpu/unoptimized_gpu.pb
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import re
from google.protobuf import text_format
from tensorflow.core.framework import graph_pb2
def convert_graph_to_dot(input_graph, output_dot, is_input_graph_binary):
graph = graph_pb2.GraphDef()
with open(input_graph, "rb") as fh:
if is_input_graph_binary:
graph.ParseFromString(fh.read())
else:
text_format.Merge(fh.read(), graph)
with open(output_dot, "wt") as fh:
print("digraph graphname {", file=fh)
for node in graph.node:
output_name = node.name
print(" \"" + output_name + "\" [label=\"" + node.op + "\"];", file=fh)
for input_full_name in node.input:
parts = input_full_name.split(":")
input_name = re.sub(r"^\^", "", parts[0])
print(" \"" + input_name + "\" -> \"" + output_name + "\";", file=fh)
print("}", file=fh)
print("Created dot file '%s' for graph '%s'." % (output_dot, input_graph))
input_graph='/root/models/optimize_me/linear/gpu/unoptimized_gpu.pb'
output_dot='/root/notebooks/unoptimized_gpu.dot'
convert_graph_to_dot(input_graph=input_graph, output_dot=output_dot, is_input_graph_binary=True)
%%bash
dot -T png /root/notebooks/unoptimized_gpu.dot \
-o /root/notebooks/unoptimized_gpu.png > /tmp/a.out
from IPython.display import Image
Image('/root/notebooks/unoptimized_gpu.png', width=1024, height=768)
"""
Explanation: Show Graph
End of explanation
"""
|
oditorium/blog | iPython/FridayPuzzle.ipynb | agpl-3.0 | import time
def timer():
start = time.time()
def f(report=False):
elapsed = time.time() - start
if report:
print ("time elapsed %5.3f" % elapsed)
return elapsed
return f
"""
Explanation: Friday Puzzle
Question
Which numbers cannot be written in the form $i_4 \cdot 4 + i_9 \cdot 9$, where the $i$ are non-negative integers?
Solution
generate all possible combinations of sums of 4's and 9's (up to a maximum of $i<limit$), put them into a set, and see what numbers are missing from this set
source
End of explanation
"""
limit = 250
"""
Explanation: We are only looking for combinations with $i<limit$
End of explanation
"""
mytimer = timer()
can_reach = set()
numbers = set(range(1,100))
for i in range(0,limit):
for j in range (0,limit):
num = 4*i + 9*j
can_reach = can_reach | {num}
can_not_reach = numbers - can_reach
t_sqr = mytimer(True)
can_not_reach
"""
Explanation: Procedural Style - square
here we simply generate the full square $i_4, i_9 = 0\ldots limit-1$
End of explanation
"""
mytimer = timer()
can_reach = set()
numbers = set(range(1,100))
for i in range(0,limit):
for j in range (0,i+1):
num = 4*j + 9*(i-j)
#print ("%i = 4*%i + 9*%i" % (num,j,i-j))
can_reach = can_reach | {num}
can_not_reach = numbers - can_reach
t_tria = mytimer(True)
can_not_reach
"""
Explanation: Procedural Style - triangle
here we are somewhat smarter: using the fact that addition commutes, we only sum over the triangle, i.e. the following series
$$
0,\ 4,\ 9, 4+4,\ 4+9,\ 9+9,\ 4+4+4,\ 4+4+9,\ \ldots
$$
End of explanation
"""
from operator import add
from itertools import product
mytimer = timer()
fours = range(0,4*limit,4)
nines = range(0,9*limit,9)
fours_and_nines = product(fours, nines)
fours_plus_nines = map(lambda x: add(*x), fours_and_nines)
set(range(100)) - set(fours_plus_nines)
t_func = mytimer(True)
"""
Explanation: Functional Style
Here we use a more Pythonic functional style
1. we generate the ranges $(0,4,8,\ldots)$ and $(0,9,18,\ldots)$
1. we generate the cartesian product range $(0,0), (4,0), \ldots$
1. we generate the sum range $0, 4, 8, \ldots$
1. we generate the set of numbers in the sum range ${0, 4, 8, \ldots}$
1. we calculate ${1\ldots 100} - {0,4,8\ldots}$
End of explanation
"""
print ("triangle sum vs square sum = %2.0fx" % (t_sqr / t_tria))
print ("functional vs triangle sum = %2.0fx" % (t_tria / t_func))
print ("functional vs square sum = %2.0fx" % (t_sqr / t_func))
"""
Explanation: Results
the triangle solution is an improvement by a factor of 4, as expected; the Pythonic way of doing things is an improvement by a factor of almost 50x vs. the triangle method. Note that the Pythonic way is a square method, so the raw speedup is 150x+
End of explanation
"""
|
SamLau95/nbinteract | notebooks/Using_Interact.ipynb | bsd-3-clause | from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
"""
Explanation: Using Interact
The interact function (ipywidgets.interact) automatically creates user interface (UI) controls for exploring code and data interactively. It is the easiest way to get started using IPython's widgets.
End of explanation
"""
def f(x):
return x
"""
Explanation: Basic interact
At the most basic level, interact autogenerates UI controls for function arguments, and then calls the function with those arguments when you manipulate the controls interactively. To use interact, you need to define a function that you want to explore. Here is a function that prints its only argument x.
End of explanation
"""
interact(f, x=10);
"""
Explanation: When you pass this function as the first argument to interact along with an integer keyword argument (x=10), a slider is generated and bound to the function parameter.
End of explanation
"""
interact(f, x=True);
"""
Explanation: When you move the slider, the function is called, which prints the current value of x.
If you pass True or False, interact will generate a checkbox:
End of explanation
"""
interact(f, x='Hi there!');
"""
Explanation: If you pass a string, interact will generate a text area.
End of explanation
"""
@interact(x=True, y=1.0)
def g(x, y):
return (x, y)
"""
Explanation: interact can also be used as a decorator. This allows you to define a function and interact with it in a single shot. As this example shows, interact also works with functions that have multiple arguments.
End of explanation
"""
def h(p, q):
return (p, q)
"""
Explanation: Fixing arguments using fixed
There are times when you may want to explore a function using interact, but fix one or more of its arguments to specific values. This can be accomplished by wrapping values with the fixed function.
End of explanation
"""
interact(h, p=5, q=fixed(20));
"""
Explanation: When we call interact, we pass fixed(20) for q to hold it fixed at a value of 20.
End of explanation
"""
interact(f, x=widgets.IntSlider(min=-10,max=30,step=1,value=10));
"""
Explanation: Notice that a slider is only produced for p as the value of q is fixed.
Widget abbreviations
When you pass an integer-valued keyword argument of 10 (x=10) to interact, it generates an integer-valued slider control with a range of [-10,+3*10]. In this case, 10 is an abbreviation for an actual slider widget:
python
IntSlider(min=-10,max=30,step=1,value=10)
In fact, we can get the same result if we pass this IntSlider as the keyword argument for x:
End of explanation
"""
interact(f, x=(0,4));
"""
Explanation: This examples clarifies how interact proceses its keyword arguments:
If the keyword argument is a Widget instance with a value attribute, that widget is used. Any widget with a value attribute can be used, even custom ones.
Otherwise, the value is treated as a widget abbreviation that is converted to a widget before it is used.
The following table gives an overview of different widget abbreviations:
<table class="table table-condensed table-bordered">
<tr><td><strong>Keyword argument</strong></td><td><strong>Widget</strong></td></tr>
<tr><td>`True` or `False`</td><td>Checkbox</td></tr>
<tr><td>`'Hi there'`</td><td>Text</td></tr>
<tr><td>`value` or `(min,max)` or `(min,max,step)` if integers are passed</td><td>IntSlider</td></tr>
<tr><td>`value` or `(min,max)` or `(min,max,step)` if floats are passed</td><td>FloatSlider</td></tr>
<tr><td>`['orange','apple']` or `{'one':1,'two':2}`</td><td>Dropdown</td></tr>
</table>
Note that a dropdown is used if a list or a dict is given (signifying discrete choices), and a slider is used if a tuple is given (signifying a range).
You have seen how the checkbox and textarea widgets work above. Here, more details about the different abbreviations for sliders and dropdowns are given.
If a 2-tuple of integers is passed (min,max), an integer-valued slider is produced with those minimum and maximum values (inclusively). In this case, the default step size of 1 is used.
End of explanation
"""
interact(f, x=(0,8,2));
"""
Explanation: If a 3-tuple of integers is passed (min,max,step), the step size can also be set.
End of explanation
"""
interact(f, x=(0.0,10.0));
"""
Explanation: A float-valued slider is produced if the elements of the tuples are floats. Here the minimum is 0.0, the maximum is 10.0 and step size is 0.1 (the default).
End of explanation
"""
interact(f, x=(0.0,10.0,0.01));
"""
Explanation: The step size can be changed by passing a third element in the tuple.
End of explanation
"""
@interact(x=(0.0,20.0,0.5))
def h(x=5.5):
return x
"""
Explanation: For both integer and float-valued sliders, you can pick the initial value of the widget by passing a default keyword argument to the underlying Python function. Here we set the initial value of a float slider to 5.5.
End of explanation
"""
interact(f, x=['apples','oranges']);
"""
Explanation: Dropdown menus are constructed by passing a list of strings. In this case, the strings are both used as the names in the dropdown menu UI and passed to the underlying Python function.
End of explanation
"""
interact(f, x=[('one', 10), ('two', 20)]);
"""
Explanation: If you want a dropdown menu that passes non-string values to the Python function, you can pass a list of (label, value) pairs.
End of explanation
"""
from IPython.display import display
def f(a, b):
display(a + b)
return a+b
"""
Explanation: interactive
In addition to interact, IPython provides another function, interactive, that is useful when you want to reuse the widgets that are produced or access the data that is bound to the UI controls.
Note that unlike interact, the return value of the function will not be displayed automatically, but you can display a value inside the function with IPython.display.display.
Here is a function that returns the sum of its two arguments and displays them. The display line may be omitted if you don't want to show the result of the function.
End of explanation
"""
w = interactive(f, a=10, b=20)
"""
Explanation: Unlike interact, interactive returns a Widget instance rather than immediately displaying the widget.
End of explanation
"""
type(w)
"""
Explanation: The widget is an interactive, a subclass of VBox, which is a container for other widgets.
End of explanation
"""
w.children
"""
Explanation: The children of the interactive are two integer-valued sliders and an output widget, produced by the widget abbreviations above.
End of explanation
"""
display(w)
"""
Explanation: To actually display the widgets, you can use IPython's display function.
End of explanation
"""
w.kwargs
"""
Explanation: At this point, the UI controls work just like they would if interact had been used. You can manipulate them interactively and the function will be called. However, the widget instance returned by interactive also gives you access to the current keyword arguments and return value of the underlying Python function.
Here are the current keyword arguments. If you rerun this cell after manipulating the sliders, the values will have changed.
End of explanation
"""
w.result
"""
Explanation: Here is the current return value of the function.
End of explanation
"""
def slow_function(i):
print(int(i),list(x for x in range(int(i)) if
str(x)==str(x)[::-1] and
str(x**2)==str(x**2)[::-1]))
return
%%time
slow_function(1e6)
"""
Explanation: Disabling continuous updates
When interacting with long running functions, realtime feedback is a burden instead of being helpful. See the following example:
End of explanation
"""
from ipywidgets import FloatSlider
interact(slow_function,i=FloatSlider(min=1e5, max=1e7, step=1e5));
"""
Explanation: Notice that the output is updated even while dragging the mouse on the slider. This is not useful for long running functions due to lagging:
End of explanation
"""
interact_manual(slow_function,i=FloatSlider(min=1e5, max=1e7, step=1e5));
"""
Explanation: There are two ways to mitigate this. You can either only execute on demand, or restrict execution to mouse release events.
interact_manual
The interact_manual function provides a variant of interaction that allows you to restrict execution so it is only done on demand. A button is added to the interact controls that allows you to trigger an execute event.
End of explanation
"""
interact(slow_function,i=FloatSlider(min=1e5, max=1e7, step=1e5, continuous_update=False));
"""
Explanation: continuous_update
If you are using slider widgets, you can set the continuous_update kwarg to False. continuous_update is a kwarg of slider widgets that restricts executions to mouse release events.
End of explanation
"""
a = widgets.IntSlider()
b = widgets.IntSlider()
c = widgets.IntSlider()
ui = widgets.HBox([a, b, c])
def f(a, b, c):
print((a, b, c))
out = widgets.interactive_output(f, {'a': a, 'b': b, 'c': c})
display(ui, out)
"""
Explanation: interactive_output
interactive_output provides additional flexibility: you can control how the UI elements are laid out.
Unlike interact, interactive, and interact_manual, interactive_output does not generate a user interface for the widgets. This is powerful, because it means you can create a widget, put it in a box, and then pass the widget to interactive_output, and have control over the widget and its layout.
End of explanation
"""
x_widget = FloatSlider(min=0.0, max=10.0, step=0.05)
y_widget = FloatSlider(min=0.5, max=10.0, step=0.05, value=5.0)
def update_x_range(*args):
x_widget.max = 2.0 * y_widget.value
y_widget.observe(update_x_range, 'value')
def printer(x, y):
print(x, y)
interact(printer,x=x_widget, y=y_widget);
"""
Explanation: Arguments that are dependent on each other
Arguments that are dependent on each other can be expressed manually using observe. See the following example, where one variable is used to describe the bounds of another. For more information, please see the widget events example notebook.
End of explanation
"""
%matplotlib inline
from ipywidgets import interactive
import matplotlib.pyplot as plt
import numpy as np
def f(m, b):
plt.figure(2)
x = np.linspace(-10, 10, num=1000)
plt.plot(x, m * x + b)
plt.ylim(-5, 5)
plt.show()
interactive_plot = interactive(f, m=(-2.0, 2.0), b=(-3, 3, 0.5))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot
"""
Explanation: Flickering and jumping output
On occasion, you may notice interact output flickering and jumping, causing the notebook scroll position to change as the output is updated. The interactive control has a layout, so we can set its height to an appropriate value (currently chosen manually) so that it will not change size as it is updated.
End of explanation
"""
|
qkitgroup/qkit | qkit/doc/notebooks/IV_curve.ipynb | gpl-2.0 | import numpy as np
from uncertainties import ufloat, umath, unumpy as unp
from scipy import signal as sig
import matplotlib.pyplot as plt
import qkit
qkit.start()
from qkit.analysis.IV_curve import IV_curve as IVC
ivc = IVC()
"""
Explanation: Transport measurement data analysis
This is an example notebook for the analysis class IV_curve of qkit.analysis.IV_curve.py. This handles transport measurement data (focused on measurements of Josephson junctions under current bias) taken with qkit.measure.transport.transport.py and provides methods to
* load data files,
* merge data files,
* calculate numerical derivatives, such as differential resistances,
* analyse voltage and current offsets,
* correct ohmic resistances offsets arising due to lead resistivity in 2-wire measurements,
* analyse the normal state resistance,
* analyse critical currents and voltage jumps,
* analyse switching current distributions.
For error propagation the uncertainties package is used.
End of explanation
"""
ivc.load(uuid='XXXXXX')
"""
Explanation: Load qkit transport measurement file
Transport measurement data with a given uuid can be loaded using ivc.load(uuid). Several elements are available, especially
* data file ivc.df,
* settings ivc.settings,
* measurement object ivc.mo,
* current values ivc.I,
* voltage values ivc.V,
* differential resistance values ivc.dVdI,
* sweep list ivc.sweeps,
* scan dimension (1D, 2D or 3D) ivc.scan_dim,
* in case of 2D and 3D scans, x-parameter dataset ivc.x_ds, values ivc.x_vec, name ivc.x_coordname, unit ivc.x_unit,
* in case of 3D scans, y-parameter dataset ivc.y_ds, values ivc.y_vec, name ivc.y_coordname, unit ivc.y_unit.
End of explanation
"""
ivc.merge(uuids=('XXXXXX', 'YYYYYY'), order=(-1, 1))
"""
Explanation: Merge qkit transport measurement files
Qkit transport measurement files can be merged depending on the scan dimension to one single new file by ivc.merge().
* 1D: all sweep data are stacked and views are merged.
* 2D: values of the x-parameter and their corresponding sweep data are merged in the order order.
* 3D: values of the x- and y-parameters and their corresponding sweep data are merged in the order order.
End of explanation
"""
ivc.get_dVdI()
"""
Explanation: Differential resistance
The differential resistance $ \frac{\text{d}V}{\text{d}I} $ is caclulated as numerical derivative using ivc.get_dVdI(). By default the Savitzky Golay filter is applied, but different methods can be used, e.g. a simple numerical gradient ivc.get_dVdI(mode=np.gradient)
End of explanation
"""
ivc.get_offsets(offset=0, tol_offset=20e-6)
ivc.get_offset(offset=0, tol_offset=20e-6)
"""
Explanation: Current and voltage offsets
The current and voltage offsets can be calculated using ivc.get_offsets() or ivc.get_offset().
The branch where the y-values are nearly constant is evaluated. The average of all corresponding x-values is taken as the x-offset and the average of the extreme y-values as the y-offset. These are by default the critical y-values $ y_c $, but can also be set to the retrapping y-values $ y_r $ if yr=True
* ivc.get_offsets() calculates x- and y-offset of every trace,
* ivc.get_offset() calculates x- and y-offset of the whole set (this differs only for 2D or 3D scans).
Note that reasonable initial values offset and tol_offset are sufficient to find the range where the y-values are nearly constant.
End of explanation
"""
ivc.get_2wire_slope_correction(prominence=1)
"""
Explanation: Ohmic resistance offset
The voltage values can be corrected by an ohmic resistance offset such as occur in 2wire measurements using ivc.get_2wire_slope_correction(). The two maxima in the differential resistivity $ \frac{\text{d}V}{\text{d}I} $ are identified as critical and retrapping currents. This is done using scipy.signal.find_peaks() by default, but can set to custom peak finding algorithms peak_finder. The slope of the superconducting regime in between (which should ideally be infinity) is fitted using `numpy.linalg.qr()´ algorithm and subtracted from the raw data.
Note that the arguments of the peak finding algorithms need to be set properly, e.g. prominence for scipy.signal.find_peaks().
End of explanation
"""
ivc.get_Ic_threshold(Ir=True)
"""
Explanation: Critical and retrapping currents
Critical currents can be determined in three different ways. The voltage jump at the critical and retrapping current can be found by
* a voltage threshold value that is exceeded,
* peaks in the differential resistance $ \frac{\text{d}V}{\text{d}I} $,
* peaks in the Gaussian smoothed derivative $ \text{i}f\exp\left(-sf^2\right) $ in the frequency domain
Voltage threshold method for critical and retrapping currents
The critical and retrapping currents $ I_\text{c} $, $ I_\text{r} $ can be calculated by finding the currents that correspond to the voltages which exceed a certain threshold using ivc.get_Ic_threshold(). The branch where the voltage values are nearly constant is evaluated. The maximal currents of the up- and down-sweep are considered as the critical current $ I_c $ and the retrapping current $ I_r $ (if Ir=True), respectively.
Note that it works best if the offset has already been determined via get_offsets(), and that a reasonable initial value tol_offset is sufficient.
End of explanation
"""
I_cs, I_rs, props = ivc.get_Ic_deriv(prominence=1, Ir=True)
I_cs, I_rs
print('all currents, where voltage jumps')
if ivc._scan_dim == 1:
print(np.array(list(map(lambda p1D: p1D['I'], props))))
elif ivc._scan_dim == 2:
print(np.array(list(map(lambda p2D: list(map(lambda p1D: p1D['I'], p2D)), props))))
elif ivc._scan_dim == 3:
print(np.array(list(map(lambda p3D: list(map(lambda p2D: list(map(lambda p1D: p1D['I'], p2D)), p3D)), props))))
"""
Explanation: Peak detection in differential resistance method for critical and retrapping currents
The critical and retrapping currents $ I_\text{c} $, $ I_\text{r} $ can be calculated by detecting peaks in the differential resistance $ \frac{\text{d}V}{\text{d}I} $ using ivc.get_Ic_deriv(). This is done using scipy.signal.find_peaks() by default, but can be set to a custom peak finding algorithm peak_finder.
Note that the arguments of the peak finding algorithms need to be set properly, e.g. prominence for scipy.signal.find_peaks().
End of explanation
"""
I_cs, I_rs, props = ivc.get_Ic_dft(Ir=True, prominence=1e-6)
I_cs, I_rs
"""
Explanation: Peak detection in the Gaussian smoothed derivative method for critical and retrapping currents
The critical and retrapping currents $ I_\text{c} $, $ I_\text{r} $ can be calculated by detecting peaks in the Gaussian smoothed derivative $ \left(\text{i}f\cdot\text{e}^{-sf^2}\right) $ in the frequency domain using ivc.get_Ic_dft(). This is done using scipy.signal.find_peaks() by default, but can be set to a custom peak finding algorithm peak_finder.
Note that the smoothing factor s and the arguments of the peak finding algorithms need to be set properly, e.g. prominence for scipy.signal.find_peaks().
End of explanation
"""
ivc.get_Rn()
"""
Explanation: Normal state resistance
The normal state resistance $ R_\text{n} $ of ohmic (overcritical) branch can be calculated using ivc.get_Rn().
The normal state resistance corresponds to the inverse linear slope of the normal conducting branch $ I=R_\text{n}^{-1}U $ (mode=0) or to the average value of $ \frac{\mathrm{d}V}{\mathrm{d}I} $ of the normal conducting branch (mode=1). The ohmic range, in turn, is considered to extend from the outermost tail of the peaks in the curvature $ \frac{\mathrm{d}^2V}{\mathrm{d}I^2} $ to the start/end of the sweep, and the resistance is calculated as the mean of the differential resistance values dVdI within this range. This is done using scipy.signal.savgol_filter(deriv=2) and scipy.signal.find_peaks() by default, but can be set to any second-order derivative function deriv_func and peak finding algorithm peak_finder. For scipy.signal.find_peaks() the prominence parameter is set to $ 1\,\% $ of the absolute curvature value by default.
Note that the arguments of the peak finding algorithms need to be set properly, e.g. prominence for scipy.signal.find_peaks().
End of explanation
"""
props = ivc.get_Ic_deriv(prominence=1)
I_0 = np.array(list(map(lambda p2D: list(map(lambda p1D: p1D['I'][0], p2D)), props)))
ivc.scm.fit(I_0=I_0*1e6,
omega_0=1e9,
bins=30)
ivc.scm.plot()
"""
Explanation: Switching current measurements
Switching current distributions can be analyzed and plotted using the ivc.scm.fit() and ivc.scm.plot() of the switching_current subclass.
The switching currents need to be determined beforehand, e.g. by ivc.get_Ic_deriv(). Their switching current distribution $ P_k \mathrm{d}I = \frac{n_k}{N\Delta I}\mathrm{d}I $ is normalized, $ \int\limits_0^\infty P(I^\prime)\mathrm{d}I^\prime = 1 $, and obtained via numpy.histogram().
The escape rate reads $ \Gamma(I_k) = \frac{\left|\frac{\mathrm{d}I}{\mathrm{d}t}\right|}{\Delta I}\ln\left(\frac{\sum\limits_{j\geq k} P_j}{\sum\limits_{j\geq k+1} P_j}\right) $ and the normalized escape rate $ \left[\ln\left(\frac{\omega_0}{2\pi\Gamma}\right)\right]^{2/3} $ is fitted versus $ \gamma $ to $ f(\bar{\gamma}) = a\cdot\bar{\gamma}+b $, where the root $ \left[\ln\left(\frac{\omega_0}{2\pi\Gamma}\right)\right]^{2/3} = 0 $ yields the critical current $ I_\text{c} = -\frac{b}{a} $. Here, the sweep rate results as $ \frac{\mathrm{d}I}{\mathrm{d}t} = \delta I\cdot\frac{\text{nplc}}{\text{plc}} $, the centers of the bins as the moving average of the returned bin edges using np.convolve(edges, np.ones((2,))/2, mode='valid'), and the bin width as $ \Delta I = \frac{\max(I_\text{b})-\min(I_\text{b})}{N_\text{bins}} $.
theoretical background
the probability distribution of switching currents is related to the escape rate $ \Gamma(I) $ and the current ramp rate $ \frac{\mathrm{d}I}{\mathrm{d}t} $ as
\begin{equation}
P(I)\mathrm{d}I = \Gamma(I)\left|\frac{\mathrm{d}I}{\mathrm{d}t}\right|^{-1} \left(1-\int\limits_0^I P(I^\prime)\mathrm{d}I^\prime\right)\mathrm{d}I
\end{equation}
This integral equation can be solved explicitly for the switching-current distribution
\begin{align}
P(I) &= \Gamma(I)\left|\frac{\mathrm{d}I}{\mathrm{d}t}\right|^{-1}\exp\left(-\left|\frac{\mathrm{d}I}{\mathrm{d}t}\right|^{-1}\int\limits_0^I\Gamma(I^\prime)\mathrm{d}I^\prime\right)\
P(I_k) &= \Gamma(I_k)\left|\frac{\mathrm{d}I_k}{\mathrm{d}t}\right|^{-1}\exp\left(-\left|\frac{\mathrm{d}I_k}{\mathrm{d}t}\right|^{-1}\Delta I\sum\limits_{j=0}^k\Gamma(I_j)\right)
\end{align}
Solving for the escape rate results in
\begin{align}
\Gamma(I) &= \frac{\left|\frac{\mathrm{d}I}{\mathrm{d}t}\right|}{\Delta I}\ln\left(\frac{\int\limits_I^\infty P(I^\prime)\mathrm{d}I^\prime}{\int\limits_{I+\Delta I}^\infty P(I^\prime)\mathrm{d}I^\prime}\right) \
\Gamma(I_k) &= \frac{\left|\frac{\mathrm{d}I}{\mathrm{d}t}\right|}{\Delta I}\ln\left(\frac{\sum\limits_{j\geq k} P_j}{\sum\limits_{j\geq k+1} P_j}\right)
\end{align}
The escape rate, in turn, is related to the attempt frequency $ \frac{\omega_0}{2\pi} $ and the barrier height $ U_0 = 2E_\mathrm{J}\left(\sqrt{1-\gamma^2}-\gamma\arccos(\gamma)\right) \approx E_\mathrm{J}\frac{4\sqrt{2}}{3}\left(1-\gamma\right)^\frac{3}{2} $ and results in
\begin{align}
\Gamma_\text{th} &= \frac{\omega_0}{2\pi}\exp\left(-\frac{U_0}{k_\text{B}T}\right) \
&= \frac{\omega_0}{2\pi}\exp\left(-\frac{E_\text{J}\frac{4\sqrt{2}}{3}(1-\bar{\gamma})^{3/2}}{k_\text{B}T}\right)\
\left[\ln\left(\frac{\omega_0}{2\pi\Gamma}\right)\right]^{2/3} &= \left(\frac{E_\text{J}}{k_\text{B}T}\frac{4\sqrt{2}}{3}\right)^{2/3}\cdot(1-\bar{\gamma})
\end{align}
References
Fulton, T. A., and L. N. Dunkleberger. "Lifetime of the zero-voltage state in Josephson tunnel junctions." Physical Review B 9.11 (1974): 4760.
Wallraff, Andreas. Fluxon dynamics in annular Josephson junctions: From Relativistic Strings to quantum particles. Lehrstuhl für Mikrocharakterisierung, Friedrich-Alexander-Universität, 2001.
End of explanation
"""
|
geodocker/geodocker-jupyter-geopyspark | notebooks/Pine Habitat.ipynb | apache-2.0 | import geopyspark as gps
from pyspark import SparkContext
"""
Explanation: This tutorial will show you how to find the suitable habitat range for Bristlecone pine using GeoPySpark
This tutorial will focus on GeoPySpark functionality, but you can find more resources and tutorials about GeoNotebooks here.
Suitability analysis is a classic GIS case study that enables the combination of factors to return a desired result
This tutorial sets the premise that you are interested in two factors for locating Bristlecone pines:
- Located between 3,000 and 4,000 meters
- Located on a south facing slope
End of explanation
"""
conf=gps.geopyspark_conf(appName="BristleConePine")
conf.set('spark.ui.enabled', True)
sc = SparkContext(conf = conf)
"""
Explanation: You will need to set up a spark context. To learn more about what that means take a look here
End of explanation
"""
elev_rdd = gps.geotiff.get(
layer_type='spatial',
uri='s3://geopyspark-demo/elevation/ca-elevation.tif')
"""
Explanation: Retrieving an elevation .tif from AWS S3:
End of explanation
"""
elev_tiled_rdd = elev_rdd.tile_to_layout(
layout=gps.GlobalLayout(),
target_crs=3857)
elev_pyramided_rdd = elev_tiled_rdd.pyramid().cache()
"""
Explanation: Tile, reproject, pyramid:
End of explanation
"""
from geopyspark.geotrellis.color import get_colors_from_matplotlib
elev_histo = elev_pyramided_rdd.get_histogram()
elev_colors = get_colors_from_matplotlib('viridis', 100)
elev_color_map = gps.ColorMap.from_histogram(elev_histo, elev_colors)
elev_tms = gps.TMS.build(elev_pyramided_rdd, elev_color_map)
elev_tms.bind('0.0.0.0')
"""
Explanation: Imports for creating a TMS server capable of serving layers with custom colormaps
End of explanation
"""
import folium
map_center = [37.75, -118.85]
zoom = 7
m = folium.Map(location=map_center, zoom_start=zoom)
folium.TileLayer(tiles="Stamen Terrain", overlay=False).add_to(m)
folium.TileLayer(tiles=elev_tms.url_pattern, attr="GeoPySpark", overlay=True).add_to(m)
m
"""
Explanation: Display the tiles in an embedded Folium map:
End of explanation
"""
# use: elev_reprojected_rdd
elev_reclass_pre = elev_tiled_rdd.reclassify({1000:2, 2000:2, 3000:2, 4000:1, 5000:2}, int)
elev_reclass_rdd = elev_reclass_pre.reclassify({1:1}, int)
elev_reclass_pyramid_rdd = elev_reclass_rdd.pyramid()
elev_reclass_histo = elev_reclass_pyramid_rdd.get_histogram()
#elev_reclass_color_map = ColorMap.from_histogram(sc, elev_reclass_histo, get_breaks(sc, 'Viridis', num_colors=100))
elev_reclass_color_map = gps.ColorMap.from_colors(
breaks =[1],
color_list = [0xff000080])
elev_reclass_tms = gps.TMS.build(elev_reclass_pyramid_rdd, elev_reclass_color_map)
elev_reclass_tms.bind('0.0.0.0')
m2 = folium.Map(location=map_center, zoom_start=zoom)
folium.TileLayer(tiles="Stamen Terrain", overlay=False).add_to(m2)
folium.TileLayer(tiles=elev_tms.url_pattern, attr='GeoPySpark', name="Elevation", overlay=True).add_to(m2)
folium.TileLayer(tiles=elev_reclass_tms.url_pattern, attr='GeoPySpark', name="High Elevation Areas", overlay=True).add_to(m2)
folium.LayerControl().add_to(m2)
m2
"""
Explanation: Classify the elevation such that values of interest (between 3,000 and 4,000 meters) return a value of 1.
End of explanation
"""
# square_neighborhood = Square(extent=1)
aspect_rdd = elev_tiled_rdd.focal(
gps.Operation.ASPECT,
gps.Neighborhood.SQUARE, 1)
aspect_pyramid_rdd = aspect_rdd.pyramid()
aspect_histo = aspect_pyramid_rdd.get_histogram()
aspect_color_map = gps.ColorMap.from_histogram(aspect_histo, get_colors_from_matplotlib('viridis', num_colors=256))
aspect_tms = gps.TMS.build(aspect_pyramid_rdd, aspect_color_map)
aspect_tms.bind('0.0.0.0')
m3 = folium.Map(tiles='Stamen Terrain', location=map_center, zoom_start=zoom)
folium.TileLayer(tiles=aspect_tms.url_pattern, attr='GeoPySpark', name="High Elevation Areas", overlay=True).add_to(m3)
m3
aspect_tms.unbind()
"""
Explanation: Focal operation: aspect. To find south facing slopes
End of explanation
"""
aspect_reclass_pre = aspect_rdd.reclassify({120:2, 240:1, 360: 2}, int)
aspect_reclass = aspect_reclass_pre.reclassify({1:1}, int)
aspect_reclass_pyramid_rdd = aspect_reclass.pyramid()
aspect_reclass_histo = aspect_reclass_pyramid_rdd.get_histogram()
aspect_reclass_color_map = gps.ColorMap.from_histogram(aspect_reclass_histo, get_colors_from_matplotlib('viridis', num_colors=256))
aspect_reclass_tms = gps.TMS.build(aspect_reclass_pyramid_rdd, aspect_reclass_color_map)
aspect_reclass_tms.bind('0.0.0.0')
m4 = folium.Map(tiles='Stamen Terrain', location=map_center, zoom_start=zoom)
folium.TileLayer(tiles=aspect_reclass_tms.url_pattern, attr='GeoPySpark', name="High Elevation Areas", overlay=True).add_to(m4)
m4
aspect_reclass_tms.unbind()
"""
Explanation: Reclassify values such that values between 120 and 240 degrees (south) have a value of 1
End of explanation
"""
added = elev_reclass_pyramid_rdd + aspect_reclass_pyramid_rdd
added_histo = added.get_histogram()
added_color_map = gps.ColorMap.from_histogram(added_histo, get_colors_from_matplotlib('viridis', num_colors=256))
added_tms = gps.TMS.build(added, added_color_map)
added_tms.bind('0.0.0.0')
m5 = folium.Map(tiles='Stamen Terrain', location=map_center, zoom_start=zoom)
folium.TileLayer(tiles=added_tms.url_pattern, attr='GeoPySpark', name="High Elevation Areas", overlay=True).add_to(m5)
m5
import matplotlib.pyplot as plt
%matplotlib inline
v = elev_tiled_rdd.lookup(342,787)
plt.imshow(v[0].cells[0])
"""
Explanation: Now add the values together to find the suitable range:
End of explanation
"""
|
jaduimstra/nilmtk | notebooks/experimental/test_num_states_co_fhmm.ipynb | apache-2.0 | ds.set_window(start='2014-04-01 00:00:00', end='2014-05-01 00:00:00')
elec
fridge_elecmeter = elec['fridge']
fridge_elecmeter
fridge_mg = MeterGroup([fridge_elecmeter])
co.train(fridge_mg)
co.model
"""
Explanation: Reducing time window
End of explanation
"""
num_states_dict = {fridge_elecmeter:2}
co = CombinatorialOptimisation()
co.train(fridge_mg, num_states_dict=num_states_dict)
co.model
"""
Explanation: So, fridge is learnt as a 3 state appliance. What if we wanted to specify it to use 2 states? The latest version of nilmtk allows us to specify the number of states for an appliance.
End of explanation
"""
from nilmtk.disaggregate import FHMM
f = FHMM()
f.train(fridge_mg)
f.model.means_
f = FHMM()
f.train(fridge_mg, num_states_dict)
f.model.means_
"""
Explanation: Now, fridge is learnt as a 2 state appliance.
Let us try the same thing with FHMM now
End of explanation
"""
|
ToqueWillot/M2DAC | FDMS/TME_Dataiku/kaggle_whats_cooking/Model_V7.ipynb | gpl-2.0 | # -*- coding: utf-8 -*-
"""
Explanation: FDMS TME3
Kaggle How Much Did It Rain? II
Florian Toque & Paul Willot
End of explanation
"""
# from __future__ import exam_success
from __future__ import absolute_import
from __future__ import print_function
# Standard imports
%matplotlib inline
import os
import sklearn
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import random
import pandas as pd
import scipy.stats as stats
# Sk cheats
from sklearn.cross_validation import cross_val_score
from sklearn import grid_search
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
#from sklearn.preprocessing import Imputer # get rid of nan
from sklearn.decomposition import NMF # to add features based on the latent representation
from sklearn.decomposition import ProjectedGradientNMF
# Faster gradient boosting
import xgboost as xgb
# For neural networks models
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD, RMSprop
"""
Explanation: Notes
We tried different regressor models (GBR, SVM, MLP, Random Forest and KNN), as recommended by the winning team of the Kaggle competition on taxi trajectories. So far GBR seems to be the best, slightly better than the RF.
The new features we extracted only made a small impact on predictions but still improved them consistently.
We tried to use an LSTM to take advantage of the sequential structure of the data, but it didn't work too well, probably because there is not enough data (13M lines divided by the average sequence length of 15, minus the roughly 30% of fully empty data)
End of explanation
"""
from pandas import Series, DataFrame
import pandas as pd
import numpy as np
import nltk
import re
from nltk.stem import WordNetLemmatizer
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report
import sklearn.metrics
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn import grid_search
from sklearn.linear_model import LogisticRegression
"""
Explanation: 13.765.202 lines in train.csv
8.022.757 lines in test.csv
End of explanation
"""
%%time
#filename = "data/train.csv"
filename = "data/train.json"
#filename = "data/reduced_train_1000000.csv"
raw = pd.read_json(filename)
#raw = raw.set_index('Id')
traindf = raw
traindf['ingredients_clean_string'] = [' , '.join(z).strip() for z in traindf['ingredients']]
traindf['ingredients_string'] = [' '.join([WordNetLemmatizer().lemmatize(re.sub('[^A-Za-z]', ' ', line)) for line in lists]).strip() for lists in traindf['ingredients']]
#
testdf = pd.read_json("data/train.json")
testdf['ingredients_clean_string'] = [' , '.join(z).strip() for z in testdf['ingredients']]
testdf['ingredients_string'] = [' '.join([WordNetLemmatizer().lemmatize(re.sub('[^A-Za-z]', ' ', line)) for line in lists]).strip() for lists in testdf['ingredients']]
corpustr = traindf['ingredients_string']
vectorizertr = TfidfVectorizer(stop_words='english',
ngram_range = ( 1 , 1 ),analyzer="word",
max_df = .57 , binary=False , token_pattern=r'\w+' , sublinear_tf=False)
tfidftr=vectorizertr.fit_transform(corpustr).todense()
#
corpusts = testdf['ingredients_string']
vectorizerts = TfidfVectorizer(stop_words='english')
#
tfidfts=vectorizertr.transform(corpusts)
predictors_tr = tfidftr
targets_tr = traindf['cuisine']
#
predictors_ts = tfidfts
raw.head()
tfidftr[0]
targets_tr[:5]
labels = pd.get_dummies(targets_tr)
labels[:5]
classifier = LogisticRegression()
classifier=classifier.fit(predictors_tr,targets_tr)
predictions=classifier.predict(predictors_ts)
testdf['cuisine'] = predictions
testdf = testdf.sort('id' , ascending=True)
#testdf[['id' , 'ingredients_clean_string' , 'cuisine' ]].to_csv("submission.csv")
%%time
classifier = GradientBoostingRegressor()
classifier=classifier.fit(predictors_tr,targets_tr)
predictions=classifier.predict(predictors_ts)
testdf['cuisine'] = predictions
testdf = testdf.sort('id' , ascending=True)
testdf[['id' , 'ingredients_clean_string' , 'cuisine' ]].to_csv("submission.csv")
raw.head()
raw['ingredients'][0]
"""
Explanation: Few words about the dataset
Predictions is made in the USA corn growing states (mainly Iowa, Illinois, Indiana) during the season with the highest rainfall (as illustrated by Iowa for the april to august months)
The Kaggle page indicate that the dataset have been shuffled, so working on a subset seems acceptable
The test set is not a extracted from the same data as the training set however, which make the evaluation trickier
Load the dataset
End of explanation
"""
# the dbz feature does not influence xgbr so much
xgbr = xgb.XGBRegressor(max_depth=6, learning_rate=0.1, n_estimators=700, silent=True,
objective='reg:linear', nthread=-1, gamma=0, min_child_weight=1,
max_delta_step=0, subsample=1, colsample_bytree=1, colsample_bylevel=1,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1, base_score=0.5,
seed=0, missing=None)
%%time
xgbr = xgbr.fit(X_train,y_train)
# without the nmf features
# print(xgbr.score(X_train,y_train))
## 0.993948231144
# print(xgbr.score(X_test,y_test))
## 0.613931733332
# with nmf features
print(xgbr.score(X_train,y_train))
print(xgbr.score(X_test,y_test))
"""
Explanation: Gradient Boosting Regressor
End of explanation
"""
# tfidftr, labels
np.shape(labels)[1]
#from keras.models import Sequential
#from keras.layers.core import Dense, Dropout, Activation
#from keras.optimizers import SGD
in_dim = np.shape(tfidftr)[1]
out_dim = np.shape(labels)[1]
model = Sequential()
# Dense(64) is a fully-connected layer with 64 hidden units.
# in the first layer, you must specify the expected input data shape:
# here, 20-dimensional vectors.
model.add(Dense(128, input_shape=(in_dim,)))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(out_dim, init='uniform'))
model.add(Activation('softmax'))
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd)
#model.fit(X_train, y_train, nb_epoch=20, batch_size=16)
#score = model.evaluate(X_test, y_test, batch_size=16)
np.count_nonzero(tfidftr[4])
tfidftr[0]
labels
model.fit(tfidftr, np.zeros(len(tfidftr)), nb_epoch=20, batch_size=16) #np.zeros(len(tfidftr))
in_dim = np.shape(tfidftr)[1]
out_dim = np.shape(labels)[1]
model = Sequential()
# Dense(64) is a fully-connected layer with 64 hidden units.
# in the first layer, you must specify the expected input data shape:
# here, 20-dimensional vectors.
model.add(Dense(128, input_shape=(in_dim,)))
model.add(Activation('tanh'))
model.add(Dropout(0.5))
model.add(Dense(out_dim, init='uniform'))
model.add(Activation('softmax'))
#sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
#model.compile(loss='mean_squared_error', optimizer=sgd)
rms = RMSprop()
model.compile(loss='mean_squared_error', optimizer=rms)
#model.fit(X_train, y_train, nb_epoch=20, batch_size=16)
#score = model.evaluate(X_test, y_test, batch_size=16)
print(np.shape(tfidftr))
print(np.shape(labels))
model.fit(tfidftr,labels)
prep = []
for i in y_train:
prep.append(min(i,20))
prep=np.array(prep)
mi,ma = prep.min(),prep.max()
fy = (prep-mi) / (ma-mi)
#my = fy.max()
#fy = fy/fy.max()
model.fit(np.array(X_train), fy, batch_size=10, nb_epoch=10, validation_split=0.1)
pred = model.predict(np.array(X_test))*ma+mi
err = (pred-y_test)**2
err.sum()/len(err)
r = random.randrange(len(X_train))
print("(Train) Prediction %0.4f, True: %0.4f"%(model.predict(np.array([X_train[r]]))[0][0]*ma+mi,y_train[r]))
r = random.randrange(len(X_test))
print("(Test) Prediction %0.4f, True: %0.4f"%(model.predict(np.array([X_test[r]]))[0][0]*ma+mi,y_test[r]))
"""
Explanation: Here for legacy
End of explanation
"""
%%time
filename = "data/reduced_test_5000.csv"
#filename = "data/test.csv"
test = pd.read_csv(filename)
test = test.set_index('Id')
features_columns = list([u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
def getX(raw):
selected_columns = list([ u'minutes_past',u'radardist_km', u'Ref', u'Ref_5x5_10th',
u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',
u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',
u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',
u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',
u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',
u'Kdp_5x5_50th', u'Kdp_5x5_90th'])
data = raw[selected_columns]
docX= []
for i in data.index.unique():
if isinstance(data.loc[i],pd.core.series.Series):
m = [data.loc[i].as_matrix()]
docX.append(m)
else:
m = data.loc[i].as_matrix()
docX.append(m)
X = np.array(docX)
return X
#%%time
#X=getX(test)
#tmp = []
#for i in X:
# tmp.append(len(i))
#tmp = np.array(tmp)
#sns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))
#plt.title("Number of ID per number of observations\n(On test dataset)")
#plt.plot()
#testFull = test.dropna()
testNoFullNan = test.loc[test[features_columns].dropna(how='all').index.unique()]
%%time
X=getX(testNoFullNan) # 1min
#XX = [np.array(t).mean(0) for t in X] # 10s
XX=[]
for t in X:
nm = np.nanmean(t,0)
for idx,j in enumerate(nm):
if np.isnan(j):
nm[idx]=global_means[idx]
XX.append(nm)
XX=np.array(XX)
# rescale to clip min at 0 (for non negative matrix factorization)
XX_rescaled=XX[:,:]-np.min(XX,0)
%%time
W = nmf.transform(XX_rescaled)
XX=addFeatures(X,mf=W)
pd.DataFrame(xgbr.predict(XX)).describe()
reducedModelList = [knn,etreg,xgbr,gbr]
globalPred = np.array([f.predict(XX) for f in reducedModelList]).T
predTest = globalPred.mean(1)
predFull = zip(testNoFullNan.index.unique(),predTest)
testNan = test.drop(test[features_columns].dropna(how='all').index)
pred = predFull + predNan
tmp = np.empty(len(testNan))
tmp.fill(0.445000) # 50th percentile of full Nan dataset
predNan = zip(testNan.index.unique(),tmp)
testLeft = test.drop(testNan.index.unique()).drop(testFull.index.unique())
tmp = np.empty(len(testLeft))
tmp.fill(1.27) # 50th percentile of full Nan dataset
predLeft = zip(testLeft.index.unique(),tmp)
len(testFull.index.unique())
len(testNan.index.unique())
len(testLeft.index.unique())
pred = predFull + predNan + predLeft
pred.sort(key=lambda x: x[0], reverse=False)
#reducedModelList = [knn,etreg,xgbr,gbr]
globalPred = np.array([f.predict(XX) for f in reducedModelList]).T
#globalPred.mean(1)
submission = pd.DataFrame(pred)
submission.columns = ["Id","Expected"]
submission.head()
submission.loc[submission['Expected']<0,'Expected'] = 0.445
submission.to_csv("submit4.csv",index=False)
filename = "data/sample_solution.csv"
sol = pd.read_csv(filename)
sol
ss = np.array(sol)
%%time
for a,b in predFull:
ss[a-1][1]=b
ss
sub = pd.DataFrame(pred)
sub.columns = ["Id","Expected"]
sub.Id = sub.Id.astype(int)
sub.head()
sub.to_csv("submit3.csv",index=False)
"""
Explanation: Predict on the test set
End of explanation
"""
|
BinRoot/TensorFlow-Book | ch04_classification/Concept01_linear_regression_classification.ipynb | mit | %matplotlib inline
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Ch 04: Concept 01
Linear regression for classification (just for demonstrative purposes)
Import the usual libraries:
End of explanation
"""
x_label0 = np.random.normal(5, 1, 10)
x_label1 = np.random.normal(2, 1, 10)
xs = np.append(x_label0, x_label1)
labels = [0.] * len(x_label0) + [1.] * len(x_label1)
plt.scatter(xs, labels)
"""
Explanation: Let's say we have numbers that we want to classify. They'll just be 1-dimensional values. Numbers close to 5 will be given the label [0], and numbers close to 2 will be given the label [1], as designed here:
End of explanation
"""
learning_rate = 0.001
training_epochs = 1000
X = tf.placeholder("float")
Y = tf.placeholder("float")
w = tf.Variable([0., 0.], name="parameters")
"""
Explanation: Define the hyper-parameters, placeholders, and variables:
End of explanation
"""
def model(X, w):
return tf.add(tf.multiply(w[1], tf.pow(X, 1)),
tf.multiply(w[0], tf.pow(X, 0)))
"""
Explanation: Define the model:
End of explanation
"""
y_model = model(X, w)
cost = tf.reduce_sum(tf.square(Y-y_model))
"""
Explanation: Given a model, define the cost function:
End of explanation
"""
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
correct_prediction = tf.equal(Y, tf.to_float(tf.greater(y_model, 0.5)))
accuracy = tf.reduce_mean(tf.to_float(correct_prediction))
"""
Explanation: Set up the training op, and also introduce a couple ops to calculate some metrics, such as accuracy:
End of explanation
"""
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
"""
Explanation: Prepare the session:
End of explanation
"""
for epoch in range(training_epochs):
sess.run(train_op, feed_dict={X: xs, Y: labels})
current_cost = sess.run(cost, feed_dict={X: xs, Y: labels})
if epoch % 100 == 0:
print(epoch, current_cost)
"""
Explanation: Run the training op multiple times on the input data:
End of explanation
"""
w_val = sess.run(w)
print('learned parameters', w_val)
print('accuracy', sess.run(accuracy, feed_dict={X: xs, Y: labels}))
sess.close()
"""
Explanation: Show some final metrics/results:
End of explanation
"""
all_xs = np.linspace(0, 10, 100)
plt.plot(all_xs, all_xs*w_val[1] + w_val[0])
plt.scatter(xs, labels)
plt.show()
"""
Explanation: Plot the learned function
End of explanation
"""
|
y2ee201/Deep-Learning-Nanodegree | sentiment-rnn/Sentiment RNN Solution.ipynb | mit | import numpy as np
import tensorflow as tf
with open('../sentiment_network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment_network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
"""
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
"""
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
"""
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
"""
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
"""
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
"""
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
"""
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
"""
# Filter out that review with 0 length
reviews_ints = [each for each in reviews_ints if len(each) > 0]
print(len(reviews_ints))
print(reviews_ints[1])
"""
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
"""
seq_len = 200
print(len(reviews))
features = np.zeros((len(reviews), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
print(len(features))
features[:10,:100]
"""
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
"""
split_frac = 0.8
split_idx = int(len(features)*split_frac)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
"""
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
"""
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
"""
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2501, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
"""
n_words = len(vocab) + 1  # +1 to account for the 0 padding index
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='prob')
"""
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and dropout keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
"""
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
"""
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
"""
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
"""
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning: adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add dropout to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
"""
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed,
initial_state=initial_state)
"""
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
"""
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
"""
Explanation: Output
We only care about the final output; we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
"""
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
"""
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
"""
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
"""
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
"""
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
"""
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
"""
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
"""
Explanation: Testing
End of explanation
"""
|
pyReef-model/pyReefCore | Tests/case2/run-case2.ipynb | gpl-3.0 | from pyReefCore.model import Model
"""
Explanation: ReefCore library
pyReef-Core is a deterministic, one-dimensional (1-D) numerical model that simulates the vertical coralgal growth patterns observed in a drill core, as well as the physical, environmental processes that affect coralgal growth.
The model is capable of integrating ecological processes like coralgal community interactions over centennial-to-millennial scales using predator-prey or Generalised Lotka-Volterra Equations.
End of explanation
"""
# Initialise model
reef = Model()
"""
Explanation: Once the library has been loaded, the model initialisation is done using the following command:
End of explanation
"""
# Define the XmL input file
reef.load_xml('input-case2.xml')
"""
Explanation: XmL input file
The next step consists in defining the initial conditions for our simulation. This is done by using an XmL input file which sets the parameters to be used, such as:
the initial community population number $X0$
the intrinsic rate of a population species $\epsilon$
the interaction coefficients among the species association $\alpha$
End of explanation
"""
reef.core.initialSetting(size=(10,4), fname='input')
"""
Explanation: Visualising the initial conditions of your model run can be done using the following command:
End of explanation
"""
reef.run_to_time(0,showtime=5000.,verbose=False)
"""
Explanation: Model simulation
The core of the code consists of solving the system of ODEs from the GLV equations using the RKF method.
Once a community association population is resolved, carbonate production is calculated using a carbonate production factor. Production factors are specified for the maximum population, and linearly scaled to the actual population.
To run the model for a given time period [years], the following function needs to be called:
End of explanation
"""
from matplotlib.cm import terrain, plasma
import cmocean as cmo  # the cmo.cm colormaps (e.g. haline_r) are used below
nbcolors = len(reef.core.coralH)+10
#colors = cmo.cm.dense(np.linspace(0, 4, nbcolors))
colors = terrain(np.linspace(0, 1, nbcolors))
#nbcolors = len(reef.core.layTime)+3
nbcolors = 2500
colors2 = cmo.cm.haline_r(np.linspace(0, 1, nbcolors)) #oxy
"""
Explanation: Results
All the output from the model run can be plotted on the notebook using a series of internal functions presented below.
First one can specify a colormap to use for the plot using one of the predefined matplotlib colormaps proposed here:
- colormaps_reference
End of explanation
"""
reef.plot.communityTime(colors=colors, size=(10,4), font=8, dpi=100,fname='apop_t.pdf')
reef.plot.communityDepth(colors=colors, size=(10,4), font=8, dpi=100, fname ='apop_d.pdf')
reef.plot.accommodationTime(size=(10,4), font=8, dpi=100, fname ='acc_t.pdf')
"""
Explanation: Communities population evolution
with time: reef.plot.communityTime
with depth: reef.plot.communityDepth
End of explanation
"""
reef.plot.drawCore(lwidth = 3, colsed=colors, coltime = colors2, tstep = 20, size=(12,18), font=8, dpi=500,
figname='core.pdf')
"""
Explanation: Coral synthetic core
The main output of the model consists of the synthetic core, which shows the evolution of the coral stratigraphic architecture obtained from the interactions among species and with their environment. The plot is obtained using the following function:
- reef.plot.drawCore
The user has the option to save:
- the figure using the figname parameter (figname could either have a .png or .pdf extension)
- the model output as a CSV file using the filename parameter. This will dump all output dataset for further analysis if required.
End of explanation
"""
|
dbouquin/AstroHackWeek2015 | day3-machine-learning/09.3 - Trees and Forests.ipynb | gpl-2.0 | %matplotlib nbagg
import numpy as np
import matplotlib.pyplot as plt
"""
Explanation: Trees and Forests
End of explanation
"""
from plots import plot_tree_interactive
plot_tree_interactive()
"""
Explanation: Decision Tree Classification
End of explanation
"""
from plots import plot_forest_interactive
plot_forest_interactive()
"""
Explanation: Random Forests
End of explanation
"""
from sklearn import grid_search
from sklearn.datasets import load_digits
from sklearn.cross_validation import train_test_split
from sklearn.ensemble import RandomForestClassifier
digits = load_digits()
X, y = digits.data, digits.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
rf = RandomForestClassifier(n_estimators=200, n_jobs=-1)
parameters = {'max_features':['sqrt', 'log2'],
'max_depth':[5, 7, 9]}
clf_grid = grid_search.GridSearchCV(rf, parameters)
clf_grid.fit(X_train, y_train)
clf_grid.score(X_train, y_train)
clf_grid.score(X_test, y_test)
"""
Explanation: Selecting the Optimal Estimator via Cross-Validation
End of explanation
"""
# %load solutions/forests.py
"""
Explanation: Exercises
Plot the validation curve for the maximum depth of a decision tree on the digits dataset.
Plot the validation curve for max_features of a random forest on the digits dataset.
End of explanation
"""
|
science-of-imagination/nengo-buffer | Project/trained_mental_scaling_testing.ipynb | gpl-3.0 | import nengo
import numpy as np
import cPickle
import matplotlib.pyplot as plt
from matplotlib import pylab
import matplotlib.animation as animation
"""
Explanation: Testing the trained weight matrices (not in an ensemble)
End of explanation
"""
#Weight matrices generated by the neural network after training
#Maps the label vectors to the neuron activity of the ensemble
label_weights = cPickle.load(open("label_weights1000.p", "rb"))
#Maps the activity of the neurons to the visual representation of the image
activity_to_img_weights = cPickle.load(open("activity_to_img_weights_scale1000.p", "rb"))
#Maps the activity of the neurons of an image with the activity of the neurons of an image scaled
scale_up_weights = cPickle.load(open("scale_up_weights1000.p", "rb"))
scale_down_weights = cPickle.load(open("scale_down_weights1000.p", "rb"))
#Create the pointers for the numbers
temp = np.diag([1]*10)
ZERO = temp[0]
ONE = temp[1]
TWO = temp[2]
THREE= temp[3]
FOUR = temp[4]
FIVE = temp[5]
SIX = temp[6]
SEVEN =temp[7]
EIGHT= temp[8]
NINE = temp[9]
labels =[ZERO,ONE,TWO,THREE,FOUR,FIVE,SIX,SEVEN,EIGHT,NINE]
#Visualize the one hot representation
print(ZERO)
print(ONE)
"""
Explanation: Load the weight matrices from the training
End of explanation
"""
#Change this to imagine different digits
imagine = ZERO
#Can also imagine combinations of numbers (ZERO + ONE)
#Label to activity
test_activity = np.dot(imagine,label_weights)
#Image decoded
test_output_img = np.dot(test_activity, activity_to_img_weights)
plt.imshow(test_output_img.reshape(28,28),cmap='gray')
plt.show()
"""
Explanation: Visualize the digit from one hot representation through the activity weight matrix to the image representation
- Image is average digit from mnist dataset
End of explanation
"""
#Change this to visualize different digits
imagine = ZERO
#How long the animation should go for
frames=5
#Make a list of the activation of scaled images and add first frame
rot_seq = []
rot_seq.append(np.dot(imagine,label_weights)) #Map the label vector to the activity vector
test_output_img = np.dot(rot_seq[0], activity_to_img_weights) #Map the activity to the visual representation
#add the rest of the frames, using the previous frame to calculate the current frame
for i in range(1,frames):
rot_seq.append(np.dot(rot_seq[i-1],scale_down_weights)) #add the activity of the current image to the list
test_output_img = np.dot(rot_seq[i], activity_to_img_weights) #map the new activity to the visual image
for i in range(1,frames*2):
rot_seq.append(np.dot(rot_seq[frames+i-2],scale_up_weights)) #add the activity of the current image to the list
test_output_img = np.dot(rot_seq[i], activity_to_img_weights) #map the new activity to the visual image
#Animation of scaling
fig = plt.figure()
def updatefig(i):
image_vector = np.dot(rot_seq[i], activity_to_img_weights) #map the activity to the image representation
im = pylab.imshow(np.reshape(image_vector,(28,28), 'F').T, cmap=plt.get_cmap('Greys_r'),animated=True)
return im,
ani = animation.FuncAnimation(fig, updatefig, interval=100, blit=True)
plt.show()
"""
Explanation: Visualize the scaling of the image using the weight matrices from activity to activity
- does not use the weight matrix used on the recurrent connection
End of explanation
"""
|
reychil/project-alpha-1 | code/utils/misc/BART_Data_Beginning.ipynb | bsd-3-clause | from __future__ import absolute_import, division, print_function
import numpy as np
import numpy.linalg as npl
import matplotlib.pyplot as plt
import nibabel as nib
import pandas as pd # new
import os # new
# the last one is a major thing for ipython notebook, don't include in regular python code
%matplotlib inline
"""
Explanation: BART trial
This is some initial exploration and analysis related to the Bart Trials. I hope the comments make sense :). Since this is ipython I've intermixed bash code with the python code, I hope this is easy to follow.
End of explanation
"""
# setting locations of elements, make sure to change this (smart idea in general when dealing with a file system)
location_of_data="/Users/BenjaminLeRoy/Desktop/1.Fall2015/Stat 159/project/data/ds009/"
location_of_subject001=os.path.join(location_of_data,"sub001/")
location_of_simuli="/Users/BenjaminLeRoy/Desktop/test/4d_fmri/"
location_of_present_3d="/Users/BenjaminLeRoy/Desktop/1.Fall2015/Stat 159/project/python_code"
location_of_processed_data="/Users/BenjaminLeRoy/Desktop/1.Fall2015/Stat 159/project/processed_data/"
"""
Explanation: Quickly (some rational for additions):
- pandas: is good because it has Data frame structures similar to R data.frames (I've already make some CSV files using this library)
- os: is good for file location (instead of trying to use $\texttt{bash}$ into $\texttt{ipython}$)
- $\texttt{os.chdir}$ $\Leftrightarrow$ $\texttt{cd}$ in $\texttt{bash}$
- $\texttt{os.listdir}$ $\Leftrightarrow$ $\texttt{ls}$ -> usually I do $\texttt{np.array(os.listdir(...))}$ if the directory is large
Layout of File System, also exploring $\texttt{os}$ library
Below I've provided where my data files are currently located. You may observe that the numbering of the subjects is missing some numbers.
End of explanation
"""
os.chdir(location_of_data)
np.array(os.listdir(location_of_data))
# some subject numbers don't exist (probably removed due to errors mentioned in paper)
"""
Explanation: Folders in the large Data Directory (ds009)
End of explanation
"""
os.chdir(location_of_subject001)
np.array(os.listdir(location_of_subject001))
"""
Explanation: Ignore the '.DS_Store', we can take it out if we ever want to deal with this as a list (see "Creating Data Frames")
Examining 1 subject's data
These are the folders in the sub001 folder:
End of explanation
"""
#Run is in terminal after entering sub001 folder:
# 1)
# tree BOLD/task001_run001
# 2)
# tree anatomy/
# 3)
# tree behav/task001_run001
# 4)
# tree model/model001/onsets/task001_run001
# 5)
# tree model/model002/onsets/task001_run001
"""
Explanation: File Structure for Individual Subject
I've tried to help you visualize the data by using the tree function (try it in your own terminal); you need your directory to be "ds009/sub001"
End of explanation
"""
os.chdir(location_of_processed_data)
behav=pd.read_table("task001_run001_model_data_frame.csv",sep=",")
behav.head(5)
"""
Explanation: Commentary on the above structure:
1. Explore for yourself some of the following:
1. that the model 001 and 002 files seem to be the same (iono why that might be)
2. BOLD directory ("BOLD/QA") also contains a lot of their images they produced for this individual (maybe try to reproduce?)
Exploring the behavdata.txt
I've already created files that combine all the behavioral data into CSV files, and we can load these CSV files with pandas.
Below is visual of the data:
End of explanation
"""
#location of my stimuli.py file
os.chdir(location_of_simuli)
from stimuli import events2neural
#locating my Image_Visualizing.py file
os.chdir(location_of_present_3d)
from Image_Visualizing import present_3d
"""
Explanation: For BART, we just a have a few items in the Behavior data, and they all make a good amount of sense. Feel free to see the dictionary if you can't guess at them now (or read the pdf files).
I will make comments about $\texttt{NumExpl}$ and $\texttt{NumTRs}$ later so try to figure these out at least :)
Loading in All the Libraries (and additional programs)
I've also imported
- events2neural which was done in class
- present_3d, a function I have already created (see the example later)
End of explanation
"""
os.chdir(os.path.join(location_of_subject001,"BOLD/task001_run001"))
img=nib.load("bold.nii")
data=img.get_data()
# data.shape # (64, 64, 34, 245)
# just a single 3d image to show case the present_3d function
three_d_image=data[...,0]
# use of my specialized function
full=present_3d(three_d_image)
plt.imshow(full,cmap="gray",interpolation="nearest")
plt.colorbar()
"""
Explanation: Now for the actual loading of Files and a little Analysis
There are some other observations below that might be interesting to find
End of explanation
"""
# cut from middle of the brain
test=data[32,32,15,:] # random voxel
plt.plot(test) # doesn't look like there are problems in the beginning
"""
Explanation: Data Exploration
1) Is there a major problem in the beginning of the data?
*we will comment on this later
End of explanation
"""
# model (condition data) (will be used to create on/off switches)
os.chdir(os.path.join(location_of_subject001,"model/model001/onsets/task001_run001"))
cond1=np.loadtxt("cond001.txt")
cond2=np.loadtxt("cond002.txt")
cond3=np.loadtxt("cond003.txt")
# Looking at the first to understand values
cond1[:10,:]
"""
Explanation: 2) Looking at the Conditions/ different types of events in scans
End of explanation
"""
for i in [cond1,cond2,cond3]:
print(i.shape)
"""
Explanation: If you remember, there are 3 different types of conditions for the BART trial: (regular, before pop, before save)
- We already know how many times the first person popped the balloon (see above): 8. So I'd bet money that we could figure out which condition that is, and the regular one should probably be the largest. In the first draft of this I included some more analysis, but this is a pretty straightforward reason, so let's use it.
- In the rest of my analysis (not included here) we saw different lengths of time between events, and the paper says so as well. This is slightly annoying, but we can deal with it because we have the start values.
End of explanation
"""
print(str(len(data[0,0,0,:]))+ " is not equal to " + str(behav["NumTRs"][0])) # not the same
"""
Explanation: We should notice that the $\texttt{NumTRs}$ in the behavior file (239) is different than the time dimension of the data (245).
I've talked to Jarrod and he thinks the folks just cut out the first 6 recordings, which makes sense as a general practice. I didn't see any note of it anywhere, but Jarrod suggests looking for supplementary documents from the paper.
End of explanation
"""
events=events2neural("cond001.txt",2,239) # 1s are non special events
events=np.abs(events-1) # switching 0,1 to 1,0
data_cut=data[...,6:]
# data_cut.shape (64, 64, 34, 239)
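events2neural comes from the course's stimuli.py and isn't shown here; below is a hedged pure-Python re-implementation sketch, assuming each cond row is (onset seconds, duration seconds, amplitude), which is the usual layout of these files:

```python
# Hypothetical sketch of what events2neural does: turn (onset, duration,
# amplitude) rows into a time course sampled once per TR.
def events2neural_sketch(cond_rows, tr, n_trs):
    time_course = [0.0] * n_trs
    for onset, duration, amplitude in cond_rows:
        start = int(onset / tr)
        stop = min(int((onset + duration) / tr), n_trs)
        for i in range(start, stop):
            time_course[i] = amplitude
    return time_course

rows = [(0.0, 4.0, 1.0), (10.0, 2.0, 1.0)]
tc = events2neural_sketch(rows, 2, 10)
# TRs 0-1 and TR 5 are "on"
```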
"""
Explanation: 3) Looking at conditional data in different fashions
Problem with dimensions of fMRI data and numTRs
End of explanation
"""
x=np.hstack((cond1[:,0],cond2[:,0],cond3[:,0]))
# specifying which condition they come from (0,1,2)
y=np.zeros((cond1.shape[0]+cond2.shape[0]+cond3.shape[0],))
y[cond1.shape[0]:]=1
y[(cond1.shape[0]+cond2.shape[0]):]+=1
xy=zip(x,y)
xy_sorted=sorted(xy,key= lambda pair:pair[0]) #sorted by x values
x_s,y_s=zip(*xy_sorted) # unzipping
x_s_array=np.array([x for x in x_s])
desired=(x_s_array[1:]-x_s_array[:-1])
# difference between the element before and itself (time delay)
# setting up color coding for 3 different types of condition
dictionary_color={0.:"red",1.:"blue",2.:"green"}
colors=[dictionary_color[elem] for elem in y_s[:-1]]
#plot
plt.scatter(x_s_array[:-1]/2,desired,color=colors,label="starts of stimuli")
plt.plot(events*10,label="event neural stimili function")
#plt.plot(events*4) if it's hard to see with just the 10 function
plt.xlabel("Time, by TR")
plt.ylabel("length of time to the next value")
plt.xlim(0-5,239+5)
plt.legend(loc='lower right', shadow=True,fontsize="smaller")
plt.title("Just Checking")
"""
Explanation: We should approach the rest that we can do, the the dimensions are the same :)
Visualizing when the 3 conditions happen:
and that using only the event data will seperate condition 1 from condition 2 and 3
End of explanation
"""
|
PyClass/PyClassLessons | guest-talks/20180108-graph-dynamic-algorithms/algorithms.ipynb | mit | def fib(n):
if n < 0:
raise Exception("Index was negative. Cannot have a negative index in a series")
if n < 2:
return n
return fib(n-1) + fib(n-2)
fib(25)
def fib(n):
if n < 0:
raise Exception("Index was negative. Cannot have a negative index in a series")
if n < 2:
return n
pred_pred, pred = 0, 1
for _ in range(n-1):
current = pred + pred_pred
pred_pred = pred
pred = current
return current
fib(25)
class Fibber:
def __init__(self):
self.memo = {}
def fib(self, n):
if n < 0:
raise Exception('Index was negative. No such thing as a negative index in a series')
elif n < 2:
            return n
if n in self.memo:
return self.memo[n]
result = self.fib(n-1) + self.fib(n-2)
self.memo[n] = result
return result
fibs = Fibber()
fibs.fib(25)
"""
Explanation: Dynamic Programming and Graph Algorithm Problems
Here is what we went over in class. In addition, here are the links to the MIT course I mentioned and to my repo, which has many more implementations of different types of algorithms and strategies.
Dynamic Programming
Dynamic programming is about approaching problems with overlapping subproblems. We take a careful brute-force approach in which an exponential search space is reduced to a polynomial one.
Canonical Example: Fibonacci Sequence
End of explanation
"""
# Triple Step
class triple_step():
def __init__(self):
self.memo = {}
def triple_step(self, step):
if step < 0:
return 0
self.memo[0] = 1
if step == 0:
return self.memo[0]
self.memo[1] = 1
if step == 1:
return self.memo[1]
result = self.triple_step(step-1) + self.triple_step(step-2) + self.triple_step(step-3)
self.memo[step] = result
return result
t = triple_step()
t.triple_step(4)
"""
Explanation: There are two approaches to dynamic programming problems.
Guessing + recursion + memoization. Pick out a feature of the problem space that we don't know and brute force an answer. Then we recurse over the problem space until we reach the part that is relevant for our specific instance. Then we memoize. Memoization takes what is often exponential time and makes it linear or polynomial.
The second is a bottom-up approach. We build a dynamic programming table until we can solve the original problem.
Problems to try:
A child is running up a staircase with n steps and can hop either 1 step, 2 steps, or 3 steps at a time. Implement a method to count how many possible ways the child can run up the stairs.
Imagine a robot sitting on the upper left corner of grid with r rows and c columns. The robot can only move in two directions, right and down, but certain cells are 'off limits' such that the robot cannot step on them. Design an algorithm to find a path for the robot from the top left to the bottom right.
End of explanation
"""
# a cake tuple (3, 90) weighs 3 kilograms and has a value
# of 90 pounds
cake_tuples = [(7, 160), (3, 90), (2, 15)]
capacity = 20
def max_duffel_bag_value(cake_tuples, weight_capacity):
# we make a list to hold the max possible value at every
# duffel bag weight capacity from 0 to weight_capacity
# starting each index with value 0
#initialize an array of zeroes for each capacity limit
max_values_at_capacities = [0]*(weight_capacity + 1)
for current_capacity in range(weight_capacity + 1):
current_max_value = 0
#iterate through our range of weights from 0 to capacity
for cake_weight, cake_value in cake_tuples:
if cake_weight <= current_capacity:
#check the cake would fit at all
#take the value from the current capacity - the cake weight and add to the value of this cake
max_value_using_cake = cake_value + max_values_at_capacities[current_capacity - cake_weight]
#do this for each cake, take the one that gives us the highest value
current_max_value = max(max_value_using_cake, current_max_value)
#set that max value to the current capacity
max_values_at_capacities[current_capacity] = current_max_value
return max_values_at_capacities[weight_capacity]
max_duffel_bag_value(cake_tuples, capacity)
def getPath(self, maze):
if len(maze) == 0 or len(maze[0]) == 0:
return None
path = []
failedPoints = set()
    if self.pathFinder(maze, len(maze) - 1, len(maze[0]) - 1, path, failedPoints):
return path
return None
def pathFinder(self, maze, row, col, path, failedPoints):
if col < 0 or row < 0 or not maze[row][col]:
return False
p = (row, col)
if p in failedPoints:
return False
isAtOrigin = (row == 0) and (col == 0)
if isAtOrigin or self.pathFinder(maze, row, col-1, path, failedPoints) or self.pathFinder(maze, row-1, col, path, failedPoints):
path.append(p)
return True
    failedPoints.add(p)
return False
"""
Explanation: 0/1 Knapsack Problem
Given a set of tuples that represent the weight and value of different goods (cakes in our example), find the maximum value we can get given a knapsack with a certain weight restriction. Find the combination of items whose total value is maximum while the sum of their weights stays within the total weight allowed by the knapsack. 0/1 means you cannot split an item.
End of explanation
"""
class Node:
def __init__(self, value):
self.v = value
self.right = None
self.left= None
import math

def checkBST(node):
    return checkBSThelper(node, -math.inf, math.inf)
def checkBSThelper(node, mini, maxi):
if node is None:
return True
if node.v < mini or node.v >= maxi:
return False
return checkBSThelper(node.left, mini, node.v) and checkBSThelper(node.right, node.v, maxi)
class Node:
def __init__(self, value):
self.value = value
self.left = None
self.right = None
def checkBalanced(root):
    if root is None:
        return 0
    left = checkBalanced(root.left)
    right = checkBalanced(root.right)
    # -1 signals an unbalanced subtree and must propagate up unchanged
    if left == -1 or right == -1 or abs(left - right) > 1:
        return -1
    return max(left, right) + 1
"""
Explanation: Common dynamic programming problems:
Fibonacci
Shortest Path
Parenthesization
Knapsack
Towers of Hanoi
Edit Distance
Eight Queens/ N Queens
Coin change
Longest Common Subsequence
Graph and Trees
Differences between graphs and trees
Trees have a direct child/parent relationship and don't contain cycles. A tree is a DAG (directed acyclic graph) with the restriction that each child has one parent.
Binary Trees
Binary Trees are a great restricted case of a graph problem that are a great way to get familiar with traversing and interacting with graph structures before jumping to something more abstract.
A binary search tree is only a valid binary search tree if an inorder traversal yields a sorted list of values.
Full - Every node has exactly 2 children except the leaves, which have 0 children.
Complete - Every level, except the last, is completely filled, and all nodes are as far left as possible. A binary tree can be complete with nodes which have a single child if it is the leftmost child.
Balanced - The left and right sides of the tree have a height difference of 1 or less.
Here are some tasks you should know how to do with binary search trees:
Build a binary tree from a sorted array
Inorder, Preorder, Postorder traversal
Depth-first and Breadth-first search
Check if a BST is balanced
Validate tree is a BST (must adhere to BST properties)
Find common ancestor between two nodes
End of explanation
"""
class Trie:
def __init__(self):
self.root_node = {}
def check_present_and_add(self, word):
current_node = self.root_node
is_new_word = False
for char in word:
if char not in current_node:
is_new_word = True
current_node[char] = {}
current_node = current_node[char]
if "End of Word" not in current_node:
is_new_word = True
current_node["End Of Word"] = {}
return is_new_word
#[2::][1::2]
import collections
words = ["baa", "", "abcd", "abca", "cab", "cad"]
def alienOrder(words):
pre, suc = collections.defaultdict(set), collections.defaultdict(set)
for pair in zip(words, words[1:]):
print(pair)
for a, b in zip(*pair):
if a != b:
suc[a].add(b)
pre[b].add(a)
break
print('succ %s' % suc)
print('pred %s' % pre)
chars = set(''.join(words))
print('chars %s' % chars)
print(set(pre))
free = chars - set(pre)
print('free %s' % free)
order = ''
while free:
a = free.pop()
order += a
for b in suc[a]:
pre[b].discard(a)
if not pre[b]:
free.add(b)
if set(order) == chars:
return order
else:
        return False
# return order * (set(order) == chars)
alienOrder(words)
"""
Explanation: Graph Structures
There are many graph structures that are useful.
Tries - Tries are great for indexing words, alphabets, or anything where you are trying to keep track of words. The key to tries is that the letters lie along the edges of the graph and the vertices represent the word spelled so far. Make sure that at the end of a word you have a special marker to denote that you have reached the end of the word, even if there are edges that continue towards another word.
DAG - Directed Acyclic Graphs: DAGs are really good structures for representing relationships between items.
End of explanation
"""
|
AlexGascon/playing-with-keras | #3 - Improving text generation/3.1 - Randomizing our prediction.ipynb | apache-2.0 | import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.callbacks import ModelCheckpoint
from keras.utils import np_utils
# Load the network weights
filename = "weights-improvement-03-1.4625.hdf5"
model.load_weights(filename)
model.compile(loss='categorical_crossentropy', optimizer='adam')
# Pick a random seed
start = np.random.randint(0, len(dataX)-1)
pattern = dataX[start]
seed = ''.join([int_to_char[value] for value in pattern])
print "Seed:"
print "\"", seed, "\""
result_str = ""
"""
Explanation: 3.1. Randomizing our prediction
The first approach we're going to try is to randomize our character prediction. However, with an important detail: the random distribution we'll use will be given by the output of our RNN.
Our RNN outputs an array of floating point numbers between 0 and 1, each one representing the probability of the character at that position being the most suitable option for the prediction. However, currently we simply search for the most probable one and select it as our result. What we're going to try now, instead, is to randomize our prediction while taking these probabilities into account.
We'll achieve this by using the output array of our RNN as the Probability Density Function [1] for our randomized prediction. As this array contains the probability of each character, the sum of all its elements will always be 1 (because the probability of our prediction being some character, no matter which one, is 1). Let's see it with an example.
We're going to load our last model and try to predict a random character, in order to see the output we get (the code is the same we've already used on previous notebooks, so don't worry about it)
End of explanation
"""
# Generate characters
x = np.reshape(pattern, (1, len(pattern), 1))
x = x / float(n_vocab)
prediction = model.predict(x, verbose=0)
print(prediction)
"""
Explanation: Now, we'll use our model to generate a single prediction: we're not even going to learn a character for it, we simply want to check the output array
End of explanation
"""
# Creating the indices array
indices = np.arange(len(prediction[0]))
# Creating the plots
fig, ax = plt.subplots()
preds = ax.bar(indices, prediction[0])
# Adding some text for labels and titles
ax.set_xlabel('Character')
ax.set_ylabel('Probability')
ax.set_title('Probability of each character being the correct output (according to our model)')
ax.set_xticks(indices)
ax.set_xticklabels((c for c in chars if ord(c) < 0x81)) # We can't use non ascii chars as label
ax.set_ylim(top=1.0)
plt.show()
"""
Explanation: As we can see, the array contains numbers between 0 and 1; each one represents the probability of the character in that position of being the correct character to output. However, this doesn't seem to be a very clear representation. Let's see it graphically.
End of explanation
"""
prob_cum = np.cumsum(prediction[0])
# Creating the indices array
indices = np.arange(len(prob_cum))
# Creating the plots
fig, ax = plt.subplots()
preds = ax.bar(indices, prob_cum)
# Adding some text for labels and titles
ax.set_xlabel('Character')
ax.set_ylabel('Probability')
ax.set_title('Cumulative output probability')
ax.set_xticks(indices)
ax.set_xticklabels((c for c in chars if ord(c) < 0x81)) # We can't use non ascii chars as label
plt.show()
"""
Explanation: As we can see, our model seems quite convinced that the correct character to output is 'i' (and, watching the seed, I can tell you that it's probably correct: "arremetió" is a real word in Spanish, and it would make sense in that context). However, we cannot simply discard the other characters: although they may seem improbable, there may be cases where an improbable option is the correct one (or, what happens more often, that the decision is not so clear). Therefore, as we've said, we're going to choose the output randomly by using this distribution as the PDF.
We'll generate a new array with the same length as the output one that contains the cumulative probability of the characters, i.e. each element will contain the probability of the correct output character being at its position or an earlier one. The result will be similar to the following one:
End of explanation
"""
# Generate characters
for i in range(500):
x = np.reshape(pattern, (1, len(pattern), 1))
x = x / float(n_vocab)
prediction = model.predict(x, verbose=0)
# Choosing the character randomly
prob_cum = np.cumsum(prediction[0])
rand_ind = np.random.rand()
for i in range(len(prob_cum)):
if (rand_ind <= prob_cum[i]) and ((i == 0) or (rand_ind > prob_cum[i-1])):
index = i
break
result = int_to_char[index]
seq_in = [int_to_char[value] for value in pattern]
result_str += result
pattern.append(index)
pattern = pattern[1:len(pattern)]
print "\nDone."
print seed + result_str
"""
Explanation: As you can see, the final value is 1, because the output is certain to be one of the characters in the array.
In order to use that as the PDF, what we'll do is to generate a random number between 0 and 1 and choose the first element of the array to be greater than it. As you can imagine, the char that will appear the most is the 'i', because a lot of numbers between 0 and 1 are lower than prob_cum['i'] but greater than prob_cum['h']. However, we won't always output 'i', so our output will have some random component while keeping a plausible distribution.
3.1.1. Testing the improvement
Now it's time to see if our improvement is really an improvement or not: we'll implement it in our predictions.
As the model and the libraries are already loaded we don't need to repeat that step, so we'll start directly by loading the weights and picking a random seed.
End of explanation
"""
|
yinime/yinime.github.com | _posts/MCM-aufgabe1.ipynb | mit | import random
print("Eine Zufallszahl r =", random.random())
print("Eine Floge von Zufallszahlen ist")
for i in range(10):
print(random.random())
print("If the seed has been fixed, the first random number is determined by the algorithm of the particular generator.")
random.seed(100)
random.random()
random.seed(100)
random.random()
random.random()
"""
Explanation: The Monte Carlo Method
Xiaoqian Liao, 170025
Jena, April 11, 2017
(This homework was done with Python and Jupyter.)
Exercise 1.1:
End of explanation
"""
import random
import matplotlib.pyplot as plt
def randSeq(length=500,s=12345):
random.seed(s)
return [random.random() for i in range(length)]
def test(n=500, l=5):
rSeq = randSeq(length = n)
points = zip(rSeq[0::2],rSeq[l::2])
plt.figure(figsize=(20,10))
plt.subplot(2,1,1)
plt.title("A Random Sequence")
plt.plot(rSeq)
plt.subplot(2,1,2)
plt.title("Scatter Plot")
plt.scatter(*zip(*points))
plt.tight_layout()
plt.show()
test()
"""
Explanation: Exercise 1.2:
A sequence of random numbers $r_{i}$ should have two properties:
1. Uniform distribution (uniformity), i.e. all random numbers are uniformly distributed, or equivalently every random number has the same probability of occurring.
2. Independence, i.e. the current value (random number) of a random variable has no relation to the previous values (previous random numbers).
Exercise 1.3:
End of explanation
"""
|
nehal96/Deep-Learning-ND-Exercises | Intro to TensorFlow/intro-to-tensorflow-notes.ipynb | mit | import tensorflow as tf
# Create TensorFlow object called tensor
hello_constant = tf.constant('Hello World!')
with tf.Session() as sess:
# Run the tf.constant operatin in the session
output = sess.run(hello_constant)
print(output)
"""
Explanation: Intro to TensorFlow
Hello, Tensor World!
End of explanation
"""
# A is a 0-dimensional int32 tensor
A = tf.constant(1234)
# B is a 1-dimensional int32 tensor
B = tf.constant([123,456,789])
# C is a 2-dimensional int32 tensor
C = tf.constant([ [123,456,789], [222,333,444] ])
"""
Explanation: Tensor
In TensorFlow, data isn’t stored as integers, floats, or strings. These values are encapsulated in an object called a tensor. In the case of hello_constant = tf.constant('Hello World!'), hello_constant is a 0-dimensional string tensor, but tensors come in a variety of sizes as shown below:
End of explanation
"""
with tf.Session() as sess:
output = sess.run(hello_constant)
"""
Explanation: The tensor returned by tf.constant() is called a constant tensor, because the value of the tensor never changes.
Session
TensorFlow’s api is built around the idea of a computational graph, a way of visualizing a mathematical process. Let’s take the TensorFlow code and turn that into a graph:
A "TensorFlow Session", as shown above, is an environment for running a graph. The session is in charge of allocating the operations to GPU(s) and/or CPU(s), including remote machines. Let’s see how you use it:
End of explanation
"""
x = tf.placeholder(tf.string)
with tf.Session() as sess:
output = sess.run(x, feed_dict={x: 'Hello World'})
print(output)
"""
Explanation: The code has already created the tensor, hello_constant, from the previous lines. The next step is to evaluate the tensor in a session.
The code creates a session instance, sess, using tf.Session. The sess.run() function then evaluates the tensor and returns the results.
TensorFlow Input
In the last section, a tensor was passed into a session and it returned the result. What if we want to use a non-constant? This is where tf.placeholder() and feed_dict come into place. In this section, we'll go over the basics of feeding data into TensorFlow.
tf.placeholder()
Sadly you can’t just set x to your dataset and put it in TensorFlow, because over time you'll want your TensorFlow model to take in different datasets with different parameters. You need tf.placeholder()!
tf.placeholder() returns a tensor that gets its value from data passed to the tf.session.run() function, allowing you to set the input right before the session runs.
Session's feed_dict
End of explanation
"""
x = tf.placeholder(tf.string)
y = tf.placeholder(tf.int32)
z = tf.placeholder(tf.float32)
with tf.Session() as sess:
output_x = sess.run(x, feed_dict={x: 'Test String', y: 123, z: 45.67})
output_y = sess.run(y, feed_dict={x: 'Test String', y: 123, z:45.67})
print(output_x)
print(output_y)
"""
Explanation: Use the feed_dict parameter in tf.session.run() to set the placeholder tensor. The above example shows the tensor x being set to the string "Hello, world". It's also possible to set more than one tensor using feed_dict as shown below:
End of explanation
"""
import tensorflow as tf
def run():
output = None
x = tf.placeholder(tf.int32)
with tf.Session() as sess:
# TODO: Feed the x tensor 123
output = sess.run(x, feed_dict={x: 123})
return output
run()
"""
Explanation: Note: If the data passed to the feed_dict doesn’t match the tensor type and can’t be cast into the tensor type, you’ll get the error “ValueError: invalid literal for...”.
Quiz
End of explanation
"""
x = tf.add(5, 2) # 7
"""
Explanation: TensorFlow Math
Getting the input is great, but now you need to use it. We're going to use basic math functions that everyone knows and loves - add, subtract, multiply, and divide - with tensors. (There are many more math functions you can check out in the documentation.)
Addition
End of explanation
"""
x = tf.subtract(10, 4) # 6
y = tf.multiply(2, 5) # 10
"""
Explanation: Let's start with the add function. The tf.add() function does exactly what you expect it to do. It takes in two numbers, two tensors, or one of each, and returns their sum as a tensor.
Subraction and Multiplication
End of explanation
"""
tf.subtract(tf.constant(2.0),tf.constant(1))
# Fails with ValueError: Tensor conversion requested dtype float32 for Tensor with dtype int32:
"""
Explanation: The x tensor will evaluate to 6, because 10 - 4 = 6. The y tensor will evaluate to 10, because 2 * 5 = 10. That was easy!
Converting Types
It may be necessary to convert between types to make certain operators work together. For example, if you tried the following, it would fail with an exception:
End of explanation
"""
tf.subtract(tf.cast(tf.constant(2.0), tf.int32), tf.constant(1)) # 1
"""
Explanation: That's because the constant 1 is an integer but the constant 2.0 is a floating point value and subtract expects them to match.
In cases like these, you can either make sure your data is all of the same type, or you can cast a value to another type. In this case, converting the 2.0 to an integer before subtracting, like so, will give the correct result:
End of explanation
"""
import tensorflow as tf
# TODO: Convert the following to TensorFlow:
x = tf.constant(10)
y = tf.constant(2)
z = tf.subtract(tf.divide(x, y), 1)
# TODO: Print z from a session
with tf.Session() as sess:
output = sess.run(z)
print(output)
"""
Explanation: Quiz
Let's apply what you learned to convert an algorithm to TensorFlow. The code below is a simple algorithm using division and subtraction. Convert the following algorithm in regular Python to TensorFlow and print the results of the session. You can use tf.constant() for the values 10, 2, and 1.
End of explanation
"""
x = tf.Variable(5)
"""
Explanation: TensorFlow Linear Functions
The most common operation in neural networks is calculating the linear combination of inputs, weights, and biases. As a reminder, we can write the output of the linear operation as:
Here, W is a matrix of the weights connecting two layers. The output y, the input x, and the biases b are all vectors.
Weights and Bias in TensorFlow
The goal of training a neural network is to modify weights and biases to best predict the labels. In order to use weights and bias, you'll need a Tensor that can be modified. This leaves out tf.placeholder() and tf.constant(), since those Tensors can't be modified. This is where tf.Variable class comes in.
tf.Variable()
End of explanation
"""
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
"""
Explanation: The tf.Variable class creates a tensor with an initial value that can be modified, much like a normal Python variable. This tensor stores its state in the session, so you must initialize the state of the tensor manually. You'll use the tf.global_variables_initializer() function to initialize the state of all the Variable tensors:
End of explanation
"""
n_features = 120
n_labels = 5
weights = tf.Variable(tf.truncated_normal((n_features, n_labels)))
"""
Explanation: The tf.global_variables_initializer() call returns an operation that will initialize all TensorFlow variables from the graph. You call the operation using a session to initialize all the variables as shown above. Using the tf.Variable class allows us to change the weights and bias, but an initial value needs to be chosen.
Initializing the weights with random numbers from a normal distribution is good practice. Randomizing the weights helps keep the model from becoming stuck in the same place every time you train it. You'll learn more about this in the next lesson, gradient descent.
Similarly, choosing weights from a normal distribution prevents any one weight from overwhelming other weights. We'll use the tf.truncated_normal() function to generate random numbers from a normal distribution.
tf.truncated_normal()
End of explanation
"""
n_labels = 5
bias = tf.Variable(tf.zeros(n_labels))
"""
Explanation: The tf.truncated_normal() function returns a tensor with random values from a normal distribution whose magnitude is no more than 2 standard deviations from the mean.
Since the weights are already helping prevent the model from getting stuck, you don't need to randomize the bias. Let's use the simplest solution, setting the bias to 0.
tf.zeros()
End of explanation
"""
x = tf.nn.softmax([2.0, 1.0, 0.2])
"""
Explanation: The tf.zeros() function returns a tensor with all zeros.
TensorFlow Softmax
In the Intro to TFLearn lesson we used the softmax function to calculate class probabilities as output from the network. The softmax function squashes it's inputs, typically called logits or logit scores, to be between 0 and 1 and also normalizes the outputs such that they all sum to 1. This means the output of the softmax function is equivalent to a categorical probability distribution. It's the perfect function to use as the output activation for a network predicting multiple classes.
We're using TensorFlow to build neural networks and, appropriately, there's a function for calculating softmax.
End of explanation
"""
import tensorflow as tf
def run_2():
output = None
logit_data = [2.0, 1.0, 0.1]
logits = tf.placeholder(tf.float32)
# TODO: Calculate the softmax of the logits
    softmax = tf.nn.softmax(logits)
with tf.Session() as sess:
# TODO: Feed in the logit data
output = sess.run(softmax, feed_dict={logits: logit_data})
return output
print(run_2())
"""
Explanation: Easy as that! tf.nn.softmax() implements the softmax function for you. It takes in logits and returns softmax activations.
Quiz
End of explanation
"""
import numpy as np
from sklearn import preprocessing
# Example labels
labels = np.array([1,5,3,2,1,4,2,1,3])
# Create the encoder
lb = preprocessing.LabelBinarizer()
# Here the encoder finds the classes and assigns one-hot vectors
lb.fit(labels)
# And finally, transform the labels into one-hot encoded vectors
lb.transform(labels)
"""
Explanation: One-Hot Encoding
Transforming your labels into one-hot encoded vectors is pretty simple with scikit-learn using LabelBinarizer. Check it out below!
End of explanation
"""
x = tf.reduce_sum([1, 2, 3, 4, 5]) # 15
"""
Explanation: TensorFlow Cross Entropy
In the Intro to TFLearn lesson we discussed using cross entropy as the cost function for classification with one-hot encoded labels. Again, TensorFlow has a function to do the cross entropy calculations for us.
To create a cross entropy function in TensorFlow, you'll need to use two new functions:
tf.reduce_sum()
tf.log()
Reduce Sum
End of explanation
"""
l = tf.log(100) # 4.60517
"""
Explanation: The tf.reduce_sum() function takes an array of numbers and sums them together.
Natural Log
End of explanation
"""
import tensorflow as tf
softmax_data = [0.7, 0.2, 0.1]
one_hot_data = [1.0, 0.0, 0.0]
softmax = tf.placeholder(tf.float32)
one_hot = tf.placeholder(tf.float32)
# TODO: Print cross entropy from session
cross_entropy = -tf.reduce_sum(tf.multiply(one_hot, tf.log(softmax)))
with tf.Session() as session:
output = session.run(cross_entropy, feed_dict={one_hot: one_hot_data, softmax: softmax_data})
print(output)
"""
Explanation: This function does exactly what you would expect it to do. tf.log() takes the natural log of a number.
Quiz
Print the cross entropy using softmax_data and one_hot_encod_label.
End of explanation
"""
|
dataventures/workshops | 1/2 - KNN.ipynb | mit | import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# m is a percent
def split(data, m):
df_shuffled = data.iloc[np.random.permutation(len(data))]
df_training = df_shuffled[:int(m/100.0*len(data))]
df_test = df_shuffled[int(m/100.0*len(data)):]
return df_training, df_test
#k nearest neighbors
def knn_predict(k,train,test):
test_array = test.values
train_array = train.values
#create an array of zeros that will be our sum of nearest neighbors
sum_of_nn = np.zeros(len(test_array))
#loop through k times, and each time add the test value that corresponds to the kth closest distance to sum_of_nn
for i in range(k):
        array_closest_i = [np.argsort(np.abs(x - train_array[:, 0]))[i] for x in test_array]
        sum_of_nn += np.array([train_array[j, 1] for j in array_closest_i])
#finally divide by k to get the average
prediction = sum_of_nn/float(k)
return pd.DataFrame(np.array([test_array,prediction])).T
"""
Explanation: K Nearest Neighbors
K nearest neighbors is another conceptually simple supervised learning approach.
Algorithm
Given an input i that we want to predict an output for, find the k inputs in the input space most similar to i. Then, average the outputs of those k inputs to predict the output for i. Unlike regression, there is no training phase, since we are not trying to find a function between the input and output spaces.
The steps for kNN are:
Take in a new instance that we want to predict the output for
Iterate through the dataset, compiling a set S of the k closest inputs to i
Calculate the nearest neighbors response by looking at the corresponding outputs to the inputs in S - average the outputs for regression and take the majority vote for classification.
Design Decisions
In the basic kNN model, the only decisions we need to make are what to choose for the value of k and how to define the distance between inputs in order to determine our definition of similarity.
For large values of k we get less distinction between classes, as we are then just averaging over large subsets of the entire dataset. On the other hand, for small values of k, our predictions may be strongly affected by noise. There are several ways to choose the optimal value of k for a given dataset. These methods include k-fold cross validation.
Implementation
This is an implementation of regression kNN.
End of explanation
"""
dataset1 = pd.read_csv('dataset_1_full.txt')
train, test = split(dataset1, 70)
test1 = test.iloc[:, 0]
prediction1 = knn_predict(10,train,test1)
prediction1.head()
prediction2 = knn_predict(20,train,test1)
prediction2.head()
prediction3 = knn_predict(30,train,test1)
prediction3.head()
"""
Explanation: Now we'll run kNN with different values of k and observe the results. First, we'll import the data set.
End of explanation
"""
X = [[0], [1], [2], [3], [4]]
y = [0, 0, 1, 1, 2]
from sklearn.neighbors import KNeighborsRegressor
neigh = KNeighborsRegressor(n_neighbors=3)
neigh.fit(X, y)
print(neigh.predict([[1.5]]))
"""
Explanation: As you can see, the results of kNN are heavily dependent on the value for k that you choose.
Analysis
kNN is a very easy algorithm to understand. However, there are several major drawbacks. First off, notice that KNN does not find an explicit function mapping inputs to outputs, like regression does. Instead, it searches through the dataset to find the neighbors. This forces us to store the entire training dataset. In addition, the process of traversing the dataset can be very expensive for large datasets. Finally, as we saw earlier, choosing the correct value of k is crucial to getting a high performing algorithm.
Application
Of course, you won't have to implement the KNN algorithm yourself every time. Like for regression, Scikit-learn has a k nearest neighbors implementation that you can use. It supports kNN regression:
End of explanation
"""
X = [[0], [1], [2], [3]]
y = [0, 0, 1, 1]
from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(X, y)
print(neigh.predict([[1.8]]))
"""
Explanation: as well as kNN classification:
End of explanation
"""
from sklearn.datasets import load_digits
import matplotlib.pyplot as plt
digits = load_digits()
"""
Explanation: Challenge: kNN for Digit Classification
Your challenge is to use kNN for the classification of handwritten digits. We will be using the MNIST dataset of handwritten digits.
Upon getting the dataset, first you should split the data into a training set and testing set. Then, you can convert each of the images to a vector. That allows you to perform kNN on these images to get your predictions on the test set. Finally, compare your predictions to the actual values to determine the accuracy of implementation.
Have fun!
End of explanation
"""
plt.gray()
plt.matshow(digits.images[50])
plt.show()
print(digits.images[50])
print(digits.target[50])
"""
Explanation: The dataset consists of inputs in digits.images, and outputs in digits.target
End of explanation
"""
|